#### *3.8.1 Usable software without installation*

Often, desired functionality is the main driver in choosing a software architecture. When development of the digitalMLPA software started, it was considered an advantage to give users the option of running the software on the computer that controls the NGS system. These computers are often powerful systems that can convert FASTQ files much faster than users' personal computers. However, users in institutes often do not have installation rights on such systems, so one of the main requirements was to develop software that can be used without installation.

#### *3.8.2 Data sharing*

The software system supporting digitalMLPA experiments requires the capability to handle data sharing among multiple user groups. To optimize costs, users can utilize a central facility for NGS runs, enabling the merging of digitalMLPA experiment products for sequencing. Consequently, the centralized unit must be capable of redistributing the specific experiment-related portions of the data from a FASTQ file. To fulfill this requirement, the software should employ a file-based data storage system with a customized format that allows data updates throughout each phase of the analysis. In this setup, a FASTQ file can be divided into separate files containing data specific to individual samples. These files can then be grouped within separate folders associated with each experiment, facilitating redistribution to the respective experiment owners. For example, on an Illumina MiSeq device, it is possible to combine approximately 64 reactions for probemixes with around 600 probes, with single reads of 110 nucleotides or longer; on a NextSeq device, it is feasible to combine 192 reactions for probemixes with a similar probe count. Combining data in this manner can lead to significant cost reductions in NGS runs by minimizing the number of plates and reagents required. Automation, when coupled with data combination, further enhances sample processing efficiency while reducing variation and waste.

#### *Quality Assurance When Developing Software with a Medical Purpose DOI: http://dx.doi.org/10.5772/intechopen.113389*

While file-based data systems may have certain disadvantages, such as potential limitations in performance and scalability, they offer notable advantages in terms of data retrieval, backup, and sharing. The simplicity and efficiency of file-based systems make them an attractive option, eliminating the need to install third-party components. Overall, the software's file-based data storage system, coupled with the ability to merge and redistribute specific experiment data, enhances collaboration and cost-effectiveness in digitalMLPA experiments, without the complexity of additional software installations.
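The splitting step described above can be sketched in Python. This is a minimal illustration, not the actual digitalMLPA implementation: it assumes, hypothetically, that each read carries its sample barcode in the first eight nucleotides and that a `barcode_map` from barcodes to (experiment, sample) pairs is available.

```python
import os
from itertools import islice

def demultiplex(fastq_path, barcode_map, out_dir, barcode_len=8):
    """Split a FASTQ file into per-sample files grouped in per-experiment folders.

    barcode_map: {barcode: (experiment_id, sample_id)} -- a hypothetical layout;
    reads with an unknown barcode are routed to an 'undetermined' file.
    """
    handles = {}
    try:
        with open(fastq_path) as fq:
            while True:
                record = list(islice(fq, 4))  # FASTQ records are 4 lines each
                if len(record) < 4:
                    break
                barcode = record[1][:barcode_len]  # assumed barcode position
                exp, sample = barcode_map.get(barcode, ("undetermined", "undetermined"))
                key = (exp, sample)
                if key not in handles:
                    folder = os.path.join(out_dir, exp)  # one folder per experiment
                    os.makedirs(folder, exist_ok=True)
                    handles[key] = open(os.path.join(folder, f"{sample}.fastq"), "w")
                handles[key].writelines(record)
    finally:
        for handle in handles.values():
            handle.close()
```

The resulting per-experiment folders can then be redistributed to the respective experiment owners as-is, which is exactly the property that makes a simple file-based layout attractive here.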

#### *3.8.3 Reuse of components*

In the development of components for digitalMLPA analysis, it is worth considering their potential for reuse in future software programs, including updates to the analysis software for conventional MLPA data [34]. By accurately identifying the components that can be shared between conventional and digitalMLPA, and designing them so that they are available as framework components within the company's infrastructure, economic benefits can be realized. By reusing validated and verified components, a significant portion of the work required for certification may already be completed before the entire solution is finished. This approach not only saves time and effort but also improves overall quality by standardizing procedures that are recognized as part of the general functionality of MLPA. For instance, techniques such as normalization and population statistics are commonly employed in MLPA, and standardizing them through component reuse can enhance quality and reliability.

Moreover, by establishing a framework of reusable components, the company can leverage this infrastructure for other software programs and projects in the future. This promotes efficiency and consistency across different applications, reduces redundant development effort, and fosters a more streamlined development process. In summary, identifying and designing digitalMLPA analysis components so that they can be reused and incorporated into a framework yields economic benefits: reusing validated components saves time and effort and contributes to meeting certification requirements, while standardizing common procedures enhances quality and establishes a foundation for future software programs within the company.
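As a toy illustration of such a shared component, the sketch below normalizes probe signals against the median of a designated set of reference probes. This is a deliberate simplification with invented names; actual MLPA normalization involves more elaborate per-sample and per-probe procedures.

```python
import statistics

def normalize_to_reference(probe_signals, reference_probes):
    """Normalize probe signals against the median signal of reference probes.

    A simplified stand-in for a reusable, framework-level normalization step;
    the real MLPA procedure is considerably more involved.
    """
    ref = statistics.median(probe_signals[p] for p in reference_probes)
    if ref == 0:
        raise ValueError("reference median is zero; cannot normalize")
    return {probe: signal / ref for probe, signal in probe_signals.items()}
```

Packaging such a routine once, validating it, and reusing it in both the conventional-MLPA and digitalMLPA software is what lets part of the certification effort carry over between products.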

#### *3.8.4 Continuous integration/continuous delivery*

Continuous integration and continuous delivery (CI/CD) are important aspects of software development, especially when integrating software into an organization. In scientific software development, the end goal is often uncertain. A monolithic software architecture, which consists of a single deployment unit, may be cost-effective and easier to design and implement initially. However, it lacks scalability, fault tolerance, and elasticity. Domain-Driven Design (DDD) offers an alternative approach, focusing on the function of the domain rather than the workflow or technical components (**Figure 2**).

In DDD, all functionality related to a specific domain is grouped together, allowing changes within that domain to be self-contained within a specific area of the system. Microsoft has been promoting its version of domain architecture, which is based on developing microservices-based applications and managing them using containers. Microservices, as single-purpose, separately deployed units of software, excel in scalability, agility, and fault tolerance. They also facilitate risk management due to their focused functionality. However, implementing and maintaining microservices in an environment with rapidly changing data sources and extensive database content across multiple functionalities can be challenging.

Domain-driven design offers advantages in validation and verification testing, as separate components have well-defined inputs and outputs, enabling quality control and integrity testing. These components can be easily integrated into automated pipelines. DDD also allows close collaboration between development teams and domain experts, focusing on specific key parts of the system. Over time, the individuals involved in development may change, depending on the scope of the development cycle; this can limit team sizes while enabling specific testing and a focused approach. DDD also facilitates the gradual integration of components into the organization for evaluation purposes while development on other components continues. This iterative approach allows for feedback and evaluation at different stages, ensuring a smoother integration process.

In short, CI/CD is crucial for software development, and DDD offers a beneficial approach that emphasizes domain functionality. Microservices-based architectures provide scalability and fault tolerance, although they can be challenging to implement and maintain. DDD enables efficient validation and verification testing, close collaboration with domain experts, and gradual integration of components into the organization for evaluation.
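One way to picture such a self-contained domain component is a function with an explicit input/output contract that an automated pipeline can exercise in isolation. The sketch below is purely illustrative; the names and the probe-matching rule are invented for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CountResult:
    """Immutable output contract of the counting component."""
    sample_id: str
    probe_counts: dict

def count_probe_reads(sample_id, reads, probe_sequences):
    """Domain component with well-defined inputs and outputs:
    reads plus known probe sequences in, a CountResult out.
    The prefix-matching rule here is a toy stand-in for real probe recognition."""
    counts = {probe: 0 for probe in probe_sequences}
    for read in reads:
        for probe in probe_sequences:
            if read.startswith(probe):
                counts[probe] += 1
                break
    return CountResult(sample_id, counts)
```

Because the inputs and outputs are explicit and plain data, a CI pipeline can run integrity checks on this component on every commit without standing up the rest of the system.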

#### *3.8.5 Separation of concerns (design)*

Separation of concerns in software design, particularly in layered architecture styles, ensures that each component within a specific layer focuses solely on the logic
relevant to that layer. Each layer has its own role and responsibility in the application, which enhances its effectiveness and testability. For example, the presentation layer handles user interface and communication logic, while the business layer applies specific rules associated with the application's domain. Each layer forms an abstraction around a particular request, isolating its functionality from the other layers. The advantage of closed layers is that they provide independence to each layer, ensuring that updates or changes made in one layer do not affect the others. This isolation makes risk management easier and allows for specific scoping of development cycles or sprints, as changes are contained within isolated parts of the code. Understanding the impact of these changes enables better definition of associated risks and improves quality assurance. Additionally, the separation of layers allows the design of presentation layers to be deferred to a later stage while other components are being tested and integrated into the organization.

In scientific software development, where functionality often takes precedence over usability, separating end-user presentation features such as the GUI and reporting options from the analysis modules proves beneficial. Analysis modules produce visual output primarily for verification purposes, allowing their development, documentation, and testing to be carried out independently, possibly even by separate groups. The output design can be optimized for easy automated integrity testing, promoting stability and simplifying quality control while other parts of the software are being developed. Back-end specialists can focus on developing the functionality of the analysis modules, while design specialists handle front-end design and report generation.

Usability is an essential aspect that often requires close interaction with the intended user groups to ensure that the software efficiently and effectively achieves the set goals. By separating presentation design features from analysis modules, software developers can engage with design specialists and users to create a user-friendly and goal-oriented application. In summary, separation of concerns, particularly in layered architecture styles, improves testability, risk management, and quality assurance. Separating presentation design features from analysis modules allows for independent development, testing, and documentation. Design specialists can focus on usability aspects while back-end specialists work on functional modules, leading to a more efficient and effective software application.

#### *3.8.6 Interfaces*

Interface design may be a process more akin to an art than a science; it requires a different skill set and possibly several rounds of iteration, with communication between the different end-user groups and the developers, to reach a satisfactory result. The challenges involved in the use of interfaces are often of a "more subjective order." The applications that a scientist works with are often less advanced, less secure, and deliver an inferior user experience compared to the applications the same scientist uses on their smartphone after work [35]. Though a digital revolution is taking place, biotechnology companies are slow to adapt: many still believe that software must be developed and maintained in-house, and that data is more secure on internal servers than in the cloud. Cloud services are often avoided because they are believed to be less secure than local storage, which requires companies to spend both capital and operating expenses on in-house servers and in-house software development. By investing in components and code that are encapsulated and independent, a move to a different platform at a later stage can be made easier. Thus, when the overall organization has gained more trust in cloud services, much of the previous work can be easily transferred, and documentation, testing procedures, and risk assessment information are more readily available.
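The idea of encapsulated, platform-independent components can be sketched as a storage interface: code written against the abstract interface does not care whether the backend is a local disk today or a cloud service later. The class names below are illustrative, not part of any actual product.

```python
from abc import ABC, abstractmethod
import os

class Storage(ABC):
    """Encapsulated storage contract; a cloud-backed class could implement
    the same two methods later without touching any calling code."""
    @abstractmethod
    def save(self, name: str, data: bytes) -> None: ...
    @abstractmethod
    def load(self, name: str) -> bytes: ...

class LocalStorage(Storage):
    """On-premises backend: plain files under a root folder."""
    def __init__(self, root: str):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def save(self, name: str, data: bytes) -> None:
        with open(os.path.join(self.root, name), "wb") as f:
            f.write(data)

    def load(self, name: str) -> bytes:
        with open(os.path.join(self.root, name), "rb") as f:
            return f.read()
```

When trust in cloud services grows, only a new `Storage` implementation needs validation; the documentation, testing procedures, and risk assessments of the calling code remain applicable.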
