**5. Hardware and software**

To obtain a final result, several steps are necessary; these can be divided into preprocessing, processing and postprocessing [76, 115]. The individual steps are explained in more detail in **Figure 2**. Today's CFD software solutions typically cover several of these steps. With the Ansys software suite, the geometry can be created (SpaceClaim), the mesh can be generated (Meshing), the calculations can be performed (Fluent) and the postprocessing can be completed (CFD-Post). However, separate software is often used for geometry generation, such as Autodesk Inventor, SolidWorks, Solid Edge or Salome, and for postprocessing, such as Tecplot and ParaView. The most widely used CFD software solutions for bioreactor modelling are Ansys Fluent and CFX (commercial) and OpenFOAM (open source). However, Simcenter STAR-CCM+ from Siemens, Autodesk CFD, COMSOL Multiphysics and M-Star CFD (all commercial) are also used, with the latter specifically advertised for bioreactor applications.

In addition to the software, the hardware is also of critical importance for a simulation to be economical [116], with the hardware performance, the purchase price and the power consumption all playing a role. All current CFD software solutions allow the calculations to be parallelised. For this purpose, the computational mesh is partitioned into different domains, and individual processors then execute the computations in their respective domains. The communication between the domains is regulated, as is usual in parallel computing, by means of the message passing interface (MPI) or other interfaces [116]. Different algorithms exist for creating partitions, which differ in their degree of automation and partitioning time [117]. For example, the Scotch algorithm of Pellegrini and Roman [118], which is based on dual recursive bi-partitioning and is implemented in OpenFOAM, only needs the number of domains to perform the process. However, this algorithm requires a longer partitioning time than less automated algorithms [117].

#### **Figure 2.**

*Visualisation of the process steps ranging from problem definition to validation. The Minifors 2 stirred bioreactor from Infors AG has been used as an example.*
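As a minimal sketch of this degree of automation, selecting Scotch in OpenFOAM only requires the number of subdomains to be specified in the case's `system/decomposeParDict` (the value 8 below is an arbitrary example; the usual `FoamFile` header is omitted for brevity):

```
// system/decomposeParDict: minimal entries for Scotch partitioning
numberOfSubdomains  8;       // number of domains = number of MPI processes
method              scotch;  // automated dual recursive bi-partitioning
```

The mesh is then partitioned with the `decomposePar` utility before the solver is started in parallel, e.g. `mpirun -np 8 interFoam -parallel`. Less automated methods such as `simple` or `hierarchical` additionally require the user to prescribe the number of subdivisions per coordinate direction.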

While parallelisation reduces the required computing time, higher degrees of parallelisation also require more data to be exchanged between the processors, which has a negative impact on computing time. **Figure 3** shows how the relative simulation time for modelling a stirred 3 L bioreactor changes as parallelisation increases (interFoam, OpenFOAM v9). With the hardware setup described in Seidel and Eibl [8], the relative simulation time can be described as a power function with an exponent of 0.817. If there were no losses due to communication, an exponent of 1 would be expected. Harasek et al. [119] also performed parallelisation studies with up to 1024 cores, from which an exponent of 0.93 could be determined. According to Haddadi et al. [116], parallelisation should be performed in such a way that there are between 50,000 and 100,000 cells per domain, whereby the number of cells per domain should be reduced as the complexity of the model increases.
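These two rules of thumb can be combined in a short calculation. The exponent of 0.817 and the 50,000–100,000 cells-per-domain guideline are taken from the studies cited above; the 3-million-cell mesh and the function names are hypothetical illustrations, not part of any cited work:

```python
def relative_speedup(n_cores: int, exponent: float = 0.817) -> float:
    """Speedup relative to a single core, modelled as the power law n^exponent.

    An exponent of 1 would correspond to ideal, loss-free parallelisation;
    0.817 is the value fitted for the 3 L stirred-bioreactor case.
    """
    return n_cores ** exponent


def suggested_domains(n_cells: int, cells_per_domain: int = 75_000) -> int:
    """Number of partitions following the 50,000-100,000 cells-per-domain
    guideline (75,000 is used here as an arbitrary midpoint)."""
    return max(1, round(n_cells / cells_per_domain))


# Hypothetical 3-million-cell mesh: how many domains, and what speedup
# does the fitted power law predict on 32 cores?
print(suggested_domains(3_000_000))    # → 40
print(f"{relative_speedup(32):.1f}x")  # → 17.0x (instead of the ideal 32x)
```

The gap between the predicted 17-fold and the ideal 32-fold speedup illustrates why adding cores beyond the cells-per-domain guideline yields diminishing returns.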

Of the publications listed in **Table 1**, only one-third of the authors made statements about the hardware used. Between 4 [120] and 504 [121] cores were used for the calculations. Only five of the authors stated that they had used an HPC system [44, 121–124], with the remaining authors having used desktop machines. The current versions of Ansys Fluent and Siemens Simcenter STAR-CCM+ emphasise the use of graphics processing units (GPUs) instead of central processing units for the calculations, and multiple GPUs can also be used. Benchmarks from Ansys and Siemens show that using GPUs can shorten simulation time while at the same time reducing purchasing costs and power consumption [125, 126]. Of the authors listed in **Table 1**, only those who used M-Star CFD stated that a GPU was used (M-Star CFD relies exclusively on GPUs). COMSOL, Autodesk CFD, Ansys CFX and OpenFOAM are not able to use GPUs for calculation by default. There are, however, a number of modifications, such as MixIT, that allow OpenFOAM to use GPUs [127, 128].

*Computational Fluid Dynamics for Advanced Characterisation of Bioreactors Used… DOI: http://dx.doi.org/10.5772/intechopen.109848*

#### **Figure 3.**

*Relative simulation time depending on the number of processors used. A stirred bioreactor, which was examined with OpenFOAM v9, was used as a test case (VOF model, transient simulation of 10 s). All simulations were performed three times to capture temporal variance. In each case, the Scotch algorithm was used for decomposition. The hardware setup is described in more detail in Seidel and Eibl [8].*

Current developments show that classical CFD simulations could be complemented or replaced by other techniques. Machine learning techniques can be used to accelerate simulations or to improve turbulence modelling [129], and physics-informed neural networks (PINNs) are increasingly used for their ability to perform calculations 200 times faster at the same degree of accuracy [130, 131]. Another technique, which currently has no real application in bioreactor modelling, is quantum CFD (QCFD) [132–135]. In quantum computing, the system can be in a superposition of multiple states at the same time. By adapting conventional algorithms to quantum computing, an enormous increase in speed could be achieved, and the use of turbulence models etc. would become obsolete [136].




*The bioreactors were subdivided according to their power input into stirred, wave-mixed and orbitally shaken systems. 3BEE, 3-Blade elephant ear stirrer; 3BSS, 3-Blade segment stirrer; A310, Hydrofoil A310; MI, Marine impeller; P, Paddle stirrer; PB, Pitched blade stirrer; RT, Rushton turbine; SD, Special design; ST, Smith turbine; the number after the label indicates how many stirrer blades were present.*

#### **Table 1.**

*Overview of process characterisations of bioreactors using CFD.*
