Power System Small-Signal Stability Enhancement Using Damping Controllers Designed Based on Evolutionary Algorithms

Komla Agbenyo Folly, Severus Panduleni Sheetekela and Tshina Fa Mulumba

#### Abstract

This chapter is concerned with the stability enhancement of a power system using power system stabilizers (PSSs) designed based on four evolutionary algorithms (EAs), namely, genetic algorithms (GAs), breeder genetic algorithm (BGA), population-based incremental learning (PBIL), and differential evolution (DE). GAs have been widely applied in many fields of engineering and science and have been shown to be a robust and powerful adaptive search algorithm. However, GAs are known to have several limitations. To deal with these limitations, many variant forms of GAs have been suggested, often tailored to specific problems. In this research, we investigated the performance of GA-PSS and three other EA-based PSSs (i.e., BGA-PSS, PBIL-PSS, and DE-PSS) in improving the small-signal stability of a power system. These EAs were selected on the basis of their simplicity, efficiency, and effectiveness in solving the optimization problem at hand. Frequency-domain and time-domain simulation results show that DE-PSS, PBIL-PSS, and BGA-PSS performed better than GA-PSS. Time-domain simulations suggest that, overall, DE-PSS performs better than PBIL-PSS and BGA-PSS in terms of undershoot and subsequent swings, albeit with a relatively large first-swing overshoot. The performances of BGA-PSS and PBIL-PSS are similar. GA-PSS, in turn, gives a better response than the conventional PSS (CPSS).

Keywords: breeder genetic algorithm, damping ratio, genetic algorithms, differential evolution, low-frequency oscillations, power-system stabilizer, population-based incremental learning

#### 1. Introduction

Over the past decades, low-frequency oscillatory modes have been a major concern to power system engineers [1]. These oscillatory modes, ranging from 0.1 to 3 Hz, tend to be poorly damped, especially in moderately to heavily loaded systems that are equipped with high-gain, fast-acting automatic voltage regulators (AVRs) [2, 3]. Generally, we distinguish two main oscillation modes: local and inter-area modes. Local modes (0.8–2 Hz) involve local generators oscillating against each other. Inter-area modes (0.1–0.8 Hz), on the other hand, are caused by groups of generators in one part of the system swinging against groups in another part of the interconnected power system. Compared to local modes, inter-area modes are generally the most critical modes that need to be damped [4, 5]. These modes are found in almost all interconnected power systems. If they are not adequately damped, the oscillations may sustain and grow, which may lead to a system blackout. Power system stabilizers (PSSs) have been proposed to modulate low-frequency oscillations and increase the damping of electromechanical modes [1, 2]. Tuning the PSS parameters is not a trivial task. Power utilities have preferred using conventional PSSs (CPSSs) designed around a nominal operating condition. The design of the CPSS is generally based on conventional control approaches such as root locus, phase compensation, and pole placement techniques [1–5]. However, since these approaches are not robust, the designed CPSS tends to deviate from optimal operation when the system moves away from the nominal operating condition. Therefore, new design approaches are required to design a PSS that can operate optimally under a wide range of operating conditions [3, 6]. Evolutionary algorithms (EAs) such as genetic algorithms (GAs) [7–12], differential evolution (DE) and its variants [13, 14], particle swarm optimization (PSO) [15], population-based incremental learning (PBIL) [16–19], and breeder genetic algorithms (BGAs) [11, 20–24] are efficient heuristic search methods that are capable of solving complex optimization problems.
They do not require the objective function to have properties such as continuity, smoothness, and differentiability. They have many advantages over traditional optimization methods and have attracted considerable attention in recent years. Many of these methods have been applied to power system damping controller design with encouraging results. In particular, GAs have been extensively used to solve global optimization problems in academia and are now being accepted by some industries [9]. DE, PBIL, and BGA are easy to implement yet efficient and robust in solving optimization problems. Therefore, they are considered in this work.

GAs are biologically motivated adaptive systems based on natural selection and genetics. GAs are generally used to solve optimization problems by exploiting a randomized search [7, 8]. Although GAs are seen to be robust and powerful adaptive search mechanisms, they have several drawbacks [9]. One of these drawbacks is related to "genetic drift," a phenomenon that prevents GAs from maintaining diversity in their population. Other issues include the lack of theoretical guidance for selecting optimal GA parameters such as population size, crossover rate, and mutation rate. Moreover, the natural selection approach used by GAs is not immune from failure [22]. Breeder genetic algorithm (BGA) has been proposed to cope with some of these drawbacks. It applies almost the same ideas as GA, except that it is based on artificial selection as practiced in animal breeding rather than on natural selection based on Darwinian evolution [23, 24]. Artificial selection (selective breeding) refers to the intentional breeding for certain qualities or a combination of qualities [23]. This is in contrast with natural selection, the process whereby organisms survive and produce offspring by naturally adapting to their environment. Generally, individuals in BGA are represented as real numbers instead of binary strings or integers. The main advantages of BGA over GA are its simple selection method and its fewer parameters. The major limitation of this algorithm is the likelihood of premature convergence, which could lead BGA to converge to a local optimum rather than the global one. To deal with the problem of premature convergence, an adaptive mutation is used [23, 24]. In this case, the mutation rate is not fixed but varies according to the convergence and performance of the population. This is the type of BGA that will be discussed later in this chapter.

Population-based incremental learning (PBIL) is a combination of GA and competitive learning. It extends the features of the evolutionary genetic algorithm (EGA) through the reexamination of the performance of the GA in terms of competitive learning [16–19]. It was originally proposed by Baluja [18, 19]. In PBIL, the crossover operator is removed, and the role of the population is redefined. PBIL works on probabilistic vectors (PVs), which control the random bit strings generated by PBIL. The PVs are used to create other vectors through competitive learning. The PV is then updated to increase the likelihood of producing solutions corresponding to the current best individual. It has been shown that PBIL is simpler than GA and in many cases performs better than GA and has less overhead [11, 16–19].

Differential evolution (DE) is a powerful stochastic optimizer whose search mechanism involves a differential mutation technique [12, 13, 25]. The algorithm is both simple and robust, with several variants exhibiting different tradeoffs between convergence speed and robustness. Most often DE outperforms its counterparts in efficiency and robustness [12–14, 25].

This chapter discusses the optimal design of power system stabilizers (PSSs) using four evolutionary algorithm (EA) techniques, namely, genetic algorithms (GAs), breeder genetic algorithm (BGA) with adaptive mutation, population-based incremental learning (PBIL), and differential evolution (DE). For comparison purposes, the conventional PSS (CPSS) is also included in this work. The performance and effectiveness of the PSSs in damping the electromechanical modes are investigated using both frequency-domain analysis and time-domain simulations. Simulation results show that all the EA-based PSSs (GA-PSS, BGA-PSS, PBIL-PSS, and DE-PSS) perform better than the CPSS for all the operating conditions considered. Frequency-domain results suggest that DE-PSS, PBIL-PSS, and BGA-PSS have similar performance in terms of the damping ratios that they provide. Time-domain simulations, however, suggest that, overall, DE-PSS performs slightly better than PBIL-PSS and BGA-PSS in terms of undershoot and subsequent swings, albeit with a slightly larger first-swing overshoot. GA-PSS is shown to give the worst performance among the EAs. The chapter is organized as follows: Sections 2–4 present overviews of BGA, PBIL, and DE, respectively; Section 5 discusses the system model; Section 6 is concerned with the objective function; Section 7 presents the design of the PSSs; Section 8 discusses the simulation results; and the conclusions are presented in Section 9.

#### 2. Overview of breeder genetic algorithm

As discussed previously, breeder genetic algorithm (BGA) is similar to genetic algorithms (GAs), except that it uses artificial selection and has fewer genetic parameters. Also, BGA uses a real-valued representation, as opposed to GAs, which mainly use binary and sometimes floating-point or integer representations. BGA is a versatile and effective function optimizer and has the advantage of being simpler than GA. To deal with the issue of premature convergence that is common with BGA, a modified version of BGA called adaptive mutation BGA is used in this work [11, 20, 23]. In the truncation selection method adopted here, the fittest T% of individuals are selected from the current population of N individuals and go through recombination and mutation to form the next generation; the rest of the individuals are discarded. The fittest individual in the population is automatically carried over to the next generation, and the remaining selected individuals go through recombination and mutation to form the rest of the next generation. The process is repeated until an optimal solution is obtained or the maximum number of iterations has been reached.
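The truncation selection step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the population layout, the fitness function, and the function name are hypothetical, while the elitism detail (the fittest individual always survives) follows the description above.

```python
import numpy as np

def truncation_select(population, fitness, t_percent):
    """Keep the fittest T% of the population; discard the rest.

    population: (N, d) array of real-valued individuals (BGA representation).
    fitness:    (N,) array, higher is better.
    Returns the selected parents, fittest first, so the very best individual
    can be carried over to the next generation unchanged (elitism).
    """
    n_keep = max(2, int(len(population) * t_percent / 100))
    order = np.argsort(fitness)[::-1]  # sort indices, best first
    return population[order[:n_keep]]

# toy usage: maximize -x^2 in one dimension
rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(10, 1))
fit = -(pop[:, 0] ** 2)
parents = truncation_select(pop, fit, t_percent=30)
```

The selected parents would then be recombined and mutated (Sections 2.1 and 2.2) to refill the population.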

#### 2.1 Recombination

Recombination is similar to a crossover in GAs. The adaptive mutation BGA proposed in this work allows various possible recombination methods to be used, each of them searching the space with a particular bias. Because we do not have prior knowledge as to which bias is likely to suit the optimization task, it is better to include several recombination methods and allow selection to do the elimination. Two recombination methods were used in this work: volume and line recombination [11].

In volume recombination, a random vector r of the same length as the parents is generated, and each component of the child is produced by the following expression:

$$
c\_i = r\_i a\_i + (1 - r\_i) b\_i \tag{1}
$$

where ci is a component of the child, ai and bi are the corresponding components of the two parents, and ri is a component of the random vector.

The child is therefore located at a point inside the hyper-box defined by the parents, as shown in Figure 1.

In line recombination, a single uniformly distributed random number r is generated between 0 and 1, and the child is obtained as shown below [23].

$$
c\_i = r a\_i + (1 - r) b\_i \tag{2}
$$

where ci, ai, and bi are defined as in Eq. (1).
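Both recombination methods can be sketched directly from Eqs. (1) and (2). This is an illustrative sketch only; the function names and parent vectors are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def volume_recombination(a, b):
    # Eq. (1): one random weight per component, so the child lies
    # inside the hyper-box spanned by the two parents.
    r = rng.random(len(a))
    return r * a + (1 - r) * b

def line_recombination(a, b):
    # Eq. (2): a single random weight for all components, so the child
    # lies on the line segment joining the two parents.
    r = rng.random()
    return r * a + (1 - r) * b

a = np.array([0.0, 0.0])
b = np.array([1.0, 2.0])
child_v = volume_recombination(a, b)
child_l = line_recombination(a, b)
```

With these two parents, the line-recombination child always satisfies y = 2x, whereas the volume-recombination child can land anywhere in the rectangle spanned by a and b.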

#### 2.2 Adaptive mutation

As mentioned before, one of the main concerns in GA has been the issue of premature convergence. This issue is also encountered in the classical BGA. The problem can be reduced in BGA by using an adaptive mutation [11, 21, 23].

Figure 1. Volume recombination.

Diversity in the population is preserved by adding small, normally distributed, zero-mean random numbers to each child before inserting it into the population. The random numbers have a certain standard deviation R [18]. The value of R should be selected carefully because it is critical to the convergence of the optimization. If the value of R is too small, the search might suffer premature convergence, while a value of R that is too high might be detrimental to convergence to the optimum [11, 23]. The adaptive mutation method used here allows us to determine an appropriate value of R. To achieve this, the population is divided into two halves, P1 and P2. P1 is assigned a mutation rate of double R (2R), while P2 is assigned a mutation rate of half R (R/2). The mutation rate R is then adjusted depending on the performance of each half of the population. If P1 yields the fitter individuals, the mutation rate is increased by a certain percentage (10% in this case); similarly, if P2 yields the fitter individuals, the mutation rate is reduced by the same percentage. The pseudo code for BGA with adaptive mutation can be found in [11, 23].
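The two-halves adaptation rule above can be sketched as follows. This is a minimal sketch under the stated 10% step assumption; the function names and the way fitness is summarized per half are hypothetical.

```python
import numpy as np

def adapt_mutation_rate(R, best_fit_p1, best_fit_p2, step=0.10):
    """One adaptation step for the mutation standard deviation R.

    Half P1 mutates with 2R and half P2 with R/2; whichever half
    produced the fitter individuals pulls R in its direction by `step`.
    """
    if best_fit_p1 > best_fit_p2:   # large mutations helped: increase R by 10%
        return R * (1 + step)
    if best_fit_p2 > best_fit_p1:   # small mutations helped: decrease R by 10%
        return R * (1 - step)
    return R                        # tie: leave R unchanged

def mutate(children, R, rng):
    # Add zero-mean Gaussian noise with standard deviation R to each child.
    return children + rng.normal(0.0, R, size=children.shape)

rng = np.random.default_rng(0)
mutated = mutate(np.zeros((4, 2)), 0.5, rng)
```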

#### 3. Overview of population-based incremental learning algorithm

Population-based incremental learning (PBIL) is a combination of competitive learning derived from artificial neural networks and genetic algorithms [18, 19]. There is no crossover operator in PBIL; instead, the probability vector is updated using the solution with the highest fitness value [18]. The values of the probability vector are initially set to 0.5 to ensure that the probabilities of generating 0 or 1 are equal. As the search progresses, these values move away from 0.5, toward either 0.0 or 1.0.

#### 3.1 Learning rate

Learning in PBIL is based on using the current probability distribution to create N individuals. The probability vector is updated using the best individual so far, thereby increasing the probability of producing solutions similar to the current best solutions. Learning rate is required to update the probability vector. The selection of the learning rate value should be made with care as it determines how fast or slow the prototype vector is shifted toward the best individuals. A larger rate speeds up convergence, but it reduces the function space to be searched, while a smaller rate will slow down the convergence, even though it increases the exploration of a bigger search space, thereby increasing the likelihood of better optimal solutions. The (positive) update rule of the probability vector is given as:

$$PV\_i = (1 - LR)PV\_i + (LR)B\_i \tag{3}$$

where PV is the probability vector, LR ∈ [0, 1] is the learning rate, B is the best solution, and i denotes each locus (i = 1, 2, …, l), where l is the length of the binary encoding.
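The update rule of Eq. (3) can be sketched in a few lines. The probability values and best bit string below are hypothetical toy data; the learning rate of 0.1 is only an example.

```python
def update_pv(pv, best, lr=0.1):
    # Eq. (3): move each probability toward the corresponding bit
    # of the best individual found so far.
    return [(1 - lr) * p + lr * b for p, b in zip(pv, best)]

pv = [0.5, 0.5, 0.5, 0.5]   # initial probabilities: 0 and 1 equally likely
best = [1, 0, 1, 1]         # best bit string of the current generation
pv = update_pv(pv, best, lr=0.1)
```

After one update, each probability has moved 10% of the way toward the corresponding bit of the best solution.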

#### 3.2 Mutation

Like in GA, mutation is used in PBIL to maintain diversity in the population. Mutation in PBIL can be performed in two ways: either on the sample solutions generated or on the PV. In this study, the mutation is performed on the PV; a forgetting factor is used to relax the probability vector toward a neutral value of 0.5 [11, 16, 17], as shown in the equation below.

$$PV\_i = PV\_i - FF(PV\_i - 0.5) \tag{4}$$

where FF is the forgetting factor that was chosen to be 0.005. The pseudo code for PBIL can be found in [17–19].
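The relaxation of Eq. (4) can be sketched as follows; the probability values are hypothetical, and FF = 0.005 is the value chosen in this study.

```python
def relax_pv(pv, ff=0.005):
    # Eq. (4): nudge each probability back toward the neutral value 0.5,
    # which preserves diversity in the bit strings generated from the PV.
    return [p - ff * (p - 0.5) for p in pv]

pv = relax_pv([0.9, 0.1, 0.5])
```

Probabilities above 0.5 shrink slightly, those below 0.5 grow slightly, and 0.5 itself is a fixed point.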

#### 4. Overview of differential evolution

Differential evolution (DE) can be defined as a parallel direct search method that uses a population of points to search for a global minimum or maximum of a function over a wide search space [13]. It is a simple and efficient adaptive scheme for global optimization over continuous space. DE is designed to efficiently solve nondifferentiable and nonlinear functions and yet retains its simplicity and good convergence to a global optimum [12]. Similar to most EAs, DE explores the search space by maintaining a population of candidate solutions and by using Darwinian evolution theory to direct its search toward prospective areas. The candidates with better fitness values survive and enter the next generation [12–14, 25]. The process continues until the termination criterion is satisfied. It should be mentioned that DE has proved to be one of the best among EAs. It was able to secure competitive rankings in CEC competitions [25]. One of the main advantages of DE over GA is the mutation scheme and the selection process. Unlike GAs where the best solutions are selected for the next generation, in DE, all solutions have an equal chance of being selected as parents independently of their fitness values.

#### 4.1 Mutation

In the context of DE, "mutation" is defined as a process of taking a small random sample of vectors from the current population and combining them algebraically to form a new vector, which is referred to as a mutant vector [12, 13]. In the so-called classical version of DE, the mutant vector is formed as follows:

$$V\_{i, \mathbf{g}} = X\_{r1, \mathbf{g}} + F(X\_{r2, \mathbf{g}} - X\_{r3, \mathbf{g}}) \tag{5}$$

where i, r1, r2, and r3 are all distinct indices in the interval [1, Np]. The mutation scale factor F is a positive real number between 0 and 2 that controls the rate at which the population evolves [13]. The vector Xr1,g is the base vector, Xr2,g − Xr3,g is the difference vector, g = 0, 1, …, gmax denotes the generation, and Np is the population size.

The above process is repeated Np-times to constitute a mutant population. In the classical version, each base vector is used only once per generation, in order to preserve diversity in the population. The classical version described above is designated as "DE/rand/1" and is widely used, although it has the drawback of relatively slow convergence [12]. Some alternative mutation strategies to the classical version are given below [12–14, 25]:

DE/best/1: This strategy resembles DE/rand/1, except that all mutants use the best vector in the current generation as the base vector:


$$V\_{i, \mathbf{g}} = X\_{\text{best}, \mathbf{g}} + F(X\_{r1, \mathbf{g}} - X\_{r2, \mathbf{g}}) \tag{6}$$

where Xr1 and Xr2 are distinct random vectors and Xbest is the best individual in the current population.

This strategy has faster convergence than DE/rand/1, but often fails to reach the global optimum [12].

DE/best/2: This strategy uses two mutation differences to create a mutant vector:

$$V\_{i, \mathbf{g}} = X\_{\text{best}, \mathbf{g}} + F(X\_{r1, \mathbf{g}} - X\_{r2, \mathbf{g}}) + F(X\_{r3, \mathbf{g}} - X\_{r4, \mathbf{g}}) \tag{7}$$

where Xr1, Xr2, Xr3, and Xr4 are distinct random vectors and Xbest is the best individual in the current population. This strategy attempts to balance convergence speed and robustness. However, it may still converge to a local rather than the global optimum, because the base vector Xbest draws the population toward itself [13].

DE/local-to-best/2: This strategy resembles DE/best/2 in that two mutation differences are used, but the base vector is randomly sampled and the "best" vector is used in one of the scaled differences:

$$V\_{i, \mathbf{g}} = X\_{r1, \mathbf{g}} + F(X\_{\text{best}, \mathbf{g}} - X\_{r2, \mathbf{g}}) + F(X\_{r3, \mathbf{g}} - X\_{r4, \mathbf{g}}) \tag{8}$$

This approach has similar convergence properties to DE/best/2 [13].

DE/rand/2: This strategy samples 5 random vectors in the current generation to form two random differences that are scaled and added to the base vector:

$$V\_{i, \mathbf{g}} = X\_{r1, \mathbf{g}} + F(X\_{r2, \mathbf{g}} - X\_{r3, \mathbf{g}}) + F(X\_{r4, \mathbf{g}} - X\_{r5, \mathbf{g}}) \tag{9}$$

where r1 ≠ r2 ≠ r3 ≠ r4 ≠ r5. This approach converges more slowly but is very robust [13].

DE/rand/2 has been used in this work because our objective is to tune the PSS time constants for robust performance.
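The DE/rand/2 mutation of Eq. (9) can be sketched as follows. This is an illustrative sketch only: the function name and population layout are hypothetical, and the classical once-per-generation base-vector bookkeeping mentioned earlier is omitted for brevity.

```python
import numpy as np

def de_rand_2_mutants(pop, F, rng):
    """Build a mutant population with the DE/rand/2 rule (Eq. (9)).

    For each index i, five mutually distinct indices r1..r5 (also distinct
    from i) are sampled, and two scaled difference vectors are added to
    the base vector X_r1.
    """
    n = len(pop)
    mutants = np.empty_like(pop)
    for i in range(n):
        r1, r2, r3, r4, r5 = rng.choice(
            [j for j in range(n) if j != i], size=5, replace=False)
        mutants[i] = pop[r1] + F * (pop[r2] - pop[r3]) + F * (pop[r4] - pop[r5])
    return mutants

rng = np.random.default_rng(2)
pop = rng.uniform(-1, 1, size=(8, 2))
mutants = de_rand_2_mutants(pop, F=0.8, rng=rng)
```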

#### 4.2 Crossover

In DE, "crossover" refers to the process of creating a new vector (called the trial vector) by combining a mutant vector with a target vector [13]. The target vector for the mutant vector Vi,g is Xi,g. The trial vector Ui = [u1,i, u2,i, …, uD,i] is then obtained as follows:

$$u\_{j,i,g} = \begin{cases} v\_{j,i,g} \text{ if } \left(rand\_j(0, 1) \le CR \text{ or } j = j\_{rand}\right), \; j = 1, 2, \dots, D \\\\ x\_{j,i,g} \text{ otherwise} \end{cases} \tag{10}$$

where CR ∈ [0, 1] is the crossover probability: CR is the fraction of parameter values copied from the mutant vector, and 1 − CR is the fraction of parameter values copied from the target vector. To determine whether a parameter is copied from the mutant or the target vector, a uniformly distributed random number randj in [0, 1] is generated and compared to the predefined value of CR. In addition, a random index jrand ∈ [1, D] is chosen and the corresponding mutant parameter is always copied, to ensure that the trial vector is not a duplicate of the target vector.
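The binomial crossover of Eq. (10) can be sketched as follows; the function name and toy vectors are hypothetical.

```python
import numpy as np

def binomial_crossover(target, mutant, CR, rng):
    """Eq. (10): build the trial vector from the mutant and target vectors.

    Each parameter is taken from the mutant with probability CR; the
    parameter at a random index j_rand is always taken from the mutant,
    so the trial vector never duplicates the target.
    """
    D = len(target)
    j_rand = rng.integers(D)
    mask = rng.random(D) <= CR   # True -> copy from mutant
    mask[j_rand] = True          # guarantee at least one mutant parameter
    return np.where(mask, mutant, target)

rng = np.random.default_rng(3)
target = np.zeros(5)
mutant = np.ones(5)
trial = binomial_crossover(target, mutant, CR=0.9, rng=rng)
```

With an all-zeros target and all-ones mutant, the trial vector is a 0/1 mask showing exactly which parameters came from the mutant.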

#### 4.3 Selection

This process consists of choosing the individuals that will enter the next generation. DE employs a "one-to-one survivor selection," which consists of comparing each trial vector to its corresponding target vector. Mathematically, the vector Xi,g+1 in the (g+1)th generation is obtained from the trial vector Ui,g and the target vector Xi,g as follows, in the case of a minimization problem:

$$X\_{i,g+1} = \begin{cases} U\_{i,g} \text{ if } f\left(U\_{i,g}\right) \le f\left(X\_{i,g}\right) \\ X\_{i,g} \text{ otherwise} \end{cases} \tag{11}$$

This process ensures that the best vector at each index is retained, which in turn guarantees that the best-so-far solution is kept. Once the selection is performed for all target vectors in the current generation g, the processes of mutation, crossover, and selection are repeated with the Np vectors in the (g+1)th generation. This is iterated until a termination criterion is satisfied.
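Putting the three operators together, a complete DE loop can be sketched on a toy minimization problem. This is a minimal sketch, not the tuning code used in this chapter: it uses a DE/rand/1-style mutant with the one-to-one survivor selection of Eq. (11), and all names, bounds, and parameter values are hypothetical.

```python
import numpy as np

def de_minimize(f, bounds, np_=20, F=0.5, CR=0.9, gens=100, seed=0):
    """Minimal DE loop: mutation, binomial crossover, one-to-one selection."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    X = rng.uniform(lo, hi, size=(np_, len(lo)))     # initial population
    fX = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(np_):
            # DE/rand/1 mutant, clipped to the search bounds
            r1, r2, r3 = rng.choice([j for j in range(np_) if j != i],
                                    size=3, replace=False)
            V = np.clip(X[r1] + F * (X[r2] - X[r3]), lo, hi)
            # binomial crossover (Eq. (10))
            j_rand = rng.integers(len(lo))
            mask = rng.random(len(lo)) <= CR
            mask[j_rand] = True
            U = np.where(mask, V, X[i])
            # one-to-one survivor selection (Eq. (11))
            fU = f(U)
            if fU <= fX[i]:
                X[i], fX[i] = U, fU
    best = np.argmin(fX)
    return X[best], fX[best]

# toy usage: minimize the sphere function on [-5, 5]^2
x_best, f_best = de_minimize(lambda x: float(np.sum(x**2)),
                             ([-5, -5], [5, 5]))
```

In the PSS design problem, f would instead evaluate the (negated) minimum damping ratio of the closed-loop system for a candidate parameter set.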

#### 5. System model

The power system considered in this chapter is the two-area four-machine power system shown in Figure 2 [1]. Each machine is represented by a detailed sixth-order set of differential equations. The machines are equipped with simple first-order exciter systems, as given in the Appendix [11]. The system consists of two similar areas connected by a tie-line. Each area contains two coupled conventional generating units, each rated 900 MVA and 20 kV. The generator parameters can be found in [1, 11]. The dynamics of the system are described by a set of nonlinear differential equations; for the purpose of controller design, these equations are linearized around the nominal operating condition. The linearized equations of the system are given by:

$$\begin{aligned} \dot{x} &= A\_o x + B\_o u \\ y &= C\_o x + D\_o u \end{aligned} \tag{12}$$

Figure 2. Two-area system model.


where x is the state vector, y is the system output, and u is the control input. Ao, Bo, Co, and Do are constant matrices of appropriate dimensions.

Several operating conditions were considered during the design stage of the controller. However, only three operating conditions are listed in Table 1 for simplicity. Case 1 is the nominal operating condition, in which approximately 146 MW is transferred from area 1 to area 2 via the two tie lines, with each line carrying half of the total power. Under these conditions, the load on bus 4 was 1137 MW, while the load on bus 14 was 1367 MW. Case 2 is the moderate load condition, where about 409 MW of real power is transferred from area 1 to area 2. For this case, the load on bus 4 was 967 MW, while the load on bus 14 was 1767 MW. Case 3 is the heavy load condition (worst-case scenario), where approximately 512 MW of power is transferred from area 1 to area 2. For this case, the load on bus 4 was 876 MW, while the load on bus 14 was 1876 MW. It should be mentioned that the system exhibits inter-area oscillatory modes, because the flow of power between the two areas causes them to oscillate against each other. In addition, two local modes were also observed, one in each area. However, in this chapter we concentrate only on the inter-area modes, since they are the most critical and difficult to control. Table 2 shows the open-loop eigenvalues of the inter-area modes. It can be seen that without PSSs, the inter-area mode was stable but poorly damped for case 1, with a damping ratio of 0.011. The system became unstable for case 2, and the instability became more pronounced for case 3, with damping ratios of −0.0057 and −0.0130, respectively. This suggests that as the active power transfer between the two areas increases, the oscillations grow and the system becomes unstable. The frequency of oscillation of the inter-area modes ranges from 0.588 Hz to 0.634 Hz.

Therefore, a supplementary controller known as a power system stabilizer (PSS) will be required to damp the system's oscillations. The block diagram of the PSS is shown in Figure A.1 in the Appendix.

#### 6. Objective function

The objective is to optimize the parameters of the PSSs simultaneously such that the controllers can stabilize the system over a wide range of operating conditions. The parameters to be optimized are K (the gain of the PSS) and the lead-lag time constants T1, T2, T3, and T4. The objective function is to maximize the minimum damping ratio over a wide range of operating conditions. The same objective function was used for GA, BGA, PBIL, and DE. It is given as:


Table 1. Selected operating conditions.


Table 2. Open-loop eigenvalues of the inter-area modes for selected operating conditions.

$$val = \max\left(\min\left(\zeta\_{ij}\right)\right) \tag{13}$$

where

i = 1, 2, …, n; j = 1, 2, …, m

$$\zeta\_{ij} = \frac{-\sigma\_{ij}}{\sqrt{\sigma\_{ij}^2 + \omega\_{ij}^2}} \tag{14}$$

where ζij is the damping ratio of the ith eigenvalue under the jth operating condition, n is the number of eigenvalues, m is the number of operating conditions, and σij and ωij are the real part and the imaginary part (frequency) of the eigenvalue, respectively.
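The inner part of the objective function, the worst damping ratio over all modes and operating conditions given by Eqs. (13) and (14), can be sketched as follows. The eigenvalue sets are hypothetical toy data; the outer maximization over candidate PSS parameter sets is performed by the EA itself.

```python
import numpy as np

def damping_ratios(eigvals):
    # Eq. (14): zeta = -sigma / sqrt(sigma^2 + omega^2) for each eigenvalue.
    sigma, omega = eigvals.real, eigvals.imag
    return -sigma / np.sqrt(sigma**2 + omega**2)

def worst_damping(eig_sets):
    # Inner min of Eq. (13): the fitness of one candidate PSS is its worst
    # (minimum) damping ratio over all modes and all operating conditions;
    # the EA then maximizes this value over candidates.
    return min(damping_ratios(e).min() for e in eig_sets)

# toy usage: two operating conditions, two complex modes each
cond1 = np.array([-0.5 + 4.0j, -1.0 + 6.0j])
cond2 = np.array([-0.2 + 4.2j, -0.8 + 5.5j])
val = worst_damping([cond1, cond2])
```

A positive value means every mode is stable under every condition considered; the closer it is to zero, the more poorly damped the critical mode.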

#### 7. Design of the PSSs

In total, 10 PSS parameters were optimized (i.e., 5 parameters per area) for generators 1–4. The parameters optimized are K, T1, T2, T3, and T4. The washout time constant Tw was set at 10 seconds and was not optimized, since Tw is not critical to the design. The following parameter domain constraints were considered when designing the PSSs:

$$0 < \mathbf{K} \le 20$$

$$0.001 \le T\_i \le 5$$

where K and Ti (i = 1, 2, 3, 4) denote the controller gain and the lead–lag time constants, respectively.
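Assuming the standard washout-plus-two-lead-lag PSS structure implied by these parameters (the block diagram is in Figure A.1), one candidate controller and its frequency response can be sketched as follows. The function names are hypothetical, and the transfer-function form is an assumption based on the parameter set K, T1–T4, Tw described above.

```python
import numpy as np

def pss_freq_response(s, K, T1, T2, T3, T4, Tw=10.0):
    """Frequency response of the assumed PSS structure:
    H(s) = K * (s*Tw/(1 + s*Tw)) * ((1 + s*T1)/(1 + s*T2))
             * ((1 + s*T3)/(1 + s*T4))
    i.e., a washout stage followed by two lead-lag stages.
    """
    washout = s * Tw / (1 + s * Tw)
    lead_lag = ((1 + s * T1) / (1 + s * T2)) * ((1 + s * T3) / (1 + s * T4))
    return K * washout * lead_lag

def random_candidate(rng):
    # Sample one candidate within the design bounds 0 < K <= 20,
    # 0.001 <= Ti <= 5 (the EA search space).
    K = rng.uniform(1e-6, 20.0)
    T = rng.uniform(0.001, 5.0, size=4)
    return K, T

# evaluate one candidate near the inter-area frequency range (~0.6 Hz)
rng = np.random.default_rng(4)
K, (T1, T2, T3, T4) = random_candidate(rng)
H = pss_freq_response(1j * 2 * np.pi * 0.6, K, T1, T2, T3, T4)
```

The washout stage gives zero gain at DC, so the PSS acts only on oscillatory deviations, not on steady-state operating points.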

For comparison purposes, a CPSS was also designed using the phase compensation technique. Details can be found in [1, 2].

#### 7.1 Parameters of GAs, BGA, PBIL, and DE

The parameters used in the optimization for GAs, BGA, PBIL, and DE are shown in Table 3.

Table 3. Parameters used in GA, BGA, PBIL, and DE.

An observation of the parameters given in Table 3 shows that PBIL uses the fewest parameters: there is no crossover or selection in PBIL, in contrast to BGA, GA, and DE. In addition, 500 generations were used in the PBIL optimization to allow adequate learning to take place, because PBIL, which works by learning from the previous best individuals, takes time to explore the search space. Another difference is the way in which the initial population is generated. In GA, BGA, and DE, the initial population is selected randomly, while in PBIL the role of the population is redefined using probability vectors (PVs). It should be mentioned that a population size of 50 was also tested in PBIL and was found to yield results similar to those of the population size of 100. However, in this work a population of 100 was used.

#### 7.2 Conventional PSS

The parameters of the conventional PSS (CPSS) were tuned at the nominal operating condition using the phase compensation method and trial and error approach. Details of this approach can be found in [1–3].

#### 8. Simulation results

#### 8.1 Fitness values

Figures 3–6 show the fitness value (minimum damping ratio) of the system when GA, BGA, PBIL, and DE are used in the optimization. The final value obtained from the GA optimization is 0.1867, compared to 0.205, 0.2095, and 0.227 for BGA, PBIL, and DE, respectively. GA and BGA were run for 120 generations, DE for 180 generations, and PBIL for 500 generations. Since a smaller population was used for DE, it was decided to increase its number of generations. The reason for using 500 generations in PBIL is that it starts to settle only around 300 generations, and therefore a longer run is needed.

Figure 3. Fitness value curve from the GA optimization.

Figure 4. Fitness value curve from the BGA optimization.

#### 8.2 Eigenvalue analysis

Table 4 shows the inter-area modes of the system with the PSSs. It can be seen that with the PSSs, the inter-area modes are very well damped compared to the open-loop system in Table 2. CPSS performs adequately for the nominal operating condition. The damping ratios provided by the CPSS under cases 1, 2, and 3 are 0.1666, 0.1442, and 0.1339, respectively. BGA-PSS provides damping ratios of 0.2321, 0.2393, and 0.2412 for cases 1, 2, and 3, respectively. On the other hand, PBIL-PSS and DE-PSS provide damping ratios of 0.2341 and 0.2377, respectively, for case 1; 0.2387 and 0.2321, respectively, for case 2; and 0.2385 and 0.23, respectively, for case 3.

It is observed that PBIL-PSS, DE-PSS, and BGA-PSS provide similar damping ratios for all operating conditions considered. In case 1, DE provides the best damping ratio, whereas BGA provides the best damping ratios for cases 2 and 3. Among the evolutionary algorithm-based PSSs, GA-PSS provides the lowest damping ratios: 0.2029, 0.2013, and 0.1993 for cases 1, 2, and 3, respectively.

Figure 7 shows the spread of the eigenvalues for the system equipped with the different PSSs. The damping provided by the CPSS is the lowest of all the PSSs. Among the EA-based PSSs, GA-PSS provides the least damping. The damping provided by PBIL-PSS, BGA-PSS, and DE-PSS is very similar and higher than that provided by GA-PSS.


Figure 5. Fitness value curve from the PBIL optimization.

Figure 6. Fitness value curve from DE optimization.


Table 4. Inter-area modes and the respective damping ratios in brackets.

#### 8.3 Small disturbance

To investigate the performance of the PSSs under small disturbance, a small disturbance of 5% step response is applied to the reference voltage of generator 2 in area 1. The responses of the active power output of generators 2 and 3 are are shown in

Figure 7. Spread of the eigenvalues for the different PSSs-nominal condition.

Figures 8–13 for cases 1, 2, and 3, respectively. It can be seen that the system is well damped across all three operating conditions when it is equipped with DE-PSS, BGA-PSS, GA-PSS, or PBIL-PSS. The CPSS gives the worst performance.

Figures 8 and 9 show the active power output responses of generators 2 and 3, respectively, for case 1. The system equipped with GA-PSS, BGA-PSS, DE-PSS, and

Figure 8. Response of G2 under the 5% step change in Vref of G2 – Case 1.


Figure 9. Response of G3 under the 5% step change in Vref of G2 – Case 1.

Figure 10. Response of G2 under the 5% step change in Vref of G2 – Case 2.

PBIL-PSS has a similar settling time of approximately 4 sec., whereas the system equipped with CPSS has a longer settling time of around 6 sec. DE-PSS gives the best performance in terms of undershoot and the amplitude of subsequent swings, albeit with a relatively large first-swing overshoot, as seen in Figure 8. This overshoot can be attributed to the high gain of the controller; note that DE-PSS's gain has almost reached the allowable maximum value [20]. The performance of BGA-PSS is comparable to that of PBIL-PSS. Compared with the other EA-based PSSs, GA-PSS gives the worst performance; however, it still performed better than the CPSS. In Figure 9, BGA-PSS gives a slightly higher first-swing overshoot, but the subsequent swings are well damped. Overall, CPSS gives the worst performance.

Figures 10 and 11 show the active power responses of generators 2 and 3, respectively, for case 2. It can be seen that the CPSS has a longer settling time of around 7 sec., compared to around 4 sec. for the EA-based PSSs. This suggests that the oscillations have increased in case 2 compared to case 1. The EA-based PSSs are able to damp the oscillations adequately compared to the CPSS. In terms of undershoot and subsequent swings, DE-PSS gives the best responses, albeit with a relatively large first-swing overshoot, as seen in Figure 10. The performances of BGA-PSS and PBIL-PSS are similar. Overall, CPSS gives the worst performance, followed by GA-PSS.

Figures 12 and 13 show the active power responses of generators 2 and 3, respectively, for case 3. The system response is similar to case 2, except that the oscillations have now increased further. The system equipped with the CPSS settled around 10 sec. (see Figure 13), showing that the performance of the CPSS has now deteriorated significantly. On the other hand, the performances of GA-PSS, BGA-PSS, PBIL-PSS, and DE-PSS have deteriorated only slightly, which means that the EA-based PSSs are more robust. The EA-based PSSs have similar settling times of approximately 6.5 sec. Although DE-PSS has a larger first-swing overshoot, as seen in Figure 12, it gave the best responses in terms of undershoot and subsequent swing amplitudes, followed by BGA-PSS and PBIL-PSS. The performance of GA-PSS, although better than that of CPSS, is not as good as that of the other EA-based PSSs.
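The settling times and overshoots discussed above can be extracted numerically from a sampled step response. The sketch below uses a synthetic, lightly damped second-order-like response (not the chapter's simulation data) and one common definition of settling time: the last instant the response leaves a ±2% band around its final value.

```python
import numpy as np

def step_metrics(t, y, band=0.02):
    """Settling time (last exit from a +/-band around the final value)
    and percentage peak overshoot, from a sampled step response."""
    y_final = y[-1]
    outside = np.nonzero(np.abs(y - y_final) > band * abs(y_final))[0]
    settle = t[outside[-1]] if outside.size else t[0]
    overshoot = (np.max(y) - y_final) / abs(y_final) * 100.0
    return settle, overshoot

# Synthetic response with zeta = 0.2, wn = 3 rad/s (illustrative only)
t = np.linspace(0, 10, 2001)
zeta, wn = 0.2, 3.0
wd = wn * np.sqrt(1 - zeta**2)
y = 1 - np.exp(-zeta * wn * t) * np.cos(wd * t)
ts, os_pct = step_metrics(t, y)
print(f"settling time ~ {ts:.2f} s, overshoot ~ {os_pct:.1f} %")
```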

Figure 11. Response of G3 under the 5% step change in Vref of G2 – Case 2.


Figure 12. Response of G2 under the 5% step change in Vref of G2 – Case 3.

Figure 13. Response of G3 under the 5% step change in Vref of G2 – Case 3.

#### 8.4 Large disturbance

A large disturbance was considered by applying a three-phase fault at bus 3 for 0.1 seconds. The fault was cleared by removing one of the transmission lines between bus 3 and bus 13. After the fault was cleared, the system settled to a different operating condition with only one tie line transmitting power from area 1 to area 2, which means the post-fault system is weaker than it was before the fault. Figures 14 and 15 show

Figure 14. Electric power output of generator 3 following a three-phase fault on bus 3 for case 1.

Figure 15. Electric power output of generator 3 following a three-phase fault on bus 3 for case 2.

the electric power output of generator 3 for case 1 and case 2, respectively. The responses for case 3 are not shown because the system was unable to survive this large disturbance after the fault was cleared. It can be seen from Figure 14 (case 1) that the output power of generator 3 has a high overshoot in the first swing after the fault was cleared but settled down within a few seconds, with all the PSSs providing adequate damping to stabilize the system. However, when the power transferred from area 1 to area 2 increased, the CPSS was unable to maintain the stability of the system, as seen in Figure 15 (case 2). On the other hand, all the EA-based PSSs were able to stabilize the system, which suggests that they are more robust than the CPSS.


#### 9. Conclusions

An optimal PSS design for small-signal stability improvement of a multi-machine power system using four evolutionary algorithms (GA, BGA, PBIL, and DE) has been presented. Frequency-domain and time-domain simulations have been presented to show the effectiveness of the EA-based PSSs in damping low-frequency oscillations. In the frequency domain, the performances of BGA-PSS, PBIL-PSS, and DE-PSS are comparable and better than that of GA-PSS for all cases investigated. Time-domain simulations show that DE-PSS performs better than BGA-PSS and PBIL-PSS in terms of undershoot and subsequent swings, albeit with a relatively large first-swing overshoot, which can be attributed to the high gain of the controller. One way to deal with this overshoot is to reduce the gain of the controller; however, this could also affect the damping. GA-PSS gives the worst performance among the EA-based PSSs, but it still performed better than the CPSS. Designing the PBIL-PSS required more generations than GA-PSS, BGA-PSS, and DE-PSS: since PBIL works by learning from the previous best individual, it takes time for the algorithm to explore the search space. Compared to the EA-based PSSs, the CPSS designed using the conventional method performed poorly and is not robust. Further research will be directed at improving the EAs by self-adapting their parameters and using multi-objective functions in the optimization.

#### Acknowledgements

This research was funded in part by the National Research Foundation (NRF) of South Africa, Grant UID 118550.

#### Appendix

#### Generator and automatic voltage regulator (AVR) equations

$$\frac{d}{dt}E\_{fd} = \frac{K\_A}{T\_A} \left(V\_{ref} - V\_t\right) - \frac{E\_{fd}}{T\_A}$$

where KA and TA are the gain and time constant of the AVR. Vt is the terminal voltage of the generator. In this work, KA = 200 and TA = 0.05 sec.
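As an illustration of the AVR dynamics, the first-order equation above can be integrated numerically with forward Euler. K_A and T_A are the chapter's values; the per-unit voltages V_ref and V_t are hypothetical and held constant for the sketch.

```python
# Forward-Euler integration of the AVR equation
#   dE_fd/dt = (K_A * (V_ref - V_t) - E_fd) / T_A
# K_A and T_A are the chapter's values; the voltages below are
# hypothetical per-unit quantities held constant for this sketch.
K_A, T_A = 200.0, 0.05
V_ref, V_t = 1.05, 1.00
dt, steps = 1e-4, 20000          # 2 s of simulated time
E_fd = 0.0
for _ in range(steps):
    E_fd += dt * (K_A * (V_ref - V_t) - E_fd) / T_A
print(E_fd)                      # settles near K_A*(V_ref - V_t) = 10 p.u.
```

At steady state, dE_fd/dt = 0 gives E_fd = K_A(V_ref − V_t), which the simulation approaches after a few multiples of T_A.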

#### PSS block diagram

where K is the gain of the PSS, T1 to T4 are lead/lag time constants, and Tw is the washout time constant. T1 and T2 form the first lead/lag block, while T3 and T4 form the second lead/lag block of the PSS.


Figure A.1. PSS block diagram.
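Based on the description above (a gain, a washout stage, and two lead/lag blocks), and assuming the conventional single washout stage, the PSS transfer function takes the form:

$$H\_{PSS}(s) = K \cdot \frac{sT\_w}{1 + sT\_w} \cdot \frac{1 + sT\_1}{1 + sT\_2} \cdot \frac{1 + sT\_3}{1 + sT\_4}$$

The washout term acts as a high-pass filter so that the PSS responds only to oscillations and not to steady-state changes, while the lead/lag stages provide the phase compensation.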

### Author details

Komla Agbenyo Folly\*, Severus Panduleni Sheetekela and Tshina Fa Mulumba University of Cape Town, South Africa

\*Address all correspondence to: komla.folly@uct.ac.za

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


#### References

[1] Kundur P. Power System Stability and Control. USA: Prentice-Hall; 1994

[2] Klein M, Rogers GJ, Kundur P. A fundamental study of inter-area oscillations in power systems. IEEE Transactions on Power Systems. 1991; 6(3):914-921

[3] Chen L. A Novel Method for Power System Stabilizer Design. Cape Town, South Africa: University of Cape Town; 2003

[4] Du W, Dong W, Wang Y, Wang H. A method to design power system stabilizers in a multi-machine power system based on single-machine infinite-bus model. IEEE Transactions on Power Systems. 2021;36(4):3475-3486. DOI: 10.1109/TPWRS.2020.3041037

[5] Chow JH, Sanchez-Gasca JJ. Power system stabilizers. In: Power System Modeling, Computation and Control. 2020. pp. 265-294. DOI: 10.1002/ 9781119546924.ch10

[6] Folly KA, Yorino N, Sasaki H. Improving the robustness of H∞-PSSs using the polynomial approach. IEEE Transactions on Power Systems. 1998; 13(4):1359-1364

[7] Holland JH. Adaptation in Natural and Artificial Systems. Ann Arbor: University of Michigan Press; 1975

[8] Goldberg DE. Genetic Algorithms in Search, Optimization & Machine Learning. USA: Addison-Wesley; 1989

[9] Mitchell M. An Introduction to Genetic Algorithms. Cambridge MA, United States: The MIT Press; 1996

[10] Alkhatib H, Duveau J. Robust design of power system stabilizers using adaptive genetic algorithms. In: Proceedings of the World Academy of Science, Engineering, and Technology. 2010. pp. 267-272

[11] Sheetekela S. Design of Power System Stabilizer using Evolutionary Algorithms. Cape Town, South Africa: University of Cape Town; 2010

[12] Mulumba TF, Folly KA. Application of evolutionary algorithms to power system stabilizer design. In: Subair S, Thron C, editors. Implementation and Application of Machine Learning. Studies in Computational Intelligence (SCI 782). 2020. pp. 29-62

[13] Price K, Storn R, Lampinen J. Differential Evolution—A Practical Approach to Global Optimization. Berlin, Germany: Springer; 2005

[14] Ahmad MF, Isa NAM, Lim WH, Ang KM. Differential evolution: A recent review based on state-of-the-art works. Alexandria Engineering Journal. 2022;61:3831-3872

[15] Verdejo H, Pino V, Kliemann W, Becker C, Delpiano J. Implementation of particle swarm optimization (PSO) algorithm for tuning power system stabilizers in multi-machine electric power systems. Energies. 2020;13(8): 2093. DOI: 10.3390/en13082093

[16] Folly KA. Performance of power system stabilizers based on population-based incremental learning (PBIL) algorithm. International Journal of Electrical Power and Energy Systems. 2011;33(7):1279-1287

[17] Folly KA. Parallel PBIL applied to power system controller design. Journal of Artificial Intelligence and Soft Computing Research. 2013;3(3):215-223. DOI: 10.2478/jaiscr-2014-0015

[18] Baluja S. Population-Based Incremental Learning: A Method for Integrating Genetic Search Based Function Optimization and Competitive Learning. Technical Report CMU-CS-94-163; 1994

[19] Baluja S, Caruana R. Removing the genetics from the standard genetic algorithm. In: Proceedings of the 12th International Conference on Machine Learning, Lake Tahoe, CA; 1995

[20] Sheetekela S, Folly KA. Multimachine power system stabilizer design based on evolutionary algorithm. In: Proceedings of the 44th International Universities' Power Engineering Conference. 2009

[21] Sheetekela S, Folly KA. Breeder genetic algorithm for power system stabilizer design. In: Proceedings of the 2010 IEEE Congress on Evolutionary Computation (CEC), Barcelona, Spain; 2010

[22] Mühlenbein H, Schlierkamp-Voosen D. Predictive models for the breeder genetic algorithm I. Continuous parameter optimization. Evolutionary Computation. 1993;1(1):25-49

[23] Greene J. The Basic Idea behind the Breeder Genetic Algorithm. Cape Town, South Africa: University of Cape Town; 2005

[24] Folly KA, Sheetekela SP. Optimal design of power system controller using breeder genetic algorithm. In: Gao S, editor. Bio-Inspired Computational Algorithms and Their Applications. InTech-open science; 2012. pp. 303-316. DOI: 10.5772/38447

[25] Das S, Suganthan PN. Differential evolution: A survey of the state-of-the-art. IEEE Transactions on Evolutionary Computation. 2011;15(1):4-31

#### **Chapter 4**

## ADDC: Automatic Design of Digital Circuit

*Conor Ryan, Michael Tetteh, Jack McEllin, Douglas Mota-Dias, Richard Conway and Enrique Naredo*

#### **Abstract**

Digital circuits are one of the most important enabling technologies in the world today. Powerful tools, such as Hardware Description Languages (HDLs), have evolved over the past number of decades to allow designers to operate at high levels of abstraction and expressiveness, rather than at the gate level from which circuits are actually constructed. Similarly, highly accurate digital circuit simulators permit designers to test their circuits before committing them to silicon. This is still a highly complex and generally manual task, however, with complex circuits taking months or even years to go from planning to silicon. We show how Grammatical Evolution (GE) can harness the standard tools of silicon design and be used to create a fully automatic circuit design system. Specifically, we use an HDL known as SystemVerilog and Icarus, a free but powerful simulator, to generate circuits from high-level descriptions. We apply our system to several well-known digital circuit literature benchmarks and demonstrate that GE can successfully evolve functional circuits, including several which have been subsequently rendered in Field Programmable Gate Arrays (FPGAs).

**Keywords:** digital design, VLSI design, microelectronics design, evolvable hardware, HDL, verilog, grammatical evolution

#### **1. Introduction**

This chapter describes the application of Evolutionary Computation to the task of digital circuit design. Although many Electronic Design Automation (EDA) tools exist to aid designers, digital circuit design remains a time consuming and expensive task that requires skilled engineers.

The cost of errors in silicon is enormous and this has led to extremely powerful and accurate simulators that designers use to verify their designs before committing them to silicon. These simulators provide a huge opportunity for Evolutionary Computation as they can be used to test individuals.

This chapter gives an overview about how digital integrated circuits are designed and how the tools used to develop them have evolved over the past few decades. These tools, when linked together with GE produce a system we call the Automatic Design of Digital Circuits (ADDC), which can evolve circuits using massive levels of abstraction rather than simple logic gates.

We demonstrate the system on three real-world problems, including one with 220 test cases, showing that ADDC successfully evolves solutions in all cases.

#### **2. Background**

Digital circuit design began in the 1960s with the arrival of semiconductor transistor-based circuits and the Integrated Circuit (IC). Up until the 2010s, *Moore's Law* successfully predicted the shrinking in size of manufacturing technologies, allowing for lower cost, faster speeds and lower power consumption. However, in the last decade this shrinking has slowed due to the difficulties involved in the fabrication process. Integrated circuits are extremely common in many products today and their use is not always obvious.

Integrated circuits come in three different varieties: Digital, Analogue or Mixed-Signal. Digital integrated circuits process digital information, often represented using bits, bytes or words. Many of these circuits employ one or more processors (often referred to as cores) with support logic, memories and I/O interfaces. The microprocessor is a famous example of a digital circuit. Analogue integrated circuits are used for handling continuous-time signals and to perform operations such as amplification, analogue filtering and power management. Mixed-Signal integrated circuits contain both analogue and digital circuitry in the same package and use ADCs (Analogue-to-Digital Converters) and DACs (Digital-to-Analogue Converters) to share information between both domains.

In modern circuit design, signal processing tends to be performed in the digital domain instead of the analogue domain. This is due to the reliability of digital circuitry and the existence of advanced digital algorithms with performance that cannot be obtained with analog circuitry alone [1]. This move towards using digital designs for signal processing has required the use of circuit representations like Hardware Description Languages (HDLs) to be used to describe these extremely complex circuits. New devices such as Complex Programmable Logic Devices (CPLDs) and Field Programmable Gate Arrays (FPGAs) are increasingly being used due to their ability to replicate the behaviour of these circuits without requiring the fabrication of new chips. The following sections will go more in-depth into HDLs, the differences between CPLD and FPGA devices and an overview of the Digital Design Flow.

#### **2.1 Hardware description languages**

The first modern HDL, Very High Speed Integrated Circuit Hardware Description Language (VHSIC-HDL), more commonly known as VHDL, was created in 1983. VHDL was developed for the US Department of Defense as part of the VHSIC project. The project was launched in 1980 [2], while the first version of VHDL was released in 1983 by Intermetrics Inc., Texas Instruments and IBM [3, 4]. VHDL is a verbose and strongly-typed language. It grew steadily in popularity, resulting in both logic simulators and logic synthesis tools being developed for it. VHDL was standardised in 1986 [5] with the adoption of VHDL version 7.2 and was finalised in 1987 as IEEE Standard 1076-1987 [5]. VHDL would become the first HDL to gain widespread adoption, and is still in use today.

Another modern HDL developed around this time was Verilog, created by Phil Moorby in 1983 [6] while working for Gateway Design Automation, who were

#### *ADDC: Automatic Design of Digital Circuit DOI: http://dx.doi.org/10.5772/intechopen.104410*

acquired by Cadence Design Systems in 1989 [7]. In comparison to VHDL, Verilog is less verbose and is a weakly-typed language. Originally it was designed only for logic simulation, but it later had logic synthesis features added after the language gained widespread popularity. Verilog-XL, a Verilog simulator owned by Cadence, became the *de facto* Verilog simulator throughout the 1990s. Due to the increasing popularity of VHDL, Cadence created the Open Verilog International (OVI, now known as Accellera) organisation and transferred the rights of Verilog into the public domain [8]. Verilog was eventually standardised in IEEE Standard 1364-1995 [6]. Verilog would be superseded by SystemVerilog in IEEE Standard 1800-2005 [9], adding features for design verification. SystemVerilog is more popular than VHDL today due to the language being less verbose and having a similar structure to the C programming language. **Table 1** provides a comparison between the VHDL and SystemVerilog hardware description languages.

With the introduction of Hardware Description Languages for digital circuit design, two discrete-time simulation methods came into prominence. Both cycle-driven and event-driven simulation methods were orders of magnitude faster than the traditional continuous-time simulation method "SPICE". One limitation of the cycle-driven simulation method is that the output is only updated on each clock edge. This means it can only be used for synchronous digital designs, but it is much faster than event-driven simulation. It also cannot detect glitches and does not take the timing of the design into consideration.

Event-driven simulation updates the output on any input event meaning it can be used for both synchronous and asynchronous designs. Although still quicker than SPICE methods, it is much slower than cycle-driven simulation. Modern circuit designs utilise techniques such as clock and power gating, allowing parts of a design to be "turned off". This can help reduce the simulation time of an event-driven simulation, bringing it closer to cycle-driven simulation while providing a more accurate simulation. **Table 2** provides a comparison between cycle-driven and event-driven simulation methods. Practically all commercial and open-source simulation tools today utilise one of these methods.
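The event-driven idea can be sketched in a few lines: the simulator keeps a time-ordered event queue and re-evaluates a gate only when one of its inputs actually changes. The netlist, delay values, and scheduling policy below are illustrative (a combinational, acyclic netlist with unit-style delays), not those of any particular simulator.

```python
import heapq

# Minimal event-driven simulation sketch: outputs are re-evaluated only
# when an input event occurs. Assumes a combinational, acyclic netlist.
def simulate(events, gates, delays):
    """events: list of (time, net, value); gates: out -> (fn, input nets);
    delays: out -> propagation delay. Returns the trace of net changes."""
    values = {}
    queue = list(events)
    heapq.heapify(queue)                  # events ordered by time
    trace = []
    while queue:
        t, net, val = heapq.heappop(queue)
        if values.get(net) == val:
            continue                      # no change -> no new events
        values[net] = val
        trace.append((t, net, val))
        for out, (fn, ins) in gates.items():
            if net in ins:                # fan-out: re-evaluate this gate
                new = fn(*(values.get(i, 0) for i in ins))
                heapq.heappush(queue, (t + delays[out], out, new))
    return trace

gates = {"y": (lambda a, b: a & b, ("a", "b"))}   # a single AND gate
trace = simulate([(0, "a", 1), (5, "b", 1)], gates, {"y": 1})
print(trace)   # y only changes when an input event propagates to it
```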

#### **2.2 CPLD vs FPGA devices**

As digital designs grew in complexity, early Programmable Logic Devices (PLD) such as Programmable Array Logic (PAL) became obsolete as they could only replicate the behaviour of a few hundred logic gates. To address this shortcoming, PALs were soon replaced by Complex Programmable Logic Devices (CPLD). Modern CPLDs are able to replicate the behaviour of hundreds of thousands of logic gates.


#### **Table 1.**

*A comparison between VHDL and verilog hardware description languages.*


#### **Table 2.**

*A comparison between cycle-driven simulation and event-driven simulation.*

One advantage of CPLD devices is that they use non-volatile memory to store their configuration. As a result, their logic is already configured at power-up. This makes them ideal devices for systems where the logic is required to be ready for initialisation, such as glue logic for circuits.

**Figure 1** shows the internal structure of a CPLD, whose logic blocks consist of programmable PAL blocks. The inputs can be connected to different AND gates using programmable fuses. The OR gate connections are fixed and cannot be reconfigured. Although less configurable than a PLA (which contains both programmable AND and OR planes), this reduction in complexity makes PAL blocks cheaper to manufacture. For PAL blocks to implement sequential designs, a D flip-flop can be used to store the state of the output. CPLDs can connect multiple logic blocks together using the programmable interconnection matrix in order to implement more complex designs.
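The programmable-AND/fixed-OR structure of a PAL block amounts to a sum of products: the fuses determine which (possibly complemented) inputs remain connected to each AND gate, and the OR of all product terms is fixed. The representation below is a purely illustrative sketch, not a real fuse-map format.

```python
# A PAL block sketched as a sum of products. Each product term lists
# the (net, polarity) literals left connected by the programmable
# fuses; polarity 1 means the true input, 0 the complemented input.
def pal_output(inputs, product_terms):
    """inputs: dict net -> 0/1; product_terms: list of literal lists."""
    out = 0
    for term in product_terms:                 # fixed OR plane
        term_val = 1
        for net, polarity in term:             # programmable AND plane
            bit = inputs[net] if polarity else 1 - inputs[net]
            term_val &= bit
        out |= term_val
    return out

# Programming XOR of a and b: (a AND NOT b) OR (NOT a AND b)
terms = [[("a", 1), ("b", 0)], [("a", 0), ("b", 1)]]
outputs = [pal_output({"a": a, "b": b}, terms) for a in (0, 1) for b in (0, 1)]
print(outputs)   # the XOR truth table
```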

While CPLD devices are still used for specific tasks, the most common PLD in use today is the Field Programmable Gate Array (FPGA). These devices are quite similar in structure to the mask-programmed gate array (MPGA) [10], which was one of the first commercial programmable devices available. One benefit of FPGAs is that they can be electronically reconfigured, whereas an MPGA's configuration was specified at the time of manufacture. The first FPGA, the Xilinx XC2064, was invented by Ross Freeman and Bernard Vonderschmitt in 1985 [11]. Early FPGAs were mainly used in the telecommunications and networking sectors as they were often cheaper than manufacturing custom silicon for these tasks.

#### **Figure 1.**

*Structure of a Complex Programmable Logic Device (CPLD) and Programmable Array Logic (PAL) block. The programmable AND plane and the fixed OR plane are shown on the right.*


**Figure 2** shows the internal structure of the FPGA. Similarities can be seen between FPGA and CPLD devices, where a programmable interconnect is used to connect programmable logic blocks. In an FPGA, the Configurable Logic Blocks (CLBs) consist of Look-Up Tables (LUTs): programmable memories whose outputs are defined by their input signals. The multiplexer then selects either the output of the LUT or the D flip-flop to allow for combinational or sequential logic, similar to PAL blocks in CPLDs. These blocks can then be connected together using the Switching Blocks (SBs). Modern FPGAs are able to replicate the behaviour of tens of millions of logic gates and contain dedicated logic such as RAM and multipliers. Today, they are often used in high-performance computing applications due to their performance and efficiency over processor-based algorithms.
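The LUT idea reduces to a small memory addressed by the input bits: any k-input boolean function fits in a 2^k-entry table. Below is a minimal sketch with a hypothetical 4-input LUT configured as an "at least three of four inputs high" function; the configuration is illustrative.

```python
# A k-input LUT is simply a 2^k-entry memory addressed by the inputs;
# any k-input boolean function fits in one table.
def make_lut(truth_table):
    def lut(*bits):
        addr = 0
        for b in bits:                 # the input bits form the address
            addr = (addr << 1) | b
        return truth_table[addr]
    return lut

# Configure a hypothetical 4-input LUT: output 1 iff >= 3 inputs are 1
majority = [1 if bin(a).count("1") >= 3 else 0 for a in range(16)]
lut = make_lut(majority)
print(lut(1, 1, 1, 0), lut(1, 0, 0, 1))   # -> 1 0
```

Changing the function means only rewriting the table contents, which is exactly why LUT-based fabrics are reprogrammable.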

#### **2.3 Digital design flow**

In digital design, it is often not practical to use gate-level descriptions. Instead, a representation called Register Transfer Level (RTL) is used. RTL allows a high-level model of the design to be represented without having to think about the low-level logic structures required to implement the functionality [12]. This abstraction uses constructs like logic statements, arithmetic operations and control flow. Similarities can be drawn with programming using mnemonics in Assembly Language compared to functional programming in C. Using RTL allows the designer to focus on the functionality of the design rather than on the implementation. **Figure 3** presents the different stages a digital design must go through in order to convert an RTL representation into an implementable design.

When a high-level language is used for programming, the code written by the programmer must first go through a process called compilation before the code is executed. Similarly with digital hardware, a design specified using RTL (often using an HDL like VHDL or SystemVerilog) must go through a process called logic synthesis. This process analyses the given RTL and converts it into a set of primitives that is functionally equivalent. Primitives are the basic building blocks of any logic design and consist of both combinational and sequential blocks. Examples include boolean logic (NOT/AND/OR/XOR etc.), multiplexers and flip-flops. This output is stored in a file called a net-list, which contains a list of all the primitive blocks and the nets that connect them together.
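In essence, a net-list is just a list of primitive instances plus the nets wiring them together. The dictionary layout below is illustrative (real net-list formats such as structural Verilog or EDIF differ), but it captures the idea of primitives connected by named nets.

```python
# An illustrative net-list: primitive instances plus the nets that
# connect them. Names and primitive types here are hypothetical.
netlist = {
    "primitives": [
        {"name": "u1", "type": "AND2", "in": ["a", "b"], "out": "n1"},
        {"name": "u2", "type": "NOT",  "in": ["n1"],     "out": "y"},
    ],
    "nets": ["a", "b", "n1", "y"],
}

def evaluate(netlist, values):
    """Evaluate a purely combinational net-list in instance order."""
    ops = {"AND2": lambda a, b: a & b, "NOT": lambda a: 1 - a}
    v = dict(values)
    for cell in netlist["primitives"]:
        v[cell["out"]] = ops[cell["type"]](*(v[i] for i in cell["in"]))
    return v

print(evaluate(netlist, {"a": 1, "b": 1})["y"])   # NAND of a and b -> 0
```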

#### **Figure 2.**

*Structure of a Field Programmable Gate Array (FPGA). The Complex Logic Block (CLB) consists of a programmable memory called a Look Up Table (LUT), a D flip flop to store state and a multiplexer to select the output signal. The programmable Switching Block (SB) is used to connect the CLBs together.*

#### **Figure 3.**

*A flowchart showing the different stages of the logic synthesis/digital design process.*

The net-list generated from the logic synthesis is not optimized and must go through a process known as logic minimization. There can be many parameters to optimize for, such as area usage, power consumption and timing delay. There are many different methods that can be used to perform this optimization. Some early algorithmic methods include Karnaugh maps [13] and the Quine–McCluskey algorithm [14]. However, as designs have become increasingly complex, these algorithmic methods are not computationally feasible. This has led to the use of heuristic optimizers such as ESPRESSO [15] and BOOM [16]. When using heuristic optimizers, it cannot be guaranteed that the minimized design is the global minimum. However, in practice these methods are sufficient and are widely used in logic synthesis tools today.

Following the logic minimization process, the optimized net-list is in an intermediate representation. A process called Technology Mapping must be performed before the design can be implemented in silicon or on a PLD. For silicon, this intermediate representation is compared against a library of available "building blocks" called leaf-level cells. The mapper then selects and connects these leaf-level cells, rebuilding the circuit. Further optimization may be performed here, as the available leaf cells may be able to replace multiple blocks in the intermediate representation. For PLDs, the process is similar: the PAL blocks in CPLDs and the LUTs in FPGAs can be configured to replace one or multiple blocks, which are then connected together using the programmable interconnects. In comparison to logic minimization, the optimizations performed here are much simpler. After the technology mapping process is complete, the designer has an implementable design, often in the GDSII/OASIS format for silicon manufacturing and in a bitstream format for PLDs. The top EDA companies for ASIC digital design tools include Cadence Design Systems, Siemens and Synopsys, with Xilinx and Intel providing FPGA tools.

#### **2.4 Grammatical evolution and circuit design**

Grammatical Evolution (GE), the tool used in this chapter and described in detail in the next section, has been used to evolve Verilog circuits, such as the one-bit adder and D-type latch at the gate level [17]. Notably, the one-bit adder is frequently used as a case study to evolve combinational circuits at the gate level through GE [18–20]. However, gate-level evolution is less likely to scale to highly complex circuits from scratch [21]. In response to the scalability issues inherent in gate-level evolution, [22]

proposed functional-level evolution through Genetic Algorithms, which uses higher-level functions such as multiplexers, adders, and subtractors instead of primitive gates to help reduce the search space. Similarly, [23] evolved a 3-bit multiplier using only binary multiplexers. 9- and 25-Median approximate circuits have also been designed at the functional level through Cartesian GP [24]. We address the scalability concern by performing circuit evolution through GE at a more abstract level – RTL modeling, where the focus is on describing the circuit's behavior [25, 26].

#### **3. Grammatical evolution**

Biological evolution has been a source of inspiration for many techniques in the field of evolutionary computation (EC), which has been used to address a wide range of problem domains, from molecular to astronomical problems. One of the most successful evolutionary techniques is GP, introduced by John R. Koza in his book "Genetic Programming—On the Programming of Computers by Means of Natural Selection" [27], which mimics natural selection in an iterative way to find an optimal (best) solution. Algorithm 1 details the steps required to implement a standard GP. A survey of the different GP techniques currently available in the literature is outside the scope of this work; the interested reader can find in [28] a comprehensive review of various aspects and techniques of GP and their categorization.


Grammatical evolution (GE) is an evolutionary computation technique and, more specifically, a genetic programming (GP) technique [29] that addresses the closure issue of Koza-style GP, which effectively confines GP to single-type problems. This is achieved through the use of a grammar, generally in Backus-Naur Form (BNF) [29, 30], or an Attribute Grammar (AG) [31–34].

The GE system shown in **Figure 4** automatically generates programs using three main components: (i) grammar; (ii) cost function; and (iii) search engine. The grammar describes the program's syntax, the cost function evaluates the quality of each program, and the search engine, typically a GA, searches within the program space defined by the grammar.

In GE, a typical representation for an individual is a binary string grouped into codons (e.g. 8 bits). The linear representation of the genome allows the application of genetic operators such as crossover and mutation in the manner of a typical GA, unlike tree-based GP.

#### **Figure 4.**

*The GE system uses a search engine (typically a GA) to generate solutions for a given problem, by recombining the genetic material (genotype) and mapped onto programs (phenotype) according to a language specification (interpreter/compiler).*

#### **3.1 Initialisation**

In GP, the standard initialisation is the ramped-half-and-half (RHH) technique, introduced in [27]. In order to ensure diversity in the population, GP individuals, typically represented as trees, are created with different depths. The RHH technique uses two methods to create a tree: full and grow. Typically there is a probability of 0.5 of selecting either method for a particular individual. The full method creates trees with full branches at the maximum specified depth, whereas the grow method creates trees with branches of different lengths and depths up to the maximum allowed.

*Sensible Initialisation* (SI) is an adaptation of the ramped-half-and-half initialisation routine of GP [35]. SI requires the grammar to be labelled with whether each production rule is recursive or non-recursive, and with the minimum derivation tree depth needed to fully expand each production rule. A maximum tree depth must also be specified beforehand. Like ramped-half-and-half in GP, SI applies both the grow and the full method: when applying the grow method any production can be selected, while the full method chooses only recursive productions. Both methods are subject to the constraints of not exceeding the maximum specified tree depth and of having enough tree-depth budget left to fully expand all non-terminals to terminals.

#### **3.2 GE operators**

Generally, GE uses one-point crossover, as it has been shown to be effective [36]. In crossover, two individuals are selected as parents and a single crossover point within each parent's genome is randomly chosen, dividing each genome into two halves: the left and right sub-genomes. The right sub-genomes of the two parents are swapped to create two offspring. However, crossover points that lie within non-coding regions (codons left unused by the mapping step) may not be useful. As a result, a variant of one-point crossover known as *effective crossover*, which constrains the crossover point to lie within the effective (used) length of an individual's genome, is preferred. Point mutation is a commonly used mutation operator in GE: each bit within the binary-string genome is flipped with the specified mutation probability. However, neutral mutations can occur, whereby a mutated codon selects the same production as the original codon, so the corresponding phenotype remains the same.
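Both operators can be sketched on binary-string genomes as follows (an illustrative sketch; the parameter names are our own, and genomes are assumed to have at least two bits):

```python
import random

def one_point_crossover(p1, p2, effective_len1=None, effective_len2=None):
    """One-point crossover on binary genomes (lists of 0/1 bits).
    A crossover point is chosen independently in each parent; if effective
    lengths (codons actually used by the mapping step) are given, the points
    are restricted to the coding region ('effective crossover')."""
    cut1 = random.randint(1, (effective_len1 or len(p1)) - 1)
    cut2 = random.randint(1, (effective_len2 or len(p2)) - 1)
    child1 = p1[:cut1] + p2[cut2:]   # swap the right sub-genomes
    child2 = p2[:cut2] + p1[cut1:]
    return child1, child2

def point_mutation(genome, p_mut=0.01):
    """Flip each bit independently with probability p_mut."""
    return [bit ^ 1 if random.random() < p_mut else bit for bit in genome]
```

Note that because the two cut points are chosen independently, offspring lengths can differ from the parents', which is unproblematic in GE thanks to the wrapping mechanism described below.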

#### **3.3 GE example**

To illustrate the application of GE, we first explain the evolutionary process using a mathematical optimisation problem as a case study.

GE begins with the start symbol of the grammar; the codons are then used to select and apply the grammar production rules until a complete program is built. This mapping process is illustrated in **Figure 5** with a simple example, where the production rules in the grammar contain a set of user-defined functions: *max(a, b)*, *min(a, b)*, *addition(a, b)*, *subtraction(a, b)*, *multiplication(a, b)*, *division(a, b)*, *const*, and *X*, which is a value (or a vector) sampled from the independent variable.

The production rules for each non-terminal are indexed starting from 0 and, when selecting a production rule (starting with the left-most non-terminal of the developing program), the next codon value in the genome is read and interpreted using the formula *p* = *c* % *r*, where *c* is the current codon value, % is the modulus operator, and *r* is the number of production rules for the left-most non-terminal.

#### **Figure 5.**

*Example of a GE genotype-phenotype mapping process for the Iris dataset, where the binary genotype is grouped into codons (e.g. 8 bits; red & blue), transcribed into an integer string, then used to select production rules from a predefined grammar (BNF-Grammar), and finally translated into a sequence of rules to build a solution (phenotype).*

If the end of the genome is reached before all non-terminal symbols have been replaced, a wrapping process is used to continue reading codons from the beginning of the genome. The mapping process stops when all of the non-terminal symbols have been replaced, yielding a valid program. If it fails to replace all of the non-terminal symbols after a maximum number of iterations, the individual is considered invalid and is penalised with the lowest possible fitness.
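The whole mapping loop — the modulus rule, wrapping, and the invalid-individual case — can be sketched as follows (integer codons are used directly rather than 8-bit binary groups, and the toy grammar loosely mirrors the function set of the Figure 5 example):

```python
# A tiny BNF-style grammar: non-terminals map to lists of productions.
GRAMMAR = {
    "<expr>": [["<op>", "(", "<expr>", ",", "<expr>", ")"],
               ["x"],
               ["const"]],
    "<op>": [["max"], ["min"], ["add"], ["sub"], ["mul"], ["div"]],
}

def ge_map(codons, start="<expr>", max_wraps=2):
    """Map an integer codon string to a program via p = c % r, with wrapping.
    Returns the phenotype string, or None if non-terminals remain after the
    wrap budget is spent (the individual is then invalid)."""
    symbols = [start]              # the developing program
    out = []
    i, wraps = 0, 0
    while symbols:
        sym = symbols.pop(0)       # process left-most symbol first
        if sym not in GRAMMAR:     # terminal: emit it
            out.append(sym)
            continue
        if i == len(codons):       # end of genome: wrap around
            wraps += 1
            if wraps > max_wraps:
                return None        # invalid individual
            i = 0
        rules = GRAMMAR[sym]
        choice = rules[codons[i] % len(rules)]   # p = c % r
        i += 1
        symbols = list(choice) + symbols         # expand in place
    return "".join(out)
```

For instance, the codon string `[0, 2, 1, 2]` derives `add(x,const)`, while a single-codon genome of `[0]` keeps producing recursive expansions until the wrap budget is exhausted and is rejected as invalid.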

#### **4. ADDC**

ADDC is an evolutionary HDL circuit-design framework driven mainly by GE. ADDC requires a grammar and a testbench as inputs, for circuit evolution and verification respectively. The designed grammar must be BNF compliant and must satisfy the grammar sufficiency property; that is, the grammar must contain all the building blocks required to potentially evolve an optimal circuit. ADDC is technology agnostic and easily configurable, as the choice of HDL and simulator is left to the user. **Figure 6** illustrates ADDC's design flow for the functional evolution of circuits.

**Figure 6.** *ADDC functional circuit evolution overview.*

During the initial phase of the circuit design process, ADDC creates an initial population of circuit designs using a suitable GE initialisation routine such as sensible initialisation. These individuals then undergo fitness evaluation, which entails a number of steps. First, the genotype (genome) of each individual is translated into an HDL (SystemVerilog in this work) circuit design (phenotype) by the GE mapper, using the grammar designed for the circuit. Functional simulation of each circuit then takes place, assuming all circuit designs are valid. For these experiments, *Icarus Verilog*, a lightweight and open-source Verilog simulator, is used. The simulator uses a testbench, which must be provided by the user for circuit verification. A testbench is a verification description written in the same HDL as the circuit that ensures the device under test (DUT), i.e. the evolved circuit, meets functional and timing requirements. A typical testbench uses test vectors, which contain circuit inputs and their respective expected outputs; for example, test vectors for combinational circuits are usually created from their truth table. The simulator drives these inputs through the DUT and verifies that the circuit output is the desired output. The number of passed cases out of the total number of cases is the fitness value of the circuit. After all circuits have been evaluated, ADDC returns the best circuit design if the specified termination criterion is satisfied; otherwise the circuit design evolution continues.
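The evaluation step can be sketched as follows. The simulator invocation assumes `iverilog` and `vvp` (the Icarus Verilog tools) are on the PATH, and the log-parsing step assumes the user's testbench prints one PASS/FAIL line per test vector — a convention of this sketch, not a requirement of ADDC:

```python
import re
import subprocess

def run_simulation(design_file, testbench_file):
    """Compile and run a candidate design with Icarus Verilog.
    Returns the simulator's stdout for fitness scoring."""
    subprocess.run(["iverilog", "-g2012", "-o", "sim.out",
                    design_file, testbench_file], check=True)
    return subprocess.run(["vvp", "sim.out"], check=True,
                          capture_output=True, text=True).stdout

def fitness_from_log(log_text):
    """Count passed test vectors in the simulator log, assuming the
    testbench prints one 'PASS ...' / 'FAIL ...' line per vector."""
    return len(re.findall(r"^PASS\b", log_text, flags=re.MULTILINE))
```

With this convention, the fitness of a candidate circuit is simply the PASS count, bounded above by the total number of test vectors.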

The next phase is reproduction, where individuals with either a good overall fitness score or the best performance on certain cases are selected for crossover and mutation to create a new population of circuit designs. Lexicase selection performs well on circuit design benchmarks [25, 26] and is therefore the selection routine used here. Depending on the genetic algorithm (GA) of choice (e.g. steady-state or generational GAs), events such as replacement or elitism may also take place in creating the new population. The new population undergoes fitness evaluation in the same manner as described above. The process continues until the termination criterion is satisfied, and the best circuit design is returned as the solution.
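Lexicase selection can be sketched as follows (a minimal illustration; in ADDC the per-case results would come from the testbench outcomes):

```python
import random

def lexicase_select(population, case_results):
    """Lexicase selection: visit test cases in random order, each time
    keeping only the candidates with the best result on that case.
    case_results[i][j] is 1 if individual i passed case j, else 0."""
    candidates = list(range(len(population)))
    cases = list(range(len(case_results[0])))
    random.shuffle(cases)                      # a fresh ordering per selection
    for case in cases:
        best = max(case_results[i][case] for i in candidates)
        candidates = [i for i in candidates if case_results[i][case] == best]
        if len(candidates) == 1:
            break
    return population[random.choice(candidates)]
```

Because each selection event shuffles the cases anew, individuals that excel on rare or difficult test vectors get a chance to be selected even when their aggregate score is modest.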

#### **5. Experiments**

Three circuit benchmark problems are considered, namely the *Hamming Code (7,4) Encoder* (whose corresponding decoder was evolved in [25]), the *Seven Segment Display* [25, 37] and the *16-to-4 Multiplexer* [25, 38, 39]. These problems are not only standard benchmarks in the circuit design literature but are also used in real-world applications: for example, Hamming codes, seven segment displays and multiplexers are used in satellite communication, digital calculators and telephone networks respectively.

#### **5.1 Benchmark problems**

#### *5.1.1 Hamming code (7,4) encoder*

Hamming codes are a family of linear error-correcting codes, belonging to the category known as Linear Block Codes, that can detect up to two errors but correct only a single error. A Hamming Code (7,4) Encoder encodes a 4-bit data word into a 7-bit code word prior to data transmission by generating three parity bits and adding them to the data word.

The code words generated by Hamming codes can be structured in two ways: *systematic* and *non-systematic* encodings. In systematic encoding the data word and the parity bits occupy separate, contiguous positions within the code word, while in non-systematic encoding they are interspersed. We adopt systematic encoding, as it makes it easier to separate the data word from the parity bits.
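The target behaviour of the encoder can be sketched in a few lines (one common systematic parity assignment is assumed here; an evolved circuit may realise any equivalent set of parity equations):

```python
def hamming74_encode(d):
    """Systematic Hamming (7,4) encoding: the 4 data bits are kept together
    and 3 parity bits are appended. `d` is a list of 4 bits [d1, d2, d3, d4].
    The parity equations below are one common assignment, used for
    illustration only."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # each parity bit is an XOR of three data bits
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [d1, d2, d3, d4, p1, p2, p3]
```

Each parity bit is a small XOR expression over a subset of the data bits, which is exactly the kind of variable-length expression the grammar's parity-bit rules are designed to evolve.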

The grammar designed for evolving the Hamming Code (7,4) Encoder is shown in **Figure 7**. The circuit's interface is defined using the ⟨begin-module⟩ rule. The grammar uses four bitwise operators and a bitwise negation operator in Verilog, defined using the ⟨bitwise-op⟩ and ⟨bitwise-neg⟩ rules respectively. The Hamming Code (7,4) Encoder generates the three redundant bits stated earlier, and these are defined using the ⟨parity-bit-1⟩, ⟨parity-bit-2⟩ and ⟨parity-bit-3⟩ rules. The ⟨expr⟩ rule is used by the ⟨parity-bit-*⟩ rules to evolve variable-length expressions that generate the correct parity bit. The grammar also features an important Verilog construct, the always procedural block, defined using ⟨always-block⟩, which behaves similarly to an infinite loop and is triggered by a change in any signal (indicated with ∗) to evaluate the statements within its scope.

**Figure 7.** *Hamming code (7,4) encoder grammar.*

#### *5.1.2 Seven segment display*

A Seven Segment Display is an electronic device used for the display of decimal numerals. It is also capable of displaying letters, though some letters, such as K, X and Z, are difficult to recognise on the device. The specification for the Seven Segment Display considered in this work supports the decimal numerals (0-9) and the letters A-F. These numbers and characters are encoded as 4-bit binary numbers (0000-1111) referred to as binary coded decimal (BCD), which are sent as inputs to the Seven Segment Display. The 4-bit binary numbers 0000 to 1001 encode the decimal numbers 0 to 9 respectively, while 1010 to 1111 encode the letters A-F. Upon receipt, the Seven Segment Display generates a 7-bit binary number (each bit corresponding to the ON/OFF state of a segment) to turn on the appropriate LEDs to display the digit/letter.

**Figure 8** shows the grammar designed to evolve the Seven Segment Display. The BCD input and the 7-bit binary output are defined by the ⟨bcd⟩ and ⟨seven-segment⟩ rules respectively. The grammar uses a switch-case construct, defined using the ⟨switch-case⟩ rule, as well as the always procedural block (⟨always⟩). ⟨begin-module⟩ defines the circuit interface of the Seven Segment Display.
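For reference, the target behaviour is a 16-entry lookup from BCD input to segment pattern (the patterns below are one common 'gfedcba' encoding; the actual bit ordering and polarity depend on the display wiring, so this is illustrative only):

```python
# One common 'gfedcba' segment encoding for hex digits 0-F (bit 6 = g,
# bit 0 = a); an assumption of this sketch, not of the evolved circuit.
SEGMENTS = {
    0x0: 0b0111111, 0x1: 0b0000110, 0x2: 0b1011011, 0x3: 0b1001111,
    0x4: 0b1100110, 0x5: 0b1101101, 0x6: 0b1111101, 0x7: 0b0000111,
    0x8: 0b1111111, 0x9: 0b1101111, 0xA: 0b1110111, 0xB: 0b1111100,
    0xC: 0b0111001, 0xD: 0b1011110, 0xE: 0b1111001, 0xF: 0b1110001,
}

def seven_segment(bcd):
    """Map a 4-bit input (0-15) to the 7-bit ON/OFF segment pattern."""
    return SEGMENTS[bcd & 0xF]
```

This lookup is precisely the truth table a switch-case construct in the evolved circuit must reproduce, one case per BCD input.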

#### *5.1.3 16-to-4 multiplexer*

A multiplexer is a multiple-input single-output device that accepts data (data lines) and an address (select lines) as inputs and uses the address to select the corresponding data line to be transmitted. The 16-to-4 multiplexer has 16 data lines and 4 select lines.
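The target behaviour amounts to a one-line selection (in this sketch, bit 0 of the data input is taken to be data line 0 — an assumption about bit ordering):

```python
def mux16(data, sel):
    """16-to-1 data selection: `data` is a 16-bit value, `sel` a 4-bit
    address; the bit of `data` at position `sel` is the output."""
    return (data >> (sel & 0xF)) & 1
```

Because the output depends on all 16 data bits and 4 select bits, the circuit has 2<sup>20</sup> distinct input combinations, which is why exhaustive testing becomes infeasible later in the chapter.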

**Figure 9** shows the grammar designed to evolve the multiplexer. Like the Seven Segment Display grammar, the 16-to-4 Multiplexer grammar uses the always procedural block. Here, however, an if-else (⟨if-else⟩) construct is used, although the switch-case construct would also be suitable in this context. The addresses used to select a data line as output are defined using the ⟨address⟩ rule. The ⟨address⟩ rule is used to generate a conditional statement (⟨cond⟩) by the ⟨if-else⟩ rule to determine which data line to select. The ⟨data-index⟩ rule defines the indexes of the 16-bit data input, which are used by the ⟨data-bit⟩ rule to select a data line. ⟨begin-module⟩ defines the circuit interface of the 16-to-4 Multiplexer.

#### **5.2 Evolutionary parameters**

**Figure 8.** *Seven segment display grammar.*

**Figure 9.** *16-to-4 multiplexer grammar.*

**Table 3.** *Experimental run parameters.*

Experimental parameters used for running the experiments are shown in **Table 3**. The generation number and population size were selected based on preliminary experiments. The generation counts used for the preliminary experiments were 50, 100 and 200; the population sizes were 100, 200, 500, 1000 and 2000. For each problem, 5 independent runs were conducted. The generation number and population size for the actual experiments were chosen from setups in which the majority of runs achieved a mean best fitness in the final generation within the fourth quartile of the maximum fitness. The other parameters remain the same as those used in [25, 26].

50 generations were used for evolving the Hamming Code (7,4) Encoder, while 100 generations were used for each of the Seven Segment Display and 16-to-4 Multiplexer designs, as preliminary results revealed these problems were relatively challenging to evolve compared to the Hamming Code (7,4) Encoder. All other parameters remain the same for all benchmark problems.

#### **5.3 Training and testing**

The number of training and testing cases are tabulated in **Table 4**.

The Hamming Code (7,4) Encoder and the Seven Segment Display each have only 16 input cases. However, for the Hamming Code (7,4) Encoder every correct bit in each bit position of the codeword counts towards the total fitness score of a candidate circuit, giving a total of 112 (7 × 16) cases. Given that the Hamming Code (7,4) Encoder and Seven Segment Display have so few cases, it is feasible to perform exhaustive testing during training, leaving no cases for testing, as shown in **Table 4**.
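The bitwise scoring for the encoder can be sketched as follows (the reference encoder below assumes one common systematic parity assignment, with the data bits in the low nibble; codewords are represented as integers):

```python
def ref_encoder(word):
    """Reference systematic Hamming (7,4) encoder (one common parity
    assignment, for illustration); data bits sit in the low nibble."""
    d = [(word >> i) & 1 for i in range(4)]
    p = [d[0] ^ d[1] ^ d[3], d[0] ^ d[2] ^ d[3], d[1] ^ d[2] ^ d[3]]
    return word | (p[0] << 4) | (p[1] << 5) | (p[2] << 6)

def encoder_fitness(candidate, reference):
    """Bitwise fitness over all 16 data words: every correct bit in each of
    the 7 codeword positions scores one point, up to 112 (7 x 16).
    `candidate` and `reference` map a 4-bit word to a 7-bit codeword."""
    score = 0
    for word in range(16):
        got, want = candidate(word), reference(word)
        # count the codeword positions where candidate and reference agree
        score += sum(1 for b in range(7)
                     if (got >> b) & 1 == (want >> b) & 1)
    return score
```

Scoring per bit rather than per codeword gives the search a smoother gradient: a circuit that gets six of seven bits right on every word is rewarded far more than one scored per whole codeword.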

On the other hand, the 16-to-4 Multiplexer has 2<sup>20</sup> cases, making exhaustive testing infeasible. As a result, we uniformly sample 4,100 and 5,000 cases for training and testing respectively, as shown in **Table 4**.

#### **5.4 Results and discussions**

The results obtained from the experiments demonstrate that ADDC is well suited to evolving digital circuit designs, owing to the use of GE and an HDL, which permits designs to be expressed at a more abstract level. The evolutionary performance for the Hamming Code (7,4) Encoder, Seven Segment Display and 16-to-4 Multiplexer experiments described in Section 5.1 is visualized in **Figures 10**–**12** respectively. The success rate per benchmark problem is tabulated in **Table 5**. A representative solution for each circuit benchmark problem is shown in **Figures 13**–**15** in the Appendix. The advantages and disadvantages of the proposed approach are discussed in Section 5.7.

#### **5.5 Success rate**

A successful run is a single independent evolutionary run that evolved an optimal circuit for the target problem. Fifty independent runs were conducted for each of the three benchmark problems. The success rate is the number of successful runs divided by the total number of evolutionary runs (i.e. 50), as tabulated in **Table 5**. A 100% success rate was attained for the Hamming Code (7,4) Encoder, while the Seven Segment Display and 16-to-4 Multiplexer obtained 60% and 86% success rates, respectively.

#### **5.6 Evolutionary performance**

Visualization of the evolutionary performance as evolution progressed for Hamming Code (7,4) Encoder, Seven Segment Display and 16-to-4 Multiplexer are shown in **Figures 10**–**12** respectively.


**Table 4.** *Number of training and testing cases.*

**Figure 10.** *Mean best and mean average across runs for hamming code (7,4) encoder.*

**Figure 11.** *Mean best and mean average across runs for seven segment display.*

**Figure 12.** *Mean best and mean average across runs for 16-to-4 multiplexer.*


#### **Table 5.**

*Success rate for benchmark problems.*

The red line represents the mean best fitness per generation across the 50 independent runs conducted, while the black line represents the mean average fitness. Also plotted are error bars representing the standard error. The error bars are short to non-existent, indicating small variability in fitness across runs. Furthermore, all three plots reveal a steady and progressive increase in fitness as evolution progressed, indicating that the evolutionary search continuously moves into regions of the solution space where fitter individuals are located. The Hamming Code (7,4) Encoder and 16-to-4 Multiplexer problems discover individuals that solve more than 50% of the test cases from the initial generations, while the Seven Segment Display evolves individuals that solve 25% of the test cases on average.

#### **5.7 Advantages and disadvantages of proposed approach**

First, evolved circuit designs are quite interpretable compared to gate-level designs. This is due to the high level of abstraction at which these designs are performed, which focuses on evolving circuit behaviours as opposed to evolving gate-level designs; gate-level design approaches are challenging to scale to complex circuits [40]. The use of constructs such as if-else, switch-case, always procedural blocks and bitwise operators makes it easier for humans to understand the behaviour of a circuit and, if required, effect manual modifications. Second, verification of circuits is easier: because the circuit behaviour is easier to interpret, robust testbenches can be written.

Like any other methodology, however, this one has a few disadvantages. The use of an HDL requires the user to have technical knowledge of the HDL of choice, i.e. how to design grammars free of syntax errors and modelling errors. Syntax errors are easy to find and fix, as most simulators will report them at the functional simulation phase. Modelling errors, which arise when designed grammars do not adhere to the guidelines of the HDL of choice, are more challenging to fix, as they only become noticeable during the synthesis phase of the circuit design (the conversion of RTL or high-level designs to a gate-level representation). For example, the fully functional representative solutions for the 16-to-4 Multiplexer and Seven Segment Display shown in **Figures 13** and **14** respectively may not be directly synthesizable (depending on the synthesis tool), as their case statements and if conditions are not mutually exclusive. Some of these errors can be fixed by defining new rules, modifying existing rules, or using attribute grammars to impose the necessary constraints. The redundant logic can be removed in a number of ways: manually by a hardware designer, by designing a SystemVerilog/Verilog redundant-logic removal algorithm, or by using unique case statements (for switch-case constructs).

The choice of operators to use for evolving circuits is key, as inappropriate operators have been shown to increase circuit simulation time [26].

Furthermore, some circuit designs may contain redundant blocks of code, which impede interpretability, as observed in the representative best solutions for the 16-to-4 Multiplexer and Seven Segment Display shown in **Figures 13** and **14** respectively in the Appendix. Only 16 of the if conditions and case statements, respectively, are valid in those representative circuit designs.

#### **6. Conclusions and future directions**

We have presented ADDC, a system for the automated design of digital circuits. ADDC is the next logical step in the evolution of Electronic Design Automation, and this chapter has described how the history of integrated circuits has led to the confluence of GE, circuit simulators and HDLs. ADDC has been demonstrated on three difficult, real-world problems and was successful on all three of them, including one with 2<sup>20</sup> possible inputs.

The HDLs employed here are hugely powerful and expressive. Digital designers often operate at very high levels of abstraction using *IP blocks*, which is somewhat similar to using libraries when programming software. IP blocks are typically very powerful and can have complexities equivalent to tens of thousands of gates. Making some of these blocks available to ADDC will dramatically increase its scalability and doing so will simply involve expanding the grammar.

As the problems scale up, the number of test cases can become astronomical, as was the case in this chapter. While in this case we randomly sampled the training and test cases, it is also possible to use a more intelligent approach. Recent work [41] has investigated using clustering to select a representative set of test cases. This will permit us to operate at greater scales with confidence.

A circuit that functions correctly on a simulator is not guaranteed to be fit for purpose when rendered in silicon. This is because there are often other considerations, such as silicon area, power dissipation and delay. Future work will use multi-objective optimisation to include pressure on individuals to adhere to these constraints too.

While some of our automatically generated circuits have successfully been implemented in silicon on a Xilinx Artix-7 FPGA, e.g., an 8-to-1 multiplexer [25], ADDC does not yet include that step in its toolchain; to be fully automated it will need to include this.

#### **Acknowledgements**

The authors are supported by Research Grants 13/RC/2094 and 16/IA/4605 from the Science Foundation Ireland and by Lero, the Irish Software Engineering Research Centre (www.lero.ie). The third author is partially financed by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior—Brazil (CAPES), Finance Code 001, and Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro (FAPERJ).

#### **Conflict of interest**

The authors declare no conflict of interest.

#### **Appendix**


**Figure 13.** *16-to-4 multiplexer representative solution.*


#### **Figure 14.**

*Seven segment display representative solution.*

*ADDC: Automatic Design of Digital Circuit DOI: http://dx.doi.org/10.5772/intechopen.104410*

**Figure 15.** *Hamming code (7,4) encoder representative solution.*

#### **Author details**

Conor Ryan<sup>1</sup>*†, Michael Tetteh<sup>1</sup>†, Jack McEllin<sup>1</sup>†, Douglas Mota-Dias<sup>1,2</sup>†, Richard Conway<sup>1</sup>† and Enrique Naredo<sup>1</sup>†
1 University of Limerick, Limerick, Ireland

2 Rio de Janeiro State University, Rio de Janeiro, Brazil

\*Address all correspondence to: conor.ryan@ul.ie

† These authors contributed equally.

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **References**

[1] Conway T, Conway R, Mulvaney K, Mahony SO, Billon C, Khan MK, et al. Low power application specific processor for ISM band transceiver. In: IET Irish Signals and Systems Conference (ISSC 2010). Cork, Ireland: IET; 2010. pp. 272-277. Available from: https://digital-library.theiet.org/content/ conferences/10.1049/cp.2010.0525

[2] Barbe DF. VHSIC Systems and Technology. Computer. 1981;**14**(2):13-22

[3] Dewey A. VHSIC Hardware Description (VHDL) Development Program. In: 20th Design Automation Conference Proceedings. Miami Beach (FL): IEEE Press; 1983. pp. 625-628

[4] Dewey A, Gadient A. VHDL Motivation. IEEE Design Test of Computers. 1986;**3**(2):12-16

[5] IEEE Standard VHDL Language Reference Manual. IEEE Std 1076-1987; 1988

[6] IEEE Standard Hardware Description Language Based on the Verilog(R) Hardware Description Language. IEEE Std 1364-1995. 1996

[7] Ap. COMPANY NEWS; Cadence to Buy Gateway Design. The New York Times. 1989. Available from: https:// www.nytimes.com/1989/10/05/business/ company-news-cadence-to-buy-gatewaydesign.html [Accessed: January 10, 2022]

[8] Verilog Hardware Description Language Reference Manual (LRM). Version 1.0. Open Verilog International (OVI); 1991

[9] IEEE Standard for SystemVerilog: Unified Hardware Design, Specification and Verification Language. IEEE Std 1800-2005. 2005

[10] Section 8—XC157. In: The Integrated Circuit Data Book. Motorola Semiconductor Products Inc.; 1968. p. 3

[11] Freeman R. Xilinx Inc. Configurable Electrical Circuit Having Configurable Logic Elements and Configurable Interconnects. US4870302; 1989

[12] De la Guia Solaz M, Conway R. Razor based programmable truncated multiply and accumulate, energy-reduction for efficient digital signal processing. IEEE Transactions on Very Large Scale Integration (VLSI) Systems. 2014;**23**(1): 189-193

[13] Karnaugh M. The Map Method for Synthesis of Combinational Logic Circuits. Transactions of the American Institute of Electrical Engineers, Part I: Communication and Electronics. 1953; **72**(5):593-599

[14] McCluskey EJ. Minimization of Boolean Functions. The Bell System Technical Journal. 1956;**35**(6):1417-1444

[15] Brayton R, Hachtel G, Hemachandra L, Newton A, Sangiovanni-Vincentelli A. A Comparison of Logic Minimization Strategies Using ESPRESSO: An APL Program Package for Partitioned Logic Minimization. In: Proceedings of the International Symposium on Circuits and Systems. New York (NY): IEEE Press; 1982. pp. 42-48

[16] Hlavicka J, Fiser P. BOOMa—A heuristic boolean minimizer. In: IEEE/ ACM International Conference on Computer Aided Design. ICCAD 2001. IEEE/ACM Digest of Technical Papers (Cat. No.01CH37281). San Jose, CA, USA: IEEE; 2001. pp. 439-442

[17] Cullen J. Evolving Digital Circuits in an Industry Standard Hardware Description Language. In: Li X, Kirley M, Zhang M, Green D, Ciesielski V, Abbass H, et al., editors. Simulated Evolution and Learning. Berlin, Heidelberg: Springer; 2008. pp. 514-523

[18] Youssef A, Majeed B, Ryan C. Optimizing combinational logic circuits using Grammatical Evolution. In: 2021 3rd Novel Intelligent and Leading Emerging Sciences Conference (NILES); 2021. pp. 87-92

[19] Karpuzcu UR. Automatic verilog code generation through grammatical evolution. In: Proceedings of the 7th Annual Workshop on Genetic and Evolutionary Computation. GECCO '05. New York, NY, USA: Association for Computing Machinery; 2005. pp. 394-397. DOI: 10.1145/1102256.1102346

[20] Kratochvil O, Osmera P, Popelka O. Parallel grammatical evolution for circuit optimization. In: Ao SI, Douglas C, Grundfest WS, Burgstone J, editors. Proceedings of the World Congress on Engineering and Computer Science, WCECS '09. vol. II. International Association of Engineers. San Francisco, USA: Newswood Limited; 2009. pp. 1032–1037. Available from: http://www. iaeng.org/publication/WCECS2009/ WCECS2009\_pp1032-1037.pdf

[21] Vassilev VK, Miller JF. Scalability problems of digital circuit evolution: Evolvability and efficient designs. In: Proceedings. The Second NASA/DoD Workshop on Evolvable Hardware; 2000. pp. 55-64

[22] Murakawa M, Yoshizawa S, Kajitani I, Furuya T, Iwata M, Higuchi T. Hardware evolution at function level. In: Voigt HM, Ebeling W, Rechenberg I, Schwefel HP, editors. Parallel Problem Solving from Nature — PPSN IV. Springer: Berlin, Heidelberg; 1996. pp. 62-71

[23] Vassilev VK, Miller JF. Embedding landscape neutrality to build a bridge from the conventional to a more efficient three-bit multiplier circuit. In: Proceedings of Genetic and Evolutionary Computation Conference. Morgan Kaufmann; 2000

[24] Vasicek Z, Sekanina L. Evolutionary approach to approximate digital circuits design. IEEE Transactions on Evolutionary Computation. 2015;**19**(3): 432-444

[25] Ryan C, Tetteh M, Dias D. Behavioural Modelling of Digital Circuits in System Verilog using Grammatical Evolution. In: Proceedings of the 12th International Joint Conference on Computational Intelligence—ECTA,. INSTICC. SciTePress; 2020. pp. 28-39

[26] Tetteh MK, Mota Dias D, Ryan C. Evolution of complex combinational logic circuits using grammatical evolution with systemverilog. In: Hu T, Lourenço N, Medvet E, editors. Genetic Programming. Cham: Springer International Publishing; 2021. pp. 146-161

[27] Koza JR. Genetic Programming—On the programming of Computers by Means of Natural Selection. Complex Adaptive Systems. Cambridge (MA): MIT Press; 1992

[28] Eiben AE, Smith JE. From evolutionary computation to the evolution of things. Nature. 2015;**521** (7553):476-482. DOI: 10.1038/nature14544

[29] Ryan C, Collins JJ, O'Neill M. Grammatical evolution: Evolving programs for an arbitrary language. In: Banzhaf W, Poli R, Schoenauer M, Fogarty TC, editors. EuroGP. vol. 1391 of Lecture Notes in Computer Science. Berlin: Springer; 1998. pp. 83-96

[30] O'Neill M, Ryan C. Grammatical evolution. IEEE Trans Evolutionary Computation. 2001;**5**(4):349-358

[31] Patten JV, Ryan C. Attributed grammatical evolution using shared memory spaces and dynamically typed semantic function specification. In: Genetic Programming—18th European Conference, EuroGP 2015, Copenhagen, Denmark, April 8-10, 2015, Proceedings. 2015. pp. 105-112. DOI: 10.1007/978-3- 319-16501-1\_9

[32] Karim MR, Ryan C. On improving grammatical evolution performance in symbolic regression with attribute grammar. In: Genetic and Evolutionary Computation Conference, GECCO '14, Vancouver, BC, Canada, July 12-16, 2014, Companion Material Proceedings. 2014. pp. 139-140. DOI: 10.1145/ 2598394.2598488

[33] Karim MR, Ryan C. A new approach to solving 0-1 multiconstraint knapsack problems using attribute grammar with lookahead. In: Genetic Programming— 14th European Conference, EuroGP 2011, Torino, Italy, April 27-29, 2011. Proceedings. 2011. pp. 250-261

[34] Karim MR, Ryan C. Degeneracy reduction or duplicate elimination? an analysis on the performance of attributed grammatical evolution with lookahead to solve the multiple knapsack problem. In: Nature Inspired Cooperative Strategies for Optimization, NICSO 2011, Cluj-Napoca, Romania, 2011. Vol. 387. Berlin: Springer. Studies in computational intelligence; 2011. pp. 247-266. DOI: 10.1007/978-3-642- 24094-2\_18

[35] Ryan C, Azad R. Sensible initialisation in grammatical evolution. In: GECCO 2003: Proceedings Of The Bird Of A Feather Workshops, Genetic And Evolutionary Computation

Conference. Chigaco (IL): AAAI; 2003. pp. 142-145

[36] O'Neill M, Ryan C, Keijzer M, Cattolico M. Crossover in Grammatical Evolution. The Search Continues. Genetic Programming. 2001:337-347

[37] Miller J. Cartesian Genetic Programming. Cartesian Genetic Programming. 2011. pp. 17-34. DOI: 10.1007/978-3-642-17310-3\_2

[38] Kruse R, Borgelt C, Klawonn F, Moewes C, Steinbrecher M, Held P. Fundamental evolutionary algorithms. Computational Intelligence: A Methodological Introduction. London (UK): Springer London; 2013. pp. 227- 274. DOI: 10.1007/978-1-4471-5013-8\_13

[39] Fredivianus N, Prothmann H, Schmeck H. XCS revisited: A novel discovery component for the extended classifier system. Simulated Evolution and Learning. Berlin, Heidelberg: Springer Berlin Heidelberg. Lecture Notes in Computer Science. 2010;**6457**: 289-298. DOI: 10.1007/978-3-642-17298- 4\_30

[40] Zdenek, V. Bridging the Gap Between Evolvable Hardware and Industry Using Cartesian Genetic Programming. 2018. DOI: 10.1007/978- 3-319-67997-6\_2

[41] Ryan C, Kshirsagar M, Gupt K, Rosenbauer L, Sullivan J. Hierarchical clustering driven test case selection in digital circuits. In: Proceedings of the 16th International Conference on Software Technologies—ICSOFT,. INSTICC. SciTePress; 2021. pp. 589-596

#### **Chapter 5**

## Genetic Algorithms for Chemical Engineering Optimization Problems

*Thi Anh-Nga Nguyen and Tuan-Anh Nguyen*

#### **Abstract**

Chemical engineering processes are frequently composed of multiple complex phenomena. These systems can be represented by a set of several equations, which are referred to as the mathematical model of the process. Optimization in chemical engineering utilizes specialized techniques to determine the values of the decision variables at which the performance of the process, measured as the objective function(s), is minimum or maximum. The profitability of the process improves remarkably as a result of this selection. This benefit has encouraged the broad application of optimization to important industrial challenges. However, for many problems in chemical engineering it is hard to find the optimum using gradient-based algorithms, for example when the objective functions of the processes are multimodal, discontinuous, or implicit. Genetic algorithms (GAs) are a kind of metaheuristic search optimization method inspired by nature, specifically the mechanics of natural evolution and genetics. Genetic algorithms have received significant attention due to their remarkable advantages over classical algorithms: compared with traditional optimization approaches, GAs are straightforward, robust, and capable of handling non-differentiable, discontinuous, or multimodal problems. The purpose of this chapter is to present several case studies using genetic algorithms in chemical engineering optimization problems.

**Keywords:** optimization, genetic algorithm, chemical engineering, modeling

#### **1. Introduction**

Chemical engineering processes are frequently composed of multiple complex phenomena. These systems can be represented by a set of several equations, such as **z** = **f**(**d**; **x**; **p**), where **z** is the vector state of the system. The system of equations is referred to as the mathematical model of the process. The performance of the process is predicted by the model from the assigned data of several "input" variables, **d** and **x**, and a group of parameters, **p**. Among the input variables, some, referred to as **x**, can be changed and are known as design variables, while others, referred to as **d**, are predetermined. The performance of the process can be evaluated through a set of output variables, **y** = **g**(**z**), referred to as the function of state of the system. Optimization in chemical engineering utilizes specialized techniques to determine the values of the decision variables, **x**, at which the performance of the process, measured as the objective function(s), *I*(**x**), is minimum or maximum. The profitability of the process improves remarkably as a result of this selection of input/operating/decision variables. This benefit has encouraged the broad application of optimization to important industrial challenges. However, for many problems in chemical engineering processes, it is hard to find the optimum using gradient-based algorithms, for example, when the objective functions of the processes are multimodal, discontinuous, or implicit. Genetic algorithms (GAs) are a class of metaheuristic search methods inspired by nature, specifically the mechanics of natural evolution and genetics [1]. Genetic algorithms have received significant attention due to their remarkable advantages over classical algorithms. Compared with traditional optimization approaches, GAs are straightforward, robust, and capable of handling nondifferentiable, discontinuous, or multimodal problems.
GAs have been effectively employed in a wide range of various engineering, manufacturing, and management applications [2]. The purpose of this chapter is to give several case studies using genetic algorithms in chemical engineering optimization problems. The case studies include the optimization of an autothermal ammonia synthesis reactor, a separation module using membrane technology, and data-driven modeling optimizations of solid oxide fuel cells.

#### **2. Optimization of an autothermal ammonia synthesis reactor**

In the chemical process industries, ammonia is one of the most widely manufactured inorganic compounds [3]. The majority of ammonia produced commercially is consumed in fertilizers, with the rest going into plastics, synthetic fibers and resins, pharmaceuticals, explosives, papers, and refrigeration [4]. As a result, the modeling and optimization of the ammonia synthesis process have received significant attention from both academia and industry. Ammonia is produced predominantly from nitrogen and hydrogen in a catalytic process using a promoted iron catalyst, first established by Haber and Bosch, according to the reaction [4]:

$$\text{N}\_2 + 3\text{H}\_2 \rightleftharpoons 2\text{NH}\_3 \tag{1}$$

The reaction is reversible and exothermic, releasing a significant amount of heat. In order to achieve a high conversion, the heat of the reaction should be removed. Therefore, the process is typically carried out in an autothermal synthesis reactor, in which the heat of reaction is utilized to preheat the feed gas and maintain a suitable temperature inside. The production of ammonia depends on several factors such as the reactor length, the operating pressure, the temperatures of the feed and reacted gas, and the flow rate and composition of the gas mixture. The optimization problem of the process is to maximize the economic return. Many studies discussing the modeling, simulation, and optimization of an autothermal ammonia synthesis reactor can be found in the literature, for example, Babu et al. [5], Babu and Angira [6], Carvalho et al. [7], Edgar et al. [8], Ksasy et al. [9], Murase et al. [10], Upreti and Deb [11], and Yusup et al. [12]. However, the model discussed in the studies of Edgar et al. [8] and Murase et al. [10] contains some minor errors, which were corrected in Upreti and Deb [11]. Moreover, these studies primarily focus on optimizing the reactor length for a specific reactor top temperature, usually 694 K [6, 7, 12], or for a limited set of temperatures [9, 11]. However, as reported in some studies [11, 12], the

*Genetic Algorithms for Chemical Engineering Optimization Problems DOI: http://dx.doi.org/10.5772/intechopen.104884*

economic return is determined by the top temperature (the temperature of the feed gas entering the reaction zone) as well as the reactor length. As a result, rather than a single-variable problem of reactor length, the optimization problem should be viewed as a multivariable problem.

In the study [13], both the reactor length and the reactor top temperature are considered as design variables for maximizing the profit return of the process. In order to solve the multivariable optimization problem, the cyclic coordinate search technique was employed. This method alters the value of one decision variable at a time, and for each coordinate direction, the golden section search is utilized to solve the single-variable optimum problem. However, this traditional search approach is prone to getting caught in local optima. Therefore, the genetic algorithm has a higher chance of obtaining the global optimum profit of the process.

#### **2.1 Problem formulation of ammonia synthesis reactor**

The system discussed here is an autothermal synthesis reactor, which is described in [10] and contains the correction of the objective function reported in [6, 11]. The feed gas contains 21.75 mole% nitrogen, 65.25 mole% hydrogen, 5.0 mole% ammonia, 4.0 mole% methane, and 4.0 mole% argon. In an autothermal reactor, the feed gas mixture enters from the bottom of the reactor, flows upward, enters the catalyst zone from the top, and moves downward. In the catalyst zone, the reaction takes place at around 500°C and 200 atm of pressure. The heat generated by the reaction is utilized to preheat the feed gas mixture in counter current flow. **Figure 1** shows the schematic diagram of an autothermal ammonia synthesis reactor. The considered factors affecting the synthesis process are the temperature of feed gas at the entrance of the reaction zone (top temperature) and the reactor length. The goal of the optimal design is to determine the conditions that will give the highest economic return from the reactor operation.

#### *2.1.1 Objective function*

The objective function for maximization (*F*) is the return of the process, calculated as the value of the product gas (heating value and ammonia value) minus the cost of the feed gas (as a source of heat only) and the amortization of reactor capital expenses. Other operating costs are not considered [11].

**Figure 1.** *Schematic diagram of an autothermal ammonia reactor [10].*

$$F = 1.3356 \times 10^7 - 1.708 \times 10^4 N\_{N\_2} + 704.09 \left( T\_g - T\_0 \right) - 699.27 \left( T\_f - T\_0 \right) - \left( 3.4566 \times 10^7 + 1.9837 \times 10^9 L \right)^{\frac{1}{2}} \tag{2}$$

in which:

*T*<sub>f</sub> is the temperature of the feed gas.

*T*<sub>g</sub> is the temperature of the reacting gas (the gas in the catalyst zone).

*T*<sub>0</sub> is the top temperature, i.e., the temperature at the inlet of the catalyst zone.

*N*<sub>N2</sub> is the flow rate of nitrogen.

*L* is the length of the reactor.

#### *2.1.2 Equality constraints*

The heat balances for the feed gas and the reacting gas and the mass balance for the nitrogen flow along the catalyst zone, respectively, give the mathematical model of the system:

$$\frac{dT\_f}{dx} = -\frac{US\_1}{WC\_{pf}} \left( T\_g - T\_f \right) \tag{3}$$

$$\frac{dT\_g}{dx} = -\frac{US\_1}{WC\_{pg}} \left( T\_g - T\_f \right) + \frac{(-\Delta H)S\_2}{WC\_{pg}} \left( -\frac{dN\_{N\_2}}{dx} \right) \tag{4}$$

$$\frac{dN\_{N\_2}}{dx} = -f\left(k\_1 \frac{p\_{N\_2} p\_{H\_2}^{1.5}}{p\_{NH\_3}} - k\_2 \frac{p\_{NH\_3}}{p\_{H\_2}^{1.5}}\right) \tag{5}$$

in which

$$k\_1 = 1.78954 \times 10^4 \exp\left(\frac{-20800}{RT\_\text{g}}\right) \tag{6}$$

$$k\_2 = 2.5714 \times 10^{16} \exp\left(\frac{-47400}{RT\_{\rm g}}\right) \tag{7}$$

$$p\_{H\_2} = 3p\_{N\_2} \tag{8}$$

$$p\_{N\_2} = \frac{286N\_{N\_2}}{2.598N\_{N\_2}^0 + 2N\_{N\_2}} \tag{9}$$

$$p\_{NH\_3} = \frac{286\left(2.23N\_{N\_2}^0 - 2N\_{N\_2}\right)}{2.598N\_{N\_2}^0 + 2N\_{N\_2}} \tag{10}$$

The differential equations are valid in the interval [0, *L*], in which L is the reactor length (or the length of the catalyst zone).

The notations *T*<sub>f</sub><sup>0</sup>, *T*<sub>g</sub><sup>0</sup>, and *N*<sub>N2</sub><sup>0</sup> denote the initial values at *x* = 0 (at the inlet of the catalyst zone) for *T*<sub>f</sub>, *T*<sub>g</sub>, and *N*<sub>N2</sub>, respectively, and are given by

$$T\_f^0 = T\_g^0 = T\_0, \ N\_{N\_2}^0 = 701.2 \,\text{kmol/m}^2 \tag{11}$$

Other notations of the system are summarized in **Table 1**.

#### *2.1.3 Box constraints*

The variables are subjected to the following physical constraints, as is typical in industries [10]:

$$0 < L \le 10, \quad 400 \le T\_f \le 800, \quad 0 \le N\_{N\_2} \le 3220 \tag{12}$$

The length of the reactor and the top temperature are chosen as the design variables. The remaining variables (*T*<sub>f</sub>, *T*<sub>g</sub>, and *N*<sub>N2</sub>) can be calculated from the model through the three differential Eqs. (3)–(5). Then, the objective function is determined by Eq. (2). Due to the temperature constraints, the variable *T*<sub>0</sub> is also set to be within 400 and 800.

The optimal design problem is summarized as follows:

$$\begin{aligned} \text{maximize } F &= F(L, T\_0) \\ \text{subject to } &\begin{cases} \frac{dT\_f}{dx} = -\frac{US\_1}{WC\_{pf}} \left( T\_g - T\_f \right) \\ \frac{dT\_g}{dx} = -\frac{US\_1}{WC\_{pg}} \left( T\_g - T\_f \right) + \frac{(-\Delta H)S\_2}{WC\_{pg}} \left( -\frac{dN\_{N\_2}}{dx} \right) \\ \frac{dN\_{N\_2}}{dx} = -f \left( k\_1 \frac{p\_{N\_2} p\_{H\_2}^{1.5}}{p\_{NH\_3}} - k\_2 \frac{p\_{NH\_3}}{p\_{H\_2}^{1.5}} \right) \\ T\_f^0 = T\_g^0 = T\_0, \ N\_{N\_2}^0 = 701.2 \\ 400 \le T\_f \le 800, \quad 0 \le N\_{N\_2} \le 3220 \\ 0 < L \le 10, \ 400 \le T\_0 \le 800 \end{cases} \end{aligned} \tag{13}$$


**Table 1.** *Notation of the synthesis system.*

#### **2.2 Optimization strategy**

The system of ordinary differential Eqs. (3)–(5) with initial conditions (11) was solved by the Runge–Kutta fourth-order method. The system is well defined once the top temperature (*T*<sub>0</sub>) and the interval of integration (the reactor length *L*) are assigned. The economic return can then be evaluated and treated as a two-variable function of the top temperature and the reactor length. The genetic algorithm is employed to find the optimal solution. In order to handle the constraints, a penalty or barrier is applied to the objective function whenever any of the variable limits is violated. The proposed strategy is detailed as follows.
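The integration step can be sketched as follows. This is a generic fourth-order Runge–Kutta routine, not the authors' code; for the reactor problem, the right-hand sides of Eqs. (3)–(5) and the parameter values of Table 1 would replace the toy test function dy/dx = y used in the sanity check.

```python
import math

def rk4(f, y0, x0, x_end, n_steps):
    """Classical fourth-order Runge-Kutta for a vector ODE dy/dx = f(x, y)."""
    h = (x_end - x0) / n_steps
    x, y = x0, list(y0)
    for _ in range(n_steps):
        k1 = f(x, y)
        k2 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(x + h, [yi + h * ki for yi, ki in zip(y, k3)])
        # Weighted average of the four slope estimates.
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        x += h
    return y

# Sanity check on dy/dx = y, y(0) = 1, whose exact solution at x = 1 is e.
approx = rk4(lambda x, y: [y[0]], [1.0], 0.0, 1.0, 100)[0]
```

For the reactor, the state vector would be (*T*<sub>f</sub>, *T*<sub>g</sub>, *N*<sub>N2</sub>) integrated over [0, *L*].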

#### *2.2.1 Genetic algorithm for optimization problem*

The ranges of the design variables are 0 < *L* ≤ 10 and 400 ≤ *T*<sub>0</sub> ≤ 800. The GA parameters, namely the population size, crossover probability, and mutation probability, were set to 50, 1.0, and 0.30, respectively. The selection method was roulette wheel selection with elitism. The number of generations was set to 500.
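A compact real-coded GA with these settings (roulette-wheel selection with elitism, crossover probability 1.0, mutation probability 0.30) can be sketched as follows. This is not the authors' implementation: the smooth surrogate objective standing in for the economic return *F* is purely hypothetical (in the actual problem every fitness evaluation integrates the reactor model), and the generation count is reduced for brevity.

```python
import random

random.seed(1)

BOUNDS = [(0.0, 10.0), (400.0, 800.0)]  # (L, T0) ranges from the text

def fitness(ind):
    # Hypothetical smooth surrogate standing in for the economic return F.
    L, T0 = ind
    return 100.0 - (L - 6.8) ** 2 - 0.01 * (T0 - 707.0) ** 2

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def roulette(pop, weights):
    # Roulette-wheel (fitness-proportionate) selection.
    pick = random.uniform(0.0, sum(weights))
    acc = 0.0
    for ind, w in zip(pop, weights):
        acc += w
        if acc >= pick:
            return ind
    return pop[-1]

def crossover(a, b):
    # Arithmetic crossover, applied with probability 1.0 as in the text.
    w = random.random()
    return [w * x + (1.0 - w) * y for x, y in zip(a, b)]

def mutate(ind, rate=0.30):
    # Each gene is resampled within its bounds with the given probability.
    return [random.uniform(lo, hi) if random.random() < rate else x
            for x, (lo, hi) in zip(ind, BOUNDS)]

def ga(pop_size=50, generations=100):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        weights = [max(fitness(ind), 0.0) for ind in pop]  # non-negative weights
        elite = max(pop, key=fitness)                      # elitism
        nxt = [elite]
        while len(nxt) < pop_size:
            child = crossover(roulette(pop, weights), roulette(pop, weights))
            nxt.append(mutate(child))
        pop = nxt
    return max(pop, key=fitness)

best = ga()
```

With elitism, the best fitness never decreases from one generation to the next, which is why the fitness curve in problems of this kind typically flattens after a few tens of generations.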

#### *2.2.2 Barrier method for constrained optimization*

In barrier or penalty methods, the objective function receives an undesirable value whenever one of the constraints is violated. Therefore, the solution is kept in the feasible region. The objective function has been modified as

$$F = \begin{cases} F \text{ if } 400 \le T\_f \le 800, & 0 \le N\_{N\_2} \le 3220 \\ 0 \text{ otherwise} \end{cases} \tag{14}$$

#### *2.2.3 Parameters*

The parameters were obtained from the literature [7, 10] and summarized in **Table 2**.


**Table 2.** *Model parameters [7].*

**Figure 2.** *Fitness value versus generations.*


**Table 3.** *Maximization results.*

#### **2.3 Optimization results**

**Figure 2** shows the fitness value as a function of generation. As can be observed, the fitness function value reached its highest level after roughly 20 generations and then remained unchanged. After 100 generations, it was found that the reactor length should be 6.772 m and the top temperature should be 707.09 K. The process produces a profit of 5.018 × 10<sup>6</sup> \$ per year. The other parameters of the process are summarized in **Table 3** and compared with the findings of a cyclic coordinate search [13]. The profit value is slightly higher than those reported in the literature, which focused solely on reactor length optimization. The results indicate that the temperature at the entrance of the catalyst zone should be slightly higher, and the reactor length slightly longer, than previously reported.

#### **3. Economic optimization of membrane module for ultrafiltration of protein solution**

The behavior of permeate flux has a significant impact on the performance of cross-flow ultrafiltration. Many factors cause flux decline, such as solution properties, membrane properties, and operating conditions. The majority of current research has centered on increasing membrane performance in terms of permeability and selectivity [14, 15]. Just a few studies have paid attention to the configuration and operation of the membrane module [16].

Various factors determine the choice of membrane module geometry for a given application, including the fabrication method, power consumption, and fouling potential [17]. Manufacturers frequently recommend the membrane module design from the fabrication standpoint [17]. There is virtually no evidence that their approach prioritizes energy efficiency. Currently, with growing energy concerns and falling membrane costs, membrane module design should place higher attention on energy efficiency. As a result, it is necessary to propose a module design methodology that takes the energy factor into account.

Furthermore, membrane operating conditions are usually decided by user experience, a handbook, or a manual from the membrane supplier. However, the permeate flux equation governing the performance of the membrane system varies greatly between different situations. In this regard, for any specific application, a general methodology for the design and operating conditions should be studied.

In cross-flow ultrafiltration of protein solution, Nguyen et al. [18] proposed a simple combined model, which simultaneously considers pore blockage and cake filtration, to describe the flux decline. Then, in the study [19], the correlation between the steady-state permeate flux and the operation parameters was reported. From the steady-state operation equation, optimal design and operation conditions for each particular application could be established.

However, just a few reports on the optimization of membrane processes and cost estimation have been published, or the cost estimation is too general. For example, Wiley et al. [17] optimized the membrane module configurations for brackish water desalination; however, the operation mode was single-pass, and only the membrane cost and energy cost were taken into account. Sethi and Wiesner [20] developed a cost model for the removal of natural organic matter, but that study did not conduct an optimization. In membrane technology, the feed-and-bleed operation mode, which combines the batch and single-pass configurations, is commonly utilized for continuous full-scale filtration [21, 22]. Therefore, the optimization of a membrane module operated in feed-and-bleed mode for protein ultrafiltration is considered here. The membrane geometry dimensions and operating conditions are the design variables of the problem. The system is represented by a set of ordinary differential equations. The objective function is the annual cost, which consists of various types of capital investments and an operating expense. The capital investments are classified into several categories, each individually correlated to plant scale, particularly the membrane area. The operating expense is the power consumption.

#### **3.1 Process configuration and model calculation**

#### *3.1.1 Membrane plant configuration*

The configuration of filtration system is continuous feed and bleed, which is shown schematically in **Figure 3**. The notations are summarized in **Table 4**. There are two main pumps in this operation: the feed pump provides the necessary transmembrane pressure, while the recirculation pump maintains the cross-flow rate


**Figure 3.** *Schematic configuration of feed-and-bleed mode membrane system.*


#### **Table 4.**

*Summary of system configuration notations.*

through the modules. The concentrate is continually withdrawn from the system at a flow rate (*R*).

#### *3.1.2 Modeling of membrane modules*

The material balances for total mass and protein give:

$$\begin{aligned} F &= R + P \\ F\phi\_0 &= R\phi\_{\rm f} + P\phi\_{\rm p} \\ F\phi\_0 + Q\phi\_{\rm f} &= (F + Q)\phi\_{\rm i} \end{aligned} \tag{15}$$

The viscosity and density of protein solution correlate to its concentration [23]:

$$\begin{aligned} \mu\_{\rm m} &= 8.94 \times 10^{-4} \exp\left(13.5482 \phi\_{\rm m}\right) \\\\ \rho\_{\rm m} &= 1000(1 - \phi\_{\rm m}) + 1360 \phi\_{\rm m} \end{aligned} \tag{16}$$

in which *μ*<sub>m</sub> and *ϕ*<sub>m</sub> are the viscosity and concentration of the mixture (solution), respectively. *ρ*<sub>m</sub> is the density of the protein solution, 1000 kg/m<sup>3</sup> is the density of water, and 1360 kg/m<sup>3</sup> is the density of dry protein powder.

The permeate flux through the membrane is given by [19]:

$$\frac{J}{\mu} = 3.66 \times 10^{-7} \left(\frac{P}{\rho\_{\text{m}} u^{2}}\right)^{0.27} \left(\frac{\rho\_{\text{m}} u d\_{\text{h}}}{\mu\_{\text{m}}}\right)^{0.52} \tag{17}$$

in which *P* is the trans-membrane pressure, *u* is the cross-flow velocity, and *d*<sub>h</sub> is the hydraulic diameter of the flow channel.

The equation for permeate flux can be rewritten as:

$$J = 5.124 \times 10^{-9} \left(\frac{P}{\rho\_{\rm m} u^2}\right)^{0.27} \left(\frac{\rho\_{\rm m} u d\_{\rm h}}{\mu\_{\rm m}}\right)^{0.52} \left(\frac{u}{d\_{\rm h}}\right) \tag{18}$$

or in terms of the shear rate *γ̇* = 6*u*/*h* = 12*u*/*d*<sub>h</sub>, where *h* is the channel height:

$$J = 4.27 \times 10^{-10} \left(\frac{P}{\rho\_{\rm m} u^2}\right)^{0.27} \left(\frac{\rho\_{\rm m} u d\_{\rm h}}{\mu\_{\rm m}}\right)^{0.52} \dot{\gamma} \tag{19}$$

The drop in flow rate/velocity and the change in channel length are calculated from the total mass balance and the component balance within a control volume *dz* along the channel. Protein is assumed to be fully rejected by the membrane:

$$d(\text{Flow} \times \phi\_{\text{m}}) = 0 \tag{20}$$

$$d(\text{Flow}) = -J \times d(wz) = -Jw \, dz \tag{21}$$

in which *J* is the permeate flux given by Eq. (18).

The pressure loss is estimated by the Darcy–Weisbach equation [24, 25]:

$$dP = -4f\rho\_{\rm m}\frac{dz}{d\_{\rm h}}\frac{u^2}{2} \tag{22}$$

in which *f* is the friction factor.

The set of ordinary differential equations that describes the membrane module system was established as follows [26]:


$$\begin{cases} \frac{d(\text{Flow})}{d\phi\_{\rm m}} = -\frac{\text{Flow}}{\phi\_{\rm m}} \\ \frac{dz}{d\phi\_{\rm m}} = \frac{\text{Flow}}{\phi\_{\rm m} w J} \\ \frac{dP}{d\phi\_{\rm m}} = -4f\rho\_{\rm m} \frac{1}{d\_{\rm h}} \frac{u^2}{2} \frac{\text{Flow}}{\phi\_{\rm m} w J} \\ \mu\_{\rm m} = 8.94 \times 10^{-4} \exp\left(13.5482\phi\_{\rm m}\right) \\ \rho\_{\rm m} = 1000(1 - \phi\_{\rm m}) + 1360\phi\_{\rm m} \\ J = 5.124 \times 10^{-9} \left(\frac{P}{\rho\_{\rm m} u^2}\right)^{0.27} \left(\frac{\rho\_{\rm m} u d\_{\rm h}}{\mu\_{\rm m}}\right)^{0.52} \left(\frac{u}{d\_{\rm h}}\right) \\ u = \frac{\text{Flow}}{h \times w} \\ f = \frac{24}{\text{Re}} \quad \text{if } \text{Re} < 2000 \\ f = \frac{0.079}{\text{Re}^{0.25}} \ \text{if } \text{Re} > 2000 \end{cases} \tag{23}$$

in the range of concentration [*ϕ*i, *ϕ*f] and the initial condition

$$\begin{cases} \text{Flow}(\phi\_{\text{i}}) = F + Q \\ z(\phi\_{\text{i}}) = 0 \\ P(\phi\_{\text{i}}) = P\_{\text{i}} \end{cases} \tag{24}$$

The system of ordinary differential equations can be solved numerically by the Runge–Kutta fourth-order method [27] to obtain the flow rate, the channel length, and the pressure. From the solution, the two key quantities determining the total cost, the membrane area and the total pumping energy, were calculated:

$$A\_{\text{membrane}} = w \times L \tag{25}$$

$$\begin{split} E\_{\text{pumps}} &= E\_{\text{P}} + E\_{\text{Q}} \\ &= F \times P\_{i} + Q \times \Delta P\_{\text{drop}} \end{split} \tag{26}$$

In this equation,

*E*P, *E*<sup>Q</sup> are the power supplied by the feed pump and recirculation pump, respectively.

*P*<sub>F</sub>, *P*<sub>i</sub>, and *P*<sub>o</sub> are the pressures at the outlet of the feed pump and at the inlet and outlet of the membrane module, respectively.

Δ*P*drop is the pressure drop in the membrane module.

#### *3.1.3 Cost estimation*

#### *3.1.3.1 Operating cost*

The operating cost consists of the power consumption of the pumps and the membrane replacement. The annual energy expense of the pumps is calculated as

$$C\_{\text{energy}} = \frac{E\_{\text{pumps}}}{\eta} \times 8000 \times \frac{3600}{1000} \times \text{electricity price} \qquad \left[ \\$/\text{year} \right] \tag{27}$$

*E*<sub>pumps</sub> is from Eq. (26). The system is assumed to work 8000 hr./year (24 hr./day and 333 days/year). *η* is the efficiency of the pumps, which is set to 0.7. The electricity price is assumed to be 0.08 \$/kWh, which is the price for the industrial sector in the United States [28].

The membrane replacement cost is calculated as

$$C\_{\text{membrane}} = A\_{\text{membrane}} \times c\_{\text{membrane}} \times \left(\frac{A}{P}\right) \qquad \left[ \\$/\text{year} \right] \tag{28}$$

where *C*<sub>membrane</sub> [\$/year] is the membrane replacement cost calculated per year,

*c*<sub>membrane</sub> [\$/m<sup>2</sup>] is the membrane price per unit area,

*A*/*P* is the amortization factor, which represents the time value of money [29] and is calculated as a function of the interest rate *i* and the membrane life.

The membrane price is usually about 200 \$/m<sup>2</sup> [9], and the membrane life is 12–18 months. Therefore, the membrane replacement cost per year is roughly estimated as 200 \$/m<sup>2</sup>/year at an interest rate of *i* = 8%.

#### *3.1.3.2 Capital cost*

It is widely observed that capital costs are correlated to the size in the power-law form [30]:

$$\mathbf{Cost} = k(\text{size})^n \tag{29}$$

In order to achieve higher accuracy, rather than simply correlating the whole capital cost of the membrane plant to its capacity, Sethi and Wiesner [20] divided the capital investment into several major categories, each of which was correlated to size independently. The major categories include pumps and other manufactured equipment.

#### a. Pump capital cost

The pumps capital cost can be estimated as (Perry et al. [31]):

$$C^\*\_{\text{pump}} = I \times f\_1 \times f\_2 \times C\_L \times 81.27 \times \left(Q \times P\right)^{0.4} \tag{30}$$

in which:

*I*: a cost index ratio for updating the cost to the recent year.

*f*<sub>1</sub>: an adjustment factor for the pump construction material.

*f*<sub>2</sub>: an adjustment factor for the suction pressure range.

*C*<sub>L</sub>: a factor used to incorporate labor costs.

*Q*: flow capacity of the pump [m<sup>3</sup>/h].

*P*: outlet pressure of the pump [kPa].

The cost index, *I* in Eq. (30), can be taken as the chemical engineering (CE) index ratio to update the cost. It can be obtained from [32], and the value of 2.4 is used. *C*<sub>L</sub> = 1.4 with the assumption that 40% of the cost is required to install the equipment [33]. The factors *f*<sub>1</sub> and *f*<sub>2</sub> can be found in [31]. *f*<sub>1</sub> = 1.5 when the material is stainless steel. *f*<sub>2</sub> = 1.0 when the pump pressure is below 10 bar (1 MPa).

The pump size (*Q* × *P* in Eq. (30)) of the two pumps (feed and recirculation pumps) is

$$\text{pump size} = (F + Q) \times P\_{\text{i}} \tag{31}$$
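With the factor values quoted above, Eq. (30) reduces to simple arithmetic, as the following sketch illustrates; the example flow and pressure values are hypothetical.

```python
def pump_capital_cost(flow_m3_per_h, outlet_pressure_kpa,
                      cost_index=2.4, f1=1.5, f2=1.0, labor_factor=1.4):
    """Pump capital cost per Eq. (30): C* = I * f1 * f2 * C_L * 81.27 * (Q*P)^0.4."""
    # Defaults follow the text: CE index ratio 2.4, stainless steel (f1 = 1.5),
    # suction pressure below 10 bar (f2 = 1.0), 40% installation labor (C_L = 1.4).
    return (cost_index * f1 * f2 * labor_factor * 81.27
            * (flow_m3_per_h * outlet_pressure_kpa) ** 0.4)

# Hypothetical example: a 10 m^3/h pump with a 300 kPa outlet pressure.
cost = pump_capital_cost(10.0, 300.0)
```

Because the exponent 0.4 is below one, doubling the pump size raises the cost by far less than a factor of two, which is the economy-of-scale behavior of Eq. (29).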

#### b. Capital cost of other equipment

In membrane applications, the membrane area is the key parameter that determines the plant capacity [34]. Thus, the membrane area is chosen as the basis for estimating the various components of the capital costs.

Non-membrane equipment and facilities, excluding the pumps, were grouped into four main categories: (1) pipes and valves; (2) instruments and controls; (3) tanks and frames; and (4) miscellaneous. The capital cost of each is correlated to the membrane area as follows (Sethi and Wiesner [20]):

1. Pipes and valves

$$C^\*\_{\text{PV}} = 6000 (A\_{\text{membrane}})^{0.42} \qquad [\\$] \tag{32}$$

2. Instruments and controls

$$C^\*\_{\text{IC}} = 1500 (A\_{\text{membrane}})^{0.66} \qquad [\\$] \tag{33}$$

3. Tanks and frames

$$C^\*\_{\text{TF}} = 3100 (A\_{\text{membrane}})^{0.53} \qquad [\\$] \tag{34}$$

4. Miscellaneous

$$C^\*\_{\text{MI}} = 8000 (A\_{\text{membrane}})^{0.57} \qquad [\\$] \tag{35}$$

#### c. Annual capital cost

The capital cost can be annualized using the amortization factor as

$$C\_{\text{capital cost}} = C^\*\_{\text{capital}} \times \left(\frac{A}{P}\right) \qquad \left[ \\$/\text{year} \right] \tag{36}$$

For a plant design life of 20 years and an interest rate of 8%, the amortization factor is about 0.1.
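The amortization factor quoted here follows the standard capital-recovery formula, A/P = i(1 + i)^n / ((1 + i)^n − 1); a quick check reproduces the value of about 0.1 for 20 years at 8% interest.

```python
def amortization_factor(interest_rate, n_years):
    """Capital-recovery factor A/P = i(1+i)^n / ((1+i)^n - 1)."""
    growth = (1.0 + interest_rate) ** n_years
    return interest_rate * growth / (growth - 1.0)

# 20-year plant life at 8% interest, as in the text.
ap = amortization_factor(0.08, 20)
```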

#### **3.2 Formulizations of the problem**

#### *3.2.1 Fix parameters and design variables*

In the problem, some variables, called input variables, are fixed due to the requirement of the design. In membrane design, these are feed flow *F*, inlet concentration *ϕ*0, and outlet concentration *ϕ*f. The protein is assumed to be entirely rejected by the membrane (*ϕ*<sup>p</sup> = 0).

The design variables were: the channel geometry (width × length × height), the inlet pressure (*P*<sub>i</sub>), and the recirculation flow rate (*Q*).

#### *3.2.2 Objective function*

The objective function is the sum of capital cost and operating cost, which were annualized:

$$\begin{aligned} \text{minimize } f(\mathbf{x}) &= \text{annual total cost} = \text{annual capital cost} + \text{annual operating cost} \\ &= C\_{\text{energy}} + C\_{\text{membrane}} + C\_{\text{capital (pumps + other)}} \end{aligned} \tag{37}$$

#### *3.2.3 Constraints*

The pressure at the outlet point should be positive. This constraint is satisfied by assigning a high value to the objective function if the outlet pressure is negative.

The decision variables are frequently limited to a finite range:

$$\mathbf{x}\_l \le \mathbf{x} \le \mathbf{x}\_u \tag{38}$$


in which **x** is the vector of decision variables, **x** = (*P*<sub>i</sub>, *Q*, *w*, *h*), and the subscripts *l* and *u* indicate the lower and upper bounds.


**Table 5.** *System parameters and variables.*

The system parameters and variables are summarized in **Table 5**.

#### *3.2.4 Optimization by genetic algorithm*

The GA parameters, namely the population size, crossover probability, and mutation probability, were set to 100, 1.0, and 0.30, respectively. The selection was roulette wheel with elitism, which means the fittest individual is guaranteed a place in the next generation. The number of generations was set to 500. Because the problem is to minimize the cost, the fitness function was defined as:

$$\text{fitness} = \begin{cases} 0 & \text{if } f(\mathbf{x}) > 5 \times 10^5 \\ 5 \times 10^5 - f(\mathbf{x}) & \text{otherwise} \end{cases} \tag{39}$$
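The minimization-to-maximization transform of Eq. (39) can be written directly; the cost values used in the example below are hypothetical.

```python
COST_CAP = 5.0e5  # cap from Eq. (39), in $/year

def fitness_from_cost(annual_cost):
    """Convert the cost to be minimized into a non-negative fitness, Eq. (39)."""
    return 0.0 if annual_cost > COST_CAP else COST_CAP - annual_cost

# A cheaper design gets a higher fitness; anything above the cap gets zero.
f_cheap = fitness_from_cost(1.0e5)
f_expensive = fitness_from_cost(6.0e5)
```

Clipping at zero keeps the fitness values usable as roulette-wheel selection weights, which must be non-negative.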

#### **3.3 Optimum design**

For the demonstration of this method, optimum designs for several feed flow rates have been carried out. The lower limit of the membrane width is 0.1 m, and the lower limit of the module height is 0.5 mm. The designs are shown in **Table 6**.


#### **Table 6.**

*Optimum designs of membrane module.*

**Figure 4.** *Optimum total cost per unit feed flow rate as a function of plant capacity.*

**Figure 4** presents the optimum total cost per unit of feed flow. The cost per unit of feed flow decreases with increasing plant capacity, reflecting economies of scale.

The results also suggest that the membrane module dimensions and operating conditions will change greatly depending on the process requirements, such as the required feed capacity, and it is challenging to predict the direction of these changes in advance. It might be concluded that the permeate flux also greatly affects the geometric design and operation strategy in membrane separation processes. It is difficult to find a general rule for the design; for each specific system, the correlation between the permeate flux, the operating conditions, and the membrane geometry should be investigated.

#### **4. Modeling and optimization of the BSCF-based single-chamber solid oxide fuel cell by artificial neural network and genetic algorithm**

Fuel cells, a highly effective and green technology for converting the chemical energy stored in fuel into usable power, are currently regarded as one of the most promising approaches to future energy requirements [35]. The solid oxide fuel cell (SOFC) has demonstrated an exceptional combination of advantages, such as high efficiency, fuel flexibility, wide contamination tolerance, and low pollution [36, 37]. Modeling and simulation are valuable tools for determining the impact of various design factors and operating conditions on cell performance, as well as for improving fuel cells [38–40]. Plenty of models have been reported that add to the understanding of fuel cells. Modeling approaches can be categorized into two types: theoretical and empirical ones [39, 41, 42]. In the theoretical approach, the spatial dimensions of the models range from simple zero-dimensional (0-D) [43, 44] and one-dimensional (1-D) [42, 45–47] to more complicated two-dimensional (2-D) [48–51] and three-dimensional (3-D) [52–54] models, all with various characteristics and directed at different objectives. The mathematical models, which are based on conservation principles, require a lot of data on the parameters and properties of the fuel cell, as well as complicated equations and time-consuming calculations.

An empirical or data-driven approach may be more feasible for fuel cell users, since the behavior can be quickly and simply deduced from experimental data alone, without a comprehensive understanding of the internal components [39, 41]. The least squares support vector machine (LS-SVM) [55] and the Hammerstein model [56, 57] are examples of these approaches. Within this class, the artificial neural network (ANN) shows several advantages, including high nonlinearity, rapid computation, and a low degree of error in matching experimental data. Using ANNs to model SOFCs appears to be a very promising method.

In this section, an ANN was used to model the performance of the BSCF/GDC-based cathode SOFC. The cell voltage was predicted from the cathode sintering temperature, the cell operating temperature, and the cell current. Several network architectures were examined to find the best structure, and the network was trained using backpropagation methods. The data for training, validation, and testing were taken from our study [58]. The genetic algorithm and the developed ANN were then used to find the best conditions for achieving maximum power.

#### **4.1 Artificial neural network models**

*Genetic Algorithms for Chemical Engineering Optimization Problems DOI: http://dx.doi.org/10.5772/intechopen.104884*

Artificial neural networks (ANNs), which are analogous to biological nervous systems, consist of interconnected nodes known as neurons that receive and transfer data [59]. The most basic form, the feed-forward architecture, is made up of an input layer, one or more hidden layers, and an output layer. The input and output layers have the same numbers of neurons as the numbers of inputs and outputs of the system to be modeled. Weighted connections link each neuron to every neuron in the next layer. In any layer except the input layer, the input to a neuron is the weighted sum of the data from the previous layer. The neuron then transforms this input with an activation function and transfers the response to all neurons in the next layer. The size of the hidden layer is a significant factor affecting the estimation precision, because it can make the network underfit or overfit [60]. The number of neurons in the hidden layer is generally determined through trials. **Figure 5** illustrates a 3-5-1 feed-forward artificial neural network with operating temperature, sintering temperature, and current as inputs.

The activation function employed in this model is the logistic sigmoid, *f*(*x*) = 1/(1 + *e*<sup>−*x*</sup>).
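The feed-forward pass described above can be sketched in a few lines. This is only an illustrative sketch: the 3-5-1 layout matches **Figure 5**, but the weights and input values here are random placeholders, not the trained network of [58].

```python
import math
import random

def sigmoid(x):
    # logistic activation: f(x) = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, b_hidden, w_out, b_out):
    # hidden layer: weighted sum of the inputs, passed through the sigmoid
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    # output layer: weighted sum of the hidden activations
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)

# random illustrative weights for a 3-5-1 network
random.seed(0)
w_hidden = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(5)]
b_hidden = [random.uniform(-1.0, 1.0) for _ in range(5)]
w_out = [random.uniform(-1.0, 1.0) for _ in range(5)]
b_out = random.uniform(-1.0, 1.0)

# inputs: normalized operating temperature, sintering temperature, current
v = forward([0.5, 0.5, 0.5], w_hidden, b_hidden, w_out, b_out)
print(v)  # a single predicted value in (0, 1)
```

Because the output neuron also uses the logistic sigmoid, the prediction is always in (0, 1), which is why the targets are normalized before training.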

The input data (*x*<sub>i</sub>) are scaled to normalized values (*x*<sub>norm</sub>) to enhance the performance of the network:

$$x\_{\text{norm}} = 0.8\,\frac{x - x\_{\min}}{x\_{\max} - x\_{\min}} + 0.1 \tag{40}$$

in which *x*<sub>max</sub> and *x*<sub>min</sub> are the upper and lower bounds of the experimental data.
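Eq. (40) maps every input onto the interval [0.1, 0.9], keeping values away from the flat tails of the sigmoid. A minimal sketch:

```python
def normalize(x, x_min, x_max):
    # min-max scaling into [0.1, 0.9], as in Eq. (40)
    return 0.8 * (x - x_min) / (x_max - x_min) + 0.1

# e.g. a sintering temperature of 1025 degC within the 1000-1050 degC range
print(normalize(1025.0, 1000.0, 1050.0))  # → 0.5
```

The bounds map to 0.1 and 0.9, and the midpoint of the range maps to 0.5.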

To assess the performance of the ANN, the mean squared error (MSE) and coefficient of determination (R<sup>2</sup>) are usually used [61].
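The two metrics can be computed directly from their standard definitions; the sample values below are made-up numbers for illustration only.

```python
def mse(y_true, y_pred):
    # mean squared error between measured and predicted values
    n = len(y_true)
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n

def r_squared(y_true, y_pred):
    # coefficient of determination: 1 - SS_res / SS_tot
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# illustrative cell voltages (V): measured vs. predicted
print(mse([0.70, 0.65, 0.60], [0.71, 0.64, 0.60]))
print(r_squared([0.70, 0.65, 0.60], [0.71, 0.64, 0.60]))
```

A perfect fit gives MSE = 0 and R<sup>2</sup> = 1; training aims to drive MSE down while R<sup>2</sup> approaches 1 on the validation set.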

Various factors affect the performance of fuel cells, such as the cathode and anode structure, electrolyte material and thickness, cell temperature, and inlet and outlet gas compositions. Two important factors, the cathode sintering temperature and the cell operating temperature, were considered in this model. The sintering temperature ranges from 1000 to 1050°C, whereas the operating temperature ranges from 625 to 700°C. The sintering temperature affects the structure of the obtained cathode, as reported in [58]. The explanation of the ranges for the investigated parameters can be found in [62].

An ANN with one input layer, one hidden layer, and a single output layer was proposed. The current density, sintering temperature of the cathode, and cell operating temperature are the inputs. The back-propagation algorithm [63] was used to train the network. The maximum number of iterations and the minimum performance gradient were set to 400 and 10<sup>−5</sup>, respectively, to stop the training. The proper network structure was determined through a series of trial tests. The data were split into three subsets at random: training, validation, and test, containing 70, 20, and 10% of the

**Figure 5.** *Artificial neural network (3-5-1) structure.*

total samples, respectively. The validation and test sets are necessary for validating the networks and evaluating their predictive power.
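A random 70/20/10 split of the samples can be sketched as follows; the fixed seed and the dummy samples are assumptions for reproducibility of the illustration.

```python
import random

def split_data(samples, seed=0):
    """Randomly split samples into 70% training, 20% validation, 10% test."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)  # shuffle indices, not the data itself
    n = len(samples)
    n_train, n_val = int(0.7 * n), int(0.2 * n)
    train = [samples[i] for i in idx[:n_train]]
    val = [samples[i] for i in idx[n_train:n_train + n_val]]
    test = [samples[i] for i in idx[n_train + n_val:]]
    return train, val, test

train, val, test = split_data(list(range(100)))
print(len(train), len(val), len(test))  # → 70 20 10
```

Shuffling indices rather than the data keeps input-output pairs aligned when the same index split is applied to both arrays.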

The parameters of the neural network were saved and utilized in the next stage to optimize the power density using genetic algorithms.

#### **4.2 Optimization by genetic algorithm**

The objective function is the power density of the fuel cell:

$$P = I \times V \tag{41}$$

where *P* is the power density (mW·cm<sup>−2</sup>), *I* is the current density (mA·cm<sup>−2</sup>) and the input to the ANN model, and *V* is the cell voltage (V), which is calculated from the model.

The design variables and their corresponding ranges are summarized as follows:


The GA parameters, population size and mutation probability, were set to 100 and 0.10, respectively. The survival of individuals was decided by roulette-wheel selection with elitism. The number of generations was 500.
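The loop below sketches this GA setup (population of 100, mutation probability 0.10, roulette-wheel selection with elitism). Two pieces are assumptions for illustration: `voltage` is a hypothetical smooth surrogate standing in for the trained ANN, and the current-density bound of 0–1000 mA·cm<sup>−2</sup> is not stated in the text; the temperature bounds follow Section 4.1.

```python
import random

# Hypothetical stand-in for the trained ANN voltage model: a smooth
# surrogate with one interior optimum, NOT the network fitted in [58].
def voltage(sint_temp, op_temp, current):
    return (1.1 - 0.0008 * current
            - 1e-5 * (sint_temp - 1005.0) ** 2
            - 2e-5 * (op_temp - 668.0) ** 2)

def power(ind):
    # fitness: power density P = I x V, Eq. (41)
    sint_temp, op_temp, current = ind
    return current * voltage(sint_temp, op_temp, current)

# variable bounds; the current-density range is an assumption
BOUNDS = [(1000.0, 1050.0), (625.0, 700.0), (0.0, 1000.0)]

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def roulette(pop, fits):
    # roulette wheel: select proportionally to (non-negative) fitness
    pick = random.uniform(0.0, sum(fits))
    acc = 0.0
    for ind, f in zip(pop, fits):
        acc += f
        if acc >= pick:
            return ind
    return pop[-1]

def crossover(a, b):
    # arithmetic (blend) crossover keeps genes inside the bounds
    alpha = random.random()
    return [alpha * x + (1.0 - alpha) * y for x, y in zip(a, b)]

def mutate(ind, p=0.10):
    # resample each gene within its bounds with probability p
    return [random.uniform(lo, hi) if random.random() < p else g
            for g, (lo, hi) in zip(ind, BOUNDS)]

random.seed(1)
pop = [random_individual() for _ in range(100)]
for _ in range(100):  # the chapter runs 500 generations
    fits = [power(ind) for ind in pop]
    elite = max(pop, key=power)  # elitism: the best individual survives
    pop = [elite] + [mutate(crossover(roulette(pop, fits),
                                      roulette(pop, fits)))
                     for _ in range(99)]

best = max(pop, key=power)
print(best, power(best))
```

With elitism the best fitness is non-decreasing across generations, which is consistent with the plateau seen in **Figure 6**.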

#### **4.3 Optimization results**

**Figure 6** depicts the fitness values (maximum and mean) of the population versus generation. As indicated in the figure, after about 20 generations, the fitness function reached a maximum value and then remained unchanged. After 100

**Figure 6.** *The fitness values versus generation.*

generations, the maximum fuel cell power density of 451.64 mW/cm<sup>2</sup> could be achieved at a sintering temperature of 1005°C, an operating temperature of 668°C, and a current density of 777 mA/cm<sup>2</sup>.

### **5. Conclusions**

The application of the genetic algorithm to chemical engineering processes has been illustrated by three case studies. The results suggest that the optimum conditions for complex chemical problems can be obtained easily using a genetic algorithm. The success of the genetic algorithm on the challenging problems reported herein, the development of many faster and more flexible versions of GA, and the continuing improvement of computing power all point to the growing impact of metaheuristic methods in chemical engineering systems.

### **Author details**

Thi Anh-Nga Nguyen<sup>1</sup> and Tuan-Anh Nguyen<sup>2</sup> \*

1 Faculty of Applied Sciences, Ton Duc Thang University, Ho Chi Minh City, Vietnam

2 Faculty of Chemical Engineering, Ho Chi Minh City University of Technology, VNU-HCM, Vietnam

\*Address all correspondence to: anh.nguyen@hcmut.edu.vn

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### **References**

[1] Goldberg DE. Genetic Algorithms in Search, Optimization, and Machine Learning. Boston, MA, United States: Addison-Wesley; 1989

[2] Deb K. Introduction to genetic algorithms for engineering optimization. In: Onwubolu GC, Babu BV, editors. New Optimization Techniques in Engineering. Berlin, Heidelberg: Springer Berlin Heidelberg; 2004. pp. 13-51

[3] Nielsen A, Aika K, Christiansen LJ, Dybkjaer I, Hansen JB, Nielsen PEH, et al. Ammonia: Catalysis and Manufacture. Berlin Heidelberg: Springer; 1995

[4] Appl M. Ammonia: Principles & Industrial Practice. Weinheim, Germany: Wiley; 1999

[5] Babu B, Angira R, Nilekar A. Optimal design of an auto-thermal ammonia synthesis reactor using differential evolution. In: Proceedings of the Eighth World Multi-Conference on Systemics, Cybernetics and Informatics (SCI-2004). Orlando, Florida, USA: International Institute of Informatics and Systemics; 2004

[6] Babu BV, Angira R. Optimal design of an auto-thermal ammonia synthesis reactor. Computers & Chemical Engineering. 2005;**29**(5):1041-1045. DOI: 10.1016/j.compchemeng.2004.11.010

[7] Carvalho EP, Borges C, Andrade D, Yuan JY, Ravagnani MASS. Modeling and optimization of an ammonia reactor using a penalty-like method. Applied Mathematics and Computation. 2014;**237**:330-339. DOI: 10.1016/j.amc.2014.03.099

[8] Edgar TF, Himmelblau DM, Lasdon LS. Optimization of Chemical Processes. New York, NY, USA: McGraw-Hill; 2001

[9] Ksasy M, Areed F, Saraya S, Khalik MA. Optimal reactor length of an auto-thermal ammonia synthesis reactor. IJECS: International Journal of Electrical and Computer Sciences. 2010;**10**(3):6-11

[10] Murase A, Roberts HL, Converse AO. Optimal thermal design of an autothermal ammonia synthesis reactor. Industrial & Engineering Chemistry Process Design and Development. 1970;**9**(4):503-513. DOI: 10.1021/i260036a003

[11] Upreti SR, Deb K. Optimal design of an ammonia synthesis reactor using genetic algorithms. Computers & Chemical Engineering. 1997;**21**(1):87-92. DOI: 10.1016/0098-1354(95)00251-0

[12] Yusup S, Zabiri H, Yusoff N, Yew YC. Modeling and optimization of ammonia reactor using shooting methods. In: Proceedings of the 5th WSEAS International Conference on Data Networks, Communications and Computers. Bucharest, Romania: World Scientific and Engineering Academy and Society (WSEAS); 2006. pp. 258-268

[13] Nguyen TAN, Nguyen TA, Vu TD, Nguyen KT, Dao TKT, Huynh KPH. Optimization of an auto-thermal ammonia synthesis reactor using cyclic coordinate method. IOP Conference Series: Materials Science and Engineering. 2017;**206**:012059. DOI: 10.1088/1757-899x/206/1/012059

[14] Mores PL, Arias AM, Scenna NJ, Caballero JA, Mussati SF, Mussati MC.

Membrane-based processes: Optimization of hydrogen separation by minimization of power, membrane area, and cost. Processes. 2018;**6**(11):221. DOI: 10.3390/pr6110221

[15] Ramírez-Santos ÁA, Bozorg M, Addis B, Piccialli V, Castel C, Favre E. Optimization of multistage membrane gas separation processes. Example of application to CO2 capture from blast furnace gas. Journal of Membrane Science. 2018;**566**:346-366. DOI: 10.1016/j.memsci.2018.08.024

[16] Ohs B, Lohaus J, Wessling M. Optimization of membrane based nitrogen removal from natural gas. Journal of Membrane Science. 2016;**498**:291-301. DOI: 10.1016/j.memsci.2015.10.007

[17] Wiley DE, Fell CJD, Fane AG. Optimisation of membrane module design for brackish water desalination. Desalination. 1985;**52**(3):249-265. DOI: 10.1016/0011-9164(85)80036-9

[18] Nguyen T-A, Yoshikawa S, Karasu K, Ookawara S. A simple combination model for filtrate flux in cross-flow ultrafiltration of protein suspension. Journal of Membrane Science. 2012;**403-404**:84-93. DOI: 10.1016/j.memsci.2012.02.026

[19] Nguyen T-A, Yoshikawa S, Ookawara S. Steady state permeate flux estimation in cross-flow ultrafiltration of protein solution. Separation Science and Technology. 2014;**49**(10):1469-1478. DOI: 10.1080/01496395.2014.893533

[20] Sethi S, Wiesner MR. Performance and cost modeling of ultrafiltration. Journal of Environmental Engineering. 1995;**121**(12):874-883. DOI: 10.1061/(ASCE)0733-9372(1995)121:12(874)

[21] Cheryan M. Ultrafiltration and Microfiltration Handbook. Boca Raton, FL, USA: Taylor & Francis; 1998

[22] Zeman LJ, Zydney AL. Microfiltration and Ultrafiltration: Principles and Applications. New York, NY, USA: CRC Press; 1996

[23] Karasu K. A Study on Permeation Phenomena in Cross-Flow Ultrafiltration Producing a Compressible Cake Layer. Tokyo, Japan: Tokyo Institute of Technology; 2010

[24] White FM. Fluid Mechanics. 8th ed. New York, NY, USA: McGraw-Hill Education; 2016

[25] Fox RW, McDonald AT, Mitchell JW. Fox and McDonald's Introduction to Fluid Mechanics. Hoboken, NJ, USA: Wiley; 2020

[26] Nguyen T-A, Yoshikawa S. Modeling and economic optimization of the membrane module for ultrafiltration of protein solution using a genetic algorithm. Processes. 2020;**8**(1):4. DOI: 10.3390/pr8010004

[27] Dorfman KD, Daoutidis P. Numerical Methods with Chemical Engineering Applications. Cambridge, United Kingdom: Cambridge University Press; 2017

[28] U.S. Energy Information Administration. Average Retail Price of Electricity to Ultimate Customers Total End-Use Sector. U.S., USA: Department of Energy; 2019

[29] Park CS. Fundamentals of Engineering Economics. London, United Kingdom: Pearson Education; 2013

[30] Turton R, Bailie RC, Whiting WB, Shaeiwitz JA. Analysis, Synthesis and Design of Chemical Processes. London, United Kingdom: Pearson Education; 2008

[31] Perry RH, Perry RH, Chilton CH, Perry JH. Chemical Engineers' Handbook. 5th ed. New York, NY, USA: McGraw-Hill; 1973

[32] Green DW, Perry RH. Perry's Chemical Engineers' Handbook. Eighth ed. New York, NY, USA: McGraw-Hill Education; 2007

[33] Holland FA, Wilkinson JK. Process economics. In: Perry RH, Green DW, editors. Perry's Chemical Engineers' Handbook. 7th ed. New York, NY, USA: McGraw-Hill; 1997

[34] Mir L, Michaels SL, Goel V, Kaiser R. Crossflow microfiltration: Applications, design, and cost. In: Ho WSW, Sirkar KK, editors. Membrane Handbook. New York, NY, USA: Springer; 1992. pp. 571-594

[35] Wachsman ED, Marlowe CA, Lee KT. Role of solid oxide fuel cells in a balanced energy strategy. Energy & Environmental Science. 2012;**5**(2): 5498-5509. DOI: 10.1039/C1EE02445K

[36] Ghezel-Ayagh H, Borglum BP. (Invited) Review of progress in solid oxide fuel cell at FuelCell Energy. ECS Transactions. 2017;**80**(9):47-56. DOI: 10.1149/08009.0047ecst

[37] Minh N, Mizusaki J, Singhal SC. Advances in solid oxide fuel cells: Review of progress through three decades of the international symposia on solid oxide fuel cells. ECS Transactions. 2017;**78**(1):63-73. DOI: 10.1149/07801.0063ecst

[38] Kakaç S, Pramuanjaroenkij A, Zhou XY. A review of numerical modeling of solid oxide fuel cells. International Journal of Hydrogen Energy. 2007;**32**(7):761-786. DOI: 10.1016/j.ijhydene.2006.11.028

[39] Hajimolana SA, Hussain MA, Daud WMAW, Soroush M, Shamiri A. Mathematical modeling of solid oxide fuel cells: A review. Renewable and Sustainable Energy Reviews. 2011;**15**(4):1893-1917. DOI: 10.1016/j.rser.2010.12.011

[40] Ma L, Ingham DB, Pourkashanian M, Carcadea E. Review of the computational fluid dynamics modeling of fuel cells. Journal of Fuel Cell Science and Technology. 2005;**2**(4): 246-257. DOI: 10.1115/1.2039958

[41] Wang K, Hissel D, Péra MC, Steiner N, Marra D, Sorrentino M, et al. A review on solid oxide fuel cell models. International Journal of Hydrogen Energy. 2011;**36**(12):7212-7228. DOI: 10.1016/j.ijhydene.2011.03.051

[42] Karcz M. From 0D to 1D modeling of tubular solid oxide fuel cell. Energy Conversion and Management. 2009;**50**(9):2307-2315. DOI: 10.1016/j.enconman.2009.05.007

[43] Costamagna P, Magistri L, Massardo AF. Design and part-load performance of a hybrid system based on a solid oxide fuel cell reactor and a micro gas turbine. Journal of Power Sources. 2001;**96**(2):352-368. DOI: 10.1016/S0378-7753(00)00668-6

[44] Zabihian F, Fung AS. Macro-level modeling of solid oxide fuel cells, approaches, and assumptions revisited. Journal of Renewable and Sustainable Energy. 2017;**9**(5):054301. DOI: 10.1063/1.5006909

[45] Ota T, Koyama M, Wen C-j, Yamada K, Takahashi H. Object-based modeling of SOFC system: Dynamic behavior of micro-tube SOFC. Journal of

Power Sources. 2003;**118**(1):430-439. DOI: 10.1016/S0378-7753(03)00109-5

[46] Li P-W, Suzuki K. Numerical modeling and performance study of a tubular SOFC. Journal of The Electrochemical Society. 2004;**151**(4): A548. DOI: 10.1149/1.1647569

[47] Bove R, Lunghi P, Sammes NM. SOFC mathematic model for systems simulations—Part 2: Definition of an analytical model. International Journal of Hydrogen Energy. 2005;**30**(2):189-200. DOI: 10.1016/j.ijhydene.2004.04.018

[48] Ma R, Gao F, Breaz E, Huangfu Y, Briois P. Multidimensional reversible solid oxide fuel cell modeling for embedded applications. IEEE Transactions on Energy Conversion. 2018;**33**(2):692-701. DOI: 10.1109/TEC.2017.2762962

[49] Ni M. 2D thermal-fluid modeling and parametric analysis of a planar solid oxide fuel cell. Energy Conversion and Management. 2010;**51**(4):714-721. DOI: 10.1016/j.enconman.2009.10.028

[50] Geisler H, Dierickx S, Weber A, Ivers-Tiffee E. A 2D stationary FEM model for hydrocarbon fuelled SOFC stack layers. ECS Transactions. 2015;**68**(1):2151-2158. DOI: 10.1149/06801.2151ecst

[51] Luo XJ, Fong KF. Development of 2D dynamic model for hydrogen-fed and methane-fed solid oxide fuel cells. Journal of Power Sources. 2016;**328**:91-104. DOI: 10.1016/j.jpowsour.2016.08.005

[52] Nikooyeh K, Jeje AA, Hill JM. 3D modeling of anode-supported planar SOFC with internal reforming of methane. Journal of Power Sources. 2007;**171**(2):601-609. DOI: 10.1016/j.jpowsour.2007.07.003

[53] Andersson M, Paradis H, Yuan J, Sundén B. Three dimensional modeling of an solid oxide fuel cell coupling charge transfer phenomena with transport processes and heat generation. Electrochimica Acta. 2013;**109**:881-893. DOI: 10.1016/j.electacta.2013.08.018

[54] Yang C, Yang G, Yue D, Yuan J, Sunden B. Computational fluid dynamics model development on transport phenomena coupling with reactions in intermediate temperature solid oxide fuel cells. Journal of Renewable and Sustainable Energy. 2013;**5**(2):021420. DOI: 10.1063/1.4798789

[55] Huo H-B, Zhu X-J, Cao G-Y. Nonlinear modeling of a SOFC stack based on a least squares support vector machine. Journal of Power Sources. 2006;**162**(2):1220-1225. DOI: 10.1016/j.jpowsour.2006.07.031

[56] Huo H-B, Zhong Z-D, Zhu X-J, Tu H-Y. Nonlinear dynamic modeling for a SOFC stack by using a Hammerstein model. Journal of Power Sources. 2008;**175**(1):441-446. DOI: 10.1016/j.jpowsour.2007.09.059

[57] Jurado F. A method for the identification of solid oxide fuel cells using a Hammerstein model. Journal of Power Sources. 2006;**154**(1):145-152. DOI: 10.1016/j.jpowsour.2005.04.005

[58] Le M-V, Tsai D-S, Nguyen T-A. BSCF/GDC as a refined cathode to the single-chamber solid oxide fuel cell based on a LAMOX electrolyte. Ceramics International. 2018;**44**(2):1726-1730. DOI: 10.1016/j.ceramint.2017.10.103

[59] Baughman DR, Liu YA. Neural Networks in Bioprocessing and Chemical Engineering. Amsterdam, Netherlands: Elsevier Science; 1995

[60] Sheela KG, Deepa SN. Review on methods to fix number of hidden neurons in neural networks. Mathematical Problems in Engineering. 2013;**2013**:425740. DOI: 10.1155/2013/425740

[61] Himmelblau DM. Accounts of experiences in the application of artificial neural networks in chemical engineering. Industrial & Engineering Chemistry Research. 2008;**47**(16): 5782-5796. DOI: 10.1021/ie800076s

[62] Le M-V, Nguyen T-A, Nguyen TA-N. Modeling and optimization of the BSCF-based single-chamber solid oxide fuel cell by artificial neural network and genetic algorithm. Journal of Chemistry. 2019;**2019**:7828019. DOI: 10.1155/2019/7828019

[63] Wythoff BJ. Backpropagation neural networks: A tutorial. Chemometrics and Intelligent Laboratory Systems. 1993;**18**(2):115-155. DOI: 10.1016/0169-7439(93)80052-J

