**2. Materials and methods**

#### **2.1 Regression modeling and response surface methodology (RSM)**

The design optimization problem handled in this study is solved in three steps: *i*) design the experiments and perform the experimental runs, *ii*) perform regression modeling to determine the mathematical relations between the responses and the factors, and *iii*) perform optimization to determine the optimum factor levels. The goal of this paper is to calculate the optimum levels of magnet thickness (X1), offset (X2), and embrace (X3) that maximize the efficiency and minimize the rated torque, armature current density, and armature thermal load, while keeping the air-gap flux density at 1.0 Tesla. Regression models can contain linear, quadratic, and interaction terms; a full quadratic model contains all three simultaneously. Eq. (1) gives the general form of the full quadratic model [16–18].

$$Y\_i = \beta\_0 + \sum\_{k=1}^{m} \beta\_k X\_{ki} + \sum\_{k=1}^{m} \beta\_{kk} X\_{ki}^2 + \sum\_{k<l} \beta\_{kl} X\_{ki} X\_{li} + e\_i \tag{1}$$

$$\boldsymbol{\beta}^{\mathrm{T}} = [\beta\_0, \beta\_1, \dots, \beta\_m, \beta\_{11}, \dots, \beta\_{mm}, \beta\_{12}, \dots, \beta\_{(m-1)m}] \tag{2}$$

The response value for the *i*th experimental run is represented by *Y<sub>i</sub>*. In this study, five regression equations, one for each of the five responses, will be calculated in the next section. The *X<sub>ki</sub>* and *X<sub>ki</sub>*<sup>2</sup> terms are the linear and quadratic terms, respectively, while the *X<sub>ki</sub>X<sub>li</sub>* terms represent the interactions (X1X2, X1X3, X2X3). Finally, *e<sub>i</sub>* is the residual error. The vector given in Eq. (2) contains the coefficients of the model in Eq. (1), which are calculated as shown below [16–18]:

$$\boldsymbol{\beta} = \left(\mathbf{X}^T\mathbf{X}\right)^{-1}\left(\mathbf{X}^T\mathbf{Y}\right) \tag{3}$$

**Y** is the response vector, a column vector; in this study, its values are obtained from Maxwell simulations. **X** is a matrix made up of the various combinations of the design parameters in the experimental design. The first column of **X** consists of ones, corresponding to the model's constant term (β<sub>0</sub>). The second, third, and fourth columns contain the factor values X1, X2, and X3, respectively. The experimental design in this study comprises 14 runs, so these three columns have 14 rows and match the experimental design exactly. The squares of X1, X2, and X3 make up the 5th, 6th, and 7th columns of the **X** matrix, respectively. The interactions are handled the same way: multiplying the corresponding columns of X1, X2, and X3 pairwise gives the 8th, 9th, and 10th columns of the **X** matrix.
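As a concrete illustration, the construction of the 14 × 10 matrix **X** and the least-squares solution of Eq. (3) can be sketched as follows. The factor settings and responses below are random placeholders, not the chapter's actual design or simulation data:

```python
import numpy as np

# Hypothetical coded factor settings for a 14-run design with three
# factors (magnet thickness X1, offset X2, embrace X3); illustrative only.
rng = np.random.default_rng(0)
factors = rng.uniform(-1.0, 1.0, size=(14, 3))

def full_quadratic_matrix(F):
    """Build the 14x10 design matrix described in the text:
    [1, X1, X2, X3, X1^2, X2^2, X3^2, X1X2, X1X3, X2X3]."""
    X1, X2, X3 = F[:, 0], F[:, 1], F[:, 2]
    return np.column_stack([
        np.ones(len(F)),        # constant term (beta_0)
        X1, X2, X3,             # linear terms
        X1**2, X2**2, X3**2,    # quadratic terms
        X1*X2, X1*X3, X2*X3,    # interaction terms
    ])

X = full_quadratic_matrix(factors)
Y = rng.normal(size=(14, 1))    # stand-in for the simulated responses

# Least-squares coefficients, Eq. (3): beta = (X^T X)^{-1} X^T Y
beta = np.linalg.solve(X.T @ X, X.T @ Y)
print(beta.shape)  # (10, 1): one coefficient per column of X
```

Solving the normal equations directly mirrors Eq. (3); in practice `np.linalg.lstsq` is numerically preferable when **X** is poorly conditioned.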

After mathematical modeling, *R*<sup>2</sup> (the coefficient of determination) is calculated to determine whether the factors are sufficient to describe the change in the response. In other words, *R*<sup>2</sup>, presented in Eq. (4), quantifies how much of the variation in the response the regression model explains.

$$R^2 = \frac{\beta^T X^T Y - n\bar{Y}^2}{Y^T Y - n\bar{Y}^2} \tag{4}$$

To use the models established in Eqs. (1)–(3) in the optimization phase, *R*<sup>2</sup> must be close to 1 (i.e., 100%). This means that the factors of the mathematical models are sufficient to explain the variation in Y, and that no new factors need to be added to the regression model. The significance of the models must be determined in the final step before optimization, which is done using analysis of variance (ANOVA). ANOVA uses the F-test to test the significance of a regression model. In this study, we used the p-value technique (the p-value of each model is calculated using the Minitab statistical analysis program). When the p-value is less than alpha (the type-I error), the model is considered significant. We set the confidence level at 95% in the statistical analysis, which corresponds to a type-I error of 0.05 (5%).
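Eq. (4) can be checked numerically against the more familiar 1 − SS<sub>res</sub>/SS<sub>tot</sub> form of *R*<sup>2</sup>. The data below are synthetic placeholders fitted with a simple linear model, purely to exercise the formula:

```python
import numpy as np

# Synthetic data: 14 runs, 3 factors, known coefficients plus small noise
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(14), rng.uniform(-1, 1, size=(14, 3))])
Y = X @ np.array([2.0, 1.0, -0.5, 0.3]) + rng.normal(scale=0.1, size=14)
beta = np.linalg.solve(X.T @ X, X.T @ Y)

n, Ybar = len(Y), Y.mean()
# Eq. (4): R^2 = (beta^T X^T Y - n*Ybar^2) / (Y^T Y - n*Ybar^2)
r2 = (beta @ X.T @ Y - n * Ybar**2) / (Y @ Y - n * Ybar**2)

# Cross-check against the usual 1 - SS_res / SS_tot form
ss_res = np.sum((Y - X @ beta) ** 2)
ss_tot = np.sum((Y - Ybar) ** 2)
assert abs(r2 - (1 - ss_res / ss_tot)) < 1e-10
print(round(float(r2), 3))
```

The two forms agree because, for a least-squares fit, SS<sub>res</sub> = **Y**<sup>T</sup>**Y** − **β**<sup>T</sup>**X**<sup>T</sup>**Y**.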

In the second phase, the optimization algorithms are run on these five regression models to calculate the optimum factor levels. In this study, four different optimization methods (RSM, GA, PSO, and MSGO) are applied to this problem. RSM is a gradient-based deterministic optimization method, whereas GA, PSO, and MSGO are meta-heuristics. Meta-heuristic algorithms can be classified into groups such as evolutionary, swarm-based, and human-based.

Since its introduction in 1951, RSM has been a commonly preferred design of experiments (DOE) approach for modeling and optimizing processes with a small number of experimental runs [16–18]. In this study, RSM is applied using Minitab's Response Optimizer module, which uses a gradient search algorithm in the background.
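To illustrate the kind of gradient search such a response optimizer performs, the sketch below runs projected gradient ascent on a made-up fitted quadratic surface in coded units. The surface, step size, and bounds are illustrative assumptions, not the chapter's fitted model:

```python
import numpy as np

# Illustrative fitted quadratic response in coded units (not the
# chapter's actual model); its maximum sits at (0.2, -0.1, 0.5).
def y(x):
    return 10 - (x[0] - 0.2)**2 - 2*(x[1] + 0.1)**2 - 0.5*(x[2] - 0.5)**2

def grad(x, h=1e-6):
    # Central-difference gradient, standing in for the analytic one
    g = np.zeros_like(x)
    for k in range(len(x)):
        e = np.zeros_like(x)
        e[k] = h
        g[k] = (y(x + e) - y(x - e)) / (2 * h)
    return g

x = np.zeros(3)                  # start at the design centre
for _ in range(500):             # gradient ascent to maximize the response
    x = np.clip(x + 0.05 * grad(x), -1, 1)   # stay inside coded bounds

print(np.round(x, 3))  # approaches the stationary point (0.2, -0.1, 0.5)
```

Because the fitted model is quadratic, the stationary point could also be found in closed form by setting the gradient to zero; the iterative search shown here generalizes to constrained, multi-response desirability optimization.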

#### **2.2 Genetic algorithm (GA)**

Meta-heuristic algorithms are stochastic optimization methods that are heavily inspired by nature. In 1975, Holland created GA, a search and optimization technique [18]. It replicates the evolutionary process in nature using natural selection and genetic concepts. It operates based on probability laws and requires only the objective function. GA investigates the solution space partially, resulting in a more efficient search in a shorter amount of time.

*Design Optimization of 18-Poled High-Speed Permanent Magnet Synchronous Generator DOI: http://dx.doi.org/10.5772/intechopen.106987*

Chromosomes are created in the initial phase of GA to explore potential solutions. The chromosome set represents the generation's population. Selection, crossover, and mutation are the three GA operators; they drive the evolution of the chromosomes in one generation toward the next. GA has many applications, including scheduling, vehicle routing, and transportation, and it is an evolutionary-based algorithm [19, 20]. According to Haupt and Haupt [21], continuous GA is faster than binary GA because the chromosomes do not need to be decoded before the cost function is calculated. Consequently, continuous GA was used in this study instead of binary GA, since it also has the advantage of requiring less storage. The crossover method used in this study is the one in which Haupt and Haupt [19] combine an extrapolation method with a crossover method.
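A continuous crossover of this kind can be sketched roughly as follows. The single-gene blend with an extrapolation range `beta_max > 1` is an illustrative assumption in the spirit of Haupt & Haupt's method, not a verbatim reproduction of their scheme:

```python
import random

def blend_crossover(mom, dad, beta_max=1.5):
    """Continuous-GA crossover sketch: pick one crossover gene, blend it
    with a random factor beta (beta_max > 1 allows extrapolation beyond
    the two parents), then swap the tail genes between the offspring."""
    a = random.randrange(len(mom))            # crossover point
    beta = random.uniform(0, beta_max)
    c1, c2 = list(mom), list(dad)
    c1[a] = mom[a] - beta * (mom[a] - dad[a])  # blended gene, child 1
    c2[a] = dad[a] + beta * (mom[a] - dad[a])  # blended gene, child 2
    c1[a+1:], c2[a+1:] = dad[a+1:], mom[a+1:]  # swap the remaining genes
    return c1, c2

random.seed(3)
kid1, kid2 = blend_crossover([0.1, 0.4, 0.9], [0.7, 0.2, 0.5])
print(kid1, kid2)
```

Because the chromosomes are real-valued, no encoding or decoding step is needed before evaluating the cost function, which is the speed advantage noted above.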

#### **2.3 Particle swarm optimization (PSO) algorithm**

The particle swarm optimization (PSO) algorithm was invented by Kennedy & Eberhart [22] in 1995, and it is the first swarm-based meta-heuristic algorithm. Every candidate solution in PSO is represented by a particle. PSO updates a particle's velocity using the distances between the particle's current position and both its own best position and the best position of the group [23–25].

The velocity vector and position vector of the *i*th particle in the D-dimensional search space are denoted by *v<sub>i</sub>* = (*v<sub>i1</sub>*, *v<sub>i2</sub>*, …, *v<sub>iD</sub>*) and *x<sub>i</sub>* = (*x<sub>i1</sub>*, *x<sub>i2</sub>*, …, *x<sub>iD</sub>*), respectively. After random initialization of the particles, each particle's velocity and position are updated as specified in Eq. (5) and Eq. (6) [25].

$$\mathbf{v\_i(t+1) = wv\_i(t) + c\_1r\_1(p\_i - x\_i(t)) + c\_2r\_2(p\_g - x\_i(t))} \tag{5}$$

$$\mathbf{x}\_{i}(\mathbf{t}+\mathbf{1}) = \mathbf{x}\_{i}(\mathbf{t}) + \mathbf{v}\_{i}(\mathbf{t}+\mathbf{1})\tag{6}$$

In these equations, *w* is the inertia weight and regulates how much the previous velocity affects the new one. The best past positions of the *i*th particle and of all particles in the current generation are represented by *p<sub>i</sub>* and *p<sub>g</sub>*, respectively. The constants *c*<sub>1</sub> and *c*<sub>2</sub> weight the two position terms, and *r*<sub>1</sub> and *r*<sub>2</sub> are uniformly distributed random values in [0, 1]. **Figure 1** shows the algorithm's progress [25].
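Eqs. (5) and (6) translate almost directly into code. The sketch below applies them to a toy sphere function; the parameter values (w = 0.7, c1 = c2 = 1.5), swarm size, and search box are illustrative choices, not the settings used in this chapter:

```python
import numpy as np

def pso(f, dim=3, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO implementing Eqs. (5)-(6) on the box [-5, 5]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))        # positions
    v = np.zeros((n, dim))                  # velocities
    p = x.copy()                            # personal bests p_i
    fp = np.apply_along_axis(f, 1, x)
    g = p[fp.argmin()].copy()               # global best p_g
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)   # Eq. (5)
        x = np.clip(x + v, -5, 5)                            # Eq. (6)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < fp
        p[better], fp[better] = x[better], fx[better]        # update p_i
        g = p[fp.argmin()].copy()                            # update p_g
    return g, fp.min()

best, val = pso(lambda z: np.sum(z**2))   # sphere function, optimum at 0
print(round(float(val), 8))
```

With these settings the swarm contracts onto the origin; in the study itself, `f` would be one of the fitted regression models from Section 2.1.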

#### **2.4 The modified social group optimization (MSGO) algorithm**

MSGO is a human-based optimization algorithm proposed in 2020 by Naik et al. [26], who improved the acquiring phase of the social group optimization (SGO) algorithm [27] and introduced a self-awareness probability factor. It is based on the social behavior of individuals in a group when solving complex problems.

In MSGO, each member of the group (person) stands for a potential solution, and the person's traits, which represent the person's dimensions, correspond to the number of design variables in the problem. **Figure 2** below shows the pseudocode for the improving phase. In **Figure 2**, *P<sub>i</sub>* represents the members of the social group made up of *N* individuals, where *i* = 1, 2, 3, …, *N*. Each individual additionally has D traits (*P<sub>i</sub>* = (*P<sub>i1</sub>*, *P<sub>i2</sub>*, …, *P<sub>iD</sub>*)). The self-introspection parameter *c* lies in [0, 1], and *rand* ~ *U*(0, 1) is a uniformly distributed random number. The best member of the group is *gbest*, who works to spread knowledge among all members; *gbest* can thus help the group as a whole learn more. Eq. (7) states the aim as a minimization problem. **Figure 2** and Eq. (7) illustrate the update for each individual [26, 27].

#### **Figure 1.** *The PSO algorithm.*

#### **Figure 2.** *The improving phase.*

$$[\text{minimize}, \text{index}] = \min \{ f(P\_i), i = 1, 2, \dots, N \} \text{ and } \text{gbest} = P(\text{index}, \cdot) \tag{7}$$

During the acquiring phase, a person interacts at random with the group's best member (*gbest*) and with other group members in order to gain knowledge; in other words, *gbest* is the most knowledgeable member of the group. A person learns something new whenever another member knows more, and *gbest*, the person with the most knowledge, has the greatest influence on the others' learning. Group members can teach a person something new if they are more knowledgeable than that person. The acquiring phase is represented by Eq. (8) and **Figure 3** [26, 27].

$$[\text{minimize}, \text{index}] = \min \left\{ f(P\_i), i = 1, 2, \dots, N \right\} \text{ and } \text{gbest} = P(\text{index}, \cdot) \tag{8}$$


#### **Figure 3.**

*The acquiring phase for SGO.*

#### **Figure 4.**

*The acquiring phase for MSGO.*

where *P<sub>i</sub>* denotes the values updated at the conclusion of the improving phase. The MSGO algorithm was created by modifying the acquiring phase of the SGO algorithm; the improving phase is identical to SGO's. During the acquiring phase, each member of the social group still interacts with the best individual (*best<sub>p</sub>*) and also interacts with the other group members to learn. In this stage, a person learns something new if the other person knows more and if the person has a sufficient self-awareness probability (SAP) of acquiring that knowledge. SAP is defined as the capacity to learn from others. The modified acquiring phase is shown in Eq. (9), and **Figure 4** below shows a minimization problem [26, 27]:

$$[\text{value}, \text{index\\_num}] = \min \{ f(P\_i), i = 1, 2, \dots, N \} \text{ and } \text{best}\_P = P(\text{index\\_num}, \cdot) \tag{9}$$

The upper and lower bounds of the relevant design variable are shown in **Figure 4** as *lb* and *ub*, respectively. It is recommended to select the SAP in the range 0.6 ≤ *SAP* ≤ 0.9. According to the literature, MSGO shows its best performance for SAP = 0.7 and *c* = 0.2 [26, 27].
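As a rough illustration of the two phases described above, the sketch below applies them to a toy sphere function. The specific update formulas, bounds, and greedy acceptance rule are plausible reconstructions from the description here, not a verbatim transcription of Naik et al.'s pseudocode:

```python
import numpy as np

def msgo(f, dim=3, n=20, iters=150, c=0.2, sap=0.7, lb=-5.0, ub=5.0, seed=0):
    """Sketch of MSGO: an improving phase followed by a modified
    acquiring phase gated by the self-awareness probability (SAP)."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(lb, ub, (n, dim))
    fit = np.apply_along_axis(f, 1, P)
    for _ in range(iters):
        gbest = P[fit.argmin()].copy()
        # Improving phase: move toward gbest with self-introspection c
        for i in range(n):
            cand = np.clip(c * P[i] + rng.random(dim) * (gbest - P[i]), lb, ub)
            fc = f(cand)
            if fc < fit[i]:                  # greedy acceptance
                P[i], fit[i] = cand, fc
        gbest = P[fit.argmin()].copy()
        # Modified acquiring phase: learn from a random member and gbest
        for i in range(n):
            r = rng.integers(n)
            while r == i:
                r = rng.integers(n)
            r1, r2 = rng.random(dim), rng.random(dim)
            if fit[r] < fit[i] and rng.random() < sap:
                cand = P[i] + r1 * (P[r] - P[i]) + r2 * (gbest - P[i])
            else:
                cand = P[i] + r1 * (P[i] - P[r]) + r2 * (gbest - P[i])
            cand = np.clip(cand, lb, ub)
            fc = f(cand)
            if fc < fit[i]:
                P[i], fit[i] = cand, fc
    return P[fit.argmin()], fit.min()

best, val = msgo(lambda z: np.sum(z**2))   # sphere test function
print(round(float(val), 6))
```

The SAP gate means a member usually learns from a more knowledgeable peer but occasionally explores in the opposite direction, which is the exploration/exploitation balance the modification is meant to improve.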
