#### **3.1 Population initialization**

To generate the initial population of $K(0)$ candidate solutions, the candidate solutions are selected randomly so that the population is uniformly distributed over the solution space. For a hyper-rectangular solution space, an initial population of candidate solutions can be generated in Matlab using

$$G(0) = kron(\mathbf{X}_c,\ ones(1, K(0))) + diag(\mathbf{X}_s) * (rand(N, K(0)) - 0.5); \tag{1}$$

where $G(0)$ is a matrix containing the $K(0)$ initial candidate solutions, $\mathbf{X}_c$ is a vector containing the solution space's center in rectangular coordinates, $\mathbf{X}_s$ is a vector containing the solution space's size in each dimension in rectangular coordinates, and $N$ is the solution space's dimension. For example, for a two-dimensional ($N = 2$) hyper-rectangular solution space of $[0 \ {-1}]^T \leq \mathbf{x} \leq [6 \ 1]^T$, $\mathbf{X}_c = [3 \ 0]^T$ and $\mathbf{X}_s = [6 \ 2]^T$.
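
For instance, a minimal Matlab sketch of Eq. (1) for this two-dimensional example (the variable names and the population size $K(0) = 50$ are illustrative assumptions):

```matlab
% Sketch of Eq. (1) for the example solution space [0 -1]' <= x <= [6 1]'.
N  = 2;            % dimension of the solution space
K0 = 50;           % illustrative initial population size, K(0)
Xc = [3; 0];       % center of the solution space
Xs = [6; 2];       % size of each dimension of the solution space

% Each column of G0 is one candidate solution; the population is
% uniformly distributed over the hyper-rectangle.
G0 = kron(Xc, ones(1, K0)) + diag(Xs) * (rand(N, K0) - 0.5);
```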

This selection of initial candidate solutions can be adapted for other types of $N$-dimensional spaces. For example, the initial population of $K(0)$ candidate solutions for a hyper-ellipsoidal solution space can be generated in Matlab using

$$\begin{aligned} G(1:2:N,\ :) &= kron(\mathbf{X}_r,\ ones(1, K)) \ .\!\!*\ rand(ceil(N/2), K); \\ G(2:2:N,\ :) &= 2 * pi * rand(floor(N/2), K); \end{aligned} \tag{2}$$

where $\mathbf{X}_r$ is a vector containing the hyper-elliptical solution space's radii, one per magnitude/phase pair; the terms $G(1:2:N,\ :)$ represent the magnitudes of each candidate solution, and the terms $G(2:2:N,\ :)$ represent their respective phases. If the solution space is not centered at the origin, then candidate solutions of the form $[r \ \theta]^T$ centered at $[0 \ 0]^T$ can be moved to $[R \ \Theta]^T$ centered at $[r_0 \ \chi]^T$ using the transformations,

$$\begin{aligned} R &= \sqrt{r^2 + r_0^2 + 2 r r_0 \cos(\theta - \chi)} \\ \Theta &= \arctan\left(\frac{r\sin(\theta) + r_0\sin(\chi)}{r\cos(\theta) + r_0\cos(\chi)}\right). \end{aligned} \tag{3}$$

If appropriate, these initial candidate solutions could be converted to rectangular coordinates using

$$x_k = R\cos\Theta \quad \text{and} \quad x_{k+1} = R\sin\Theta. \tag{4}$$
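
As a minimal sketch under the same conventions (the dimension $N = 4$, the radii in `Xr`, and the population size are illustrative assumptions; the conversion of Eq. (4) assumes $N$ is even):

```matlab
% Sketch of Eq. (2): polar-form population for a hyper-ellipsoidal space.
N  = 4;                 % two magnitude/phase pairs
K  = 50;                % illustrative population size
Xr = [3; 1];            % one radius per magnitude/phase pair

G = zeros(N, K);
G(1:2:N, :) = kron(Xr, ones(1, K)) .* rand(ceil(N/2), K);  % magnitudes
G(2:2:N, :) = 2*pi*rand(floor(N/2), K);                    % phases

% Sketch of Eq. (4): convert each (R, Theta) pair to rectangular coordinates.
X = zeros(N, K);
X(1:2:N, :) = G(1:2:N, :) .* cos(G(2:2:N, :));
X(2:2:N, :) = G(1:2:N, :) .* sin(G(2:2:N, :));
```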

#### **3.2 Ranking and stochastic selection**

This algorithm uses an elitist, linear ranking, random selection method. Because the selection operator is elitist, the fittest individual, that is, the candidate solution vector with the best optimization criterion cost, is guaranteed to survive the selection process. Elitist selection can increase an algorithm's exploitation and therefore its ability to converge, especially when steady-state misadjustment is significant [15]. Linear ranking selection methods evaluate each candidate solution with the cost function and rank the candidate solutions according to their costs [16, 17]. Starting with the candidate solution with the best cost, each candidate solution is assigned a selection probability in linearly decreasing increments so that all candidate solutions have a nonzero probability of selection. This method of selection allows diverse candidate solutions that might contain useful vector elements but have a poor cost to survive the selection process, which can improve an algorithm's exploration and prevent the algorithm from converging to a local minimum or maximum.

The selection operator is the first operation performed for each generation, or iteration of the algorithm. At the start of the algorithm's $n$th iteration, the selection operator evaluates the $K(n)$ candidate solutions, $\mathbf{x}_k(n)$, with respect to the cost function, $J$, and ranks the candidate solutions according to their cost. For a minimization problem, the ranked candidate solutions are sorted from highest cost to lowest cost and assigned consecutive integers from 1 to $K(n)$ so that $\mathbf{x}_1(n)$ is the candidate solution with the highest, or worst, cost and $\mathbf{x}_{K(n)}(n)$ is the candidate solution with the lowest, or best, cost. After ranking, each candidate solution is assigned a selection probability, $P(\mathbf{x}_k(n))$, so that

$$P(\mathbf{x}_k(n)) = \sum_{m=1}^{k} \Delta p_m \tag{5}$$

where

$$\Delta p_m = \frac{1}{K(n)} \left[ \eta^- + (\eta^+ - \eta^-) \frac{m - 1}{K(n) - 1} \right], \tag{6}$$

$\eta^+$ is a constant, and $\eta^-$ is a constant that is selected so that $P(\mathbf{x}_1(n)) = \eta^-/K(n)$, which is the selection probability of the worst candidate solution [16, 17].

Because this algorithm uses an elitist selection method, the best candidate solution is assured survival during the selection process, which implies that

$$P\left(\mathbf{x}_{K(n)}(n)\right) = 1. \tag{7}$$

Substituting Eq. (6) into Eq. (5), and the resulting equation into Eq. (7),

$$\sum_{m=1}^{K(n)} \frac{1}{K(n)} \left[ \eta^- + (\eta^+ - \eta^-) \frac{m-1}{K(n)-1} \right] = 1. \tag{8}$$

Solving Eq. (8), an elitist selection method requires that

$$\eta^+ = 2 - \eta^- \tag{9}$$

where $0 < \eta^- < \eta^+$.

The set of surviving candidate solutions is referred to as the mating pool. After this selection process, the mating pool's mean size is

$$E[M(n)] = \sum_{k=1}^{K(n)} P(\mathbf{x}_k(n)) = \frac{2\eta^- + \eta^+}{6} \left(K(n) + 1\right) \tag{10}$$

where $E$ is the expectation operator and $M(n)$ is the number of candidate solutions in the mating pool after selection during the $n$th iteration. Because this algorithm uses an elitist selection method, Eq. (10) can be simplified by substituting Eq. (9) into Eq. (10), which results in

$$E[M(n)] = \frac{2 + \eta^-}{6} \left(K(n) + 1\right) \tag{11}$$

which is the expected number of candidate solutions that survive this elitist linear ranking selection process during the $n$th iteration. Ref. [17] shows that setting $\eta^- \approx 0.9$ often provides an adequate balance between selective pressure, which allows for exploitation of the objective function, and population diversity, which allows for exploration of the objective function.
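
A minimal Matlab sketch of this selection process for a minimization problem (the cost-function handle `J`, which is assumed to accept a matrix of column-vector candidates and return a row vector of costs, and the other names are illustrative assumptions):

```matlab
% Sketch of elitist linear ranking selection, Eqs. (5), (6) and (9).
etaMinus = 0.9;                 % eta^-, per the guidance in [17]
etaPlus  = 2 - etaMinus;        % eta^+, from the elitist condition, Eq. (9)

K = size(X, 2);                 % X: N-by-K matrix of candidate solutions
costs = J(X);                   % evaluate every candidate solution
[~, order] = sort(costs, 'descend');
X = X(:, order);                % rank 1 = worst cost, rank K = best cost

% Eq. (6): probability increments; Eq. (5): cumulative selection
% probabilities, so that P(K) = 1 and the best solution always survives.
m  = 1:K;
dp = (etaMinus + (etaPlus - etaMinus) .* (m - 1) / (K - 1)) / K;
P  = cumsum(dp);

survive = rand(1, K) <= P;      % stochastic selection
survive(K) = true;              % elitism: always keep the best solution
matingPool = X(:, survive);
```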

#### **3.3 Differential evolution operator to improve convergence**

Differential evolution algorithms generate new candidate solutions by adding a weighted difference between two randomly selected candidate solutions to a third randomly selected candidate solution. For this algorithm, the differential evolution operator to improve convergence generates a new candidate solution, **v**, using

$$\mathbf{v} = \mathbf{x}_k(n) + R\left[\mathbf{x}_m(n) - \mathbf{x}_j(n)\right] \tag{12}$$

where $\mathbf{x}_k(n)$ is the candidate solution randomly selected for differential evolution, $\mathbf{x}_m(n)$ and $\mathbf{x}_j(n)$ are two randomly selected candidate solutions from the mating pool, and $R$ is a uniformly distributed random number from the interval [0, 1]. The two candidate solutions, $\mathbf{x}_m(n)$ and $\mathbf{x}_j(n)$, should be distinct and chosen so that $\mathbf{x}_m(n) \neq \mathbf{x}_j(n)$; however, this can become difficult when the algorithm is converging.

Because this algorithm is an elitist algorithm, the best candidate solution, $\mathbf{x}_{K(n)}(n)$, is always selected for this differential evolution operator. The other candidate solutions are selected randomly for differential evolution with a probability of $P_{DE1}(n)$. Because $R$ in Eq. (12) is a uniformly distributed random number from the interval [0, 1], $R$ attenuates the difference between the two randomly selected candidate solutions. When this attenuated difference is added to the candidate solution selected for differential evolution, it creates a new candidate solution within a neighborhood of that candidate solution. As a result, this differential evolution operator improves the algorithm's ability to converge to an optimal point in the neighborhood. **Figure 1** shows a plot of the contours of a two-dimensional cost function, $J$, and three candidate solutions selected for differential evolution. The figure illustrates how this differential evolution operator creates candidate solutions within a neighborhood of the candidate solution, $\mathbf{x}_k(n)$, selected for differential evolution.

#### **Figure 1.**

*A plot showing an example of the differential evolution operator that improves convergence for a two-dimensional cost function, J. The plot shows the contour lines of the cost function and the candidate solution vectors involved in the differential evolution operation.*

On average, this operator creates

$$\left(M(n) - 1\right)P_{DE1}(n) + 1 \tag{13}$$

new candidate solutions.
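
A minimal sketch of this operator applied to the mating pool (the names `matingPool` and `PDE1`, and the convention that the best solution occupies the last column, are illustrative assumptions; distinct indices stand in for the distinct-vector requirement):

```matlab
% Sketch of the convergence-oriented differential evolution, Eq. (12).
M = size(matingPool, 2);
selected = rand(1, M) <= PDE1;  % random selection with probability PDE1(n)
selected(M) = true;             % elitism: the best solution is always selected

V = [];                         % new candidate solutions, one per column
for k = find(selected)
    idx = randperm(M, 2);       % two distinct mating-pool indices, m and j
    R = rand;                   % uniform random weight on [0, 1]
    V = [V, matingPool(:, k) + ...
            R * (matingPool(:, idx(1)) - matingPool(:, idx(2)))]; %#ok<AGROW>
end
```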

#### **3.4 Differential evolution mutation operator to improve diversity**

Because differential evolution algorithms generate new candidate solutions by adding a weighted difference between two randomly selected candidate solutions to a third randomly selected candidate solution, the differential evolution operator creates new candidate solution vectors whose elements differ from those of the candidate solutions that formed them. As a result, the differential evolution operator is often referred to as a mutation operator, whether the operator creates similarity or diversity. In this chapter, the differential evolution operator that increases diversity is referred to as the differential evolution mutation operator.

For this algorithm, the differential evolution mutation operator generates a new candidate solution, **v**, using

$$\mathbf{v} = \mathbf{x}_k(n) + \frac{1}{4}\, diag(\mathbf{R})\, diag(\mathbf{X}_s) \left[\mathbf{x}_m(n) - \mathbf{x}_j(n)\right] \tag{14}$$

where $\mathbf{x}_k(n)$ is the candidate solution randomly selected for differential evolution mutation, $\mathbf{x}_m(n)$ and $\mathbf{x}_j(n)$ are two randomly selected candidate solutions from the mating pool, $\mathbf{R}$ is a vector whose elements are uniformly distributed random numbers from the interval [0, 1], and $\mathbf{X}_s$ is a vector containing the solution space's size in each dimension in rectangular coordinates, or the diameters of an elliptical solution space.


Again, the randomly selected candidate solutions, $\mathbf{x}_m(n)$ and $\mathbf{x}_j(n)$, should be distinct and chosen so that $\mathbf{x}_m(n) \neq \mathbf{x}_j(n)$.

Because this algorithm is an elitist algorithm, the best candidate solution, $\mathbf{x}_{K(n)}(n)$, is always selected for this differential evolution mutation operator. The other candidate solutions are selected randomly for differential evolution mutation with a probability of $P_{DE2}(n)$. Because the term $\frac{1}{4} diag(\mathbf{R})\, diag(\mathbf{X}_s)$ in Eq. (14) is a diagonal matrix with uniformly distributed random numbers from the interval between zero and one-fourth the size of each dimension of the solution space, this term typically increases each dimension of the difference between the two randomly selected candidate solutions by a random amount. As the entire population begins to converge and the differences between any two randomly selected candidate solutions begin to decrease, the term $\frac{1}{4} diag(\mathbf{R})\, diag(\mathbf{X}_s)$ amplifies these small differences, and the amplified differences are added to the candidate solutions selected for differential evolution mutation. Therefore, the new candidate solutions typically lie outside the neighborhood of the candidate solutions selected for differential evolution mutation. As a result, this differential evolution operator improves the algorithm's diversity until the entire population has converged to within very small differences.

The mean number of mutant solutions created by this process is

$$\left(M(n) - 1\right)P_{DE2}(n) + 1. \tag{15}$$
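
A minimal sketch of this mutation operator under the same illustrative conventions (`Xs` is the solution-space size vector from Section 3.1; `PDE2` and the other names are assumptions):

```matlab
% Sketch of the diversity-oriented differential evolution mutation, Eq. (14).
M = size(matingPool, 2);
selected = rand(1, M) <= PDE2;  % random selection with probability PDE2(n)
selected(M) = true;             % elitism: the best solution is always selected

V = [];
for k = find(selected)
    idx = randperm(M, 2);       % two distinct mating-pool indices, m and j
    R = rand(N, 1);             % one uniform random weight per dimension
    V = [V, matingPool(:, k) + 0.25 * diag(R) * diag(Xs) * ...
            (matingPool(:, idx(1)) - matingPool(:, idx(2)))]; %#ok<AGROW>
end
```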

#### **3.5 A recombination operator to improve convergence and diversity**

Taguchi crossover can greatly increase convergence rates [11, 18]. As a result, when the differential evolution operators discussed earlier are combined with Taguchi crossover, this algorithm can converge too quickly. To prevent this algorithm from converging too quickly to a local minimum or maximum, a recombination operator that creates a pair of new candidate solutions is added to this algorithm. To improve convergence, this recombination operator generates a new candidate solution, $\mathbf{v}$, by averaging the selected candidate solution, $\mathbf{x}_k(n)$, with another randomly selected candidate solution, $\mathbf{x}_m(n)$, from the mating pool so that

$$\mathbf{v} = \left[\mathbf{x}_k(n) + \mathbf{x}_m(n)\right]/2. \tag{16}$$

To improve diversity, this recombination operator generates another candidate solution by circularly shifting the elements of the newly formed candidate solution, $\mathbf{v}$, by a uniformly distributed random integer and then randomly changing the signs of the elements. In Matlab, this new vector, $\mathbf{w}$, can be created by

$$\mathbf{w} = sign(randi(2, N, 1) - 1.5) \ .\!\!*\ circshift(\mathbf{v}, randi(N)); \tag{17}$$

Because this algorithm is an elitist algorithm, the best candidate solution, $\mathbf{x}_{K(n)}(n)$, is always selected for this recombination operator. The other candidate solutions are selected randomly with a probability of $P_{cr}(n)$. On average, this operator creates

$$\left(M(n) - 1\right)2P_{cr}(n) + 2 \tag{18}$$

new candidate solutions.
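
A minimal sketch of this paired recombination, Eqs. (16) and (17), under the same illustrative conventions (`Pcr` is an assumed name; for simplicity the random mate may coincide with the selected solution):

```matlab
% Sketch of the recombination pair: Eq. (16) for convergence and
% Eq. (17) for diversity.
M = size(matingPool, 2);
selected = rand(1, M) <= Pcr;   % random selection with probability Pcr(n)
selected(M) = true;             % elitism: the best solution is always selected

V = [];
for k = find(selected)
    m = randi(M);                                             % random mate
    v = (matingPool(:, k) + matingPool(:, m)) / 2;            % Eq. (16)
    w = sign(randi(2, N, 1) - 1.5) .* circshift(v, randi(N)); % Eq. (17)
    V = [V, v, w];              %#ok<AGROW>
end
```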

#### **3.6 Solution space**

A candidate solution is considered infeasible if it does not lie within the solution space. If a new candidate solution is infeasible, that solution is made feasible by one of two methods. If a convergence operator, such as the differential evolution operator for convergence or the recombination operator for convergence, creates an infeasible candidate solution, the infeasible solution vector is moved to the nearest point of the solution space by clamping each element that lies outside the solution space to the nearest edge of the solution space. This method attempts to generate feasible solutions within the neighborhood of the original infeasible solution so that the intent of the convergence operator that created the infeasible solution is maintained.

If a diversity operator, such as the differential evolution mutation for diversity operator or the recombination operator for diversity, creates an infeasible solution, the infeasible solution vector is moved into the solution space by performing a spatially circular shift of the infeasible solution vector's elements. For example, if an infeasible solution, **v**, is created by a diversity operator in a hyper-rectangular solution space, the infeasible solution vector is moved into the solution space using

$$\mathbf{v} = mod\left[\mathbf{v} - (\mathbf{X}_c - 0.5\mathbf{X}_s),\ \mathbf{X}_s\right] + (\mathbf{X}_c - 0.5\mathbf{X}_s) \tag{19}$$

where $\mathbf{X}_c$ is a vector containing the center of the solution space in rectangular coordinates, and $\mathbf{X}_s$ is a vector containing the size of each dimension of the solution space in rectangular coordinates. Similarly, if an infeasible solution, $\mathbf{v}$, is created by a diversity operator in an elliptical solution space centered at the origin, the infeasible solution vector, $\mathbf{v}$, expressed in polar coordinates, $re^{j\theta}$, is moved into the solution space using

$$\begin{aligned} r_k &= rem(r_k,\ r_{k,max}) \\ \theta_k &= \theta_k + \pi \end{aligned} \tag{20}$$

where $rem$ is the remainder function, $r_k$ is a radius that places the candidate solution outside of the solution space, $r_{k,max}$ is the maximum value of $r_k$ that keeps the candidate solution inside the solution space, and $\theta_k$ is the angle associated with the radius $r_k$. This method attempts to generate feasible solutions away from the neighborhood of the original infeasible solution so that the intent of the diversity operator that created the infeasible solution is maintained.
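
A minimal Matlab sketch of these two repair methods for the hyper-rectangular case ($\mathbf{X}_c$ and $\mathbf{X}_s$ as in Section 3.1; the two repairs are alternatives, applied according to which kind of operator created the infeasible solution $\mathbf{v}$):

```matlab
% Edges of the hyper-rectangular solution space.
lower = Xc - 0.5*Xs;
upper = Xc + 0.5*Xs;

% Convergence operators: clamp out-of-bounds elements to the nearest edge.
v = min(max(v, lower), upper);

% Diversity operators, Eq. (19): spatially circular shift into the space.
v = mod(v - lower, Xs) + lower;
```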

#### **3.7 Taguchi crossover**

A crossover operator is a recombination operator that combines elements from two or more parent candidate solutions to generate a new offspring candidate solution. Taguchi crossover generates new candidate solutions by intelligently selecting elements from two or more parent solution vectors [11]. Taguchi crossover is a simple design-of-experiments method that creates a near optimal candidate solution from the parent candidate solutions. Consequently, Taguchi crossover can greatly increase an algorithm's rate of convergence [11, 18].

Before selecting candidate solutions for Taguchi crossover, all new candidate solutions created by the other operators are added to the mating pool. The mean number of candidate solutions in the mating pool at this stage can be obtained by adding the mean numbers of new candidate solutions given by Eq. (13), Eq. (15), and Eq. (18) to the mating pool size, $M(n)$, which implies that the mean number of candidate solutions in the mating pool at this stage is

$$M(n) + \left(M(n) - 1\right)P_3(n) + 4 \tag{21}$$

where $P_3(n) = P_{DE1}(n) + P_{DE2}(n) + 2P_{cr}(n)$.

Because this is an elitist algorithm, the best candidate solution is always selected for Taguchi crossover. The other candidate solutions from the mating pool are selected randomly for Taguchi crossover with a probability of $P_{Tc}$. For two-level Taguchi crossover, that is, crossover involving two parent solutions, one other candidate solution is selected randomly from the mating pool. For three-level Taguchi crossover, that is, crossover involving three parent solutions, two other candidate solutions are selected randomly from the mating pool.

On average, the Taguchi crossover operator creates

$$\left[\left(M(n) - 1\right)\left(1 + P_3(n)\right) + 4\right]P_{Tc} + 1 \tag{22}$$

new candidate solutions.
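
The chapter does not spell out the orthogonal array it uses, so the following is only a minimal sketch of two-level Taguchi crossover, assuming a Hadamard-based two-level orthogonal array, a minimization problem, and a cost-function handle `J` that evaluates the columns of a matrix:

```matlab
% Sketch of two-level Taguchi crossover between parents p1 and p2 (N-by-1).
n = 2^nextpow2(N + 1);          % rows of the orthogonal array
H = hadamard(n);                % columns 2..N+1 form a two-level array
L = H(:, 2:N+1) > 0;            % true -> take the element from p1

trials = zeros(N, n);           % one experimental trial per array row
for i = 1:n
    trials(:, i) = p2;
    trials(L(i, :), i) = p1(L(i, :));
end
costs = J(trials);              % evaluate every trial

% Factor analysis: for each element, keep the level (parent) whose
% trials have the lower mean cost.
offspring = zeros(N, 1);
for f = 1:N
    if mean(costs(L(:, f))) <= mean(costs(~L(:, f)))
        offspring(f) = p1(f);
    else
        offspring(f) = p2(f);
    end
end
```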

#### **3.8 Managing population size**

Because the selection operator, the differential evolution operators, the recombination operators, and the Taguchi crossover operator generate a random number of new candidate solutions, the population size and mating pool size vary with each generation, or iteration of the algorithm. After the Taguchi crossover operator, the average number, $E[K(n)]$, of candidate solutions can be calculated by adding Eq. (21) and Eq. (22), which results in

$$E[K(n)] = \left[\left(M(n) - 1\right)\left(1 + P_3(n)\right) + 4\right]\left(1 + P_{Tc}\right) + 2. \tag{23}$$

To maintain the population's size, $K(n)$, at the population's initial size, $K(0)$, the selection probability of at least one of the operators must vary so that

$$E[K(n)] = K(0). \tag{24}$$

Substituting Eq. (23) into Eq. (24) and solving for $P_3(n)$,

$$P_3(n) = \frac{K(0) - 6 - 4P_{Tc}}{\left(1 + P_{Tc}\right)\left(M(n) - 1\right)} - 1 \tag{25}$$

where $P_{Tc}$ is assumed to be fixed and $P_3(n)$ varies. In this algorithm, the requirement in Eq. (25) is met by fixing $P_{DE1}(n)$ and $P_{DE2}(n)$, and letting

$$P_{cr}(n) = \left[P_3(n) - P_{DE1}(n) - P_{DE2}(n)\right]/2. \tag{26}$$
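
A minimal sketch of this bookkeeping step (names as in the earlier sketches; the final clamp of the probability to [0, 1] is an added safeguard, not from the text):

```matlab
% Sketch of Eqs. (25) and (26): vary Pcr(n) so that E[K(n)] = K(0),
% with PTc, PDE1 and PDE2 held fixed.
M   = size(matingPool, 2);                          % mating-pool size, M(n)
P3  = (K0 - 6 - 4*PTc) / ((1 + PTc)*(M - 1)) - 1;   % Eq. (25)
Pcr = (P3 - PDE1 - PDE2) / 2;                       % Eq. (26)
Pcr = min(max(Pcr, 0), 1);                          % safeguard (assumption)
```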
