**4. Particle swarm optimization**

Particle swarm optimization (PSO) is a population-based computational technique motivated by the simulation of social behaviors (social psychology): fish schooling, bird flocking, and swarm theory. PSO was first proposed and developed by Eberhart and Kennedy [25, 26]. Instead of using evolutionary operators such as mutation and crossover, the population dynamics of the PSO algorithm emulate the behavior of a "bird flock", in which information is shared socially and individuals benefit from the discoveries and prior experience of all their companions during the search for food. Each companion, called a particle, in the population, which is called a swarm as shown in **Figure 4**, is assumed to fly in several directions over the search space to optimize the fitness function [27, 28].

#### **4.1 Particle swarm optimization algorithm**

The PSO algorithm is one of the evolutionary computation techniques for solving optimization problems. In this algorithm, a swarm of individuals, called particles, flies over the search space [29, 30]. Each particle acts as a candidate

**Figure 4.** *Swarm intelligence.*

solution to the optimization problem. The position of a particle is influenced by the best position it has visited itself (its own knowledge or experience) and by the position of the best particle in its neighborhood. The finest prior position of a given particle, i.e., the one giving the minimum fitness value, is called the local best position (lbest), while the best position found by any particle in the whole population is called the global best position (gbest). When the neighborhood is the entire swarm, the resulting algorithm is referred to as the gbest PSO; when smaller neighborhoods are used, it is generally referred to as the lbest PSO. For each particle, the performance is measured using an objective function that depends on the optimization problem. The basic PSO algorithm is given below, following the flow chart shown in **Figure 5** [31–35].

Step 1. Generation of population particles

Create particles uniformly distributed over the search space x, then choose the number of particles, the number of iterations, the acceleration coefficients c1 and c2, the inertia weight (w), and the random numbers R1 and R2 to start the optimum search.

Step 2. The initialization for each particle

Initialize the present position $x_i(t)$ and the velocity $v_i(t)$ for each particle. The particles are randomly generated between the minimum and maximum limits of the parameter values. Each particle is treated as a point in a D-dimensional space. The ith particle is denoted as $x_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$. The velocity of particle i is

**Figure 5.** *General flow chart of PSO.*

*Wavelet Neural Networks for Speed Control of BLDC Motor DOI: http://dx.doi.org/10.5772/intechopen.91653*

represented as $v_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$; then the local best position (lbest) and the global best position (gbest) are initialized.
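Steps 1 and 2 can be sketched as follows in Python with NumPy. The swarm size, dimension D, and parameter limits below are illustrative assumptions, not values taken from the chapter:

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, dim = 30, 2          # swarm size and dimension D (assumed values)
x_min, x_max = -5.0, 5.0          # minimum/maximum parameter limits (assumed)

# Step 2: positions drawn uniformly between the limits; velocities start at zero
x = rng.uniform(x_min, x_max, size=(n_particles, dim))
v = np.zeros((n_particles, dim))

# Each particle's best-known position starts as its initial position;
# gbest is selected later, once the fitness of every particle is evaluated.
lbest = x.copy()
```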

Step 3. Evaluation of fitness function

The overall performance (convergence speed, efficiency, and optimization accuracy) of the PSO algorithm depends on the objective function that guides the optimization search. The objective function is chosen to minimize the reference constraints. Popular performance criteria based on the error signal are the integral of absolute error (IAE), the integral of time-weighted squared error (ITSE), and the integral of squared error (ISE), which can be evaluated analytically in the frequency domain [31, 32, 36]. In this chapter, a multiobjective function is used, based on the integral of the squared error (ISE) criterion and the overshoot (Mp) criterion, as follows [37, 38]:

$$\text{fitness function} = \min \left( \text{ISE} \right) + \min \left( \mathbf{M}\_{\text{p}} \right) \tag{9}$$

where

$$\text{ISE} = \int \mathbf{e}^2(\mathbf{t})d\mathbf{t} \tag{10}$$

$$\mathbf{M}\_{\rm p} = \max\left(\mathbf{n}\right) - \mathbf{n}\_{\rm ref} \tag{11}$$

$$\mathbf{e}(\mathbf{i}) = \mathbf{D}(\mathbf{i}) - \mathbf{y}(\mathbf{i}) \tag{12}$$

where y(i) is the system output and D(i) is the desired output, while n is the actual speed and nref is the desired speed.
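Eqs. (9)–(12) can be sketched as a discrete-time cost function in Python with NumPy. The toy step response below is hypothetical data used only to exercise the function; note that Eq. (11) is implemented literally, so a response with no overshoot yields a negative Mp term:

```python
import numpy as np

def fitness(n, n_ref, dt):
    """Multiobjective cost from Eqs. (9)-(12): min(ISE) + min(Mp)."""
    e = n_ref - n                  # Eq. (12): e(i) = D(i) - y(i)
    ise = np.sum(e**2) * dt        # Eq. (10): ISE = integral of e^2(t) dt
    mp = np.max(n) - n_ref         # Eq. (11): Mp = max(n) - n_ref
    return ise + mp                # Eq. (9): combined fitness

# usage on a toy first-order step response (hypothetical data)
t = np.linspace(0.0, 1.0, 101)
n = 1.0 - np.exp(-5.0 * t)         # actual speed, settling toward n_ref = 1
cost = fitness(n, 1.0, t[1] - t[0])
```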

Step 4. Update the swarm

The velocity $v_i(t)$ and the present position $x_i(t)$ of each particle in the swarm are updated according to Eqs. (13) and (14). In the main loop, the objective function is then evaluated at the updated particle positions. If the new value is better than the previous lbest, the new value is set as lbest. Similarly, gbest is updated as the best of all lbest values. The velocity of each particle is modified as follows:

$$\mathbf{v\_i^{k+1}} = \mathbf{w} \ast \mathbf{v\_i^k} + \mathbf{c\_1} \ast \mathbf{R\_1} \ast \left(\mathbf{lbest\_i} - \mathbf{x\_i^k}\right) + \mathbf{c\_2} \ast \mathbf{R\_2} \ast \left(\mathbf{gbest\_i} - \mathbf{x\_i^k}\right) \tag{13}$$

The present position is then modified as follows:

$$\mathbf{x\_i^{k+1} = x\_i^k + v\_i^{k+1}} \tag{14}$$

where $x_i^k$ is the present position of particle i at iteration k, $v_i^k$ is the velocity of particle i at iteration k, w is the inertia weight given in Eq. (15), c1 and c2 are positive acceleration constants, and R1 and R2 are random variables uniformly distributed in the range [0, 1].

$$\mathbf{w} = \mathbf{w}\_{\text{max}} - \frac{(\mathbf{w}\_{\text{max}} - \mathbf{w}\_{\text{min}})}{\text{iter}\_{\text{max}}} \times \text{iter} \tag{15}$$

where wmax is the initial (maximum) inertia weight, wmin is the final (minimum) inertia weight, iter is the current iteration number, and itermax is the maximum iteration number.
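Eqs. (13)–(15) can be sketched as a single update step in Python with NumPy. The coefficient defaults (c1 = c2 = 2.0, w decreasing from 0.9 to 0.4) are common choices from the PSO literature, not values specified in the chapter:

```python
import numpy as np

def update_swarm(x, v, lbest, gbest, k, iter_max,
                 c1=2.0, c2=2.0, w_max=0.9, w_min=0.4, rng=None):
    """One application of Eqs. (13)-(15) to the whole swarm."""
    if rng is None:
        rng = np.random.default_rng()
    # Eq. (15): inertia weight decreases linearly with the iteration number k
    w = w_max - (w_max - w_min) * k / iter_max
    r1 = rng.random(x.shape)       # R1, R2 ~ uniform on [0, 1]
    r2 = rng.random(x.shape)
    # Eq. (13): inertia + cognitive (lbest) + social (gbest) components
    v = w * v + c1 * r1 * (lbest - x) + c2 * r2 * (gbest - x)
    # Eq. (14): move each particle by its new velocity
    x = x + v
    return x, v
```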

Step 5. Stopping criteria

If the current iteration number reaches the predetermined maximum iteration number, then exit with gbest as the optimal solution. Otherwise, return to Step 3 and repeat the fitness evaluation and swarm update.
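Putting Steps 1–5 together, a minimal gbest PSO loop can be sketched as below. The sphere function stands in for the chapter's BLDC speed-control objective, and the swarm size, bounds, and coefficients are illustrative assumptions:

```python
import numpy as np

def pso(objective, dim=2, n_particles=20, iter_max=100,
        x_min=-5.0, x_max=5.0, seed=0):
    """Minimal gbest PSO following Steps 1-5 (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    c1 = c2 = 2.0
    w_max, w_min = 0.9, 0.4
    # Steps 1-2: generate and initialize the swarm
    x = rng.uniform(x_min, x_max, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    lbest = x.copy()
    lbest_f = np.apply_along_axis(objective, 1, x)
    gbest = lbest[np.argmin(lbest_f)]
    for k in range(iter_max):                        # Step 5: iteration limit
        w = w_max - (w_max - w_min) * k / iter_max   # Eq. (15)
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w*v + c1*r1*(lbest - x) + c2*r2*(gbest - x)  # Eq. (13)
        x = x + v                                        # Eq. (14)
        f = np.apply_along_axis(objective, 1, x)     # Step 3: fitness
        better = f < lbest_f                         # Step 4: update bests
        lbest[better] = x[better]
        lbest_f[better] = f[better]
        gbest = lbest[np.argmin(lbest_f)]
    return gbest, lbest_f.min()

best, best_f = pso(lambda p: np.sum(p**2))   # sphere function as a stand-in
```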
