**3.1 Particle swarm optimization (PSO)**

The particle swarm optimization (PSO) algorithm is inspired by the intelligent flocking behaviour of birds [11]. Craig Reynolds first simulated the social behaviour of bird flocks, and this work was later studied by Frank Heppner [12]. PSO searches for an optimal solution much as flying birds do, with velocities determined from previous results and from neighbours in the identified search areas [13].

#### **Table 1.**

*Classification of optimization.*

Given a problem defined in an n-dimensional search space, PSO represents candidate solutions as n particles. Each particle moves through the n-dimensional solution space with its own velocity, stores its previous behaviour, and shares its experience with the rest of the swarm. A key merit of PSO is this sharing of experience: a particle communicates with part of or the complete swarm, guiding its motion through the search space [14]. In every iteration, each particle compares its current fitness value with its previous best results and with those of its neighbours. Both local and global information are considered: the best position found by each particle is stored as its local best, while the best result found by any particle in the entire search space is stored as the global best solution. In subsequent iterations these stored values are replaced whenever the current result improves on the previous ones.

#### *3.1.1 PSO concept*

Each particle in PSO searches for a feasible solution to the optimization problem in a given search space. The flight behaviour of the particles corresponds to the search carried out by each individual particle. The velocity of a particle is updated dynamically based on its own position and on the optimum found by the swarm population. The swarm population is composed of M particles in a D-dimensional space; the historical optimal position of the ith particle is represented by pi, i ∈ {1, 2, 3, …, M}, and the optimal position of the swarm population is denoted by pg. In every step, the velocity and position of each particle are updated dynamically by tracking its own previous best position and the optimal position of the swarm population. The detailed equations are expressed as follows,

$$v_{i,j}^{t+1} = \omega v_{i,j}^{t} + c_1 r_1^t \left( pbest_{i,j} - x_{i,j}^t \right) + c_2 r_2^t \left( gbest_{j} - x_{i,j}^t \right) \tag{1}$$

$$x_{i,j}^{t+1} = x_{i,j}^t + v_{i,j}^{t+1} \tag{2}$$

*Bio-inspired Optimization: Algorithm, Analysis and Scope of Application. DOI: http://dx.doi.org/10.5772/intechopen.106014*

In Eqs. (1) and (2), t indicates the iteration number, j ∈ {1, 2, 3, …, D} indicates the dimension, *x<sub>i,j</sub><sup>t</sup>* is the *jth* dimension variable of the *ith* particle in the *tth* iteration, and *v<sub>i,j</sub><sup>t</sup>*, *gbest<sub>j</sub>*, and *pbest<sub>i,j</sub>* have analogous meanings in turn. ω is the inertia weight, *c1* and *c2* denote acceleration coefficients, and *r1* and *r2* are random numbers uniformly distributed in the interval [0, 1]. The objective function is set for the problem at hand, and the resulting objective value of each particle corresponds to its fitness value. These fitness values are used to evaluate the positions of the particles, the historical optimal positions of the particles, and the optimal position of the swarm population.

The main concept of PSO is clear from the particle velocity equation: a constant balance between three distinct forces pulling on each particle: (i) the particle's previous velocity (inertia), (ii) the distance from the individual particle's best-known position (cognitive force), and (iii) the distance from the swarm's best-known position (social force). These forces are weighted by the constants c1 and c2 and randomized by r1 and r2. The three forces are shown in vector form in **Figure 1a**, where the weight values determine the vector magnitudes. The particles continue to explore the search space, similar to a bird as shown in **Figure 1b**, and converge to the best position.
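The update rules in Eqs. (1) and (2) can be sketched as a minimal PSO loop. The sphere objective, swarm size, iteration count, and coefficient values below are illustrative assumptions for the sketch, not values prescribed by the text.

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO implementing the velocity/position updates of Eqs. (1)-(2)."""
    lo, hi = bounds
    # Initialize random positions and zero velocities.
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]                   # personal best positions
    pbest_val = [f(xi) for xi in x]               # personal best fitness values
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best

    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                # Eq. (1): inertia + cognitive + social components.
                v[i][j] = (w * v[i][j]
                           + c1 * r1 * (pbest[i][j] - x[i][j])
                           + c2 * r2 * (gbest[j] - x[i][j]))
                # Eq. (2): position update, clamped to the search interval.
                x[i][j] = min(hi, max(lo, x[i][j] + v[i][j]))
            fx = f(x[i])
            if fx < pbest_val[i]:                 # update personal best
                pbest[i], pbest_val[i] = x[i][:], fx
                if fx < gbest_val:                # update global best
                    gbest, gbest_val = x[i][:], fx
    return gbest, gbest_val

# Example: minimize the sphere function sum(x_j^2) over [-5, 5]^3.
best, val = pso(lambda p: sum(t * t for t in p), dim=3, bounds=(-5, 5))
```

The personal-best and global-best bookkeeping inside the loop corresponds directly to the local/global comparison described above: each particle tracks its own best position, and the swarm tracks the single best position found so far.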

PSO shows sufficiently good performance on small-scale optimization problems. Building on the original PSO, improved versions have been proposed by many researchers. A few incremental works on PSO that support large-scale problems and multiple optima are discussed in this subsection [14].

Opposition-based PSO was discussed by Jabeen et al. [11]. Particles are classified into two classes, good and bad. A population of the two classes is generated, fitness is computed, and then the original PSO is applied. The opposite particle is computed using the equation

$$Pop_i = a + b - p_i \tag{3}$$

where, in Eq. (3), pi is the position of the ith particle in the D-dimensional real-valued search space and a and b are the bounds of the search interval. Quasi-oppositional comprehensive learning particle swarm optimization (QCLPSO) was proposed by Chang et al. [7], in which swarm initialization uses quasi-opposite numbers. An incremental approach that balances the search with a constriction factor was proposed by Clerc [8]. The velocity update equation with the constriction factor is summarized in Eqs. (4)–(6),

$$v_{i,j}^{t+1} = \chi \left[ v_{i,j}^{t} + \rho_1 \left( pbest_{i,j} - x_{i,j}^t \right) + \rho_2 \left( gbest_j - x_{i,j}^t \right) \right] \tag{4}$$

$$\chi = 2/\left| 2 - \varphi - \sqrt{\varphi^2 - 4\varphi} \right| \tag{5}$$

$$\varphi = c_1 + c_2, \quad \rho_1 = c_1 r_1, \quad \rho_2 = c_2 r_2 \tag{6}$$
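The constriction-factor update of Eqs. (4)–(6) can be sketched as follows. The choice c1 = c2 = 2.05 (so that φ = 4.1 > 4) is a commonly used illustrative setting, not a value fixed by the text.

```python
import math
import random

def constriction_chi(c1, c2):
    """Eq. (5): chi = 2 / |2 - phi - sqrt(phi^2 - 4*phi)|, requiring phi = c1 + c2 > 4."""
    phi = c1 + c2                       # Eq. (6): phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

def constricted_velocity(v, x, pbest, gbest, c1=2.05, c2=2.05):
    """Eq. (4): per-dimension velocity update scaled by the constriction factor chi."""
    chi = constriction_chi(c1, c2)
    new_v = []
    for j in range(len(v)):
        rho1 = c1 * random.random()     # Eq. (6): rho1 = c1 * r1
        rho2 = c2 * random.random()     # Eq. (6): rho2 = c2 * r2
        new_v.append(chi * (v[j] + rho1 * (pbest[j] - x[j]) + rho2 * (gbest[j] - x[j])))
    return new_v

# With c1 = c2 = 2.05, phi = 4.1 and chi is approximately 0.7298,
# the classic constriction value; chi < 1 damps the velocities and
# removes the need for a separate inertia weight or velocity clamping.
```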

**Figure 1.** *(a) Exploration of PSO; (b) search of a new position [4].*

**Figure 2.** *PSO algorithm and flowchart.*

Zhang et al. [13] distribute a random inertia weight in [0, 1] among the particles. Making the inertia weight depend on the iteration count, decreasing from one value to another, is applied to overcome the weak local search ability of the original PSO towards the end of a run. Speed- and accumulation-based inertia weight computation is proposed by Wei et al. [15]. An improved PSO with Cauchy mutation is proposed by Wang et al. [14]: the fittest particle is selected and mutated with a Cauchy distribution of scale parameter t = 1, and the mutated velocity is assigned to randomly chosen particles of the swarm for the chosen test function and population size. Wu et al. [13] apply a variation that distributes a power mutation among the particles, applying a power mutation function to the global best value; the fitness of both the original and the mutated particle is computed and the better one is selected. Another opposition-based power mutation for PSO is applied by Imran et al. [15]: mutation is applied twice, on the opposite swarm and as a power mutation of the global best particle, and selecting the best global mutation avoids stagnation. A further improved PSO is presented by Imran et al. [16] with Student's t mutation, in which the global best particle is mutated with a Student's t distribution, improving over adaptive and Cauchy mutation (**Figure 2**).
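Two of the mutation ideas above can be sketched as small helper functions: the opposite particle of Eq. (3) and a Cauchy mutation of the global best. The interval bounds, the scale parameter, and the greedy keep-the-better rule are illustrative assumptions for the sketch.

```python
import math
import random

def opposite_particle(p, a, b):
    """Eq. (3): the opposite of position p within the interval [a, b], per dimension."""
    return [a + b - pj for pj in p]

def cauchy_mutate(gbest, scale=1.0):
    """Cauchy mutation of the global best (scale t = 1, as in the Wang et al. scheme)."""
    # Sample a standard Cauchy variate via the inverse CDF: tan(pi * (u - 0.5)).
    return [gj + scale * math.tan(math.pi * (random.random() - 0.5)) for gj in gbest]

def keep_better(f, p, q):
    """Greedy selection used by the mutation schemes: keep whichever of p, q is fitter."""
    return p if f(p) <= f(q) else q

# Example: the opposite of [1, 2] in [0, 5] is [4, 3], per Eq. (3).
opp = opposite_particle([1.0, 2.0], 0.0, 5.0)
```

In the opposition- and mutation-based variants described above, these operators are applied to the swarm (or to the global best particle) and the fitter of the original and mutated candidates survives, which is what helps avoid stagnation.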
