**4. Various optimization techniques**

The literature offers several mathematical approaches to optimization, such as the simplex method, gradient methods, dynamic programming, branch and bound, and integer programming. All of these approaches are limited in the problem sizes they can handle, and they are most efficient on linear problems. As the number of variables and constraints grows, the computation time needed to obtain a solution increases exponentially. Consequently, solving complex problems with purely mathematical approaches becomes more and more difficult.

In the last few years, a number of meta-heuristic approaches have been developed to solve such complex problems with greater flexibility. Many bio- and nature-inspired optimization approaches have appeared, such as the Genetic Algorithm (GA) and Swarm Intelligence (SI) methods.

The GA works on the Darwinian principle of survival of the fittest solution in a population. It uses selection, crossover, and mutation as operators to evolve the population. It has been shown that, with suitable improvements, the GA can give a solution close to the global optimum. Furthermore, the Differential Evolution (DE) approach was explored to reach the global optimal solution.

Swarm Intelligence takes its inspiration from the social behavior of living organisms such as fish, birds, and insects. These organisms communicate with one another, directly or indirectly, using a set of patterns that serve individual optimization. These approaches operate on cooperation among the organisms rather than competition, with information continually exchanged from one individual to another.

Particle Swarm Optimization (PSO) is inspired by the food-searching behavior of fish schools and the flocking behavior of birds. To obtain the present best (local optimal) solution and the global best (global optimal) solution, the fish and birds are modeled as particles.

As stated previously, PSO is a swarm-based stochastic optimization method in which food is found cooperatively. Every particle (member) of the swarm is ready to modify its search pattern based on its own experience or on what it learns from the other particles (members).

In PSO, the velocity and position of the particles are updated in response to changes in the environment so as to meet the proximity and quality requirements. Proximity means the swarm must carry out simple time and space computations; quality means the swarm must detect a relevant environmental change and respond to it.

According to the cornfield model proposed by Heppner for bird or fish flocking behavior, assume there is a food location (cornfield) in the plane. Initially, the birds move and search for food at random. Let (x0, y0) be the coordinates of the cornfield, and let (x, y) and (vx, vy) be an individual bird's position and velocity, respectively. The distance between the current position and the cornfield determines how the speed and position are adjusted. Each particle has its own memory and can memorize the best position it has reached so far.

This present best position is denoted *pbest* (local best). Let *a* be a constant controlling the velocity adjustment and *rand* a random number in [0, 1]. The velocity is then changed as follows.

$$\begin{array}{l} \text{if } p\_{best,x} < x \text{, then} \\ \quad v\_x = v\_x - a \cdot rand \\ \text{else} \\ \quad v\_x = v\_x + a \cdot rand \\ \text{end} \\ \text{if } p\_{best,y} < y \text{, then} \\ \quad v\_y = v\_y - a \cdot rand \\ \text{else} \\ \quad v\_y = v\_y + a \cdot rand \\ \text{end} \end{array}$$

If, in addition, the members of the swarm can communicate with one another in some way, then besides memorizing its own best location, every individual also knows the global best position (*gbest*) of the entire swarm.

With *b* denoting the constant for this velocity adjustment, the velocity is updated with respect to *gbest* in the same way:

$$\begin{array}{l} \text{if } g\_{best,x} < x \text{, then} \\ \quad v\_x = v\_x - b \cdot rand \\ \text{else} \\ \quad v\_x = v\_x + b \cdot rand \\ \text{end} \\ \text{if } g\_{best,y} < y \text{, then} \\ \quad v\_y = v\_y - b \cdot rand \\ \text{else} \\ \quad v\_y = v\_y + b \cdot rand \\ \text{end} \end{array}$$
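As a sketch, the adjustment rules above can be written as one small function applied per coordinate. Everything here beyond the rules themselves is an illustrative assumption: the coordinate pairs are plain tuples, and the injectable `rng` stands in for *rand*.

```python
import random

def adjust_velocity(pos, vel, best, coeff, rng=random.random):
    """Cornfield-model velocity adjustment toward a remembered best position.

    pos, vel, best are (x, y) tuples; coeff is the adjustment constant
    ('a' for pbest, 'b' for gbest); rng draws a uniform number in [0, 1).
    """
    new_vel = []
    for p, v, b in zip(pos, vel, best):
        if b < p:                    # best lies behind the current coordinate
            v = v - coeff * rng()
        else:                        # best lies ahead of (or at) the coordinate
            v = v + coeff * rng()
        new_vel.append(v)
    return tuple(new_vel)
```

Applying the function twice, once with *pbest* and constant *a* and once with *gbest* and constant *b*, reproduces the two blocks of equations above.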

After some trial and error, the velocity and position updates were fixed in the following form. Velocity update:

$$v\_x = v\_x + 2 \cdot (p\_{best,x} - x) \cdot rand + 2 \cdot (g\_{best,x} - x) \cdot rand \tag{7}$$

Position updating:

$$
x = x + v\_x \tag{8}
$$
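A minimal sketch of the fixed update, Eqs. (7) and (8), for a single coordinate. The injectable `rng` (standing in for the two independent draws of *rand*) is an assumption made for testability.

```python
import random

def pso_step(x, v, pbest_x, gbest_x, rng=random.random):
    """One coordinate update per Eqs. (7)-(8), with two independent random draws."""
    v = v + 2 * (pbest_x - x) * rng() + 2 * (gbest_x - x) * rng()  # Eq. (7)
    x = x + v                                                      # Eq. (8)
    return x, v
```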

*Particle Swarm Optimization DOI: http://dx.doi.org/10.5772/intechopen.107156*

Here every individual is considered a particle. These particles have no mass or volume, but they do have position and velocity; hence this optimization method is known as the 'Particle Swarm Optimization Algorithm'.

The PSO algorithm can be explained using the flowchart in **Figure 2**. Let 'N' be the size of the swarm. The position vector of each individual particle in the D-dimensional search space is

$$X\_i = (X\_{i1}, X\_{i2}, X\_{i3}, \dots, X\_{id}, \dots, X\_{iD}) \tag{9}$$

Its velocity vector is

$$V\_i = (V\_{i1}, V\_{i2}, V\_{i3}, \dots, V\_{id}, \dots, V\_{iD}) \tag{10}$$

The present best (or local best) position of an individual particle is

$$P\_i = (P\_{i1}, P\_{i2}, P\_{i3}, \dots, P\_{id}, \dots, P\_{iD}) \tag{11}$$

The optimal (or global best) position found by the entire swarm is

**Figure 2.** *The Flowchart of the PSOA [2].*

$$P\_g = \left(P\_{g1}, P\_{g2}, P\_{g3}, \dots, P\_{gd}, \dots, P\_{gD}\right) \tag{12}$$
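The four quantities in Eqs. (9)-(12) can be held as simple per-particle lists. The values of N and D and the sphere objective below are illustrative assumptions, not part of the original formulation.

```python
import random

N, D = 5, 3                        # swarm size and dimensionality (assumed example values)

def sphere(x):                     # an assumed toy objective: minimize the sum of squares
    return sum(xi * xi for xi in x)

X = [[random.uniform(-1.0, 1.0) for _ in range(D)] for _ in range(N)]  # positions, Eq. (9)
V = [[0.0] * D for _ in range(N)]  # velocities, Eq. (10)
P = [row[:] for row in X]          # personal bests start at the initial positions, Eq. (11)
Pg = min(P, key=sphere)[:]         # global best of the whole swarm, Eq. (12)
```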

Without loss of generality, taking minimization as the objective, the update equation for an individual particle's best position is

$$P\_i^d(t+1) = \begin{cases} x\_i^d(t+1), & f\left(x\_i(t+1)\right) < f\left(P\_i(t)\right) \\ P\_i^d(t), & \text{otherwise} \end{cases}$$
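For a minimization objective f, this best-position update reduces to a one-line comparison. The helper name and the sum-of-squares objective below are illustrative assumptions.

```python
def update_pbest(p_i, x_new, f):
    """Keep the new position as the personal best only if it improves f."""
    return x_new[:] if f(x_new) < f(p_i) else p_i

# An assumed sum-of-squares objective for demonstration:
f = lambda x: sum(xi * xi for xi in x)
```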

The velocity and position update formulas are then given as

$$V\_i^d(t+1) = V\_i^d(t) + C\_1 \cdot \left(P\_i^d(t) - x\_i^d(t)\right) \cdot rand + C\_2 \cdot \left(P\_g^d(t) - x\_i^d(t)\right) \cdot rand \tag{13}$$

$$x\_i^d(t+1) = x\_i^d(t) + V\_i^d(t+1) \tag{14}$$
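Putting Eqs. (13) and (14) together with the best-position update gives a complete, minimal PSO loop. This is a sketch under stated assumptions: the sphere objective, the [-5, 5] search range, and the velocity clamp `vmax` (a common safeguard not mentioned in the text, added because the bare update with C1 = C2 = 2 can diverge) are all additions.

```python
import random

def pso(f, dim, n=20, iters=200, c1=2.0, c2=2.0, vmax=0.5, seed=0):
    """Minimal PSO per Eqs. (13)-(14); vmax clamping is an added safeguard."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                  # personal bests
    g = min(P, key=f)[:]                   # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                v = (V[i][d]
                     + c1 * (P[i][d] - X[i][d]) * rng.random()   # cognitive term
                     + c2 * (g[d] - X[i][d]) * rng.random())     # social term
                V[i][d] = max(-vmax, min(vmax, v))               # clamp velocity
                X[i][d] += V[i][d]                               # Eq. (14)
            if f(X[i]) < f(P[i]):          # personal-best update
                P[i] = X[i][:]
                if f(P[i]) < f(g):         # global-best update
                    g = P[i][:]
    return g

def sphere(x):
    return sum(xi * xi for xi in x)

best = pso(sphere, dim=2)
```

With the fixed seed, the loop reliably drives the swarm toward the sphere minimum at the origin.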

The PSO proposed above was already adequate for optimization, but it was later modified to improve its effectiveness by adding an inertia weight. With the introduction of the inertia weight $\alpha$, the velocity update equation is modified to

$$V\_i^d(t+1) = \alpha \cdot V\_i^d(t) + C\_1 \cdot \left(P\_i^d(t) - x\_i^d(t)\right) \cdot rand + C\_2 \cdot \left(P\_g^d(t) - x\_i^d(t)\right) \cdot rand \tag{15}$$

Next, to take the convergence rate into account, the constriction factor $\chi$ was introduced into PSO. The velocity update equation then becomes

$$V\_i^d(t+1) = \chi \left( V\_i^d(t) + C\_1 \cdot \left(P\_i^d(t) - x\_i^d(t)\right) \cdot rand + C\_2 \cdot \left(P\_g^d(t) - x\_i^d(t)\right) \cdot rand \right) \tag{16}$$
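A common choice for the constriction factor, assumed here rather than stated in the text, is Clerc's closed form $\chi = 2/\lvert 2 - \varphi - \sqrt{\varphi^2 - 4\varphi}\rvert$ with $\varphi = C\_1 + C\_2 > 4$; for $C\_1 = C\_2 = 2.05$ it gives $\chi \approx 0.7298$.

```python
import math
import random

def constriction_factor(c1=2.05, c2=2.05):
    """Clerc's closed-form chi for phi = c1 + c2 > 4 (an assumed standard choice)."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

def velocity_constricted(v, x, pbest_x, gbest_x, c1=2.05, c2=2.05, rng=random.random):
    """One coordinate of Eq. (16): the whole bracket is scaled by chi."""
    chi = constriction_factor(c1, c2)
    return chi * (v + c1 * (pbest_x - x) * rng() + c2 * (gbest_x - x) * rng())
```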

The distance from a particle's current position to its own best position is called the 'cognitive' component, i.e., the particle's own thinking. Hence *C*1 is called the cognitive acceleration factor (or cognitive learning factor).

The distance from the current position to the global best position is known as the 'social' component. It reflects the coordination and information sharing among particles: particles move toward good regions through this shared cognition. Hence *C*2 is the social acceleration factor (or social learning factor).

Both the theory and the practical application of the PSO algorithm have made great progress, and PSO has been applied across various domains by researchers who understand its operating principle and range of application.

The PSO algorithm does not require the optimized function to be continuous or differentiable, has a fast rate of convergence, and is easy to understand and to implement in a program.

Coming to the disadvantages: for functions with multiple local extrema, PSO may fall into a local minimum and fail to produce the optimal result because of premature convergence of the particles. PSO may also perform poorly because it lacks strong search techniques and does not use the available information sufficiently in its calculations; in every iteration it relies only on local optimum information, which may not yield correct results. PSO offers a global search capability but cannot guarantee convergence to the global best. It is more suitable for the class of high-dimensional optimization problems in which a highly accurate outcome is not required, because PSO is a meta-heuristic algorithm: it provides no principled explanation of why it is efficient, nor a specified range of application.
