## 2.4.3. Velocity component

The velocity update in Eq. (1) comprises three terms. The first term is the preceding velocity vector, i.e. the former direction and magnitude of a particle's velocity; this component prevents a particle from making a radical change in velocity in the current iteration. The second is the cognitive component, which is based on the memory of an agent in agreement with its own experience; it continuously inspires a particle to return to the position that has been best for it locally. The third is the social component: the knowledge given to an individual by social communication, which constantly encourages the particle to travel in the direction of the best position found by its neighbourhood.

## 2.4.4. Acceleration coefficients

The variables $c_1$ and $c_2$ are known as acceleration coefficients; they attempt to maintain an equilibrium between the cognitive component and the social component of the velocity.

• If $c_1 = c_2 = 0$, Eq. (1) becomes $V_{i,t+1} = V_{i,t}$. This implies that all the particles keep moving with their initial velocity, resulting in no search at all.

• If $c_1 > 0$ and $c_2 = 0$, Eq. (1) resolves to $V_{i,t+1} = V_{i,t} + c_1 \ast r_1 \ast (p_{best} - X_{i,t})$. This implies that all the particles roam their search space autonomously; since they are not interacting with their neighbours, they are incapable of obtaining the global optimal solution.

• If $c_1 = 0$ and $c_2 > 0$, Eq. (1) resolves to $V_{i,t+1} = V_{i,t} + c_2 \ast r_2 \ast (g_{best} - X_{i,t})$. It infers that all particles are attracted to a single point, which is not revised in each time step.

• If $c_1 = c_2$, all particles will travel towards the average of the $p_{best}$ and $g_{best}$ values.

• If $c_1 \gg c_2$, the particles are steered towards their $p_{best}$ positions, while $c_2 \gg c_1$ entices the particles towards the $g_{best}$ position; in both circumstances the particles rush prematurely to the optimum solution.

Usually $c_1$ and $c_2$ are considered to be equal, constant values, and various scholarly articles propose the value 2 for both to obtain decent optimal results [14].

Particle Swarm Optimization with Applications

3. Analysis of PSO characteristics & modification

## 3.1. Velocity clamping

A particle's velocity, a significant parameter of the PSO algorithm, is the step size of the swarm in every iteration. At each time step, the particles alter their velocities and travel in all directions in the problem space. If the velocity is too high, the particle may hastily leave the periphery of the search space and diverge; conversely, if the velocity is too low, the movement of the particles is limited to a small region and they become confined in a local optimum. It is therefore necessary to preserve an equilibrium between exploration and exploitation by setting a parameter $V_{max} = (X_{max} - X_{min})/k$. The empirical value of k is set to 2 [14].

## 3.2. Inertia weight

The inertia weight (w) was subsequently introduced [13], producing the variant PSO-W, as a substitute for $V_{max}$ to regulate the momentum of the particle when evaluating the updated velocity. It is intended to control the exploration and exploitation abilities of the swarm so that the algorithm converges more efficiently over time. Eq. (1) is therefore adapted as Eq. (3).

$$V\_{i,t+1} = w \ast V\_{i,t} + c\_1 \ast r\_1 \ast \left(p\_{\text{best}} - X\_{i,t}\right) + c\_2 \ast r\_2 \ast \left(g\_{\text{best}} - X\_{i,t}\right) \tag{3}$$


Typically the inertia weight w is selected depending on the size of the search space: a high value of w is essential for a complex, high-dimensional problem space, and a small value for a low-dimensional search space.

The inertia weight can be varied according to Eq. (4), where s is the population size, D is the dimension size and R is the relative quality of the corresponding solution, normalized to [0, 1].

$$w = \left[3-\exp\left(\frac{-s}{200}\right)+\left(\frac{R}{8}\ast D\right)^2\right]^{-1}\tag{4}$$
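As a minimal sketch (function names are illustrative and the update is shown in one dimension for brevity), the adaptive inertia weight of Eq. (4) and the velocity update of Eq. (3), combined with the $V_{max}$ clamp of Section 3.1, can be written as:

```python
import math
import random

def inertia_weight(s, D, R):
    """Adaptive inertia weight following Eq. (4):
    w = [3 - exp(-s/200) + (R/8 * D)^2]^(-1)."""
    return 1.0 / (3.0 - math.exp(-s / 200.0) + (R / 8.0 * D) ** 2)

def update_velocity(v, x, pbest, gbest, w, c1=2.0, c2=2.0, vmax=None):
    """One-dimensional velocity update of Eq. (3), with the optional
    Vmax clamp of Section 3.1, where vmax = (xmax - xmin) / k, k = 2."""
    r1, r2 = random.random(), random.random()
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    if vmax is not None:
        # clamp the step size so the particle cannot overshoot the space
        v_new = max(-vmax, min(vmax, v_new))
    return v_new
```

For instance, a swarm of s = 40 particles in a D = 10 dimensional space with relative quality R = 0.5 gives w ≈ 0.389 under Eq. (4).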

## 3.3. Constriction factor

The PSO algorithm is further modified to replace the inertia weight w and the maximum velocity Vmax by a new parameter χ, known as the constriction factor, given in Eq. (6). Clerc [17] pioneered this factor, which proved exceptionally effective in regulating the exploration-exploitation trade-off, thus guaranteeing efficient convergence of the algorithm. Eq. (1) is amended as Eq. (5).

$$V\_{i,t+1} = \chi \* \left[ V\_{i,t} + \Phi\_1 \* \left( p\_{\text{best}} - X\_{i,t} \right) + \Phi\_2 \* \left( g\_{\text{best}} - X\_{i,t} \right) \right] \tag{5}$$

$$\chi = \frac{2}{\left|2 - \phi - \sqrt{\phi^2 - 4\phi}\right|}, \quad \phi > 4 \tag{6}$$

Here, $\phi = \phi_1 + \phi_2$, with $\phi_1 = c_1 \ast r_1$ and $\phi_2 = c_2 \ast r_2$. Characteristically, applying the value $\phi = 4.1$ yields $\chi \approx 0.729$. Since $\chi < 1$, the previous velocity is scaled down, which implies that the particles rapidly alter their course under the influence of $p_{best}$ and $g_{best}$ with assured convergence. Both the $(p_{best} - X_{i,t})$ and $(g_{best} - X_{i,t})$ terms are multiplied by $2 \ast 0.729 = 1.458$ [18]. Generally these values are preferred for improved stability and convergence.
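A short sketch of Eq. (6) (the function name is illustrative):

```python
import math

def constriction_factor(phi):
    """Clerc's constriction factor of Eq. (6); valid for phi > 4."""
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
```

With $\phi = 4.1$ this evaluates to approximately 0.7298, the value 0.729 quoted above.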

## 3.4. Acceleration coefficient

The acceleration coefficients as PSO parameters have already been explained in the preceding section. Usually both $c_1$ and $c_2$ are set to 2 [19]. The balance between these parameters can be controlled in two distinct ways, described below, to accomplish superior results for path minimization in VLSI global routing.

The objective of global routing is to interconnect the partitions or blocks in a VLSI layout, which can be achieved by reducing the complete wire length of the interconnected terminals or blocks. The global routing problem can be formally stated as follows: let N = {N1, N2, N3, …, Nm} be the set of nets denoting interconnections among blocks in the VLSI layout, and let the estimated wirelength of net Ni, 1 ≤ i ≤ m, be denoted by Di. The problem function can be expressed such that the overall total wirelength $\sum_{i=1}^{m} D_i$ is minimized.

Global routing formulation is done by mapping the required VLSI layout onto classical graph theory as a grid-graph model, which is regarded here as the setting in which to execute the above proposed algorithms. The grid graph, G = (V, E), is a representation of a routing-region layout in which the region is carved into a number of unit square cells, as shown in Figure 1. Each cell representing a routing area (empty area) between blocks is signified by a vertex $v_i$, and the edge $e_{ij}$ links the two neighbouring vertices $v_i$ and $v_j$. The vertices correspond to the nodes, and the edges correspond to the routing paths between blocks in a VLSI layout.

To obtain the solution of the VLSI routing problem for a multi-terminal net, the primary task is to formulate it as the problem of obtaining an RMST (Rectilinear Minimum Spanning Tree) from a graph. The minimum spanning tree of the interconnected terminal nodes, generated using graph algorithms, yields the minimum cost of the interconnect length. By introducing random Steiner nodes along with the terminal nodes of the multi-terminal VLSI layout, the cost, i.e. the overall wirelength, is further reduced, generating the minimum Steiner tree cost (length) in the graph. Depending on the position and the number of Steiner nodes, the cost or overall length can be minimized further. With a large number of terminal nodes, determining the number of Steiner nodes and the desired positioning of these Steiner nodes becomes computationally hard; hence the PSO algorithm is used to select a probable number of Steiner nodes and to generate their random positions in order to optimize the Steiner cost.

The algorithm commences with the random generation of a swarm of z particles, which are placed in the required graph of n×n dimension. Each particle consists of at most (p − 2) randomly generated Steiner points drawn from the Steiner set S with (n² − p) points, where p is the number of terminal nodes of the net.

Performance Comparison of PSO and Its New Variants in the Context of VLSI Global Routing
http://dx.doi.org/10.5772/intechopen.72811

Figure 1. Routing region layout in grid graph.
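The wirelength reduction that a well-placed Steiner point provides can be illustrated with a small sketch (not the authors' implementation; Prim's algorithm on Manhattan distances, names illustrative):

```python
def rmst_cost(points):
    """Cost of a minimum spanning tree over the given points under
    Manhattan (rectilinear) distance, computed with Prim's algorithm."""
    dist = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
    in_tree = {points[0]}
    rest = set(points[1:])
    cost = 0
    while rest:
        # cheapest edge from the partial tree to a node not yet in it
        d, nxt = min((dist(u, v), v) for u in in_tree for v in rest)
        cost += d
        in_tree.add(nxt)
        rest.remove(nxt)
    return cost

terminals = [(0, 0), (4, 0), (2, 3)]
print(rmst_cost(terminals))             # 9: RMST over terminals only
print(rmst_cost(terminals + [(2, 0)]))  # 7: a Steiner point shortens the tree
```

Searching over the number and positions of such Steiner points is exactly the task handed to PSO above.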

## 3.4.1. Self-tuned

In this algorithm, PSO-ST [20], the acceleration constants $c_1$ and $c_2$ are both reduced linearly over the time steps, in the range 2 to 1.49. At the beginning, the algorithm is primed with $c_1 = c_2 = 2$. Through this linear decrement, both the exploration and exploitation abilities of the swarm can be preserved efficiently during velocity updating, delivering swift convergence to the algorithm. The algorithm proves competent at obtaining optimal results with a high convergence rate.
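The linear decrement described above can be sketched as follows (function name illustrative):

```python
def st_coefficients(t, t_max, c_start=2.0, c_end=1.49):
    """PSO-ST schedule (sketch): c1 and c2 decrease linearly
    from c_start = 2 to c_end = 1.49 over t_max time steps."""
    c = c_start - (c_start - c_end) * (t / t_max)
    return c, c  # c1 = c2 at every time step
```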

## 3.4.2. Self-adaptation

An algorithm, PSO-SAAC, is introduced in which the two acceleration constants $c_1$ and $c_2$ are varied in such a way that they gain enhanced influence over the trade-off between global exploration and local exploitation. The algorithm commences with the highest exploration and lowest exploitation aptitudes of the swarm, which are then gradually altered at every time step over the entire iteration process. The particles of the swarm are therefore able to disperse uniformly across the search space, driven by the social component of the velocity vector in the first phase of the experiment. As the cognitive component outpaces the social component in the subsequent phase, the swarm performs the local search process centered on the results assessed during the global search, with the intention of obtaining the finest local optimum. Throughout the whole search this self-adaptive procedure is effective in producing the most significant gbest value, thereby heightening the optimization rate.
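The exact PSO-SAAC schedule is not given here; one plausible linear self-adaptation matching the described behaviour (the bounds 0.5 and 2.5 are assumptions, as are the names) is:

```python
def saac_coefficients(t, t_max, c_min=0.5, c_max=2.5):
    """Hypothetical self-adaptive schedule: the social weight c2 decays
    while the cognitive weight c1 grows, so the swarm explores via the
    social component first and exploits locally later, as described."""
    frac = t / t_max
    c1 = c_min + (c_max - c_min) * frac  # cognitive: low -> high
    c2 = c_max - (c_max - c_min) * frac  # social: high -> low
    return c1, c2
```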

## 3.5. PSO with mutation

A fresh algorithm is presented in which a principle of the Genetic Algorithm is incorporated into PSO [21]. After some time steps, the algorithm begins the first phase by selecting swarms from the existing generation. Swarms with high fitness are selected with probability $f_i / \sum_{j=1}^{N} f_j$, where N is the population size. In the second phase, the high-fitness factor is extracted from the selected pool, generating a mutant. This enhanced knowledge of the high-fitness property is induced into the position-vector Eq. (2) to evolve a new generation of swarms, causing mutation in PSO [22]. The proposed position vector is given in Eq. (7) below.

$$X\_{i,t+1} = (\psi \* X\_{i,t} + \xi) + V\_{i,t+1} \tag{7}$$

where ψ is the randomization factor and ξ is the mutant fitness factor.
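The two phases above can be sketched as follows (names illustrative; the selection step is fitness-proportionate, as the probability $f_i / \sum_{j=1}^{N} f_j$ implies):

```python
import random

def select_index(fitness):
    """Roulette-wheel selection: index i is drawn with probability
    f_i / sum_{j=1..N} f_j, as in the first (selection) phase."""
    total = sum(fitness)
    r = random.uniform(0.0, total)
    acc = 0.0
    for j, f in enumerate(fitness):
        acc += f
        if acc >= r:
            return j
    return len(fitness) - 1

def mutated_position(x, v_next, psi, xi):
    """Mutated position update of Eq. (7):
    X_{i,t+1} = (psi * X_{i,t} + xi) + V_{i,t+1}."""
    return (psi * x + xi) + v_next
```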
