#### 2.1. Multiobjective problems

A minimization MOP involves several conflicting objectives and is defined as:

$$\begin{array}{ll}\text{minimize} & F(\mathbf{x}) = \left(f\_1(\mathbf{x}), f\_2(\mathbf{x}), \dots, f\_m(\mathbf{x})\right)^\top, \\\text{subject to } & \mathbf{x} \in \Omega, \end{array} \tag{1}$$

where m is the number of objectives, x is the decision vector, and fi(·) is the ith objective function. A decision vector y is said to dominate a decision vector z (denoted y ≺ z) if:

$$\forall i: f\_i(\mathbf{y}) \le f\_i(\mathbf{z}) \text{ and } \exists j: f\_j(\mathbf{y}) < f\_j(\mathbf{z}),\tag{2}$$

where i = 1, 2, …, m and j = 1, 2, …, m. A solution that is not dominated by any other solution is called a Pareto optimal solution, and the set of all Pareto optimal solutions constitutes the Pareto front.
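
For concreteness, a minimal dominance test corresponding to Eq. (2) might look as follows; this is a sketch in Python, and the function name and data layout are illustrative choices, not code from the chapter:

```python
# A minimal sketch of the dominance test in Eq. (2), assuming minimization
# and objective vectors given as sequences of floats.
def dominates(f_y, f_z):
    """Return True if objective vector f_y dominates f_z (y ≺ z)."""
    # y must be no worse than z in every objective ...
    no_worse = all(fy <= fz for fy, fz in zip(f_y, f_z))
    # ... and strictly better in at least one objective.
    strictly_better = any(fy < fz for fy, fz in zip(f_y, f_z))
    return no_worse and strictly_better

# Example: (1, 2) dominates (2, 2); (1, 3) and (2, 2) are mutually non-dominated.
assert dominates((1.0, 2.0), (2.0, 2.0))
assert not dominates((1.0, 3.0), (2.0, 2.0))
```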

#### 2.2. Particle swarm optimization

PSO is a stochastic optimization algorithm in which a swarm contains a number of particles, and the position of each particle represents one candidate solution. The position of a particle is expressed by a vector:

$$\mathbf{x}\_i(t) = [\mathbf{x}\_{i,1}(t), \mathbf{x}\_{i,2}(t), \dots, \mathbf{x}\_{i,D}(t)],\tag{3}$$


where D is the dimensionality of the search space and i = 1, 2, …, s, with s the swarm size. Each particle also has a velocity, which is represented as:

$$\mathbf{v}\_i(t) = [v\_{i,1}(t), v\_{i,2}(t), \dots, v\_{i,D}(t)].\tag{4}$$

During the search, the best previous position of each particle is recorded as pi(t)=[pi,1(t), pi,2(t),…, pi,D(t)], and the best position obtained by the swarm is denoted as g(t)=[g1(t), g2(t),…, gD(t)]. Based on pi(t) and g(t), the new velocity of each particle is updated by:

$$v\_{i,d}(t+1) = \omega v\_{i,d}(t) + c\_1 r\_1 \left(p\_{i,d}(t) - x\_{i,d}(t)\right) + c\_2 r\_2 \left(g\_d(t) - x\_{i,d}(t)\right),\tag{5}$$

where t denotes the tth iteration of the search; d = 1, 2, …, D is the dimension index in the search space; ω is the inertia weight; c1 and c2 are the acceleration constants; and r1 and r2 are random values uniformly distributed in [0, 1]. The updating formula for the new position is then expressed as:

$$x\_{i,d}(t+1) = x\_{i,d}(t) + v\_{i,d}(t+1).\tag{6}$$

At the beginning of the search, the initial position of each particle is randomly generated. As the search proceeds, the particles may become unevenly distributed in the evolutionary space.
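
As a sketch of how Eqs. (3)–(6) fit together, the following Python fragment performs one velocity-and-position update for a whole swarm; the parameter values (ω, c1, c2), the bounds, and the choice of g(t) are illustrative assumptions, not values from the chapter:

```python
import numpy as np

rng = np.random.default_rng(0)
s, D = 20, 10                       # swarm size and search-space dimension
omega, c1, c2 = 0.7, 1.5, 1.5       # inertia weight and acceleration constants (assumed)

x = rng.uniform(-1.0, 1.0, (s, D))  # particle positions, Eq. (3)
v = np.zeros((s, D))                # particle velocities, Eq. (4)
p = x.copy()                        # best previous positions p_i(t)
g = x[0].copy()                     # best position found by the swarm, g(t) (placeholder)

r1 = rng.uniform(0.0, 1.0, (s, D))  # per-dimension random factors in [0, 1]
r2 = rng.uniform(0.0, 1.0, (s, D))
v = omega * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)  # Eq. (5)
x = x + v                                              # Eq. (6)
```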

## 3. Multiobjective gradient method

The key difference of AGMOPSO from the original MOPSO is that the MOG method is taken into account. In AGMOPSO, a population of N particles searches for a set of non-dominated solutions, which are stored in an archive with a predefined maximum size.

In MOPSO, the position of each particle represents a potential solution to the conflicting objectives, and the gBest and pBest guide the evolutionary direction of the whole particle swarm. The position xi and velocity vi of the ith particle are D-dimensional vectors with xi(0) ∈ R<sup>D</sup>, vi(0) ∈ R<sup>D</sup>. Each particle updates its velocity and position along the motion trajectory in Eqs. (5) and (6). The external archive A(0) is initialized as an empty set. Meanwhile, the best previous position pi(t) is computed by:

$$\mathbf{p}\_i(t) = \begin{cases} \mathbf{p}\_i(t-1), & \text{if } \mathbf{p}\_i(t-1) \prec \mathbf{x}\_i(t), \\ \mathbf{x}\_i(t), & \text{otherwise,} \end{cases} \tag{7}$$

that is, pi(t) is replaced by xi(t) whenever xi(t) is not dominated by pi(t − 1). The archive A(t) is then updated based on the previous archive A(t − 1) and the best previous positions pi(t):


$$\mathbf{A}(t) = \begin{cases} \mathbf{A}(t-1) \cup \mathbf{p}\_i(t), & \text{if } \mathbf{a}\_j(t-1) \prec\succ \mathbf{p}\_i(t), \\ \overline{\mathbf{A}}(t-1) \cup \mathbf{p}\_i(t), & \text{otherwise,} \end{cases} \tag{8}$$

where A(t)=[a1(t), a2(t),…, aK(t)]<sup>T</sup>, aj(t)=[a1,j(t), a2,j(t),…, aD,j(t)], and $\overline{\mathbf{A}}(t-1)$ is the updated archive from which the solutions dominated by the best previous position pi(t) have been removed. K is the size of the archive A(t), which changes during the learning process, and aj(t − 1) ≺≻ pi(t) means that aj(t − 1) is not dominated by pi(t) and pi(t) is not dominated by aj(t − 1). Moreover, g(t) is found according to [24].
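
A minimal sketch of one common reading of the pBest and archive updates in Eqs. (7)–(8) is given below; the list-based data layout and helper names are illustrative, and the handling of a dominated candidate follows the usual archive convention rather than an explicit statement in the chapter:

```python
# A sketch of Eqs. (7)-(8) under minimization; objective vectors are tuples.
def dominates(f_y, f_z):  # Eq. (2)
    return (all(a <= b for a, b in zip(f_y, f_z))
            and any(a < b for a, b in zip(f_y, f_z)))

def update_pbest(p_prev, x_new):
    """Eq. (7): keep p_i(t-1) only if it dominates x_i(t)."""
    return p_prev if dominates(p_prev, x_new) else x_new

def update_archive(archive, p_best):
    """Eq. (8): insert p_i(t) and drop archive members it dominates."""
    if any(dominates(a, p_best) for a in archive):
        return archive  # p_i(t) is dominated: archive unchanged (assumed convention)
    kept = [a for a in archive if not dominates(p_best, a)]  # A-bar(t-1)
    return kept + [p_best]                                   # A-bar(t-1) ∪ p_i(t)
```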

In AGMOPSO, to enhance local exploitation, the archive A(t) is further updated by the MOG method, which uses gradient information to obtain a set of solutions that approximates the optimal Pareto set. Without loss of generality, assuming all of the objective functions are differentiable, the directional derivative of fi(aj(t)) in a direction ūj(t) at the point aj(t) is defined as

$$\nabla\_{\overline{\mathbf{u}}\_j(t)} f\_i\left(\mathbf{a}\_j(t)\right) = \lim\_{\delta \to 0} \left\{ \frac{f\_i\left(\mathbf{a}\_j(t) + \delta \overline{\mathbf{u}}\_j(t)\right) - f\_i\left(\mathbf{a}\_j(t)\right)}{\delta} \right\},\tag{9}$$

where δ > 0, ūj(t)=[ū1,j(t), ū2,j(t), …, ūD,j(t)], i = 1, 2, …, m, and j = 1, 2, …, K. The directional derivative can be rewritten as:

$$\nabla\_{\overline{\mathbf{u}}\_j(t)} f\_i\left(\mathbf{a}\_j(t)\right) = \nabla f\_i\left(\mathbf{a}\_j(t)\right)\, \overline{\mathbf{u}}\_j(t),\tag{10}$$

Then the gradient direction of the MOP can be represented as:

$$\nabla\_{\overline{\mathbf{u}}\_j(t)} \mathbf{F}\left(\mathbf{a}\_j(t)\right) = \left[\nabla\_{\overline{\mathbf{u}}\_j(t)} f\_1\left(\mathbf{a}\_j(t)\right), \nabla\_{\overline{\mathbf{u}}\_j(t)} f\_2\left(\mathbf{a}\_j(t)\right), \ldots, \nabla\_{\overline{\mathbf{u}}\_j(t)} f\_m\left(\mathbf{a}\_j(t)\right)\right]^T.\tag{11}$$

According to Eq. (11), the minimizing direction of the MOP is calculated as

$$\widehat{\mathbf{u}}\_i(t) = \frac{\nabla f\_i\left(\mathbf{a}\_j(t)\right)}{\left\|\nabla f\_i\left(\mathbf{a}\_j(t)\right)\right\|},$$

$$\nabla f\_i\left(\mathbf{a}\_j(t)\right) = \left[\partial f\_i\left(\mathbf{a}\_j(t)\right)/\partial a\_{1,j}(t),\ \partial f\_i\left(\mathbf{a}\_j(t)\right)/\partial a\_{2,j}(t),\ \dots,\ \partial f\_i\left(\mathbf{a}\_j(t)\right)/\partial a\_{D,j}(t)\right],\tag{12}$$

and $\left\|\widehat{\mathbf{u}}\_i(t)\right\| = 1$. In addition, the smooth criteria fi(aj(t)) are said to be Pareto-stationary at the point aj(t) if

$$\sum\_{i=1}^{m} \alpha\_i(t)\,\widehat{\mathbf{u}}\_i(t) = 0, \quad \sum\_{i=1}^{m} \alpha\_i(t) = 1, \quad \alpha\_i(t) \ge 0 \ \ (\forall i).\tag{13}$$
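
To make Eqs. (9), (12) and (13) concrete, the sketch below estimates the unit gradient directions by forward differences and tests Pareto stationarity for a given weight vector α(t) (defined in Eq. (14) below); the objective callables, the step δ, and the tolerance are illustrative assumptions:

```python
import numpy as np

def unit_gradients(objectives, a_j, delta=1e-6):
    """Normalized gradients u_hat_i(t) of Eq. (12), one row per objective."""
    D = a_j.size
    rows = []
    for f in objectives:
        # forward-difference estimate of the gradient, cf. Eq. (9)
        grad = np.array([(f(a_j + delta * np.eye(D)[d]) - f(a_j)) / delta
                         for d in range(D)])
        rows.append(grad / np.linalg.norm(grad))  # enforce ||u_hat_i(t)|| = 1
    return np.array(rows)                          # shape (m, D)

def is_pareto_stationary(U_hat, alpha, tol=1e-6):
    """Eq. (13): sum_i alpha_i u_hat_i = 0 with alpha on the probability simplex."""
    assert np.all(alpha >= 0) and abs(alpha.sum() - 1.0) < 1e-9
    return np.linalg.norm(alpha @ U_hat) < tol
```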

The weight vector can be set as


$$\boldsymbol{\alpha}(t) = \frac{1}{\left\| \widehat{\mathbf{U}}^{T} \widehat{\mathbf{U}} \right\|^2} \left[ \left\| \widehat{\mathbf{u}}\_1 \right\|^2, \left\| \widehat{\mathbf{u}}\_2 \right\|^2, \dots, \left\| \widehat{\mathbf{u}}\_m \right\|^2 \right]^T,\tag{14}$$

where $\widehat{\mathbf{U}}(t) = \left[\widehat{\mathbf{u}}\_1(t), \widehat{\mathbf{u}}\_2(t), \ldots, \widehat{\mathbf{u}}\_m(t)\right]$, $\alpha\_i(t) = \left\|\widehat{\mathbf{u}}\_i\right\|^2 / \left\|\widehat{\mathbf{U}}^T\widehat{\mathbf{U}}\right\|^2$, and $\left\|\boldsymbol{\alpha}\right\| = 1$. To find the set of Pareto-optimal solutions of MOPs, the multi-gradient descent direction is given as follows:

$$\nabla F\left(\mathbf{a}\_j(t)\right) = \sum\_{i=1}^{m} \alpha\_i(t)\,\widehat{\mathbf{u}}\_i(t), \quad \sum\_{i=1}^{m} \alpha\_i(t) = 1, \quad \alpha\_i(t) \ge 0,\tag{15}$$

This multi-gradient descent direction is utilized to evaluate the full set of unit directions, and the archive A(t) is updated as follows:

$$\overline{\mathbf{a}}\_{j}(t) = \mathbf{a}\_{j}(t) + h \cdot \nabla F(\mathbf{a}\_{j}(t)),\tag{16}$$

where h is the step size, and aj(t) and āj(t) are the jth archive members before and after the MOG step at time t; the fitness values are updated at the same time.
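
Putting Eqs. (14)–(16) together, one literal reading of the MOG step is sketched below; note that with exactly unit directions Eq. (14) reduces to uniform weights, and the step size h is an illustrative choice rather than a value from the chapter:

```python
import numpy as np

def mog_step(U_hat, a_j, h=0.01):
    """One MOG update of archive member a_j(t), cf. Eqs. (14)-(16).

    U_hat: (m, D) matrix whose rows are the unit gradient directions of Eq. (12).
    """
    G = U_hat @ U_hat.T                                          # Gram matrix U^T U, m x m
    alpha = np.sum(U_hat ** 2, axis=1) / np.linalg.norm(G) ** 2  # Eq. (14), taken literally
    alpha = alpha / alpha.sum()                                  # keep alpha on the simplex, Eq. (15)
    grad_F = alpha @ U_hat                                       # multi-gradient direction, Eq. (15)
    return a_j + h * grad_F                                      # updated archive member, Eq. (16)
```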

Moreover, the archive A(t) stores the non-dominated solutions of AGMOPSO, but the number of non-dominated solutions gradually increases during the search. Therefore, to preserve the diversity of the solutions, a fixed-size archive is implemented in AGMOPSO to record the good particles (non-dominated solutions). During each iteration, new solutions are compared with the existing archive members using the dominance relation: a new solution that is not dominated by any existing member is retained in the archive, whereas dominated new solutions are rejected. If the archive reaches its capacity limit, a novel pruning strategy deletes redundant non-dominated solutions so as to maintain a uniform distribution among the archive members.

Assume that K points are to be selected from the archive. The maximum distance of the line segment between the first and the last points (namely, the whole Euclidean distance Dmax) is obtained first. Then the average spacing for the remaining K − 2 points is set as

$$d = D\_{\text{max}} / (K - 1) \,\text{.}\tag{17}$$

where d is the average distance of all points. This average value is used to guide the selection of non-dominated solutions toward a more uniform distribution. In addition, for three objectives, all of the solutions (except the first and the last) are projected onto Dmax, so that the projected points and the average-distance points can be found. However, most projected distances between adjacent points are not equal to the average distance; thus, the next point is selected when its distance is closest to the average distance. Once the search process terminates, the solutions in the archive become the final Pareto front. Taking DTLZ2 as an example, Figure 1 shows this strategy with three objectives in detail.

Figure 1. Illustration of the point-selection procedure: (a) the original points and (b) the selection result of the proposed strategy.
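
As an illustration of the selection rule built on Eq. (17), the sketch below keeps the two extreme archive members, projects the interior points onto the segment joining them, and greedily keeps the point whose projection is closest to each average-spacing target; the greedy matching is one reading of the description above, and duplicate picks may leave fewer than K points:

```python
import numpy as np

def prune_archive(points, K):
    """Select about K well-spread points; `points` is an (N, m) array of objectives."""
    first, last = points[0], points[-1]
    seg = last - first
    d_max = np.linalg.norm(seg)            # whole Euclidean distance D_max
    d = d_max / (K - 1)                    # average spacing, Eq. (17)
    proj = (points - first) @ seg / d_max  # projection of each point onto the segment
    kept = {0, len(points) - 1}            # always keep the two extreme points
    for k in range(1, K - 1):
        target = k * d                                 # k-th average-distance point
        kept.add(int(np.argmin(np.abs(proj - target))))  # closest projected point
    return points[sorted(kept)]
```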

In MOPSO, it is desirable that an algorithm maintains a good spread among the non-dominated solutions as well as convergence to the Pareto-optimal set. In the AGMOPSO algorithm, a density estimate is designed to evaluate the density of the solutions surrounding each solution: the overall Euclidean distances between the solutions are calculated, and then the average distance of the solutions along each objective is computed. This method can obtain a good spread in many situations and improves the searching ability. The pseudocode of AGMOPSO is presented in Table 1.

    Initializing the flight parameters, population size, the particles' positions x(0) and velocities v(0)
    Loop
        Calculating the fitness values
        Getting the non-dominated solutions % Eq. (8)
        Storing the non-dominated solutions in archive A(t)
        Updating the archive using the MOG method % Eq. (16)
        If (the number of archive solutions exceeds the capacity)
            Pruning the archive
        End
        Selecting the gBest from the archive A(t)
        Calculating the flight parameters
        Updating the velocity vi(t) and position xi(t) % Eqs. (5)-(6)
    End loop

Table 1. AGMOPSO algorithm.

Local search is a heuristic method for improving PSO performance: it repeatedly tries to improve the current solution by replacing it with a neighboring solution. In the proposed MOG algorithm, the set of unit directions is described by the normalized combination of the unit directions that map to the intersection points, as in Eq. (12). Each single run of the algorithm can then yield a set of Pareto solutions. Experiments demonstrate that these improvements make AGMOPSO effective.

## 4. Simulation results and analysis

In this section, three ZDT and two DTLZ benchmark functions are employed to test the proposed AGMOPSO. This section compares AGMOPSO with four state-of-the-art multiobjective algorithms: adaptive gradient MOPSO (AMOPSO) [41], crowded-distance MOPSO (cdMOPSO) [32], pdMOPSO [31], and NSGA-II [11].

