1. Introduction

Most engineering and practical applications, such as wastewater treatment processes and aerodynamic design problems, have a multiobjective nature and require solving several conflicting objectives simultaneously [1–3]. In handling these multiobjective optimization problems (MOPs), there is typically a set of possible solutions representing the tradeoffs among the objectives, known as the Pareto-optimal set [4, 5]. Evolutionary multiobjective optimization (EMO) algorithms, a class of population-based stochastic optimization approaches, are widely used to solve MOPs because a set of Pareto-optimal solutions can be obtained in a single run [6–9]. Multiobjective optimization algorithms strive to acquire a Pareto-optimal set with good diversity and convergence. The most typical EMO algorithms include the non-dominated sorting genetic algorithm (NSGA) [10] and NSGA-II [11], the strength Pareto evolutionary algorithm (SPEA) [12] and SPEA2 [13], the Pareto archived evolution strategy (PAES) [14], and the Pareto envelope-based selection algorithm (PESA) [15] and PESA-II [16].

© 2016 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The notable characteristic of particle swarm optimization (PSO) is the cooperation among all particles of a swarm, each attracted toward the global best (gBest) of the swarm and its own personal best (pBest), which gives PSO a strong global search ability [17–19]. Among EMO algorithms, multiobjective particle swarm optimization (MOPSO) algorithms have been widely used owing to their high convergence speed and ease of implementation [20–23]. MOPSO can also be applied to difficult optimization problems such as noisy and dynamic problems. However, apart from archive maintenance, two issues in MOPSO still remain to be addressed. The first is the update of gBest and pBest, because an absolute best solution cannot be singled out among mutually non-dominated solutions. The selection of gBest and pBest then results in different flight directions for a particle, which has an important effect on the convergence and diversity of MOPSO [24].

Zheng et al. introduced a novel MOPSO algorithm with a comprehensive learning strategy, which improves the diversity of the swarm and the performance of the evolving particles over some advanced MOPSO algorithms [25]. The experimental results illustrate that the proposed approach performs better than some existing methods on a real-world fire evacuation dataset. In [26], a multiobjective particle swarm optimization with preference-based sort (MOPSO-PS), in which the user's preference is incorporated into the evolutionary process to determine the relative merits of non-dominated solutions, was developed to choose suitable gBest and pBest. The results indicate that the user's preference is properly reflected in the optimized solutions without any loss of overall solution quality or diversity. Moubayed et al. proposed a MOPSO incorporating dominance with decomposition (D2MOPSO), featuring a novel archiving technique that balances diversity and convergence [27]. Comparative experiments demonstrate that D2MOPSO can handle a wide range of MOPs efficiently. Other methods for updating gBest and pBest can be found in [28–31]. Although much research has been done, it remains a considerable challenge to select an appropriate gBest and pBest with suitable convergence and diversity [32, 33].

The second issue of MOPSO is how to achieve fast convergence to the Pareto front, well known as one of the most typical features of PSO. To meet this requirement, many strategies have been put forward. In [34], Hu et al. proposed an adaptive parallel cell coordinate system (PCCS) for MOPSO. The PCCS selects gBest solutions and adjusts the flight parameters based on measurements of parallel cell distance, potential, and distribution entropy, accelerating the convergence of MOPSO by assessing the evolutionary environment. Comparative results show that this self-adaptive MOPSO outperforms the other methods considered. Li et al. proposed a dynamic MOPSO in which the number of swarms is adaptively adjusted throughout the search process [35]. The dynamic MOPSO algorithm allocates an appropriate number of swarms to support the convergence and diversity criteria, and the results show that its performance is competitive with the selected algorithms on some standard benchmark problems. Daneshyari et al. introduced a cultural framework to design a flight parameter mechanism for updating the personalized flight parameters of mutated particles [36]. The results show that this mechanism performs efficiently in exploring solutions close to the true Pareto front. In the above MOPSO algorithms, the improved strategies are expected to achieve better performance; however, little work has been done to examine the convergence of these MOPSO algorithms [37].

Motivated by the above review and analysis, this chapter puts forward an adaptive gradient multiobjective particle swarm optimization (AGMOPSO) algorithm based on a multiobjective gradient (MOG) method. This novel AGMOPSO algorithm has faster convergence in the evolutionary process and higher efficiency in dealing with MOPs. Its major contribution to solving MOPs is as follows: a novel MOG method is proposed for updating the archive to improve the convergence speed and the local exploitation in the evolutionary process. Unlike the case of existing gradient methods for single-objective optimization problems [38–40], much less is known about exploiting gradient information for MOPs [41]. One of the key features of the MOG strategy is that the gradient information of MOPs is used to obtain a set of solutions approximating the optimal Pareto set. Owing to these advantages, AGMOPSO can obtain a good Pareto set and reach a smaller testing error at much faster speed, which makes the method well suited to MOPs.

A Gradient Multiobjective Particle Swarm Optimization http://dx.doi.org/10.5772/intechopen.76306

2. Problem formulation

2.1. Multiobjective problems

A minimization MOP contains several conflicting objectives and is defined as:

minimize F(x) = [f1(x), f2(x), ⋯, fm(x)]^T, subject to x ∈ Ω, (1)

where m is the number of objectives, x is the decision vector, Ω is the feasible region, and fi(·) is the ith objective function. A decision vector y is said to dominate a decision vector z, denoted y ≺ z, which is indicated as:

∀i: fi(y) ≤ fi(z) and ∃j: fj(y) < fj(z), (2)

where i = 1, 2, …, m and j = 1, 2, …, m. A solution that is not dominated by any other solution is a Pareto-optimal solution; the set of all Pareto-optimal solutions is the Pareto-optimal set, whose image in the objective space comprises the Pareto front.

2.2. Particle swarm optimization

PSO is a stochastic optimization algorithm in which a swarm contains a certain number of particles, and the position of each particle stands for one solution. The position of a particle is expressed by a vector:

xi = (xi1, xi2, ⋯, xiD),

where D is the dimension of the search space.
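As a minimal sketch of the formulation above (not the chapter's AGMOPSO implementation), the dominance test of Eq. (2) and one canonical PSO velocity/position update can be written as follows; the inertia weight w and acceleration coefficients c1, c2 are illustrative values, not parameters prescribed by the chapter:

```python
import random

def dominates(fy, fz):
    """Eq. (2): fy dominates fz if it is no worse in every objective
    and strictly better in at least one objective (minimization)."""
    return (all(a <= b for a, b in zip(fy, fz))
            and any(a < b for a, b in zip(fy, fz)))

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO update for a single particle: the velocity is
    pulled toward pBest and gBest, then the position moves along it."""
    new_v = [w * vi
             + c1 * random.random() * (pb - xi)
             + c2 * random.random() * (gb - xi)
             for vi, xi, pb, gb in zip(v, x, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```

In a MOPSO, the dominance test of Eq. (2) replaces the single-objective comparison used to maintain pBest and the external archive, since mutually non-dominated solutions cannot be totally ordered.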
