202 Simulated Annealing – Single and Multiple Objective Problems

application based approach (*ad-hoc*), since it is related with the magnitude of the objective function value.

**3.3. Perturbation mechanism**

This operator permits the creation of new solutions from the current one. In other words, it deals with the exploration of the neighbourhood of the current solution by adding small changes to it.

A solution *s* is defined as a vector *s* = (*x*1, ..., *xn*) representing a point in the search space. A new solution is generated by using a vector *σ* = (*σ*1, ..., *σn*) of standard deviations to create a perturbation from the current solution. A neighbour solution is then produced from the present solution by:

$$x\_i^{n+1} = x\_i^{n} + N\left(0, \sigma\_i\right)\tag{2}$$

where *N*(0,*σi*) is a random Gaussian number with zero mean and *σi* standard deviation.

**3.4. Temperature update**

The most common cooling schedule is the geometric rule for temperature variation:

$$\sigma\_i^{n+1} = \sigma\_i^{n} \exp\left[\frac{1}{n\_{temp}} \log\left(\frac{T\_{stop}}{T\_{start}}\right)\right]\tag{3}$$

where *T*stop and *T*start are the final temperature (standard deviation) and the initial temperature, respectively, and *n*temp is the number of temperatures considered. However, other schedules have been proposed in the literature [19]. Another parameter is the number of iterations performed at each temperature, which is often related with the size of the search space or with the size of the neighbourhood. This number of iterations can be constant or, alternatively, can be defined as a function of the temperature or based on feedback from the process [18].

**3.5. Termination criterion**

Among the several strategies proposed for the termination of the algorithm, we can cite some very common approaches: the maximum number of iterations; the minimum temperature value; the minimum value of the objective function; the minimum value of the acceptance rate; and the maximum computational time.

**4. Multi-objective optimization**

Real-world design problems involve the simultaneous optimization of two or more (often conflicting) objectives, known as multi-objective optimization problems (MOOP). The solution of such problems differs from that of single-objective optimization problems. The main difference is that a MOOP normally has not one but a set of solutions, all of which should be equally satisfactory [20,21].

Traditionally, such problems are treated by transforming the original MOOP into a scalar single-objective problem. Several studies dealing with multi-objective optimization techniques based on the Kuhn-Tucker criterion have been reported over the past decades. These techniques follow the preference-based approach, in which a relative preference vector is used to rank the multiple objectives. Classical search and optimization methods use a point-to-point approach, in which a single solution is successively modified, so that the outcome of a classical optimization method is a single optimized solution. Evolutionary Algorithms (EA), however, can find multiple optimal solutions in a single simulation run due to their population-based search approach. Thus, EA are ideally suited for multi-objective optimization problems.

When dealing with MOOP, the notion of optimality needs to be extended. The most common notion in the current literature is the one originally proposed by Edgeworth [22] and later generalized by Pareto [23]. It is called Edgeworth-Pareto optimality, or simply Pareto optimality, and refers to finding good trade-offs among all the objectives. This definition leads to a set of solutions known as the Pareto optimal set, whose elements are called non-dominated or non-inferior solutions. The concept of optimality used in the single-objective context is not directly applicable to MOOPs. For this reason, a classification of the solutions is introduced in terms of Pareto optimality, according to the following definitions [20]:

• **Definition 1** - The Multi-objective Optimization Problem (MOOP) can be defined as:

$$f\left(\mathbf{x}\right) = \left(f\_1\left(\mathbf{x}\right),\ f\_2\left(\mathbf{x}\right),\ \dots,\ f\_m\left(\mathbf{x}\right)\right),\ m = 1, \dots, M\tag{4}$$

subject to

$$h\left(\mathbf{x}\right) = \left(h\_1\left(\mathbf{x}\right),\ h\_2\left(\mathbf{x}\right),\ \dots,\ h\_i\left(\mathbf{x}\right)\right),\ i = 1, \dots, H\tag{5}$$

$$g\left(\mathbf{x}\right) = \left(g\_1\left(\mathbf{x}\right),\ g\_2\left(\mathbf{x}\right),\ \dots,\ g\_j\left(\mathbf{x}\right)\right),\ j = 1, \dots, L\tag{6}$$

$$\mathbf{x} = \left(x\_1,\ x\_2,\ \dots,\ x\_n\right),\ n = 1, \dots, N,\ \mathbf{x} \in \mathbf{X}\tag{7}$$

where **x** is the vector of design (or decision) variables, *f* is the vector of objective functions and **X** denotes the design (or decision) space. The constraints *h* and *g* (≥ 0) determine the feasible region.
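To make the dominance relation underlying the Pareto optimal set concrete, the following is a minimal sketch in Python; the two-objective test functions and all names are illustrative, not taken from the text:

```python
# Minimal sketch of Pareto dominance for a MOOP (minimization assumed).
# The two-objective test problem below is an illustrative example only.

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb: fa is no worse in
    every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(fa, fb)) and \
           any(a < b for a, b in zip(fa, fb))

def pareto_set(points, objectives):
    """Return the non-dominated (non-inferior) members of `points`."""
    values = [tuple(f(x) for f in objectives) for x in points]
    return [x for x, fx in zip(points, values)
            if not any(dominates(fy, fx) for fy in values if fy != fx)]

# Two conflicting objectives of a scalar design variable x in [0, 2]:
f1 = lambda x: x ** 2            # minimized at x = 0
f2 = lambda x: (x - 2) ** 2      # minimized at x = 2

candidates = [i * 0.25 for i in range(9)]   # 0.0, 0.25, ..., 2.0
front = pareto_set(candidates, [f1, f2])
print(front)  # every x in [0, 2] trades f1 against f2: all are non-dominated
```

Because the two objectives conflict everywhere on [0, 2], the whole candidate set is returned as the Pareto optimal set, illustrating why a MOOP has a set of equally satisfactory solutions rather than a single optimum.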


optimal solutions in the case of single-objective optimization, there could be global and local Pareto-optimal sets in multi-objective optimization.

Design and Identification Problems of Rotor Bearing Systems Using the Simulated Annealing Algorithm 205

In the multi-objective context, various Multiple-Objective Evolutionary Algorithms (MOEAs) can be found. This group of algorithms conjugates the basic concepts of dominance described above with the general characteristics of evolutionary algorithms. Basically, the main features of these MOEAs are discussed in detail in [20,21].

In the literature, various multi-objective algorithms based on SA have been proposed. The first extensions were proposed by Serafini [24,25] and by Ulungu and Teghem [26], who studied various ways of defining the acceptance probability in the multi-objective framework and how they affect the performance of SA-based multi-objective algorithms. Czyzak et al. [27] combined mono-criterion SA and a genetic algorithm to provide efficient solutions for the multi-criteria shortest path problem. Ulungu et al. [28] designed a MOSA (Multi-objective Optimization Simulated Annealing) algorithm and tested its performance on multi-objective combinatorial optimization problems. Suppapitnarm et al. [29] used the neighbourhood perturbation method to create a new point around an old point using MOSA; in this algorithm, the single-objective SA is modified to yield a set of non-dominated solutions by archiving solutions generated earlier and by using a sorting procedure (based on non-dominance and crowding). Kasat et al. [30] used the concept of jumping genes in natural genetics to modify the binary-coded non-dominated sorting genetic algorithm (NSGA-II), giving NSGA-II-JG. Smith et al. [31] compared the candidate to the current solution according to the cardinalities of their dominant subsets in the archive. Marcoulaki and Papazoglou [32] proposed a new multiple-objective optimization approach using a Monte Carlo-based algorithm stemming from SA; since the expected result of a multiple-objective optimization task is usually a set of Pareto-optimal solutions, the optimization problem states assumed there are themselves sets of solutions.

A Multi-objective Optimization Simulated Annealing (MOSA) algorithm is proposed. This approach is based on the classical SA associated with the so-called Fast Non-Dominated Sorting operator, and has the following structure:

• An initial population of size *NP* is randomly generated;
• All dominated solutions are removed from the population through the Fast Non-Dominated Sorting operator. In this way, the population is sorted into non-dominated fronts *μj* (sets of vectors that are non-dominated with respect to each other) [20,21];
• Next, SA is applied to generate the new population (potential candidates to solve the MOOP);
• If the number of individuals of the population is larger than a number defined by the user, it is truncated according to the Crowding Distance criterion [20,21].

The steps presented are repeated until a determined stopping criterion is reached. The operators used in the MOSA are described below.

**5.1. Fast non-dominated sorting**

The so-called Fast Non-Dominated Sorting operator was proposed by Deb et al. [21] in order to sort a population of size *N* according to the level of non-domination. Each solution must be compared with every other solution in the population to find whether it is dominated. This requires *O*(*MN*) comparisons for each solution, where *M* is the number of objective functions. When this process is continued to find the members of the first non-dominated class for all population members, the total complexity is *O*(*MN*²). At this point, all individuals in the first non-dominated front have been found. In order to obtain the individuals in the next front, the solutions of the first front are temporarily discarded and the above procedure is repeated. In the worst case, the task of obtaining the second front also requires *O*(*MN*²) computations. The procedure is repeated until all subsequent fronts are found.

**5.2. Crowding distance operator**

This operator describes the density of solutions surrounding a vector. To compute the Crowding Distance for a set of population members, the vectors are sorted according to their objective function value for each objective function. To the vectors with the smallest or largest values, an infinite Crowding Distance (or an arbitrarily large number, for practical purposes) is assigned. For all other vectors, the Crowding Distance (*dist*xi) is calculated according to [20,21]:

$$dist\_{x\_i} = \sum\_{j=1}^{m} \frac{f\_{j,i+1} - f\_{j,i-1}}{f\_{j}^{max} - f\_{j}^{min}}\tag{8}$$

where *f*j corresponds to the *j*-th objective function and *m* equals the number of objective functions. This operator is important to avoid many points lying close together in the Pareto front and to promote diversity in the objective space [21].
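As an illustration of the ranking operators discussed in this section, the following sketch implements non-dominated sorting and the Crowding Distance of Eq. (8). It uses the plain *O*(*MN*²)-per-front formulation rather than Deb's bookkeeping-optimized version, and all names are illustrative, not from the text:

```python
# Sketch of non-dominated sorting and crowding distance (Eq. (8)) for a
# small population of objective vectors (minimization). Simplified version.

def dominates(fa, fb):
    """fa dominates fb: no worse in all objectives, strictly better in one."""
    return all(a <= b for a, b in zip(fa, fb)) and \
           any(a < b for a, b in zip(fa, fb))

def non_dominated_sort(pop):
    """Split `pop` (list of objective tuples) into fronts mu_1, mu_2, ...
    Each front costs O(M N^2) comparisons in this simplified version."""
    fronts, remaining = [], list(pop)
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q is not p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

def crowding_distance(front):
    """Eq. (8): for each member, sum over objectives of the neighbour gap
    normalized by the objective range; boundary members get infinity."""
    n, m = len(front), len(front[0])
    dist = {id(p): 0.0 for p in front}
    for j in range(m):
        ordered = sorted(front, key=lambda p: p[j])
        f_min, f_max = ordered[0][j], ordered[-1][j]
        dist[id(ordered[0])] = dist[id(ordered[-1])] = float("inf")
        if f_max > f_min:
            for i in range(1, n - 1):
                dist[id(ordered[i])] += \
                    (ordered[i + 1][j] - ordered[i - 1][j]) / (f_max - f_min)
    return [dist[id(p)] for p in front]

pop = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0), (5.0, 5.0)]
fronts = non_dominated_sort(pop)
print(fronts[0])                     # first non-dominated front
print(crowding_distance(fronts[0]))  # boundary members get inf
```

In a population-truncation step of the kind used by MOSA-style algorithms, members would be kept front by front, breaking ties within a front in favour of the largest crowding distance.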
