**3.3. Perturbation mechanism**

This operator permits the creation of new solutions from the current one; that is, it explores the neighbourhood of the current solution by applying small changes to it.

A solution *s* is defined as a vector *s* = (*x*1, ..., *xn*) representing a point in the search space. A new solution is generated by using a vector *σ* = (*σ*1, ..., *σn*) of standard deviations to create a perturbation from the current solution. A neighbour solution is then produced from the present solution by:

$$x_{i+1} = x_i + N(0, \sigma_i) \tag{2}$$
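A minimal sketch of this componentwise Gaussian perturbation in Python (the solution is assumed to be stored as a list of floats; the name `perturb` is illustrative, not from the source):

```python
import random

def perturb(s, sigma):
    """Return a neighbour of solution s: each component x_i receives a
    Gaussian increment with zero mean and standard deviation sigma_i (Eq. 2)."""
    return [x + random.gauss(0.0, sd) for x, sd in zip(s, sigma)]

# Example: perturb a three-variable design vector; sigma controls how far
# the neighbour may stray from the current point in each coordinate.
s = [1.0, 2.0, 3.0]
sigma = [0.1, 0.1, 0.5]
neighbour = perturb(s, sigma)
```

Larger entries of `sigma` widen the explored neighbourhood in the corresponding coordinate, which is how the annealing schedule can trade exploration against exploitation.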

where *N*(0, *σi*) is a random Gaussian number with zero mean and standard deviation *σi*.

Multi-objective optimization problems (MOOP) differ from single-objective problems. The main difference is that a MOOP normally has not one but a set of solutions, which should be equally satisfactory [20,21].

Traditionally, such problems are treated by transforming the original MOOP into a scalar single-objective problem. Several studies dealing with multi-objective optimization techniques, based on the Kuhn-Tucker criterion, have been reported over the past decades. These techniques follow the preference-based approach, in which a relative preference vector is used to rank the multiple objectives. Classical search and optimization methods use a point-to-point approach, in which a single solution is successively modified, so that the outcome of a classical optimization method is a single optimized solution. Evolutionary Algorithms (EA), by contrast, can find multiple optimal solutions in a single simulation run due to their population-based search approach, and are therefore ideally suited for multi-objective optimization problems.

When dealing with a MOOP, the notion of optimality needs to be extended. The most common notion in the current literature is the one originally proposed by Edgeworth [22] and later generalized by Pareto [23]. It is called Edgeworth-Pareto optimality, or simply Pareto optimality, and refers to finding good trade-offs among all the objectives. This definition leads to a set of solutions known as the Pareto-optimal set, whose elements are called non-dominated or non-inferior. Since the concept of optimality used in the single-objective context is not directly applicable to MOOPs, a classification of the solutions is introduced in terms of Pareto optimality, according to the following definitions [20]:

• **Definition 1** - The Multi-objective Optimization Problem (MOOP) can be defined as:

$$\mathbf{f}(x) = \left( f_1(x), f_2(x), \ldots, f_m(x) \right), \quad m = 1, \ldots, M \tag{4}$$

subject to

$$\mathbf{h}(x) = \left( h_1(x), h_2(x), \ldots, h_i(x) \right), \quad i = 1, \ldots, H \tag{5}$$

$$\mathbf{g}(x) = \left( g_1(x), g_2(x), \ldots, g_j(x) \right), \quad j = 1, \ldots, J \tag{6}$$

$$x = \left( x_1, x_2, \ldots, x_n \right), \quad n = 1, \ldots, N, \quad x \in X \tag{7}$$

where *x* is the vector of design (or decision) variables, **f** is the vector of objective functions and *X* is denoted as the design (or decision) space. The constraints *h* and *g* (≥ 0) determine the feasible region.

• **Definition 2** - Pareto Dominance: for any two decision vectors *u* and *v*, *u* is said to dominate *v* if *u* is not worse than *v* in all objectives and *u* is strictly better than *v* in at least one objective.

• **Definition 3** - Pareto Optimality: when the set *P* is the entire search space, or *P* = *S*, the resulting non-dominated set *P*′ is called the Pareto-optimal set. Like global and local optima in single-objective optimization, there can be global and local Pareto-optimal sets in multi-objective optimization.
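Definitions 2 and 3 translate directly into code. A minimal Python sketch, comparing objective vectors under a minimization convention (the names `dominates` and `pareto_set` are illustrative, not from the source):

```python
def dominates(u, v):
    """True if objective vector u dominates v (minimization): u is not
    worse than v in all objectives and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_set(points):
    """Non-dominated subset of a list of objective vectors (Definition 3,
    taking P as the whole list)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Two-objective example: (3, 3) and (4, 4) are dominated by (2, 2),
# so the non-dominated set is the trade-off front (1, 4), (2, 2), (3, 1).
pts = [(1, 4), (2, 2), (3, 1), (3, 3), (4, 4)]
front = pareto_set(pts)  # → [(1, 4), (2, 2), (3, 1)]
```

Note that the filter keeps every point that no other point dominates, which is exactly the set of mutually non-inferior trade-offs described above; a point never dominates itself, since the strict-improvement clause fails.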
