search in the trade-off. The number of iterations *NBi* to be executed prior to the *i*th return-to-base after the start of the search is updated as:

$N_{B_i} = r_B \, N_{B_{i-1}}, \quad i = 2, 3, 4, \ldots$  (18)

where *rB* is a constant parameter between 0 and +1 that dictates the frequency of return. Recommended values for *rB* and *NB1* are 0.9 and 2*NT2*, respectively (Suppapitnarm, 1998). In order to fully develop the trade-off, solutions that are more isolated from the rest of the trade-off solutions should be favored in returns-to-base. The extreme solutions, i.e. those solutions that correspond to the minimum value of each objective in the trade-off, also require special consideration. These solutions are almost invariably only just feasible, which makes the design space around them difficult to search.
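The geometric schedule of Eq. (18) can be sketched in a few lines; the function name and the example value of *NB1* below are illustrative assumptions, not taken from the original.

```python
# Sketch of the return-to-base schedule of Eq. (18), using the
# recommended r_B = 0.9; the initial interval n_b1 is an assumed input.
def return_to_base_schedule(n_b1, r_b=0.9, n_returns=5):
    """Return the iteration counts N_Bi before each of n_returns returns-to-base."""
    schedule = [n_b1]
    for _ in range(n_returns - 1):
        # Each interval shrinks geometrically: N_Bi = r_B * N_B(i-1),
        # so returns-to-base become more frequent as the search matures.
        schedule.append(r_b * schedule[-1])
    return schedule
```

For example, with an initial interval of 1000 iterations the first intervals are 1000, 900, 810, 729, ...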

For these reasons, a base set of candidate solutions was proposed, consisting of a number of the most isolated of the solutions currently held in the archive together with the *M* extreme solutions in the archive. Therefore, when a return-to-base is activated, the search diversifies into less well explored regions of the trade-off. To evaluate the degree of isolation of a solution, the following formula was proposed (Suppapitnarm et al., 2000):

$I(X_j) = \sum_{\substack{i=1 \\ i \neq j}}^{A_s} \sum_{k=1}^{M} \left( \dfrac{f_k(X_i) - f_k(X_j)}{f_k^{\max} - f_k^{\min}} \right)^2$  (19)

where *I(Xj)* is the normalized distance in objective space of the *j*th solution from all other archived solutions, and *Xj* denotes the *j*th archived solution. *As* and *M* are the total number of solutions and the number of extreme solutions stored in the archive, respectively. *fkmax* and *fkmin* are the maximum and minimum values of the *k*th objective function (*fk*), respectively. Each solution, except for the extreme solutions, is ranked in order of decreasing isolation distance, thereby establishing an ordered set with the most isolated solutions at its top and the least isolated solutions at the bottom.
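A minimal sketch of the isolation measure of Eq. (19) follows. It represents the archive as a plain list of objective vectors and ranks every entry, which is a simplification: as described above, the extreme solutions would be excluded from the ranking and kept in the base set separately.

```python
# Isolation measure of Eq. (19): for each archived solution X_j, sum the
# squared normalized distances to every other archived solution over all
# M objectives, then rank by decreasing isolation.
def isolation_ranking(archive):
    m = len(archive[0])                       # number of objectives M
    f_max = [max(f[k] for f in archive) for k in range(m)]
    f_min = [min(f[k] for f in archive) for k in range(m)]
    span = [max(f_max[k] - f_min[k], 1e-12) for k in range(m)]  # guard /0

    def isolation(j):
        # I(X_j) = sum_{i != j} sum_k ((f_k(X_i) - f_k(X_j)) / (f_k^max - f_k^min))^2
        return sum(
            ((archive[i][k] - archive[j][k]) / span[k]) ** 2
            for i in range(len(archive)) if i != j
            for k in range(m)
        )

    # Indices sorted with the most isolated solutions first
    return sorted(range(len(archive)), key=isolation, reverse=True)
```

Solutions in a crowded region of the trade-off score low, so picking return-to-base points from the top of this ranking steers the search toward sparsely covered regions.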

*7.1.4. Step size control*

An improvement in SA performance may be gained by varying the maximum allowable step change in each of the decision variables during perturbation between iterations (Parks, 1990). Hence, the value of each design variable is rescaled to *Uik* such that it varies between -1 and +1 at its lower and upper bounds, respectively. At the next iteration, *Ui(k+1)* is modified as:

$U_{i(k+1)} = U_{ik} + rand \times S_i$  (20)

where *rand* is a uniformly distributed random number between -1 and +1, and *Si* is the maximum (positive) step size for each design variable. If the solution is accepted, *Si* is updated using the following equation:
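The rescaled perturbation of Eq. (20) can be sketched as below. The helper names and the clamping of the rescaled variable to [-1, +1] are assumptions added for a runnable example; the update rule for *Si* itself is given separately.

```python
import random

# Perturbation of Eq. (20) on rescaled variables: each design variable x_i
# is mapped to U_i in [-1, +1], stepped by rand * S_i, and mapped back.
def rescale(x, lo, hi):
    """Map x in [lo, hi] to U in [-1, +1]."""
    return 2.0 * (x - lo) / (hi - lo) - 1.0

def unscale(u, lo, hi):
    """Map U in [-1, +1] back to [lo, hi]."""
    return lo + (u + 1.0) * (hi - lo) / 2.0

def perturb(x, lo, hi, s_i):
    # U_{i(k+1)} = U_{ik} + rand * S_i,  rand ~ Uniform(-1, +1)
    u = rescale(x, lo, hi)
    u_next = u + random.uniform(-1.0, 1.0) * s_i
    u_next = max(-1.0, min(1.0, u_next))   # keep within bounds (assumption)
    return unscale(u_next, lo, hi)
```

Because *Si* caps the size of a move in the rescaled space, shrinking it narrows the search around the current solution while growing it encourages wider exploration.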

*7.1.5. The steps and flowchart of the SMOSA*

The basic steps involved in the SMOSA algorithm for a problem with *N* objective functions and *n* decision variables are as follows (Suman, 2004):


In addition, the flowchart of the SMOSA optimizer is illustrated in Fig. 5.
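The mechanisms described in this section (annealing with an archive of nondominated solutions and the return-to-base schedule of Eq. (18)) fit together roughly as in the skeleton below. This is a loose sketch under many simplifying assumptions: the toy bi-objective problem, the scalarized Metropolis acceptance rule, the use of the whole archive as the base set, and all parameter values are placeholders, not the SMOSA acceptance criterion or the authors' implementation.

```python
import math
import random

def objectives(x):
    # Toy bi-objective test problem (an assumption for illustration only)
    return (x * x, (x - 2.0) ** 2)

def dominates(a, b):
    """True if objective vector a dominates b (no worse in all, not equal)."""
    return all(u <= v for u, v in zip(a, b)) and a != b

def smosa(iters=2000, temp=10.0, cooling=0.995, n_b=200, r_b=0.9):
    x = random.uniform(-5.0, 5.0)
    fx = objectives(x)
    archive = [(fx, x)]          # nondominated (objectives, solution) pairs
    next_return = n_b
    for it in range(1, iters + 1):
        y = x + random.uniform(-1.0, 1.0)
        fy = objectives(y)
        # Scalarized Metropolis acceptance (a placeholder; SMOSA uses
        # per-objective acceptance probabilities instead).
        delta = sum(fy) - sum(fx)
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            x, fx = y, fy
            # Archive update: keep only mutually nondominated solutions
            if not any(dominates(f, fy) for f, _ in archive):
                archive = [(f, s) for f, s in archive if not dominates(fy, f)]
                archive.append((fy, y))
        temp *= cooling
        if it >= next_return:                # return-to-base
            fx, x = random.choice(archive)   # base set simplified to archive
            n_b = max(1, int(r_b * n_b))     # N_Bi = r_B * N_B(i-1), Eq. (18)
            next_return = it + n_b
    return archive

arch = smosa()
```

In a fuller implementation, the return-to-base point would be drawn from the base set built from the isolation ranking of Eq. (19), and the step size would be adapted per Eq. (20).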
