226 Simulated Annealing – Single and Multiple Objective Problems

difference between the corresponding function values and also on a global parameter *T* (temperature), which is gradually decreased during the process. The dependency is such that the current solution changes almost randomly when *T* is large, but increasingly "downhill" as *T* goes to zero (Fleischer, 1995). The allowance for "uphill" moves potentially saves the method from becoming stuck at local optima. Several parameters need to be included in an implementation of SA. These are summarized by Davidson and Harel (1996):

• The set of configurations, or states, of the system, including an initial configuration (which is often chosen at random).

• A generation rule for new configurations, which is usually obtained by defining the neighborhood of each configuration and choosing the next configuration randomly from the neighborhood of the current one.

• The target, or cost function, to be minimized over the configuration space. (This is the analogue of the energy.)

• The cooling schedule of the control parameter, including initial values and rules for when and how to change it. (This is the analogue of the temperature and its decreases.)

• The termination condition, which is usually based on the time and the values of the cost function and/or the control parameter.

SA is a popular optimization algorithm due to the simplicity of the model and its implementation. However, owing to the CPU time-consuming nature of standard SA, a fast temperature schedule that fulfills the required conditions is suggested. In fact, a simulated annealing algorithm with a fast cooling process is called simulated quenching (SQ), which is used as the optimization method in this chapter to overcome the slow SA optimization process.

**7. SA for multi-objective optimization**

SA has been used as an optimization method for solving a wide range of combinatorial optimization problems. It has also been adapted for solving multi-objective problems due to its simplicity and capability of producing a desirable Pareto set of solutions. In addition, it is not susceptible to the shape of the Pareto set, since shape may be considered a concern for mathematical programming techniques.

**7.1. The method of Suppapitnarm and Parks (SMOSA)**

The concept of archiving the Pareto-optimal solutions for solving multi-objective problems with SA has been used by Suppapitnarm et al. (2000). The Suppapitnarm Multi-Objective Simulated Annealing (SMOSA) method enables the search to restart from an archived solution in a solution region, where each of a pair of non-dominated solutions may be superior with respect to at least one objective. Since SA only generates a single solution at a given iteration, an independent archive is required to record all non-dominated solutions found during the search. Pioneering work in this area was first performed by Engrand (1997), and was further developed by Suppapitnarm et al. (2000).

#### *7.1.1. Acceptance probability and archiving*

A new acceptance probability formulation based on an annealing schedule with multiple temperatures (one for each objective function) was also proposed. The changes in the objective function values are compared with each other directly before archiving. This ensures that moves to a non-dominated solution are accepted. No weight vector is used in the acceptance criteria. Hence, the acceptance probability is given as:

$$P = \min\left(1, \prod\_{i=1}^{N} \exp\left(\frac{-\Delta s\_i}{T\_i}\right)\right) \tag{16}$$

where Δ*si* = *Zi(Y)* - *Zi(X)* and *N* is the number of objective functions, *X* is the current solution, *Y* is the generated solution, *Zi* is the ith objective function, and *Ti* is the annealing temperature for the ith objective. Thus, the overall acceptance probability is the product of the individual acceptance probabilities for each objective, each associated with its own temperature *Ti*.
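As an illustrative sketch (in Python, not the authors' MATLAB code; function and variable names are assumed), the multi-temperature acceptance rule of Equation (16) can be written as:

```python
import math

def smosa_acceptance(z_current, z_candidate, temps):
    """Multi-temperature SMOSA acceptance probability, Equation (16).

    z_current  -- objective values Z_i(X) of the current solution
    z_candidate -- objective values Z_i(Y) of the generated solution
    temps      -- per-objective annealing temperatures T_i
    """
    # Product of per-objective Metropolis factors exp(-Δs_i / T_i),
    # with Δs_i = Z_i(Y) - Z_i(X)
    prod = 1.0
    for zx, zy, t in zip(z_current, z_candidate, temps):
        prod *= math.exp(-(zy - zx) / t)
    return min(1.0, prod)
```

Note that if the candidate improves every objective (all Δ*si* < 0), the product exceeds 1 and the move is accepted with probability 1, as the formulation intends.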

#### *7.1.2. Annealing schedule*

A new annealing schedule is developed to control the lowering of the individual temperatures associated with each objective function. If the temperatures are lowered too fast, the chance of accepting solutions reduces rapidly and large parts of the search space are never explored. In contrast, if the temperatures are reduced too slowly, many redundant solutions which do not lead to non-dominated solutions are accepted, and the Pareto-optimal set of solutions develops very slowly. The latter is particularly undesirable if objective function evaluations are expensive and/or if computation time is an important factor.

A statistical record of the values of each objective function (*fi*) is maintained. First, the temperatures are lowered after *NT1* iterations by setting each temperature to the standard deviation (*σ*) of the accepted values of *fi* (*Ti* = *σi*). Thereafter, following the quenching schedule, the temperatures are updated after every *NT2* iterations or *NA* acceptances as follows:

$$T\_{i(k+1)} = T\_{i(k)} \times \alpha\_i \qquad 0 < \alpha\_i < 1 \tag{17}$$

where *Ti* is the temperature, *k* is the time index of annealing, and *αi* is the cooling ratio for each objective function. Suitable values for *NT1* and *NT2* were chosen as 1000 and 500 iterations, respectively (Suppapitnarm, 1998).
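The two-stage schedule can be sketched as follows (a minimal Python illustration; whether the population or sample standard deviation is used is an assumption, as are the function names):

```python
import statistics

def initial_temperatures(accepted_history):
    """First lowering (after NT1 iterations): set each T_i to the
    standard deviation of the accepted values of objective f_i."""
    return [statistics.pstdev(vals) for vals in accepted_history]

def update_temperatures(temps, alphas):
    """Subsequent updates, Equation (17): T_i(k+1) = T_i(k) * alpha_i,
    applied after every NT2 iterations or NA acceptances."""
    return [t * a for t, a in zip(temps, alphas)]
```

A per-objective *αi* lets objectives with widely differing scales cool at different rates, which is the point of maintaining one temperature per objective.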

#### *7.1.3. Return to base strategy*

In order to completely expose the trade-off between the objective functions, the random selection of a solution from the archive, from which to recommence the search, is systematically controlled using an intelligent return-to-base strategy. After the start of the search, a return-to-base is first activated when the basic features of the trade-off between objectives have developed. It seems sensible that this takes place when the temperatures are first lowered, i.e., after *NT1* iterations. Thereafter, the rate of return is, naturally, increased to intensify the search along the trade-off. The number of iterations *NBi* to be executed prior to the ith return-to-base after the start of the search is updated as given:

$$N\_{Bi} = r\_B N\_{B(i-1)} \qquad i = 2, 3, 4, \dots \tag{18}$$

Optimum Functionally Gradient Materials for Dental Implant Using Simulated Annealing 229

where *rB* is a constant parameter which varies between 0 and +1 and dictates the frequency of return. Recommended values for *rB* and *NB1* may be chosen as 0.9 and 2*NT2*, respectively (Suppapitnarm, 1998). In order to fully develop the trade-off, solutions that are more isolated from the rest of the trade-off solutions should be favored in returns-to-base. The extreme solutions, those solutions that correspond to minimum values for each objective in the trade-off, also require special consideration. These solutions are almost invariably only just feasible, which makes the design space around them difficult to search.

For these reasons, a base set of candidate solutions was proposed which consists of a number of the most isolated of those solutions currently held in the archive and the *M* extreme solutions in the archive. Therefore, when a return-to-base is activated, the search diversifies into less well explored regions of the trade-off. To evaluate the degree of isolation of a solution, the following formula was proposed (Suppapitnarm et al., 2000):

$$I(X\_j) = \sum\_{\substack{i=1 \\ i \neq j}}^{A\_s} \sum\_{k=1}^{N} \left(\frac{f\_k(X\_i) - f\_k(X\_j)}{f\_{k\max} - f\_{k\min}}\right)^2 \tag{19}$$

where *I(Xj)* is the normalized value of the distance in objective space of the jth solution from all other archived solutions, and *Xj* denotes the jth archived solution. *As* and *M* are the total number of solutions and the number of extreme solutions stored in the archive, respectively. *fkmax* and *fkmin* are the maximum and minimum values of the kth objective function (*fk*), respectively. Each solution - except for the extreme solutions - is ranked in order of decreasing isolation distance, thereby establishing an ordered set with the most isolated solutions at its top and the least isolated solutions at its bottom.

#### *7.1.4. Step size control*

An improvement in SA performance may be gained by varying the maximum allowable step changes in each of the decision variables during perturbation between iterations (Parks, 1990). Hence, the value of each design variable is rescaled to *Uik* such that it varies between -1 and +1 at its lower and upper bounds, respectively. At the next iteration, *Ui(k+1)* is modified as given:

$$U\_{i(k+1)} = U\_{ik} + rand \times S\_i \tag{20}$$

where *rand* is a uniformly distributed random number between -1 and +1, and *Si* is the maximum (positive) step size for each design variable. If the solution is accepted, *Si* is updated using the following equation:

$$S\_i = S\_i \left(0.9 + 0.21 \times \left| rand \right|\right) \tag{21}$$

A suitable value for the upper bound for each *Si* was set to 0.5 to permit, initially, a wide search around the current position. Accordingly, a lower bound of 0.0001 was chosen for the smallest possible value of each *Si* to prevent stagnation during the search (Parks, 1990). Therefore, the maximum step change in the design variables is monitored and varied to reduce violation of the constraints.

#### *7.1.5. The steps and flowchart of the SMOSA*

The basic steps involved in the SMOSA algorithm for a problem having *N* objective functions and *n* decision variables are as follows (Suman, 2004):

**Step 1.** Start with a randomly generated initial solution vector, *X* (an *n×1* vector whose elements are decision variables), evaluate all objective functions, and put it into a Pareto set of solutions.

**Step 2.** Generate a random perturbation and a new solution vector, *Y*, in the neighborhood of the current solution vector, *X*, re-evaluate the objective functions, and apply a penalty function approach to the corresponding objective functions, if necessary.

**Step 3.** Compare the generated solution vector with all the solutions in the Pareto set and update the Pareto set, if necessary.

**Step 4.** If the generated solution vector is archived, assign it as the current solution vector by putting *X=Y* and go to Step 7.

**Step 5.** If the generated solution vector is not archived, accept it with the probability given by Equation (16). If the generated solution is accepted, assign it as the current solution vector by putting *X=Y* and go to Step 7.

**Step 6.** If the generated solution vector is not accepted, retain the earlier solution vector as the current solution vector and go to Step 7.

**Step 7.** Periodically, restart with a randomly selected solution from the Pareto set. While periodically restarting with the archived solutions, Suppapitnarm et al. (2000) have recommended biasing towards the extreme ends of the trade-off surface.

**Step 8.** Reduce the temperature using Equation (17) and the annealing schedule.

**Step 9.** Repeat Steps 2 to 8 until a predefined number of iterations is carried out.

In addition, the flowchart of the SMOSA optimizer is illustrated in Fig. 5.

**8. Optimization of the two objective functions using SMOSA**

An ideal situation would be to attain a consistent optimal material gradient, where the maximum bone turnover can be achieved and the displacement is kept minimal. The problem described in the previous sections was solved by a multi-objective SA code written in MATLAB and run on a Pentium IV, 2.5 GHz machine with 4 GB RAM. The SMOSA was run 5 times and the averaged results were compared to the results obtained from RSM. The parameters which must be specified before running the algorithm are the initial temperature, the frozen state represented by the final temperature, the cooling ratio (annealing), i.e.
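The isolation measure of Equation (19), which drives the return-to-base selection of Section 7.1.3, can be sketched as follows (an illustrative Python translation, not the authors' MATLAB code; function and argument names are assumptions):

```python
def isolation(archive, j, f_min, f_max):
    """Degree of isolation I(X_j) for the j-th archived solution, Eq. (19):
    the sum, over all other archived solutions i and all objectives k, of
    the squared normalized objective-space distance."""
    total = 0.0
    fj = archive[j]  # objective values f_k(X_j) of the j-th solution
    for i, fi in enumerate(archive):
        if i == j:
            continue  # skip the solution itself (i != j in Eq. 19)
        for k in range(len(fj)):
            total += ((fi[k] - fj[k]) / (f_max[k] - f_min[k])) ** 2
    return total
```

Ranking the archived solutions by decreasing `isolation` value then yields the ordered base set from which returns-to-base are drawn.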

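The Step 1 to Step 9 procedure of Section 7.1.5 can be condensed into the following skeleton (a hedged Python sketch, not the chapter's MATLAB implementation; the perturbation width, cooling interval, and all names are illustrative, and the return-to-base of Step 7 is omitted for brevity):

```python
import math
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def smosa(evaluate, x0, temps, alphas, n_iter=1000, nt2=500):
    """Skeleton of the SMOSA loop. `evaluate` maps a decision vector to a
    list of objective values; returns the archive of (x, f(x)) pairs."""
    x, fx = x0[:], evaluate(x0)
    archive = [(x[:], fx[:])]                    # Step 1: initial Pareto set
    for it in range(1, n_iter + 1):              # Step 9: iteration budget
        y = [xi + random.uniform(-0.1, 0.1) for xi in x]  # Step 2: perturb
        fy = evaluate(y)
        # Step 3: update the Pareto set if y is non-dominated
        if not any(dominates(fa, fy) for _, fa in archive):
            archive = [(xa, fa) for xa, fa in archive if not dominates(fy, fa)]
            archive.append((y[:], fy[:]))
            x, fx = y, fy                        # Step 4: archived -> current
        else:
            # Step 5: multi-temperature acceptance test, Equation (16)
            p = min(1.0, math.prod(math.exp(-(b - a) / t)
                                   for a, b, t in zip(fx, fy, temps)))
            if random.random() < p:
                x, fx = y, fy
            # Step 6: otherwise retain the current solution vector
        if it % nt2 == 0:                        # Step 8: cooling, Equation (17)
            temps = [t * al for t, al in zip(temps, alphas)]
    return archive
```

By construction, the archive returned is mutually non-dominated: a candidate is inserted only if no archived solution dominates it, and any archived solutions it dominates are removed.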