**6. Simulated annealing**

In 1953, Metropolis and co-workers developed a method for simulating the way thermodynamic systems move from one energy level to another (Metropolis et al., 1953), an idea that arose from simulating a heat bath acting on certain chemical systems. Kirkpatrick et al. (1983) were the first to propose using this method, now called Simulated Annealing (SA), to solve optimization problems. The name and inspiration come from annealing in metallurgy, a technique involving heating and controlled cooling of a material to increase the size of its crystals and reduce their defects. The heat causes the atoms to break free from their initial positions (a local minimum of the internal energy) and wander randomly through states of higher energy.

The system is then cooled, and as the temperature is reduced the atoms migrate to more ordered states with lower energy. The final degree of order depends on the cooling rate. Slow cooling is characterized by a general decrease in the energy level, with occasional increases in energy. A fast cooling process, known as quenching, is instead characterized by a monotonic decrease in energy to an intermediate state of semi-order; the quenching schedule is the one used in this chapter.

At the final stages of the annealing process, the system's energy reaches a much lower level than it does under rapid cooling (quenching). Annealing (slow cooling) therefore allows the system to reach an energy level close to the global minimum, whereas quick quenching typically leaves the system trapped in a local energy minimum.

By analogy with this physical process, each step of the SA algorithm replaces the current solution by a random "nearby" solution, chosen with a probability that depends both on the difference between the corresponding function values and on a global parameter *T* (temperature), which is gradually decreased during the process. The dependency is such that the current solution changes almost randomly when *T* is large, but increasingly "downhill" as *T* goes to zero (Fleischer, 1995). The allowance for "uphill" moves potentially saves the method from becoming stuck at local optima. Several parameters need to be included in an implementation of SA. These are summarized by Davidson and Harel (1996).
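To illustrate how these ingredients fit together, the following is a minimal sketch of a generic single-objective SA loop; the objective `f`, the `neighbour` move, and the parameter values are illustrative placeholders, not the settings used in this chapter:

```python
import math
import random

def simulated_annealing(f, neighbour, x0, T0=1.0, alpha=0.95, n_iter=10000):
    """Generic single-objective SA loop (minimisation assumed)."""
    x, fx = x0, f(x0)
    best, f_best = x, fx
    T = T0
    for _ in range(n_iter):
        y = neighbour(x)   # random "nearby" solution
        fy = f(y)
        delta = fy - fx    # difference in function values
        # Downhill moves are always accepted; uphill moves are accepted
        # with probability exp(-delta/T), which vanishes as T -> 0.
        if delta <= 0 or random.random() < math.exp(-delta / T):
            x, fx = y, fy
            if fx < f_best:
                best, f_best = x, fx
        T *= alpha         # gradual cooling (a small alpha gives quenching)
    return best, f_best
```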


SA is a popular optimization algorithm owing to the simplicity of the model and of its implementation. However, because standard SA is CPU time-consuming, a fast temperature schedule that still fulfils the required conditions is suggested. Simulated annealing with such a fast cooling process is called simulated quenching (SQ), and it is used as the optimization method in this chapter to overcome the slowness of the standard SA process.

*7.1.1. Acceptance probability and archiving*

A new acceptance probability formulation based on an annealing schedule with multiple temperatures (one for each objective function) was also proposed. The changes in the objective function values are compared with each other directly before archiving; this ensures that moves to a non-dominated solution are accepted. No weight vector is used in the acceptance criteria. Hence, the acceptance probability is given as:

$$P = \min\left\{1,\; \prod_{i=1}^{N} \exp\!\left(\frac{-\Delta S_i}{T_i}\right)\right\} \qquad (16)$$

where Δ*Si* = (*Zi*(*Y*) − *Zi*(*X*)), *N* is the number of objective functions, *X* is the current solution, *Y* is the generated solution, *Zi* is the objective function, and *Ti* is the annealing temperature. Thus, the overall acceptance probability is the product of the individual acceptance probabilities, one for each objective associated with a temperature *Ti*.
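Eq. (16) can be implemented directly, as in the sketch below; the lists of objective functions and per-objective temperatures are assumed to be supplied by the surrounding search loop:

```python
import math
import random

def accept(X, Y, objectives, temperatures):
    """Acceptance test of Eq. (16) for a multi-objective problem.

    objectives   -- the functions Z_i (minimisation assumed)
    temperatures -- the annealing temperatures T_i, one per objective
    """
    p = 1.0
    for Z, T in zip(objectives, temperatures):
        delta_S = Z(Y) - Z(X)        # ΔS_i = Z_i(Y) - Z_i(X)
        p *= math.exp(-delta_S / T)  # individual acceptance probability
    # Overall probability is min{1, product of individual probabilities}.
    return random.random() < min(1.0, p)
```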

*7.1.2. Annealing schedule*

A new annealing schedule is developed to control the lowering of the individual temperatures associated with each objective function. If the temperatures are lowered too fast, the chance of accepting solutions drops rapidly and large parts of the search space are never explored. In contrast, if the temperatures are lowered too slowly, many redundant solutions that do not lead to non-dominated solutions are accepted, and the Pareto-optimal set of solutions develops very slowly. The latter is particularly undesirable if objective function evaluations are expensive and/or if computation time is an important factor.

A statistical record of the values of each objective function (*fi*) is maintained. First, the temperatures are lowered after *NT1* iterations by setting each temperature to the standard deviation (*σ*) of the accepted values of *fi* (*Ti* = *σi*). Thereafter, the temperatures are updated according to the quenching schedule after every *NT2* iterations or *NA* acceptances, as follows:

$$T_i(k+1) = \alpha_i \times T_i(k), \qquad 0 < \alpha_i < 1 \qquad (17)$$

where *Ti* is the temperature, *k* is the time index of annealing, and *αi* is the cooling ratio of each objective function. The suitable values for *NT1* and *NT2* were chosen as 1000 and 500 iterations, respectively (Suppapitnarm, 1998).
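A brief sketch of this schedule follows; the bookkeeping of the iteration and acceptance counters (*NT1*, *NT2*, *NA*) is assumed to be handled by the surrounding search loop:

```python
import statistics

def initial_temperatures(accepted_values):
    """After the first NT1 iterations, set each T_i to the standard
    deviation of the accepted values of f_i (T_i = sigma_i)."""
    return [statistics.stdev(values) for values in accepted_values]

def quench_temperatures(temps, alphas):
    """Eq. (17): T_i(k+1) = alpha_i * T_i(k), with 0 < alpha_i < 1,
    applied after every NT2 iterations or NA acceptances."""
    return [a * T for T, a in zip(temps, alphas)]
```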


*7.1.3. Return to base strategy* 

In order to completely expose the trade-off between the objective functions, the random selection of a solution from the archive, from which to recommence the search, is systematically controlled using an intelligent return-to-base strategy. After the start of the search, a return-to-base is first activated once the basic features of the trade-off between objectives have developed; it seems sensible that this take place when the temperatures are first lowered, i.e., after *NT1* iterations. Thereafter, the rate of return is, naturally, increased to intensify the search.
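A possible formulation of this trigger is sketched below; the exact mechanism for increasing the rate of return is an assumption here, as the chapter does not specify it at this point:

```python
import random

def maybe_return_to_base(archive, iteration, NT1, period):
    """Return an archived (non-dominated) solution to restart from,
    or None to continue from the current solution.

    First activated when the temperatures are first lowered (after NT1
    iterations); the caller can shrink `period` over time to increase
    the rate of return, as described in the text."""
    if archive and iteration >= NT1 and iteration % period == 0:
        return random.choice(archive)
    return None
```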

