**3. Simulated quenching algorithm for CAP**

#### **3.1. Basic concepts**

As mentioned in the Introduction, SQ is a methodology proposed to speed up standard SA algorithms when they are applied to difficult (NP-complete) optimization problems. The original SA method can be viewed as a simulation of the physical annealing process found in nature, e.g., the settling of a solid into its state of minimum energy (ground state). SQ relies more strongly on physical intuition, though it loses some mathematical rigor.

Generally speaking, an optimization problem consists of a set *S* of configurations (or solutions) and a cost function *J* that determines, for each configuration *s*, its cost *J*(*s*). Local search is then performed by examining the neighbours *s*′ of each solution *s*. Thus, a neighbourhood structure *N*(*s*), which defines the set of possible transitions that can be proposed from *s*, has to be defined.

When performing local search, in each iteration of the algorithm a neighbour *s*′ of *s* is proposed randomly, and *s* will only be replaced by *s*′ if the cost does not increase, i.e., *J*(*s*′)≤*J*(*s*). Obviously, this procedure terminates in a local minimum that may have a higher cost than the global optimal solution. To avoid being trapped in such a suboptimal solution, our proposed SQ method occasionally allows "uphill moves" to solutions of higher cost using the so-called Metropolis criterion (Metropolis, 1953). This criterion states that, if *s* and *s*′∈*N*(*s*) are the two configurations to choose from, then the algorithm continues with configuration *s*′ with probability min{1, exp(−(*J*(*s*′)−*J*(*s*))/*t*)}, with *t* being a positive parameter that gradually decreases to zero during the algorithm. Note that the acceptance probability decreases for increasing values of *J*(*s*′)−*J*(*s*) and for decreasing values of *t*, and that cost-decreasing transitions are always accepted (see Fig. 3).

**Figure 3.** SQ allows uphill moves up to a cost proportional to the instantaneous temperature *t*.
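As an illustration, the Metropolis criterion described above can be sketched in a few lines of Python; the function name and the small driver below are ours, not the chapter's:

```python
import math
import random

def metropolis_accept(cost_s, cost_s_prime, t):
    """Metropolis criterion: accept s' with probability
    min{1, exp(-(J(s') - J(s)) / t)}."""
    delta = cost_s_prime - cost_s
    if delta <= 0:
        return True          # cost-decreasing moves are always accepted
    return random.random() < math.exp(-delta / t)

# Downhill (or equal-cost) moves always pass; a strongly uphill move at a
# temperature near zero is (numerically) never accepted.
print(metropolis_accept(10.0, 8.0, 1.0))    # True
print(metropolis_accept(8.0, 10.0, 1e-12))  # False
```

Note how the two limiting behaviours of the criterion show up directly: for *t*→0 the rule degenerates into plain greedy local search, while for large *t* almost every proposed transition is accepted.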

Mathematically, SA-SQ can be modelled as an inhomogeneous Markov process consisting of a sequence of homogeneous chains, one at each temperature level *t* (Duque, 1993). Under this framework, it has been shown (Aarts, 1989; Geman, 1984) that there exist two alternatives for the convergence of the algorithm to the globally minimal configurations. On the one hand (homogeneous case), asymptotic convergence to a global minimum is guaranteed if *t* is lowered to 0 and the homogeneous chains are extended to infinite length, so as to establish the stationary distribution at each level. On the other hand (inhomogeneous case), convergence is guaranteed, irrespective of the length of the homogeneous chains, if *t* approaches 0 logarithmically slowly.
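For reference, the logarithmic condition of the inhomogeneous case is usually written as a lower bound on the temperature at iteration *k*; the form below is the one commonly quoted in the SA literature following (Geman, 1984), with *c* a problem-dependent constant, and is restated here for convenience rather than taken from the chapter itself:

```latex
t_k \;\geq\; \frac{c}{\log(1 + k)}, \qquad k = 1, 2, \ldots
```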

The problem arising here is that merely enumerating the configuration space already has exponential time complexity and, in practice, some approximation is required. The formal procedure is to choose a *cooling schedule*.


52 Simulated Annealing – Single and Multiple Objective Problems

The adjacent channel cost of Eq. (3) relies on the function Ψ, defined as

Ψ(*f<sub>x</sub>*, *f<sub>y</sub>*) = 0 if |*f<sub>x</sub>* − *f<sub>y</sub>*| ≥ *c<sub>ij</sub>*, and 1 otherwise (4)

where *f<sub>x</sub>* and *f<sub>y</sub>* are channels assigned to cells *i* and *j*, respectively. Parameter λ<sub>ACC</sub> in Eq. (3) is set to weigh the relative importance of the adjacent channel constraint. Finally, the cost due to the violation of the traffic demand requirements is modelled as

*J<sub>TRAFF</sub>* = λ<sub>TRAFF</sub> Σ<sub>*i*=1</sub><sup>*n*</sup> ( *d<sub>i</sub>* − Σ<sub>*j*</sub> *a<sub>ij</sub>* )<sup>2</sup> (5)

Gathering all the costs, the final cost function to be minimized is

*J* = *J<sub>CSC</sub>* + *J<sub>ACC</sub>* + *J<sub>TRAFF</sub>* (6)

If the traffic demand requirements are incorporated implicitly, by only considering those assignments that satisfy them, then the cost function can be expressed as *J* = *J*<sub>1</sub> = *J<sub>CSC</sub>* + *J<sub>ACC</sub>*, subject to Σ<sub>*j*</sub> *a<sub>ij</sub>* = *d<sub>i</sub>*, ∀*i*. For that reason, the fitness function to be used in the algorithms is given by *ρ* = 1/*J*.

Finally, the estimation of parameters λ<sub>CSC</sub> and λ<sub>ACC</sub> has been carried out using the same inhomogeneous 25-cell network used by Kunz and Lai in (Kunz, 1991) and (Lai, 1996), respectively. After analyzing the number of iterations required for proper convergence for different values of λ<sub>CSC</sub> and λ<sub>ACC</sub>, the optimal values of these weights were found to be close to 1 and 1.3, respectively.

It is important to note that the main difference between different pairs of λ<sub>CSC</sub> and λ<sub>ACC</sub> is the computational load each of them requires, since the number of generations needed to converge acts proportionally on the execution time. Hence, a precise computation of both λ<sub>CSC</sub> and λ<sub>ACC</sub> is indispensable to obtain an efficient allocation algorithm.
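To make the cost terms concrete, the following sketch evaluates the separation check of Eq. (4) and the traffic term of Eq. (5) on a toy instance; the tiny demand vector, assignment matrix, weight and separation values are invented for illustration and are not the 25-cell network used in the chapter:

```python
LAMBDA_TRAFF = 1.0  # weight for the traffic term (illustrative value)

def psi(f_x, f_y, c_ij):
    """Eq. (4): 1 (a violation) unless the channel separation
    |f_x - f_y| reaches the required value c_ij."""
    return 0 if abs(f_x - f_y) >= c_ij else 1

def j_traff(d, a):
    """Eq. (5): penalize cells whose number of assigned channels
    (row sum of a) misses the demand d_i."""
    return LAMBDA_TRAFF * sum(
        (d_i - sum(row)) ** 2 for d_i, row in zip(d, a)
    )

# Two cells demanding 2 and 1 channels; a[i][j] = 1 iff channel j is
# assigned to cell i.
d = [2, 1]
a = [[1, 0, 1],
     [0, 1, 0]]
print(psi(3, 5, 2))   # 0: separation 2 satisfies c_ij = 2
print(psi(3, 4, 2))   # 1: adjacent channels violate c_ij = 2
print(j_traff(d, a))  # 0.0: every demand is met exactly
```

The quadratic form of the traffic term means a cell that is short (or over) by two channels costs four times as much as one off by a single channel, which pushes the search toward satisfying all demands simultaneously.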

Regarding the cooling schedule, it has to decide for:

• the initial temperature, *t*<sub>0</sub>;
• the rule for decrementing the temperature *t*;
• the length of the homogeneous Markov chains at each temperature level;
• the stop condition (final temperature, *t<sub>F</sub>*).

The initial temperature should be chosen high enough to allow most of the proposed transitions to pass the Metropolis criterion. Hence, at the start of the algorithm, an explorative search of the configuration space is intended. Later on, the number of accepted transitions decreases as *t*→0. Finally, when *t* ≈ 0, no more transitions are accepted and the algorithm may stop. As a consequence, the algorithm converges to a final configuration representing the solution of the optimization problem.
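Putting the pieces together, a minimal SQ-style loop can be sketched as follows; the 1-D toy cost function, the geometric decrement factor and all parameter values are our illustrative assumptions (the chapter's actual schedule, following Huang et al., adapts the decrement to the cost statistics of each level):

```python
import math
import random

def simulated_quench(cost, neighbour, s0, t0=10.0, alpha=0.9,
                     t_final=1e-3, chain_len=50):
    """Generic SQ skeleton: Metropolis acceptance plus a fast
    (here: geometric) temperature decrement."""
    s, t = s0, t0
    best = s
    while t > t_final:
        for _ in range(chain_len):          # homogeneous chain at level t
            s_new = neighbour(s)
            delta = cost(s_new) - cost(s)
            if delta <= 0 or random.random() < math.exp(-delta / t):
                s = s_new
            if cost(s) < cost(best):
                best = s
        t *= alpha                          # quench: fast geometric cooling
    return best

random.seed(1)
# Toy 1-D landscape with a global minimum at x = 3.
sol = simulated_quench(cost=lambda x: (x - 3.0) ** 2,
                       neighbour=lambda x: x + random.uniform(-0.5, 0.5),
                       s0=-10.0)
print(round(sol, 1))
```

Early levels (large *t*) behave like a random walk over the landscape, while the last levels are effectively greedy descent, mirroring the explorative-to-exploitative behaviour described above.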

Simulated Quenching Algorithm for Frequency Planning in Cellular Systems 55

As (Duque, 1993) shows, when doing this most cooling schedules lean on the homogeneous variant and try to establish and maintain equilibrium on each temperature level by adjusting the length of the Markov chains and the cooling speed.

According to this, solving an optimization problem with SQ involves three main steps: first, the problem must be expressed as a cost function optimization problem by defining the configuration space *S*, the cost function *J* and the neighbourhood structure *N*; next, a cooling schedule must be chosen; and, finally, the annealing process is performed.

The initial temperature *t*<sub>0</sub> is determined by requiring a given initial acceptance ratio. For that, first, *t* is set to 0, and then it is iteratively changed until the desired acceptance ratio is reached. Our simulations worked fine with acceptance ratios between 0.55 and 0.6.

Temperature decrement follows a restriction proposed by Huang et al. (Huang, 1986): the decrease Δ*J* in the average cost between two subsequent temperatures *t* and *t*′ should be less than the standard deviation of the cost (on level *t*). After some calculus (Huang, 1986; Romeo, 1989) this rule is expressed as

*t*′ = *t* · exp(−λ*t*/σ<sub>*t*</sub>) (7)

where σ<sub>*t*</sub> is the standard deviation of the cost on level *t*.

Since testing for the establishment of equilibrium at a specific *t* would involve an unacceptable monitoring load, Huang et al. (Huang, 1986) approximate this check in two respects: (i) a Gaussian form for the equilibrium distribution is assumed, whose average and standard deviation are estimated from the Markov chain itself, and (ii) the process is considered stationary if the ratio of the number of accepted transitions whose costs lie in a 2δ-length interval to the total number of accepted transitions reaches a stable value *erf*(2δ/σ). In those cases where the criterion for stationarity cannot be reached, the length of the chain is bounded proportionally to the number of configurations which can be reached in one transition.

The final temperature is reached when no substantial improvement in cost can be expected any more. In (Huang, 1986; Duque, 1993) this is monitored by comparing the difference between the maximum and minimum costs encountered on a temperature level with the maximum single change in cost on that level. If they are the same, the process is assumed to be trapped in a local minimum and the algorithm is stopped.

Numerical experiments show that, once trapped in a suboptimal solution (local minimum), it is almost impossible to get out of it. The technical literature has described simple approaches to partially improve this situation, such as tuning the neighbourhoods to prefer flip-flops which resolve existing interference or preset violations, and to disadvantage those that introduce new ones.

Another solution is based on occasionally allowing arbitrarily long jumps while preserving a fast cooling schedule. These long jumps open up the possibility of detrapping from any minimum in a single transition, without being questioned by a possibly long chain of acceptance decisions. A simple method for producing these long jumps is to extend the basic transitions (flip-flops) to a chain of consecutive ones. By properly adjusting the chain length, this allows the algorithm to tunnel through a hill of the cost function landscape in one single jump, instead of painfully working to its top just to fall down into the next valley.

**4. Genetic algorithm for CAP**

This section describes a low complexity GA (known as μGA) that is applied to solve the channel assignment problem. The next sections present the proposed method, particularizing the concepts to the CAP for a better understanding.
