**3.2. Simulated quenching applied to the CAP**

In order to apply SQ to the CAP, we must formulate the CAP as a discrete optimization problem, with *S*, *J* and *N* defined. In section 3.1 we have already presented the problem together with its mathematical characterization: a mobile radio network of *n* radio cells, each capable of carrying any of the *n* available channels. The channel assignment is given by the binary matrix **A**, with *aij*=1 meaning that channel *j* is assigned to cell *i*. Since the traffic demand is modelled by the vector **d**, the total number of 1's in row *i* of matrix **A** must equal *di*.

The cost function *J* is then given by Eq. (6), which quantifies the violation of the interference constraints defined in section 3.1. Thus *J*(*s*) reaches its minimum of zero when all constraints are satisfied.
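As an illustration of this encoding, the sketch below builds a random demand-feasible assignment matrix and counts constraint violations. Since Eq. (6) is not reproduced here, the cost is written for an assumed compatibility-matrix formulation of the CAP, where a hypothetical matrix `C` gives the minimum channel separation required between each pair of cells; the function names are mine, not the authors'.

```python
import random

def random_assignment(n_cells, n_channels, demand, rng):
    """Random binary assignment matrix A; row i has exactly demand[i] ones."""
    A = [[0] * n_channels for _ in range(n_cells)]
    for i, d_i in enumerate(demand):
        for j in rng.sample(range(n_channels), d_i):
            A[i][j] = 1
    return A

def cost(A, C):
    """Count violated separation constraints (a stand-in for Eq. (6)).

    C[i][k] is the assumed minimum channel separation between cells i
    and k; assigned channels j (in cell i) and l (in cell k) clash when
    |j - l| < C[i][k].  Each unordered pair is counted once."""
    n_cells = len(A)
    assigned = [[j for j, a in enumerate(row) if a] for row in A]
    J = 0
    for i in range(n_cells):
        for k in range(i, n_cells):
            for j in assigned[i]:
                for l in assigned[k]:
                    if i == k and l <= j:
                        continue          # count each co-cell pair once
                    if abs(j - l) < C[i][k]:
                        J += 1
    return J
```

A configuration *s* is then simply one such matrix, and *J*(*s*) = 0 exactly when no pair of assignments falls below its required separation.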

In this work we use the same simple strategies for generating the neighbourhood as those used in (Duque, 1993), but with probabilities specifically tuned for our application: (i) *single flip*: switching channel *i* on or off in cell *j* (this procedure mimics the mutation operation that will be described later in the GAs context), and (ii) *flip-flop*: replacing, at cell *j*, one used channel with one unused.
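The two basic moves can be sketched as follows; the representation (a list-of-lists binary matrix) and the in-place convention are assumptions of this sketch, not details given in the text. Note that the flip-flop preserves the row sums (the demand constraint), whereas the single flip changes them by one.

```python
import random

def single_flip(A, rng):
    """Switch one randomly chosen channel on or off in one cell (in place)."""
    i = rng.randrange(len(A))
    j = rng.randrange(len(A[0]))
    A[i][j] ^= 1
    return i, j

def flip_flop(A, rng):
    """In one cell, replace a used channel with an unused one (in place)."""
    i = rng.randrange(len(A))
    used = [j for j, a in enumerate(A[i]) if a]
    free = [j for j, a in enumerate(A[i]) if not a]
    if not used or not free:
        return None                      # move not applicable in this cell
    on, off = rng.choice(free), rng.choice(used)
    A[i][on], A[i][off] = 1, 0
    return on, off
```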

Considering the particularities of the channel allocation problem with hexagonal cells, the same channel should be reused as close as possible. To approach this goal, the *basic flip-flop* is modified as follows: (ii-1) a cell *j* is chosen at random; (ii-2) among the channels not used in cell *j*, the one most used within the cells nearest to *j* that may share a channel with it is switched on; (ii-3) one of the channels previously used at cell *j* is randomly selected and switched off. This *modified flip-flop* is used in conjunction with the basic one.
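Steps (ii-1) to (ii-3) can be sketched as below. The structure `cell_rings` is a hypothetical precomputed list: `cell_rings[i]` holds the nearest cells that may reuse a channel of cell *i* (e.g. the cells just beyond the reuse distance); the text does not specify how this neighbourhood is stored.

```python
import random

def modified_flip_flop(A, cell_rings, rng):
    """Modified flip-flop, steps (ii-1) to (ii-3)."""
    i = rng.randrange(len(A))                        # (ii-1) random cell
    unused = [j for j, a in enumerate(A[i]) if not a]
    used = [j for j, a in enumerate(A[i]) if a]
    if not unused or not used:
        return None
    # (ii-2) among the unused channels, switch on the one most used
    # within the assumed ring of nearest reuse-compatible cells
    usage = {j: sum(A[k][j] for k in cell_rings[i]) for j in unused}
    on = max(unused, key=lambda j: usage[j])
    # (ii-3) switch off a randomly chosen channel used in cell i
    off = rng.choice(used)
    A[i][on], A[i][off] = 1, 0
    return on, off
```

Favouring the channel already popular in the nearest compatible cells is what pushes reuse distances toward their minimum.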

For the cooling schedule we have implemented a mixture of different cooling schemes (Aarts, 1989; Huang, 1986; Romeo, 1989) with a polynomial-time approximation behaviour. The initial value of the temperature is set so as to assure a user-specified acceptance ratio for transitions. To this end, *t* is first set to 0 and then iteratively increased until the desired acceptance ratio is reached. Our simulations worked well with acceptance ratios between 0.55 and 0.6.
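A minimal sketch of this warm-up phase is shown below. It assumes Metropolis acceptance probabilities exp(−Δ*J*/*t*) over a pre-sampled set of positive cost increases, and starts from a small positive `t0` with a multiplicative increase; the starting value, growth factor, and sampling step are assumptions of the sketch rather than details given in the text.

```python
import math

def initial_temperature(sample_deltas, target=0.575, factor=1.5, t0=0.1):
    """Raise t until the mean Metropolis acceptance probability of the
    sampled uphill moves reaches the target ratio (0.55-0.6 in the text)."""
    def ratio(t):
        return sum(math.exp(-d / t) for d in sample_deltas) / len(sample_deltas)
    t = t0
    while ratio(t) < target:
        t *= factor                      # iteratively increase the temperature
    return t
```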

The temperature decrement follows a restriction proposed by Huang et al. (Huang, 1986): the decrease Δ*J* in the average cost between two subsequent temperatures *t* and *t*' should be less than the standard deviation σ of the cost at level *t*. After some calculus (Huang, 1986; Romeo, 1989) this rule is expressed as

$$t' = t \exp\left(-\frac{\lambda t}{\sigma}\right) \tag{7}$$
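Eq. (7) translates directly into a one-line update, with σ estimated from the costs sampled at the current level. The value of the tuning constant λ is not given in the text; 0.7 is used here purely as a placeholder, and the fallback for a zero-variance level is likewise an assumption of this sketch.

```python
import math
import statistics

def next_temperature(t, costs_at_level, lam=0.7):
    """Huang decrement rule, Eq. (7): t' = t * exp(-lam * t / sigma)."""
    sigma = statistics.pstdev(costs_at_level)  # std. dev. of cost at level t
    if sigma == 0:
        return t * 0.5                   # degenerate level: fixed-factor fallback
    return t * math.exp(-lam * t / sigma)
```

Because the exponent is always negative, each step lowers the temperature; the decrement is gentle when the cost still fluctuates strongly (large σ) and aggressive when the chain has nearly settled.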

Since testing for the establishment of equilibrium at a specific *t* would involve an unacceptable monitoring load, Huang et al. (Huang, 1986) approximate this check in two respects: (i) a Gaussian form is assumed for the equilibrium distribution, whose average and standard deviation are estimated from the Markov chain itself; and (ii) the process is considered stationary if the ratio of the number of accepted transitions whose costs lie in an interval of length 2δ to the total number of accepted transitions reaches a stable value *erf*(2*δ*/*σ*). In those cases where the criterion for stationarity cannot be reached, the length of the chain is bounded proportionally to the number of configurations that can be reached in one transition.

The final temperature is reached when a substantial improvement in cost can no longer be expected. In (Huang, 1986; Duque, 1993) this is monitored by comparing the difference between the maximum and minimum costs encountered on a temperature level with the maximum single change in cost on that level. If they are equal, the process is assumed to be trapped in a local minimum and the algorithm is stopped.
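This stopping test can be sketched as below, assuming the costs accepted on a level are recorded in order so that consecutive differences give the single changes.

```python
def frozen(costs_at_level):
    """Stop test: the cost spread on a level equals the largest single
    change on that level, i.e. the chain bounces inside one basin."""
    if len(costs_at_level) < 2:
        return False
    spread = max(costs_at_level) - min(costs_at_level)
    max_step = max(abs(b - a) for a, b in zip(costs_at_level, costs_at_level[1:]))
    return spread == max_step
```

Intuitively, when every excursion away from the current cost is undone by the very next accepted move, the chain is oscillating inside a single local minimum.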

Numerical experiments show that once the algorithm is trapped in a suboptimal solution (a local minimum), it is almost impossible to escape from it. The technical literature describes simple approaches that partially improve this situation, such as tuning the neighbourhoods to prefer flip-flops that resolve existing interference or pre-set violations, and to disadvantage those that introduce new ones.

Another solution is based on occasionally allowing arbitrarily long jumps while preserving a fast cooling schedule. These long jumps open up the possibility of detrapping from any minimum in a single transition, without being subject to a possibly long chain of acceptance decisions. A simple method for producing these long jumps is to extend the basic transitions (flip-flops) into a chain of consecutive ones. With a properly adjusted chain length, this makes it possible to tunnel through a hill of the cost-function landscape in one single jump, instead of painfully climbing to its top just to fall down into the next valley.
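Composing several flip-flops into one candidate move, judged by a single accept/reject decision, can be sketched as follows; the uniform cell selection and the fixed chain length are assumptions of the sketch.

```python
import random

def long_jump(A, chain_length, rng):
    """Chain several flip-flops into one candidate move, so a single
    Metropolis decision can tunnel through a cost barrier."""
    for _ in range(chain_length):
        i = rng.randrange(len(A))
        used = [j for j, a in enumerate(A[i]) if a]
        free = [j for j, a in enumerate(A[i]) if not a]
        if used and free:                # each link is an ordinary flip-flop
            A[i][rng.choice(free)] = 1
            A[i][rng.choice(used)] = 0
    return A
```

Since each link is a flip-flop, the whole jump still preserves the per-cell demand; only its length, and hence how far it can carry the configuration across the landscape, changes.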
