**5. Numerical results**

**5.1. Benchmark problems**

In order to evaluate the proposed methods and compare them with other approaches, their performance is analyzed using the set of thirteen benchmark problems defined in (Sivarajan, 1989) (also used in (Funabiki, 2000)), together with problems 1, 2 and 4 from (Ngo, 1998), which we will refer to as problems 14, 15 and 16. The characteristics of the first thirteen benchmark instances can be found in (Sivarajan, 1989). The definitions of problems 14, 15 and 16 are summarized in Table 1, where the channel demand vectors are those shown in Figure 1 and the compatibility matrices are **C**1 (the matrix in Example 1, page 846, of (Sivarajan, 1989)), **C**2 (the matrix in Fig. 3(c) of (Funabiki, 1992), p. 435) and **C**3 (the matrix in Fig. 3(a) of (Funabiki, 1992), p. 435). The total number of frequencies varies from 11 to 221. Benchmark problem 15 belongs to a particularly useful set of benchmark tests for cellular assignment problems known as the *Philadelphia problems*. Note that (Sivarajan, 1989) presents some variations of the original Philadelphia problems, which were first introduced by Anderson (Anderson, 1973) in the early 1970s. These problems constitute, by far, the most common set of benchmark problems for channel assignment algorithms, making it possible to compare the obtained solutions with previously published results. Note also that problems 1−4 and 9−14 consider the three constraints defined at the beginning of section 3.1, while problems 5−8, 15 and 16 consider only the co-channel and co-site constraints.

As an example, Fig. 6 shows the cellular geometry of the Philadelphia problem with *n*=21 cells (the cluster size for CCC is *Nc*=7).

**Figure 6.** Cellular geometry for the Philadelphia benchmark problem with *n*=21 cells.
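To make the use of a compatibility matrix concrete, the following sketch (our own illustration; the function name and data layout are assumptions, not taken from the benchmark definitions) counts how many constraint violations a candidate assignment incurs. Here `compat[i][i]` encodes the co-site separation and `compat[i][j]` the required separation between cells *i* and *j*:

```python
def count_conflicts(assign, compat):
    """Count constraint violations of an assignment (illustrative sketch).

    assign[i] is the list of frequencies used by cell i; compat[i][j] is the
    minimum allowed frequency separation between any channel of cell i and any
    channel of cell j (compat[i][i] encodes the co-site constraint).
    """
    n = len(assign)
    conflicts = 0
    for i in range(n):
        for j in range(i, n):
            for a, fa in enumerate(assign[i]):
                for b, fb in enumerate(assign[j]):
                    if i == j and b <= a:
                        continue  # count each unordered pair within a cell once
                    if abs(fa - fb) < compat[i][j]:
                        conflicts += 1
    return conflicts
```

A conflict-free solution is one for which this count is zero, which is the goal of all the algorithms compared in this section.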

60 Simulated Annealing – Single and Multiple Objective Problems

| Problem No. | No. of Cells | Lower Bound | Compatibility Matrix | Demand Vector |
|---|---|---|---|---|
| 14 | 4 | 11 | **C**1 | **d**1 |
| 15 | 21 | 221 | **C**2 | **d**2 |
| 16 | 25 | 73 | **C**3 | **d**4 |

**Table 1.** Specifications of benchmark problems No. 14, 15 and 16.

## **5.2. Adjustment of parameters and convergence performance**

This section evaluates the performance of the proposed algorithms in terms of convergence and solution accuracy under different conditions. Radio base stations are considered to be located at the cell centers, and the traffic is assumed to be inhomogeneous, with each cell having a different, *a priori* known traffic demand. Following the ideas shown in (Lai, 1996), the initial population is constructed using the available *a priori* information: the algorithm assigns a valid string of frequencies to all the cells following a simple approach. First, it attempts to assign a set of valid frequencies to as many base stations as possible; cells for which valid frequencies cannot be found are then randomly assigned.
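The seeding strategy described above can be sketched as follows (a minimal illustration under our own naming; the chapter's actual implementation may differ in detail):

```python
import random

def initial_assignment(demand, compat, n_freq):
    """Build one initial chromosome: a list of frequencies per cell (sketch).

    demand[i]  - number of channels required by cell i
    compat     - compatibility matrix; compat[i][j] is the minimum frequency
                 separation between any channel in cell i and any in cell j
    n_freq     - number of available frequencies (0 .. n_freq-1)
    """
    n = len(demand)
    assign = [[] for _ in range(n)]
    for i in range(n):
        for _ in range(demand[i]):
            # First pass: look for a frequency satisfying every constraint
            # against all channels assigned so far.
            valid = [f for f in range(n_freq)
                     if all(abs(f - g) >= compat[i][j]
                            for j in range(n) for g in assign[j])]
            if valid:
                assign[i].append(random.choice(valid))
            else:
                # Fallback: random (possibly conflicting) frequency.
                assign[i].append(random.randrange(n_freq))
    return assign
```

Seeding the population this way gives the genetic search a head start compared to fully random initial strings, at negligible extra cost.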

In this section, the convergence properties of the proposed methods are studied. The results shown in Table 2 are average values over 25 trials for each problem. The parameters to be set in the GA are the number of iterations *ng*, the initial mutation and crossover probabilities, the population size *np*, and the parameters of the functions *pc(k)* and *pm(k)*. A set of optimal values was selected after several trials, which helped to fine-tune the parameters while ensuring that the computation remains manageable.
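Iteration-dependent probabilities such as *pc(k)* and *pm(k)* are typically decreasing schedules: high exploration early on, low disruption near convergence. A hypothetical sketch (the functional forms and endpoint values below are our assumptions, not the tuned values used in the experiments):

```python
def pc(k, ng, pc0=0.9, pc_end=0.6):
    """Crossover probability at iteration k: linear decay from pc0 to pc_end."""
    return pc0 + (pc_end - pc0) * k / ng

def pm(k, ng, pm0=0.05, pm_end=0.005):
    """Mutation probability at iteration k: geometric decay from pm0 to pm_end."""
    return pm0 * (pm_end / pm0) ** (k / ng)
```

Either schedule keeps both probabilities within their initial and final values over the *ng* iterations of a run.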


The SQ algorithm, on the other hand, has been implemented with a mixture of standard and modified *flip-flops* (described in section 4.2). Problems 5−8, 11−12 and 14−16 are solved with a configuration of 50−70% modified flip-flops, while problem instances 1−4, 9, 10 and 13 use 20−40% modified flip-flops; in every instance, the remaining moves are standard flip-flops. This experimental adjustment reflects the fact that the more complex the problem instance, the more explorative the global search must be in order to avoid convergence to suboptimal local minima.
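The mixing of the two move types can be sketched as follows. This is illustrative only: the concrete flip-flop operators are those defined in section 4.2, and the greedy "modified" move below (reassigning a channel to its least-conflicting frequency) is a stand-in of our own:

```python
import random

def propose_move(solution, compat, n_freq, p_modified):
    """Propose an SQ neighbourhood move (illustrative sketch).

    With probability p_modified, apply a 'modified flip-flop' here modelled as
    moving one channel to the frequency with the fewest constraint violations
    (exploitative). Otherwise apply a standard flip-flop: move the channel to
    a uniformly random frequency (explorative).
    """
    cell = random.randrange(len(solution))
    slot = random.randrange(len(solution[cell]))
    new = [list(row) for row in solution]

    def violations(f):
        count = 0
        for j, row in enumerate(new):
            for b, g in enumerate(row):
                if j == cell and b == slot:
                    continue  # skip the channel being reassigned
                if abs(f - g) < compat[cell][j]:
                    count += 1
        return count

    if random.random() < p_modified:
        new[cell][slot] = min(range(n_freq), key=violations)  # modified move
    else:
        new[cell][slot] = random.randrange(n_freq)            # standard move
    return new
```

Raising `p_modified` makes the search more exploitative; the percentages quoted above correspond to tuning this balance per problem instance.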

Comparative results are shown in Table 2. Performance is measured using the percentage of convergence to the solutions, defined as the ratio of the number of successful convergences to the total number of runs. Table 2 shows the results for problems 10, 12, 14, 15 and 16, whose convergence properties have previously been studied by Ngo and Li using a GA-based scheme (Ngo, 1998) and by Funabiki and Takefuji, who applied a neural-network-based algorithm to these instances (Funabiki, 1992).
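In code, this convergence metric is simply (sketch, with our own naming):

```python
def convergence_percentage(run_results):
    """Percentage of independent runs that reached a conflict-free solution.

    run_results is a sequence of booleans, one per run (True = converged).
    """
    return 100.0 * sum(run_results) / len(run_results)
```

For example, 24 successful runs out of 25 trials yields a convergence percentage of 96%.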

Simulated Quenching Algorithm for Frequency Planning in Cellular Systems 63

**Table 2.** Comparison between convergence results (percentage of convergence) for the NN approach of (Funabiki, 1992), the MGA of (Ngo, 1998), and the proposed μGA and SQ methods.

Results show that both the GA- and the SQ-based procedures outperform the convergence results of the neural network for solving the fixed CAP. All four approaches converge properly in 100% of the cases for problem 14. In problems 12, 15 and 16, both genetic methods converge more frequently than the neural-network-based approach; SQ is slightly better than the GA in problem 15, while marginally worse in problems 12 and 16. In problem 15 the GA shows slightly worse convergence results than (Ngo, 1998) (by only about 2%), while SQ moderately improves on the MGA. In spite of that, the proposed method involves a lower computational load than that required by (Ngo, 1998) (see Table 3), and the complexity of the SQ method is intermediate between that of the standard GA (MGA) and that of the proposed μGA. In contrast, the μGA presents notably better convergence in problems 10 and 12, where the MGA and SQ offer very similar results. In essence, in problems 12, 15 and 16 all the algorithms exhibit very similar results, with the μGA being the least complex.

Moreover, a reduction in execution time with respect to previous GA-based approaches is achieved while maintaining a very similar (or even better) percentage of convergence (Table 2), with the three approaches obtaining optimal conflict-free solutions.

| Problem No. | MGA (Ngo, 1998) [s] | MGA [norm.] | μGA [s] | μGA [norm.] | SQ [s] | SQ [norm.] |
|---|---|---|---|---|---|---|
| 10 | 4.129 | **26.12** | 3.285 | **20.78** | 6.101 | **38.61** |
| 12 | 0.959 | **6.07** | 0.738 | **4.67** | 1.022 | **6.468** |
| 14 | 0 | **0** | 0 | **0** | 0.01 | **0.06** |
| 15 | 0.192 | **1.22** | 0.158 | **1** | 0.189 | **1.196** |
| 16 | 0.284 | **1.80** | 0.226 | **1.43** | 0.386 | **2.443** |

**Table 3.** Execution times (in seconds) for benchmark problems 10, 12, 14, 15 and 16. CPU: AMD Athlon XP 2100+ at 1.8 GHz. Bold figures show the CPU time normalized to the time required by the μGA to solve problem 15.

Comparing the values given in Table 3 for (Ngo, 1998) with the specific values reported in the original paper, a small difference can be observed. The reason is that the algorithm has been reprogrammed and run on a different computer and in a different language. In order to obtain the comparative figures shown in Table 3, both methods were programmed similarly and run in the same computing environment.

**5.4. Optimal solutions**

Now, different search techniques are compared when they run without any time constraint and an optimal solution is guaranteed. Figure 7 shows the execution times for four different algorithms: (i) the IDA (Iterative-Deepening A*) algorithm (Nilsson, 1998), a fairly simple algorithm that can solve large problems with a small memory footprint; (ii) the BDFS (Block Depth-First Search) real-time heuristic search method proposed in (Mandal, 2004); (iii) the proposed μGA; and (iv) the proposed SQ method. For the sake of comparison, we have chosen the same numbers of cells and channels as in the other numerical simulations.

It can first be seen that the BDFS algorithm achieves an increasing average speedup over the IDA method. Moreover, the proposed μGA outperforms BDFS (and hence IDA) whenever the complexity of the problem becomes considerable; in these cases, the running time of the μGA is about 20% smaller than that of BDFS. Only in the three simplest cases (a: *n*=5, *c*=3; b: *n*=5, *c*=4; and c: *n*=7, *c*=3) is the minimum computational load required by the μGA larger than that of BDFS, though still much lower than that of IDA.

When SQ is used, the results show that for simple configurations its computational load is approximately that of the GA-based method. However, as the complexity (in terms of the number of channels) increases, the computational load of the SQ procedure tends towards that of the IDA algorithm. These results are in accordance with those outlined in (Mandal, 2004).
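For reference, the overall shape of the simulated-quenching search whose run times are compared above can be sketched generically as follows. This is a minimal sketch under our own parameter choices ("quenching" refers to the deliberately fast geometric cooling, as opposed to the slow schedules of classical annealing); the cost and neighbour functions stand in for the CAP-specific ones:

```python
import math
import random

def simulated_quenching(cost, neighbour, x0, t0=1.0, alpha=0.8, n_iter=2000):
    """Generic SQ loop: simulated annealing with an aggressively fast
    (quenching) geometric cooling schedule. All values here are illustrative.
    """
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(n_iter):
        y = neighbour(x)
        fy = cost(y)
        # Always accept improvements; accept uphill moves with Boltzmann prob.
        if fy <= fx or random.random() < math.exp((fx - fy) / max(t, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha  # quenching: temperature drops geometrically at every step
    return best, fbest
```

The fast cooling trades the convergence guarantees of slow annealing for speed, which is consistent with the intermediate complexity of SQ observed in the comparisons above.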
