**5.2.7 Local search**

**Algorithm 3:** local-search

- **input**: *Chromosome<sub>i</sub>*
- **output**: a possibly improved *Chromosome<sub>i</sub>*
- **begin**
  - PossFlips ← the set of variables whose flip yields the largest decrease (or smallest increase) in the number of unsatisfied clauses;
  - v ← Pick(PossFlips);
  - *Chromosome<sub>i</sub>* ← *Chromosome<sub>i</sub>* with v flipped;
  - **if** *Chromosome<sub>i</sub>* satisfies Φ **then return** *Chromosome<sub>i</sub>*;
- **end**

Finally, the last component of our MA is the use of local improvers. By introducing local search at this level, the search within promising areas is intensified. This local search should be able to quickly improve the quality of a solution produced by the crossover operator, without diversifying it into other areas of the search space. In the context of optimization, this raises a number of questions regarding how best to take advantage of both aspects of the whole algorithm. With regard to local search, the issues are which individuals will undergo local improvement and with what intensity. Care should be taken to balance the evolutionary component (exploration) against the local search component (exploitation). Bearing this in mind, the strategy adopted here is to let each chromosome go through a low-intensity local improvement. Algorithm 3 shows the local search algorithm used. This heuristic is run for one iteration, during which it seeks the variable flip with the largest decrease (or the smallest increase) in the number of unsatisfied clauses. A random tie-breaking strategy is used between variables with identical scores.

### **5.2.8 Convergence criteria**

As soon as the population tends to lose its diversity, premature convergence occurs, and all individuals in the population tend to become identical, with almost the same fitness value. At each level, the proposed memetic algorithm is assumed to have reached convergence when no further improvement of the best solution (the fittest chromosome) has been made during two consecutive generations.



Given the randomized nature of the algorithms, each problem instance was run 20 times with a cutoff parameter (max-time) set to 300 *sec*. We use |.| to denote the number of elements in a set; e.g., |*V*| is the number of variables, while |*C*| denotes the number of clauses. The table shows the instances used in the experiment. The tests were carried out on a DELL machine with an 800 MHz CPU and 2 GB of memory. The code was written in C and compiled with the GNU C compiler version 4.6. The parameters used in the experiment are listed below:

• Crossover probability = 0.85.
• Mutation probability = 0.1.
• Population size = 50.
• Stopping criterion for the coarsening phase: the coarsening stops as soon as the size of the coarsest problem reaches 100 variables (clusters). At this level, MA generates an initial population.
• Convergence during the refinement phase: if no improvement of the fitness function of the best individual has been observed during 10 consecutive generations, MA is assumed to have reached convergence and moves to a higher level.

**6.3 Experimental results**

Figures 2-9 show how the best assignment (fittest chromosome) progresses during the search. The plots immediately show the dramatic improvement obtained using the multilevel paradigm. The performance of MA is unsatisfactory, and it degrades even further on larger problems, as its percentage excess over the solution is higher than that of MLVMA. The curves show no cross-over, implying that MLVMA dominates MA. The plots suggest that problem solving with MLVMA happens in two phases. In the first phase, which corresponds to the early part of the search, MLVMA behaves as a hill-climbing method. This phase is a long one, during which up to 85% of the clauses are satisfied. The best assignment improves rapidly at first and then flattens off as we mount the plateau, marking the start of the second phase. The plateau spans a region in the search space where flips typically leave the best assignment unchanged, and occurs more specifically once the refinement reaches the finest level. Comparing the multilevel version with the single-level version, MLVMA is far better than MA, making it the clear leading algorithm. The key to the efficiency of MLVMA is the multilevel paradigm, from which it draws its strength by coupling the refinement process across the different levels. This paradigm offers two main advantages, which enable MA to become much more powerful in the multilevel context:

• During the refinement phase, MA applies a local transformation (i.e., a move) within the neighborhood (i.e., the set of solutions that can be reached from the current one) of the current solution to generate a new one. The coarsening process offers a better mechanism for performing diversification (i.e., the ability to visit many different regions of the search space) and intensification (i.e., the ability to obtain high-quality solutions within those regions).

• By allowing MA to view a cluster of variables as a single entity, the search becomes guided and restricted to only those configurations in the solution space in which the variables grouped within a cluster are assigned the same value. As the size of the clusters varies from one level to another, the size of the neighborhood becomes adaptive and allows the possibility of exploring different regions of the search space while intensifying the search by exploiting the solutions from previous levels in order to reach better solutions.

#### **5.3 Uncoarsening**

Having improved the assignment at level *L<sub>m+1</sub>*, the assignment must be projected onto its parent level *L<sub>m</sub>*. The uncoarsening process is trivial: if a cluster *C<sub>i</sub>* ∈ *L<sub>m+1</sub>* is assigned the value true, then the matched pair of clusters that it represents, *C<sub>j</sub>* and *C<sub>k</sub>* ∈ *L<sub>m</sub>*, are also assigned the value true. The idea of refinement is to use the population projected from *L<sub>m+1</sub>* onto *L<sub>m</sub>* as the initial population for further improvement by the proposed memetic algorithm. Even though the population at *L<sub>m+1</sub>* is at a local minimum, the projected population at level *L<sub>m</sub>* may not be at a local optimum. Since the projected population is already good and contains individuals with high fitness values, MA converges to a better assignment within a few generations.
