1. Introduction

Combinatorial optimization is a lively field of applied mathematics that combines techniques from combinatorics, linear programming, and the theory of algorithms to solve optimization problems over discrete structures. Classical methods of operations research often fail on such problems because the computational effort grows exponentially with problem size. It is commonly accepted that these methods are heavily penalized by the nondeterministic polynomial (NP)-hard nature of the problems and are consequently unable to solve large instances.

© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Therefore, in practice, meta-heuristics are commonly used even though they cannot guarantee an optimal solution. The driving force behind the high performance of meta-heuristics is their ability to strike an appropriate balance between intensively exploiting areas with high-quality solutions (the neighborhood of elite solutions) and moving to unexplored areas when necessary. The evolution of meta-heuristics has taken an explosive upturn. Recent trends in computational optimization move away from traditional methods toward contemporary nature-inspired meta-heuristic algorithms, although traditional methods can still be an important part of the solution techniques for small problems. As many real-world optimization problems become increasingly complex and hard to solve, better optimization algorithms are always needed. Nature-inspired algorithms such as genetic algorithms (GAs) are regarded as highly successful when applied to a broad range of discrete as well as continuous optimization problems. This chapter introduces the multilevel paradigm combined with a genetic algorithm for solving the maximum satisfiability problem. Over the past few years, increasing interest has arisen in solving hard optimization problems using genetic algorithms. These techniques offer the advantage of being flexible: they can be applied to any problem, discrete or continuous, whenever a candidate solution can be encoded and the quality of any candidate solution can be computed through the so-called objective function. Nevertheless, GAs may still suffer from premature convergence. The performance of GAs deteriorates rapidly for two main reasons: first, the complexity of the problem usually increases with its size, and second, the solution space of the problem grows exponentially with the problem size.
Because of these two issues, optimization search techniques tend to spend most of their time exploring a restricted area of the search space, preventing the search from visiting more promising areas and thus leading to solutions of poor quality. Designing efficient optimization search techniques requires a tactical interplay between diversification and intensification [1, 2]. The former refers to the ability to explore many different regions of the search space, whereas the latter refers to the ability to obtain high-quality solutions within those regions.
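To make the encoding-plus-objective view described above concrete for the problem treated in this chapter, the sketch below scores a MAX-SAT candidate solution represented as a bit vector. The clause representation used here (signed integer literals) is a common convention assumed for illustration, not code from the chapter:

```python
def count_satisfied(clauses, assignment):
    """Objective function for MAX-SAT: number of satisfied clauses.

    Each clause is a tuple of integer literals: +i requires variable i
    to be true, -i requires it to be false (variables are 1-indexed).
    `assignment` is a list of booleans, one entry per variable.
    """
    satisfied = 0
    for clause in clauses:
        # A clause is satisfied if at least one of its literals holds.
        if any((lit > 0) == assignment[abs(lit) - 1] for lit in clause):
            satisfied += 1
    return satisfied

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
clauses = [(1, -2), (2, 3), (-1, -3)]
print(count_satisfied(clauses, [True, False, True]))  # prints 2
```

Because a candidate solution is simply a bit vector, any genetic algorithm operating on bitstrings (one-point crossover, bit-flip mutation, and so on) applies directly, with this count as the fitness.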

A Multilevel Evolutionary Algorithm Applied to the Maximum Satisfiability Problems

http://dx.doi.org/10.5772/intechopen.72843

graph coloring and graph drawing [10, 11], the feature selection problem in biomedical data [12], and the maximum satisfiability problem [13–16]. A recent survey of multilevel techniques can be found in [1, 17, 18].

In this chapter, a genetic algorithm is used in a multilevel context as a means to improve its performance. The remainder of this chapter is organized as follows. Section 2 describes the maximum satisfiability problem. Section 3 explains the hierarchical evolutionary algorithm. Section 4 reports the experimental results. Finally, Section 5 discusses the main conclusions and provides some guidelines for future work.

3. The multilevel evolutionary algorithm

3.1. Main idea

The multilevel paradigm works by merging the variables that define the problem to form clusters, using the clusters to define a new problem, and repeating this process until the problem size falls below some threshold. A random initial assignment is injected into the coarsest problem, and the assignment is then successively refined on all the problems, starting with the coarsest and ending with the original. The multilevel evolutionary algorithm is described in Algorithm 1.

   input : Problem P^0
   output: Solution S_final(P^0)
1  begin
2      level := 0 ;
3      while the desired number of levels has not been reached do
4          P^(level+1) := Reduce(P^(level)) ;
5          level := level + 1 ;
6      /* Proceed with the memetic algorithm */ ;
7      S_start(P^(level)) := Initial-Assignment(P^(level)) ;
8      S_final(P^(level)) := Refinement(P^(level)) ;
9      while (level > 0) do
10         S_start(P^(level-1)) := Project(S_final(P^(level))) ;
11         S_final(P^(level-1)) := Refinement(S_start(P^(level-1))) ;
12         level := level - 1
13 end

Algorithm 1. The multilevel evolutionary algorithm.

3.2. Reduction phase

The coarsening process (lines 3–5 of Algorithm 1) is graphically illustrated in Figure 1 using an example with 10 variables. The coarsening phase uses two levels to coarsen the problem down to three clusters.
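The control flow of Algorithm 1 can be sketched in Python as follows. All helper functions here (the pairwise cluster merging, the trivial initial assignment, and the no-op refinement standing in for the genetic algorithm) are illustrative placeholders under assumed names, not the chapter's actual operators:

```python
def multilevel_solve(problem, threshold, reduce_fn, initial_fn, refine_fn, project_fn):
    """Skeleton of the multilevel scheme of Algorithm 1: coarsen until the
    problem is small enough, solve the coarsest level, then project the
    solution back up, refining it at every level."""
    hierarchy = [problem]
    while len(hierarchy[-1]) > threshold:            # lines 3-5: coarsening
        hierarchy.append(reduce_fn(hierarchy[-1]))
    solution = refine_fn(initial_fn(hierarchy[-1]))  # lines 7-8: coarsest level
    for level in range(len(hierarchy) - 2, -1, -1):  # lines 9-12: refinement
        solution = refine_fn(project_fn(solution, hierarchy[level + 1]))
    return solution

# Illustrative placeholders: variables are merged pairwise into clusters,
# and "refinement" is a no-op standing in for the genetic algorithm.
def reduce_pairs(variables):
    return [tuple(variables[i:i + 2]) for i in range(0, len(variables), 2)]

def all_true(variables):          # trivial initial assignment
    return {v: True for v in variables}

def identity_refine(assignment):  # placeholder for the GA refinement step
    return assignment

def project_down(assignment, coarse_level):
    # Every member of a cluster inherits the value assigned to the cluster.
    return {member: assignment[cluster]
            for cluster in coarse_level for member in cluster}

# With 10 variables and pairwise merging, two coarsening levels yield
# three clusters, matching the example discussed in this section.
result = multilevel_solve(list(range(10)), 3, reduce_pairs, all_true,
                          identity_refine, project_down)
print(result)  # every original variable ends up assigned True
```

In a genuine implementation, the refinement step at each level would run the genetic algorithm on the problem induced by the clusters, so that progressively finer levels start from the solution found at the coarser level.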
