6. Conclusion

data sample, and n is the number of observations in the second data sample. Calculating Â<sub>12</sub> results in a number between 0 and 1 that represents the probability that MLVMA will yield a better solution than MA. If the two algorithms are equivalent, then Â<sub>12</sub> = 0.5, while a complete

| #Case | M diff. | 95% CI of M diff. | p | Obs. Â<sub>12</sub> | Â<sub>12</sub> [95% CI of Â<sub>12</sub>] |
| --- | --- | --- | --- | --- | --- |
| 2bitadd10.cnf | 14.4 | [13.8, 14.9] | \*\*\* | 1 | c |
| 2bitadd11.cnf | 14.5 | [13.9, 15.2] | \*\*\* | 1 | c |
| 2bitadd31 | 973.6 | [945.5, 1001.7] | \*\*\* | 1 | c |
| 3bitadd32 | 1231.1 | [1195.8, 1266.6] | \*\*\* | 1 | c |
| 4blocksb | 2.1 | [1.7, 2.4] | \*\*\* | 1 | c |
| e0ddr2-1-by-5-1 | 10527.7 | [10459.5, 10595.8] | \*\*\* | 1 | c |
| e0ddr2-1-by-5-4 | 10648.5 | [10575.8, 10721.3] | \*\*\* | 1 | c |
| enddr2-10-by-5-1 | 10671.0 | [11591.2, 11750.8] | \*\*\* | 1 | c |
| enddr2-10-by-5-8 | 11882.4 | [11799.1, 11965.7] | \*\*\* | 1 | c |
| ewddr2-10-by-5-1 | 12539 | [12453.0, 12626.8] | \*\*\* | 1 | c |
| ewddr2-10-by-5-8 | 13182.9 | [13096.7, 13269.1] | \*\*\* | 1 | c |
| 2bitcomp5 | | | | .5 | |
| 2bitadd12.cnf | 0.1 | [−0.1, 0.3] | .247 | .548 | .547 [.476, .622] |
| 2bitmax6 | 0.0 | [−0.1, 0.3] | .653 | .459 | .459 [.475, .515] |
| 3blocks | 3.2 | [2.8, 3.6] | \*\*\* | .918 | .920 [.877, .958] |
| 4blocks | 4.8 | [4.1, 5.4] | \*\*\* | .916 | .917 [.878, .953] |

Â<sub>12</sub> is more easily interpreted, for several reasons, than the more common parametric Cohen's d [22], which expresses the mean difference between two groups in standard deviation units. First, Cohen's d assumes that the observed samples are normally distributed [20]. Second, when dealing with solutions to optimization problems, a researcher or practitioner is only interested in the single best solution within a sample of solutions from one or more algorithms. Hence, an effect size measure that indicates the probability that one algorithm will lead to a better solution than another (given the same amount of time) is more informative and more easily interpretable for an optimization practitioner. The 95% confidence intervals of Â<sub>12</sub> shown in Table 3 (where applicable) are estimated with a bootstrapping procedure [23]. The procedure uses

a computer-intensive step-by-step process that consists of the following three steps:
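Assuming a standard percentile bootstrap (resample each group with replacement, recompute the statistic, and read the interval off the empirical 2.5% and 97.5% quantiles), such a procedure can be sketched as below. The helper names, samples, and replication count are illustrative, not taken from the chapter:

```python
import random

def a12(s1, s2):
    # Vargha-Delaney A: P(value from s1 beats value from s2), ties count 0.5.
    wins = sum((x > y) + 0.5 * (x == y) for x in s1 for y in s2)
    return wins / (len(s1) * len(s2))

def bootstrap_ci(s1, s2, reps=2000, alpha=0.05, seed=42):
    # Percentile bootstrap: resample both groups with replacement,
    # recompute A12 each time, and take the empirical quantiles.
    rng = random.Random(seed)
    stats = sorted(
        a12([rng.choice(s1) for _ in s1], [rng.choice(s2) for _ in s2])
        for _ in range(reps)
    )
    return stats[int(reps * alpha / 2)], stats[int(reps * (1 - alpha / 2)) - 1]

lo, hi = bootstrap_ci([54, 55, 53, 56, 55], [50, 52, 51, 55, 49])
print(lo, hi)
```

With a fixed seed the interval is reproducible; in practice the number of replications is chosen large enough that the quantile estimates stabilize.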

dominance of algorithm MLVMA over MA would entail Â<sub>12</sub> = 1.
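Concretely, Â<sub>12</sub> can be computed directly from two samples of solution qualities. The sketch below is a generic implementation of the Vargha–Delaney A measure, not code from the chapter; it assumes that larger values are better (e.g., more satisfied clauses) and counts ties as half a win:

```python
def a12(sample1, sample2):
    """Vargha-Delaney A measure: estimated probability that a value
    drawn from sample1 beats one drawn from sample2 (ties count 0.5)."""
    m, n = len(sample1), len(sample2)
    wins = sum((x > y) + 0.5 * (x == y) for x in sample1 for y in sample2)
    return wins / (m * n)

# Equivalent samples give 0.5; complete dominance gives 1.
print(a12([5, 6, 7], [5, 6, 7]))  # 0.5
print(a12([9, 9, 9], [1, 2, 3]))  # 1.0
```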

\*\*\* means p < 0.0001.

Table 3. Comparing effect sizes.

In this chapter, a multilevel evolutionary algorithm for solving the maximum satisfiability problem is presented. During the coarsening phase, a sequence of smaller problems, each with fewer variables, is constructed. Each child level is constructed from its parent level by collapsing pairs of variables; the newly formed variables define a new, smaller problem, and the coarsening process is iterated recursively until the size of the problem reaches some desired threshold. An evolutionary algorithm is then applied through the successive optimization levels, where the converged population at a child level serves as the starting population for its parent level. A set of benchmark instances was used to compare the performance of the new approach. The results obtained assert the superiority of the evolutionary algorithm when combined with the multilevel paradigm: it always returned a better solution than MA for an equivalent run-time.
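As an illustration of the coarsening phase described above, the sketch below performs one coarsening step on a CNF formula in a DIMACS-like representation (clauses as lists of signed variable indices). Both the representation and the pairing rule are assumptions made for illustration: variables are paired naively by index here, whereas the chapter's scheme chooses the pairs to collapse from the problem itself.

```python
def coarsen(num_vars, clauses):
    # One coarsening step: collapse variable pairs (1,2)->1, (3,4)->2, ...
    # Each merged pair becomes a single variable of the smaller child problem.
    mapping = {v: (v + 1) // 2 for v in range(1, num_vars + 1)}
    child_clauses = []
    for clause in clauses:
        # Remap every literal, preserving its sign; a set removes duplicates.
        lits = {(1 if lit > 0 else -1) * mapping[abs(lit)] for lit in clause}
        # Drop clauses that became tautologies (contain both v and -v).
        if not any(-l in lits for l in lits):
            child_clauses.append(sorted(lits))
    return (num_vars + 1) // 2, child_clauses

# 4-variable parent problem -> 2-variable child problem.
print(coarsen(4, [[1, -3], [2, 4], [1, -2]]))  # (2, [[-2, 1], [1, 2]])
```

Repeating this step yields the sequence of ever-smaller problems; during the refinement phase, an assignment to a child variable is simply copied to both parent variables it was formed from before the evolutionary algorithm resumes at the parent level.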
