**1. Introduction**


Genetic Programming (GP) is a technique aiming at the automatic generation of programs. It has been successfully used to solve a wide variety of problems and can now be viewed as a mature method, as patents have even been filed for both old and new discoveries, see e.g. [1, 2]. GP is used in fields as diverse as bio-informatics [3], quantum computing [4] and robotics [5], among others.

The most widely used scheme in GP was proposed by Koza, where programs are represented as Lisp-like trees and evolved by a genetic algorithm. Many other paradigms have been devised in recent years to automatically evolve programs. For instance, linear genetic programming (LGP) [6] is based on an interesting feature: instead of creating program trees, LGP directly evolves programs represented as linear sequences of imperative computer instructions. LGP has been successful enough to give birth to a derived commercial product named *discipulus*. The representation (or genotype) of a program in LGP is a bounded-length list of integers. These integers are mapped to instructions of a simple imperative language (a subset of C, for instance).
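As a toy illustration of such an integer genotype, the sketch below decodes each integer gene into a register-machine instruction and executes the resulting program. The instruction set, the number of registers, and the modulo decoding rule are our own assumptions for illustration, not those of any specific LGP implementation.

```python
# Hypothetical LGP-style decoder: each integer gene is unpacked into a
# register-machine instruction (opcode, destination, two operands).
OPS = ["add", "sub", "mul", "mov"]  # assumed minimal instruction set
NUM_REGS = 4

def decode(genome):
    """Map a bounded-length list of integers to imperative instructions."""
    program = []
    for gene in genome:
        op = OPS[gene % len(OPS)]
        dst = (gene // 4) % NUM_REGS
        src1 = (gene // 16) % NUM_REGS
        src2 = (gene // 64) % NUM_REGS
        program.append((op, dst, src1, src2))
    return program

def run(program, inputs):
    """Execute the decoded program on a small register file."""
    regs = [0.0] * NUM_REGS
    regs[: len(inputs)] = inputs
    for op, dst, a, b in program:
        if op == "add":
            regs[dst] = regs[a] + regs[b]
        elif op == "sub":
            regs[dst] = regs[a] - regs[b]
        elif op == "mul":
            regs[dst] = regs[a] * regs[b]
        elif op == "mov":
            regs[dst] = regs[a]
    return regs[0]  # register 0 holds the output by convention

print(run(decode([5, 38, 7]), [2.0, 3.0]))  # → 2.0
```

The bounded genome length directly bounds the program length, which is one way such representations keep solution size under control.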

While the previous schemes are mainly based on discrete optimization, a few other evolutionary schemes for automatic programming have been proposed that rely on some sort of continuous representation. These notably include Ant Colony Optimization in AntTAG [7, 8], and the use of probabilistic models as in Probabilistic Incremental Program Evolution [9] or Bayesian Automatic Programming [10].

In 1997, Storn and Price proposed a new evolutionary algorithm for continuous optimization, called Differential Evolution (DE) [11]. Another popular continuous evolution scheme is the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) that was proposed by Hansen and Ostermeier [12] in 1996. Differential Evolution differs from Evolution Strategies in the way it uses information from the current population to determine the perturbation brought to solutions (this can be seen as determining the direction of the search).

In this chapter, we propose to evolve programs with a continuous representation, using these two continuous evolution engines, Differential Evolution and the CMA Evolution Strategy. A program is represented by a float vector that is translated into a linear sequence of imperative instructions, *a la* LGP.

©2012 Fonlupt et al., licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
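As a minimal illustration of this translation step, one could bucket each float component into an instruction index. The instruction set and the uniform bucketing rule below are our own assumptions, not the chapter's actual mapping.

```python
# Hypothetical float-to-instruction translation: each component of the
# genotype vector, assumed to lie in [0, 1), selects one instruction.
INSTRUCTIONS = ["add", "sub", "mul", "div", "nop"]  # assumed instruction set

def translate(vector):
    """Translate a float vector into a linear sequence of instructions."""
    program = []
    for x in vector:
        x = min(max(x, 0.0), 1.0 - 1e-9)  # clamp out-of-range components
        program.append(INSTRUCTIONS[int(x * len(INSTRUCTIONS))])
    return program

print(translate([0.05, 0.45, 0.99]))  # → ['add', 'mul', 'nop']
```

The point of such a mapping is that any real-valued optimizer (DE, CMA-ES, ...) can then search the float space while fitness is evaluated on the decoded program.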

Features inspired by the LGP literature, and lacking in TreeDE, are also included. The tree-depth parameter from TreeDE is replaced by the maximum length of the programs to be evolved: this is a weaker constraint on the architecture of solutions, and it still has the benefit of limiting the well-known bloat problem (uncontrolled increase in solution size) that plagues standard GP.

The chapter is organized in the following way. The first section introduces the Differential Evolution and CMA Evolution Strategy schemes, focusing on their similarities and main differences. We then present our continuous schemes, LDEP and CMA-LEP, respectively based on DE and CMA-ES. We show that these schemes are easily implemented as plug-ins for DE and CMA-ES. In Section 4, we compare the performance of these two schemes, as well as traditional GP, over a range of benchmarks.

**2.2. Differential evolution**

*2.2.1. Principles*

This section only introduces the main Differential Evolution (DE) concepts; the interested reader may refer to [11] for a full presentation. DE is a population-based search algorithm that draws inspiration from the field of evolutionary computation, even if it is not usually viewed as a typical evolutionary algorithm.

DE is a real-valued, vector-based heuristic for minimizing possibly non-differentiable and non-linear continuous functions. Like most evolutionary schemes, DE can be viewed as a stochastic directed search method. But instead of randomly mating two individuals (like crossover in Genetic Algorithms), or generating random offspring from an evolved probability distribution (like PBIL [19] or CMA-ES [20]), DE takes the difference vector of two randomly chosen population vectors to perturb an existing vector. This perturbation is applied to every individual (vector) in the population. A newly perturbed vector is kept in the population only if it has a better fitness than its previous version.

DE is a search method working on a set or population *X* = (*X*1, *X*2,..., *XN*) of *N* solutions that are *d*-dimensional float vectors, trying to optimize a fitness (or objective) function *f* : **R**<sup>*d*</sup> → **R**, evaluated as *f*(*Xi*) for *i* ∈ [1, *N*].

DE can be roughly decomposed into an initialization phase and three very simple steps that are iterated:

1- initialization
2- mutation
3- crossover
4- selection
5- end if the termination criterion is fulfilled, else go to step 2

At the beginning of the algorithm, the initial population is randomly initialized and evaluated using the fitness function *f*. Then new potential individuals are created: a trial solution is built for every vector *Xj* in two steps, called mutation and crossover. A selection process then determines whether or not the trial solution replaces the vector *Xj* in the population.

*2.2.2. Mutation*

Let *t* denote the current iteration (or generation). For each vector *Xj*(*t*) of the population, a variant vector *Vj*(*t* + 1) = (*vj*1, *vj*2,..., *vjd*) is generated according to Eq. 1:
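The steps listed above can be sketched as follows. This is the canonical DE/rand/1/bin variant from [11]; the sphere fitness, the bounds, and the parameter values *F* and *CR* are our own example choices, not prescribed by the chapter.

```python
import random

def differential_evolution(f, d, n=20, F=0.5, CR=0.9, generations=100):
    # 1- initialization: random population of n d-dimensional float vectors
    pop = [[random.uniform(-5.0, 5.0) for _ in range(d)] for _ in range(n)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        for j in range(n):
            # 2- mutation: a base vector perturbed by a scaled difference
            # vector, v = x_r1 + F * (x_r2 - x_r3)
            r1, r2, r3 = random.sample([i for i in range(n) if i != j], 3)
            v = [pop[r1][k] + F * (pop[r2][k] - pop[r3][k]) for k in range(d)]
            # 3- crossover: mix the variant vector with the current vector Xj
            jrand = random.randrange(d)
            trial = [v[k] if (random.random() < CR or k == jrand) else pop[j][k]
                     for k in range(d)]
            # 4- selection: the trial replaces Xj only if its fitness is better
            ft = f(trial)
            if ft <= fit[j]:
                pop[j], fit[j] = trial, ft
    best = min(range(n), key=lambda i: fit[i])
    return fit[best], pop[best]

best_fit, best_x = differential_evolution(lambda x: sum(xi * xi for xi in x), d=3)
print(best_fit)  # close to 0 for the sphere function
```

Note the greedy, one-to-one selection: each trial competes only with the individual it was derived from, which is what the text means by a perturbed vector being "kept only if it has a better fitness than its previous version".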
