The following set was available for LDEP:

• arg1, the value to be pushed on to the stack (read-only argument)

• aux, the current value of the stack pointer

• constants 0, 1 and *MAX* (maximum depth of the stack, set to 10)

• arithmetic operators + and −

• indexed memory functions read and write. The write function takes two arguments, arg1 and arg2: it evaluates both and sets the indexed memory pointed to by arg1 to arg2 (i.e. stack[arg1] = arg2). It returns the original value of aux.

• functions to modify the stack pointer: inc\_aux to increment the stack pointer, dec\_aux to decrement it, and write\_aux to set the stack pointer to its argument and return the original value of aux.

Programs are coded in prefix notation, meaning that an operation such as (arg1 + MAX) is coded as + arg1 MAX. We imposed no restriction on each program's size, except that each vector has a maximum length of 100 (several times more than is sufficient to code any of the five operations needed to manipulate the stack).

*Algorithm and fitness function*

We used a slightly modified version of our continuous scheme, as the stack problem requires the simultaneous evolution of the five operations (push, pop, makenull, top, empty). An individual is composed of 5 vectors, one per operation. Mutation and crossover are only performed between vectors of the same type (i.e. vectors evolving the push operation, for example).

In his original work, Langdon chose a population of 1,000 individuals evolved for 101 generations. In the DE case, it is known from experience that large populations are usually inadequate, so we fixed a population of 10 individuals with 10,000 generations for LDEP, amounting to about the same number of evaluations.

We used the same fitness function as defined by Langdon. It consists of 4 test sequences, each composed of 40 stack operations. As explained in the previous section, the makenull and push operations do not return any value; they can only be tested indirectly, by checking that the other operations perform correctly.

*Results*

In Langdon's experiments, 4 runs out of 60 produced successful individuals (i.e. a fully operational stack). We obtained the same success ratio with LDEP: 4 of the first 60 runs yielded perfect solutions. Extending the number of runs, LDEP evolved 6 perfect solutions out of 100 runs, providing a convincing proof of feasibility. Regarding CMA-LEP, results are less convincing, since only one run out of 100 was able to successfully evolve a stack.

An example of a successful solution is given in table 7, with both the raw evolved code and a simplified version in which redundant code has been removed.

**5. Conclusions**

This chapter explores evolutionary continuous optimization engines applied to automatic programming. We work with Differential Evolution (LDEP) and CMA-Evolution Strategy (CMA-LEP), translating the continuous representation of individuals into linear imperative programs. Unlike the TreeDE heuristic, our schemes support float constants (e.g. in symbolic regression problems).
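To make this translation concrete, the sketch below decodes a vector of floats into a linear sequence of instructions. Note that the instruction names and the floor-plus-modulo decoding rule are illustrative assumptions, not the exact LDEP mapping.

```python
# Illustrative sketch only: the instruction set and the decoding rule
# (floor of the absolute value, modulo the set size) are assumptions
# for demonstration, not the chapter's exact LDEP scheme.
import math

# A toy linear-imperative instruction set (hypothetical names).
INSTRUCTIONS = ["ADD", "SUB", "MUL", "LOAD_X", "LOAD_CONST"]

def decode(vector):
    """Map each float component to one instruction of a linear program."""
    program = []
    for x in vector:
        idx = int(math.floor(abs(x))) % len(INSTRUCTIONS)
        program.append(INSTRUCTIONS[idx])
    return program

print(decode([0.3, 1.7, 4.2, 8.9]))
# -> ['ADD', 'SUB', 'LOAD_CONST', 'LOAD_X']
```

Under such a decoding, mutation and crossover operate on plain float vectors, so any continuous optimizer (DE, CMA-ES) can drive the search while fitness is evaluated on the decoded program.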

Comparisons with GP confirm that LDEP is a promising optimization engine for automatic programming. In the most realistic case of regression problems, when using constants, steady state LDEP slightly outperforms standard GP on 5 out of 6 problems. On the artificial ant problem, the leading heuristic depends on the number of steps: for the 400-step version GP is the clear winner, while for 600 steps generational LDEP yields the best average fitness. LDEP improves on the TreeDE results for both versions of the ant problem, without needing fine-tuning of the solutions' tree depth.

For both regression and the artificial ant, CMA-LEP performs poorly with the same representation of solutions as LDEP. This is not really surprising, since the problems we tackle lie clearly outside the domain targeted by the CMA-ES heuristic that drives evolution. However, the same is true for DE, which still produces interesting solutions; this points to a fundamental difference in behavior between the two heuristics. We suspect that CMA-ES's lack of elitism may be an explanation. It also suggests an inherent robustness of the DE method on fitness landscapes that are likely more chaotic than the usual continuous benchmarks.

The promising results of LDEP on the artificial ant and stack problems are a strong incentive to deepen the exploration of this heuristic, and many interesting questions remain open. In the early days of GP, experiments showed that the probability of crossover had to be set differently for internal and terminal nodes: can LDEP be improved in similar ways? Note that in our experiments the individual vector components take their values in the range (−∞, +∞), as required by the standard CMA-ES algorithm. It could be interesting to experiment with DE-based algorithms using a reduced range of vector component values, for example [−1.0, 1.0], which would require modifying the mapping of constant indices.
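One possible way to adapt the mapping for a reduced range is to partition [−1.0, 1.0] into as many equal-width bins as there are constants. This is a sketch of that idea under our own assumptions, not a scheme the chapter prescribes:

```python
# Sketch of an assumed index mapping for components restricted to [-1.0, 1.0]:
# rescale to [0.0, 1.0], then pick one of n equal-width bins.

def index_from_unit_range(x, n_items):
    """Map x in [-1.0, 1.0] to an integer index in [0, n_items - 1]."""
    t = (x + 1.0) / 2.0                        # rescale to [0.0, 1.0]
    return min(int(t * n_items), n_items - 1)  # clamp the x == 1.0 edge case

constants = [0, 1, 10]  # e.g. 0, 1 and MAX from the stack experiments
print(index_from_unit_range(-1.0, len(constants)))  # -> 0 (first constant)
print(index_from_unit_range(1.0, len(constants)))   # -> 2 (last constant)
```

Equal-width bins keep every constant equally reachable by a uniform mutation, which seems a reasonable default; other partitions (e.g. biased toward frequently used constants) could also be tried.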
