**Evolutionary Multi-Objective Algorithms**

Aurora Torres, Dolores Torres, Sergio Enriquez, Eunice Ponce de León and Elva Díaz *University of Aguascalientes, México*

### **1. Introduction**

The versatility that the genetic algorithm (GA) has proven to have for solving different problems has made it the first choice of researchers dealing with new challenges. Currently, GAs are the best known evolutionary algorithms, because of their intuitive principle of operation and their relatively simple implementation; besides, they reflect the philosophy of evolutionary computation in an easy and quick way.

As time goes by, human beings become more sophisticated. We demand ever better performance from equipment and techniques in the solution of ever more complex problems, forcing problem-solvers to use non-exhaustive solution techniques, even though this can mean a loss of accuracy. Non-conventional techniques provide a solution in a suitable time when other techniques can be extraordinarily slow. Evolutionary algorithms are metaheuristics inspired by Darwin's theory of the survival of the fittest. A feature shared by these algorithms is that they are population-based: each population represents a group of possible solutions to the problem posed, and only the individuals with the best performance transcend to the next generation. At the end of the evolutionary process, the population is formed only by the best individuals. In general, metaheuristics have shown their efficiency in solving complex optimization problems with one goal; when one has to work simultaneously with more than one target, and therefore to determine not a single answer but a set of them, population-based metaheuristics like evolutionary algorithms seem to be the most natural technique for addressing this type of optimization.

This chapter presents the theoretical description of the multi-objective optimization problem and establishes some important concepts. Later, the best known algorithms that were initially used for solving this problem are presented. Among these algorithms, the GA and some modifications of it stand out. The chapter also briefly discusses the estimation of distribution algorithm (EDA), which was likewise inspired by the GA. Subsequently, the graph drawing problem is established and solved. This problem, like many others in real life, is inherently multi-objective. The proposed solution uses a hybrid EDA combined with a hill-climbing algorithm, which handled three simultaneous objectives: minimizing the number of edge crossings in the graph, minimizing the graph area (the total space used by the graph has to be as small as possible), and minimizing the graph aspect ratio (the drawing has to fit a perfect square visualized area). This section includes the description of the used approach and a group of experimental results, as well as some conclusions and future work. Finally, the last section of this chapter is a brief reflection on the future of multi-objective optimization research. In it, we capture some concerns and issues that are relevant to the development of this area.

### **2. Multi-objective optimization**

Optimization, in both mathematics and computing, refers to the determination of one or more feasible solutions that correspond to an extreme value (maximum or minimum) of one or more objective functions. Finding the extreme solutions of one or more objective functions applies to a wide range of practical situations, such as minimizing the manufacturing cost of a product, maximizing profit, reducing uncertainty, and so on. The principles and methods of optimization are used to solve quantitative problems in disciplines such as physics, biology, engineering, economics, and others. The simplest optimization problems involve functions of a single variable and can be solved by differential calculus. When working with optimization, two main types can be found: mono-objective optimization and multi-objective optimization (MOO), depending on the number of objective functions. The optimization can be subject to one or several constraints; constraints are conditions that limit the values the variables can take. This area has been approached with different techniques and methods.

Probably, the main difficulty of modelling mono-objective problems consists in obtaining just one equation for the complete problem. This stage can be too complicated to reach (Collette & Siarry, 2002). Given the difficulty of finding a single equation for a problem where many factors have influence, multi-objective optimization offers a very important advantage: it lets us use several equations for reaching more than one objective. Nevertheless, this property adds complexity to the model. As the complexity of problems increases, it is necessary to use new tools; for example, linear programming was created to solve optimization problems that involve two or more input variables.

#### **2.1 Global optimization**

Global optimization is the process of finding the global maximum or minimum (depending on the problem to be solved) inside a search space $S$. Formally, it can be defined as (Bäck, 1996):

Definition 1. Given a function $f: \Omega \subseteq S \subseteq \mathbb{R}^n \to \mathbb{R}$, $\Omega \neq \emptyset$, for $\vec{x}^* \in \Omega$ the value $f^* = f(\vec{x}^*) > -\infty$ is named the global minimum if and only if

$$\forall \vec{x} \in \Omega : f(\vec{x}^*) \le f(\vec{x}) \tag{1}$$

This way, $\vec{x}^*$ is the global minimum solution, $f$ is the objective function, and the set $\Omega$ is the feasible region inside the set $S$. The problem of determining the global minimum is called the "*global optimization problem*". When the problem to optimize is mono-objective, the solution is unique. This is not the case for multi-objective optimization problems (MOOP), which usually yield a group of solution vectors that satisfy all the objectives. The decision maker (the human in charge of this task) then selects one or more of those vectors as acceptable solutions of the problem according to their own point of view (Coello et al., 2002).
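To make Definition 1 concrete, the global minimum over a finite sample of $\Omega$ can be approximated by exhaustive evaluation. A minimal sketch in Python, where the quadratic function and the discretized feasible region are our own illustrative choices:

```python
import numpy as np

def global_minimum(f, grid):
    """Return the sampled point x* with f(x*) <= f(x) for every x in the grid."""
    values = np.array([f(x) for x in grid])
    best = int(np.argmin(values))
    return grid[best], values[best]

# Example: f(x) = x^2 + 2 on a discretized Omega = [-5, 5]
grid = np.linspace(-5.0, 5.0, 1001)
x_star, f_star = global_minimum(lambda x: x**2 + 2.0, grid)
print(x_star, f_star)  # ~0.0, ~2.0
```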


#### **2.2 General multi-objective optimization problem**

The MOOP, also called the multi-criteria, multi-performance or vector optimization problem, can be defined (in words) as the problem of finding a vector of decision variables which satisfies the constraints and optimizes a vector function whose elements represent the objective functions (Osyczka, 1985). These functions form a mathematical description of performance criteria which are usually in conflict with each other. Hence, the term "*optimize*" means finding a solution which would give values of all the objective functions acceptable to the decision maker (Coello, 2001).

#### **2.2.1 Decision variables**

Decision variables are the numeric quantities for which values are to be chosen in an optimization problem. These variables are denoted $x_j$, where $j = 1, 2, \dots, n$.

The vector $\vec{x}$ of $n$ decision variables is represented by:

$$
\vec{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} \tag{2}
$$

#### **2.2.2 Constraints**

Constraints imposed by the nature and environment of the studied case will be found in most optimization problems. These conditions can be physical limitations, space or resistance restrictions, or restrictions on the time for the realization of a task, among others. A solution is considered acceptable if it at least satisfies these constraints. The constraints represent dependences between the parameters and the decision variables of the optimization problem. Two different types of constraints can be identified; the inequality constraints:

$$g_i(\vec{x}) \le 0 \qquad i = 1, 2, \dots, m \tag{3}$$

and the equality constraints:

$$h_i(\vec{x}) = 0 \qquad i = 1, 2, \dots, p \tag{4}$$

It is necessary to highlight that $p$ should be smaller than $n$, because the number of equality constraints should be smaller than the number of decision variables: if $p \ge n$, the problem is known as over-constrained (Ramírez, 2007), meaning that there would be at least as many equations as unknown variables. These constraints can be explicit (described by an algebraic expression) or implicit (in which case an algorithm or method must exist to compute the constraints for any vector $\vec{x}$).

#### **2.2.3 Objective functions**

To know how good a solution is, it is necessary to have a criterion to evaluate it. This measure should be expressed as an algebraic function of the decision variables, and it is known as the objective function. It is possible that researchers do not have this mathematical model, so at least some mechanism is needed to determine the quality of the solutions, which can vary depending on the problem.

In many real-world problems, the objective functions are in conflict with each other, and even within the same problem some of them may have to be minimized while the remaining ones have to be maximized. The vector of objective functions $\vec{f}(\vec{x})$ is defined as follows:

$$
\vec{f}(\vec{x}) = \begin{bmatrix} f_1(\vec{x}) \\ f_2(\vec{x}) \\ \vdots \\ f_k(\vec{x}) \end{bmatrix} \tag{5}
$$

The set $\mathbb{R}^n$, where $\mathbb{R}$ denotes the real numbers, is called the Euclidean space of $n$ dimensions. For the multi-objective optimization problem, two Euclidean spaces are considered: that of the decision variables and that of the objective functions. Each point in the first space represents a solution and can be mapped into the space of the objective functions, where the quality of each solution can be determined. The general MOOP can be formally defined as:

Definition 2. Find the vector $\vec{x}^* = [x_1^*, x_2^*, \dots, x_n^*]^T$ which will satisfy the $m$ inequality constraints:

$$g_i(\vec{x}) \le 0 \qquad i = 1, 2, \dots, m \tag{6}$$

the p equality constraints

$$h_i(\vec{x}) = 0 \qquad i = 1, 2, \dots, p \tag{7}$$

and will optimize the vector function

$$\vec{f}(\vec{x}) = [f_1(\vec{x}), f_2(\vec{x}), \dots, f_k(\vec{x})]^T \tag{8}$$

In other words, the MOOP consists of determining the set of values of the decision variables $x_1^*, x_2^*, \dots, x_n^*$ which satisfy equations (6) and (7) and simultaneously optimize (8). The constraints given in (6) and (7) define the feasible region $\Omega$, and any point $\vec{x} \in \Omega$ is a feasible solution. The vector of functions $\vec{f}(\vec{x})$ maps the set of feasible solutions $\Omega$ into the set of feasible objective vectors. The $k$ objective functions in the vector $\vec{f}(\vec{x})$ represent criteria that can be expressed in different units. The restrictions $g_i(\vec{x})$ and $h_i(\vec{x})$ represent constraints applied to the decision variables. The vector $\vec{x}^*$ represents the set of optimal solutions.
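To make Definition 2 concrete, the sketch below encodes a small MOOP in Python; the two conflicting toy objectives and the single inequality constraint are invented for illustration and are not taken from the chapter:

```python
import numpy as np

# Two toy objectives in conflict: f1 pulls x toward the origin,
# f2 pulls it toward (1, 1).
objectives = [
    lambda x: x[0]**2 + x[1]**2,              # f1
    lambda x: (x[0] - 1)**2 + (x[1] - 1)**2,  # f2
]
inequality = [lambda x: x[0] + x[1] - 2.0]    # g(x) <= 0
equality = []                                 # h(x) = 0 (none in this toy)

def is_feasible(x, tol=1e-9):
    """x lies in Omega if it satisfies every g_i(x) <= 0 and h_i(x) = 0."""
    return (all(g(x) <= 0 for g in inequality)
            and all(abs(h(x)) <= tol for h in equality))

def evaluate(x):
    """Map a point of the decision space into the objective space (eq. 8)."""
    return np.array([f(x) for f in objectives])

print(is_feasible([0.5, 0.5]), evaluate([0.5, 0.5]))  # True [0.5 0.5]
```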

#### **2.3 Types of multi-objective optimization problems**

In the area of multi-objective problems, three variants can be found: the first consists of minimizing the whole set of objective functions, the second consists of maximizing them, and the third is a mixture of minimization and maximization of the objective functions.

In the third case, it is very common to transform all the functions into their minimization (or maximization) version, as preferred. For this, the next equation can be used:

$$\max(f_i(\vec{x})) = -\min(-f_i(\vec{x})) \tag{9}$$

In the same way, the inequality constraints (6) can be transformed by multiplying them by -1 and changing the sign of the inequality as follows:

$$-g_i(\vec{x}) \ge 0 \qquad i = 1, 2, \dots, m \tag{10}$$

#### **2.4 The ideal vector**


The ideal vector $\vec{f}^{\,0}$ is formed as $\vec{f}^{\,0} = [f_1^0, f_2^0, \dots, f_k^0]^T$, where $f_i^0$ denotes the optimum of the $i$-th objective function. If the objectives were not in conflict, a unique point $\vec{x}$ (in the space of the decision variables) optimizing all of them at once would exist, but this situation is very exceptional in the real world.
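As a sketch of how the ideal vector can be estimated in practice (reusing invented toy objectives; scipy's general-purpose minimize is only one of many ways to obtain each single-objective optimum):

```python
import numpy as np
from scipy.optimize import minimize

def ideal_vector(objectives, x0):
    """Optimize each objective independently; the collected optima
    f_1^0, ..., f_k^0 form the ideal vector (usually unattainable)."""
    return np.array([minimize(f, x0).fun for f in objectives])

f1 = lambda x: x[0]**2 + x[1]**2
f2 = lambda x: (x[0] - 1)**2 + (x[1] - 1)**2
print(np.round(ideal_vector([f1, f2], x0=np.zeros(2)), 6))  # ~[0. 0.]
```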

The most accepted notion of optimum in the multi-objective environment was formulated by Francis Ysidro Edgeworth in 1881 and later generalized by Vilfredo Pareto in 1896.

#### **2.5 Pareto optimality**

The concept of the Pareto optimum (also called Pareto efficiency, in honour of its discoverer, Vilfredo Pareto) is a concept from economics, with applications in that discipline and in the social sciences and engineering.

According to Pareto, a specific situation X is superior or preferable to another situation Y when the move from Y to X implies an improvement for all the members of society, or an improvement for some without the others being harmed. In other words, in economics and political economy, the concept of the "*Pareto optimum*" simply indicates a situation in which nobody's situation can be improved without making somebody else's worse.

As already said, the concept was born in economics, but its scope covers any situation with more than one objective to optimize.

#### **Pareto optimality**

We say that a vector of decision variables $\vec{x}^* \in \Omega$ is Pareto optimal if there is no other $\vec{x} \in \Omega$ such that $f_i(\vec{x}) \le f_i(\vec{x}^*)$ for all $i = 1, \dots, k$ and $f_j(\vec{x}) < f_j(\vec{x}^*)$ for at least one $j$. In other words, this definition establishes that $\vec{x}^*$ is Pareto optimal if there exists no feasible vector of decision variables $\vec{x} \in \Omega$ which would decrease some criterion without causing a simultaneous increase in at least one other criterion. Unfortunately, this concept almost always gives not a single solution, but rather a set of solutions called the Pareto optimal set. The vectors $\vec{x}^*$ corresponding to the solutions included in the Pareto optimal set are called non-dominated. The plot of the objective functions whose non-dominated vectors are in the Pareto optimal set is called the Pareto front (Coello, 2011).

#### **2.6 Pareto dominance**

Formally, it is said that a vector $\vec{u} = [u_1, u_2, \dots, u_k]^T$ dominates a vector $\vec{v} = [v_1, v_2, \dots, v_k]^T$ if and only if $\vec{u}$ is partially less than $\vec{v}$. In other words:

$$\forall i \in \{1, 2, \dots, k\} : u_i \le v_i \;\land\; \exists i \in \{1, 2, \dots, k\} : u_i < v_i \tag{11}$$

The Pareto optimal set $\mathcal{P}^*$ is therefore defined as:

$$\mathcal{P}^* = \{ \vec{x} \in \Omega \mid \neg\exists\, \vec{y} \in \Omega : \vec{f}(\vec{y}) \preceq \vec{f}(\vec{x}) \} \tag{12}$$

and the Pareto front as:

$$\mathcal{PF}^* = \left\{ \vec{f}(\vec{x}) = [f_1(\vec{x}), f_2(\vec{x}), \dots, f_k(\vec{x})]^T \mid \vec{x} \in \mathcal{P}^* \right\} \tag{13}$$

where $\vec{f}(\vec{y}) \preceq \vec{f}(\vec{x})$ denotes that $\vec{f}(\vec{y})$ dominates $\vec{f}(\vec{x})$.
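The dominance relation of equation (11) and the non-dominated filtering implicit in (12) and (13) translate directly into code. A minimal sketch, assuming every objective is to be minimized:

```python
import numpy as np

def dominates(u, v):
    """u dominates v iff u <= v componentwise and u < v somewhere (eq. 11)."""
    u, v = np.asarray(u), np.asarray(v)
    return bool(np.all(u <= v) and np.any(u < v))

def pareto_front(points):
    """Keep the objective vectors dominated by no other vector (eqs. 12-13)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

pts = [(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)]
print(pareto_front(pts))  # [(1, 5), (2, 2), (4, 1)]; (3, 3) and (5, 5) are dominated
```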


### **3. Multi-objective optimization algorithms**

This section will discuss the first multi-objective optimization algorithms (MOAs) used, moving from those that handle the problem as if it were a single-objective problem to those that make use of EDAs. EDAs are particularly important in this chapter because, towards its end, the graph drawing problem is addressed with this type of metaheuristic.

The field of both mono-objective and multi-objective optimization has benefited from a significant number of classical techniques, but many new techniques have recently been added. A particularly successful approach is the application of evolutionary computation. Because this chapter deals with the solution of multi-objective problems with heuristic tools, we start by describing the general operation of an evolutionary algorithm.

An evolutionary algorithm begins with the creation (initialization) of a population of individuals (possible solutions to the problem), "*Pt*", usually generated by a random procedure or driven by knowledge of the problem. Thereafter, the algorithm performs an iterative process that evaluates the quality of each individual in the population and transforms the current population through certain operators. The most common operators are selection, crossover, mutation and elitism. The iterative process stops when one or more predetermined criteria are met. Figure 5 shows the general procedure of an evolutionary algorithm. In this figure each apostrophe represents a new transformation of the current population, while "*t*" indicates the generation number.

    An Evolutionary Optimization Procedure
    t = 0;
    Initialization(Pt);
    do
        Evaluation(Pt);
        Pt' = Selection(Pt);
        Pt'' = Variation(Pt');
        Pt+1 = Elitism(Pt, Pt'');
    while (Termination(Pt, Pt+1));

Fig. 5. General Evolutionary Optimization Procedure (Deb, 2008)

MOEAs:
• Approaches that use aggregating functions: weighted sum approach, goal programming, goal attainment, ε-constraint method.
• Other approaches not based on the notion of Pareto optimum: VEGA, lexicographic ordering, use of game theory, and other approaches (using gender to identify objectives, weighted min-max approach, a non-generational genetic algorithm).
• Pareto based approaches: multiple objective genetic algorithm, non-dominated sorting genetic algorithm, niched Pareto genetic algorithm.

Fig. 6. Classification of Multi-Objective Evolutionary Algorithms
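As a sketch only (the tournament selection, Gaussian mutation and single-elite replacement below are illustrative assumptions, not the chapter's prescription), the loop of Figure 5 can be written in Python as:

```python
import random

def evolve(fitness, init, n_pop=50, n_gen=100):
    """Generic EA loop following Fig. 5: evaluate, select, vary, elitism."""
    P = [init() for _ in range(n_pop)]                    # Initialization(Pt)
    for t in range(n_gen):
        elite = min(P, key=fitness)                       # Evaluation(Pt)
        parents = [min(random.sample(P, 3), key=fitness)  # Pt'  = Selection(Pt)
                   for _ in range(n_pop)]
        offspring = [p + random.gauss(0.0, 0.5)           # Pt'' = Variation(Pt')
                     for p in parents]
        P = [elite] + offspring[:n_pop - 1]               # Pt+1 = Elitism(Pt, Pt'')
    return min(P, key=fitness)

# Toy run: minimizing f(x) = x^2 over the reals
print(evolve(fitness=lambda x: x * x,
             init=lambda: random.uniform(-10.0, 10.0)))
```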

Even though the evolutionary multi-objective optimization field is very young (less than twenty years old), it is already considered a well-established research and application area; according to Deb (Deb, 2008), there are hundreds of doctoral theses on this topic, and dozens of books devoted to it too.

Some of the reasons why evolutionary algorithms (EAs) have become so popular are:

1. EAs do not require any derivative information
2. EAs are relatively simple to implement
3. EAs are flexible and have a wide-spread applicability (Deb, 2008)
Marler and Arora (Marler and Arora, 2004) propose a general classification of all multi-objective optimization methods according to the decision maker's (DM) intervention. These researchers distinguish the following categories:

• Methods with a priori articulation of preferences
• Methods with a posteriori articulation of preferences
• Methods with no articulation of preferences
The first category focuses on those methods where the user (DM) can specify certain preferences from the beginning of the process, which may be articulated in terms of goals, levels of importance of the objective functions, etc. The second category refers to the group of methods that begin the search for the Pareto set without additional information but, as the search process progresses, have to be assisted by the introduction of some preferences provided by the DM. Finally, when the DM is not able to define specifically what he prefers, it is necessary to employ methods that do not require any articulation of preferences. These methods are those that make up the third category of Marler and Arora. For more details see (Marler and Arora, 2004).

Speaking more specifically about multi-objective evolutionary algorithms (MOEAs), we can find another widely accepted classification. This classification groups them as follows:

• Those algorithms that do not incorporate the concept of Pareto optimality in their selection mechanism.
• Those algorithms that rank the population according to whether an individual is dominated or not.
Considering this last classification and the one used by Coello (Coello, 1999), the main multi-objective evolutionary algorithms can be grouped in the way shown in Figure 6.

In this chapter we will mainly use the latter classification, because our interest is in those techniques that come from evolutionary computation. Since explaining all the algorithms of the previous classification would be very extensive, we will focus on discussing only the most widely used of them.

#### **3.1 Approaches that use aggregative functions**

The most commonly used methods for solving multi-objective problems, also called "*basic methods*" (Miettinen, 2008), are those that handle the problem as if it were a single-objective problem. These methods transform the problem so that it can be solved by optimizing a single objective function. The tendency to transform a multi-objective problem into a single-objective one responds to the fact that single-objective optimization techniques are better known than those based on several functions. The intuitive nature of these techniques, besides the fact that GAs use a scalar fitness, makes aggregative functions the first option for solving multi-objective problems. Aggregative functions are linear or nonlinear combinations of all the objectives into a single one. Although there are some drawbacks in using arithmetic combinations of objectives, these techniques have been used extensively since the late sixties, when Rosenberg published his work (Rosenberg, 1967). Even though Rosenberg did not use a multi-objective technique, his work showed that it was feasible to use evolutionary search techniques to handle multi-objective problems. The two techniques that best represent this kind of approach are the weighted sum method and the ε-constraint method.

Readers interested in the techniques in this group can consult "*A Comprehensive Survey of Evolutionary-Based Multi-objective Techniques*" (Coello, 1999).

#### **3.1.1 Weighted sum method**

The goal of this method is constituted by the sum of all the objectives of the problem, using a different coefficient for each one. The coefficients represent the level of importance assigned to each of the objectives. So the optimization problem becomes a scalar optimization problem, as follows:

$$\text{minimize } \sum_{i=1}^{k} w_i f_i(\vec{x}) \tag{14}$$

where $w_i \ge 0$ is the weighting coefficient that represents the relative importance of the $i$-th objective. It is usually assumed that

$$\sum_{i=1}^{k} w_i = 1 \tag{15}$$

The normalization above takes place because the results obtained by this technique may show significant variations for small changes in the coefficients, and it avoids different magnitudes confusing the method. Very often it is necessary to perform a set of experiments before determining the best combination of weights. When the decision maker has some a priori knowledge about the problem, it is feasible and beneficial to introduce this information into the modelling. At the end of the process, it is the decision maker who should choose the most appropriate solution according to his experience and intuition. There are several variations of this method, for example, adding constant multipliers to scale the objectives in a better way. This was the first method used for the generation of non-inferior solutions for multi-objective optimization (Coello, 1998), perhaps because it was implied by Kuhn and Tucker in their seminal work on numerical optimization (Kuhn and Tucker, 1951). Computationally speaking, this method is efficient, and it has proven able to generate non-dominated solutions which are often used as a starting point for other techniques; nevertheless, its main drawback is the enormous difficulty of determining the appropriate weights when there is no information about the problem. In that case, the literature suggests using simple linear combinations of the objectives and adjusting the weights iteratively. In general this technique is not suitable in the presence of non-convex search spaces (Ritzel et al., 1994), because the alteration of the weights can produce jumps between several vertices, leaving intermediate solutions undetected.
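A minimal sketch of the weighted sum scalarization of equations (14) and (15); the toy objectives, the weight sweep and the use of scipy's minimize are our illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

f1 = lambda x: x[0]**2 + x[1]**2              # toy objective 1
f2 = lambda x: (x[0] - 2)**2 + (x[1] - 2)**2  # toy objective 2, in conflict

def weighted_sum(w, x0=np.zeros(2)):
    """Minimize w1*f1(x) + w2*f2(x) for one weight vector (eq. 14)."""
    return minimize(lambda x: w[0] * f1(x) + w[1] * f2(x), x0).x

# Sweep the weights over the simplex w1 + w2 = 1 (eq. 15) to trace
# candidate non-dominated points (reliable on convex fronts only).
for w1 in np.linspace(0.0, 1.0, 5):
    x = weighted_sum([w1, 1.0 - w1])
    print(f"w1={w1:.2f}  x={np.round(x, 3)}  f1={f1(x):.3f}  f2={f2(x):.3f}")
```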

#### **3.1.2 ε-constraint method**


The operating principle of this method is to optimize only one objective at a time, leaving the rest as constraints bounded by certain permitted levels $\varepsilon_j$. The objective that is optimized, $f_l$, is the one considered the principal or most important one. The $\varepsilon_j$ levels are then altered to generate the entire Pareto optimal set. This method can be formulated as follows:

$$\text{minimize } f\_l(\vec{x}) \tag{16}$$

$$\text{subject to } f\_j(\vec{x}) \le \varepsilon\_j \text{ for all } j = 1, \dots, k, j \ne l \tag{17}$$

where $l \in \{1, \dots, k\}$ and the $\varepsilon_j$ are upper bounds for the objectives ($j \ne l$). The search stops when the decision maker finds a satisfactory solution. This method was introduced by Haimes et al. (Haimes et al., 1971). It is possible that this procedure has to be repeated for different values of the index $l$. In order to obtain a set of appropriate values of $\varepsilon_j$, it is very common to use independent GAs or other techniques to optimize each objective function. The main weakness of this method is its huge consumption of time; however, its relative ease has made it very popular, especially in the GA community.
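A hedged sketch of equations (16) and (17), again with invented toy objectives; scipy expects inequality constraints in the form fun(x) >= 0, hence the sign flip:

```python
import numpy as np
from scipy.optimize import minimize

f1 = lambda x: x[0]**2 + x[1]**2              # objective kept as the target
f2 = lambda x: (x[0] - 2)**2 + (x[1] - 2)**2  # objective turned into a constraint

def eps_constraint(eps, x0=np.zeros(2)):
    """Minimize f1 subject to f2(x) <= eps (eqs. 16-17)."""
    cons = [{"type": "ineq", "fun": lambda x: eps - f2(x)}]  # eps - f2(x) >= 0
    return minimize(f1, x0, constraints=cons).x

# Altering the permitted level eps traces different Pareto optimal points.
for eps in (8.0, 4.0, 2.0, 1.0):
    x = eps_constraint(eps)
    print(f"eps={eps}  x={np.round(x, 3)}  f1={f1(x):.3f}  f2={f2(x):.3f}")
```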

#### **3.2 Other approaches not based on the notion of Pareto optimum**

Although the techniques mentioned in the previous sub-section have proven to be useful for solving multi-objective optimization problems, we must not forget that they do it as if it were a problem with a single objective. The search for other alternatives resulted in the development of the techniques in the second category of Figure 6. Techniques in this category introduced two very important elements: the use of populations and the special handling of the objectives. To illustrate this group of techniques, the Vector Evaluated Genetic Algorithm (VEGA) and lexicographic ordering are discussed. VEGA is important because it was the first GA used as a tool for solving the MOOP. On the other hand, during the decade of the 80's and early 90's, MOEAs were characterized by the use of aggregative techniques (already discussed), target vector optimization and lexicographic ordering; so, it is illustrative to review this last one as well.

#### **3.2.1 Vector Evaluated Genetic Algorithm (VEGA)**

The first multi-objective genetic algorithm was implemented by Schaffer (Schaffer, 1984), and it was inspired by the "*simple GA*" (SGA). After making some modifications to the first implementation, Schaffer named it the "*Vector Evaluated Genetic Algorithm*" (Schaffer, 1985). Schaffer proposed the creation of one sub-population per objective function of the problem at each generation of the algorithm. So, assuming a population size of *N* for a problem with *k* objective functions, *k* subsets (sub-populations) of size *N/k* are generated; then the *k* sub-populations are shuffled together to obtain the new population of size *N*. Finally, the GA applies the classical operators. Figure 7 shows the selection scheme of VEGA.
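A sketch of VEGA's selection step; Schaffer used proportional selection within each sub-population, which is simplified here to truncation per objective, an assumption made for brevity:

```python
import random

def vega_selection(population, objectives):
    """Build k sub-populations of size N/k, each selected according to one
    objective, then shuffle them into a single mating pool of size N."""
    N, k = len(population), len(objectives)
    pool = []
    for f in objectives:
        ranked = sorted(population, key=f)  # best individuals for this objective
        pool.extend(ranked[: N // k])       # one sub-population of size N/k
    random.shuffle(pool)                    # mix before crossover and mutation
    return pool

pop = [(random.uniform(0, 3), random.uniform(0, 3)) for _ in range(30)]
objs = [lambda x: x[0]**2 + x[1]**2,
        lambda x: (x[0] - 2)**2 + (x[1] - 2)**2]
print(len(vega_selection(pop, objs)))  # 30, ready for the classical operators
```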

The main weakness of this algorithm comes from the fact that it promotes the conservation of solutions with very good performance in only one of the k objectives of the problem, eliminating the solutions that show what Schaffer called "*middling*" performance (acceptable performance in all objective functions). This problem is known in genetics as "*speciation*", and it is obviously undesirable in solving multi-objective problems because it goes against the goal of finding compromise solutions.

In more general terms, the behaviour of this method is comparable to a linear combination of the objectives where the weights depend on the distribution of the population at each generation, as demonstrated by Richardson et al. (Richardson et al., 1989). Therefore this technique does not have the ability to produce Pareto optimal solutions in the presence of non-convex search spaces.

Fig. 7. Scheme of VEGA selection

#### **3.2.2 Lexicographic ordering**

This method, which is commonly grouped with the methods that articulate preferences a priori according to Marler and Arora's classification (Marler and Arora, 2004), or with the so-called a priori methods (Miettinen, 2008), begins with the arrangement of all the objective functions according to their relative importance. Subsequently, the most important objective function is minimized subject to the original constraints. Then, a similar problem is formulated with the second most important objective function and an extra restriction. This procedure is repeated until all k objectives have been considered. The first problem to be solved, assuming that f1 is the most important objective, has the following form:

$$\text{minimize } f\_1(\vec{x}) \tag{18}$$

$$\text{subject to: } g_j(\vec{x}) \le 0 \qquad j = 1, 2, \dots, m \tag{19}$$

By solving (18) and (19), we obtain $\vec{x}_1^*$ and $f_1^* = f_1(\vec{x}_1^*)$; then, the next problem is formulated:

$$\text{minimize } f\_2(\vec{\mathfrak{x}}) \tag{20}$$

$$\text{subject to: } g_j(\vec{x}) \le 0 \qquad j = 1, 2, \dots, m \tag{21}$$

$$f_1(\vec{x}) = f_1^* \tag{22}$$

Once the problem in (20), (21) and (22) is solved, $\vec{x}_2^*$ and $f_2^* = f_2(\vec{x}_2^*)$ are obtained. This procedure is repeated over and over until all the objective functions have been taken into account. The final solution obtained, $\vec{x}_k^*$, is considered the best solution of the problem.
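A sketch of the lexicographic procedure of equations (18)-(22) with invented toy objectives. The equality constraint of (22) is enforced here as $f_i(\vec{x}) \le f_i^* + \text{tol}$, a numerically friendlier relaxation we chose, valid because $f_i^*$ is already the smallest value attainable under the earlier constraints:

```python
import numpy as np
from scipy.optimize import minimize

def lexicographic(objectives, x0, tol=1e-6):
    """Minimize the objectives in order of importance; each solved stage
    freezes its optimal value as a constraint for the next stage."""
    cons, x = [], np.asarray(x0, dtype=float)
    for f in objectives:
        res = minimize(f, x, constraints=cons)
        x, f_star = res.x, res.fun
        # f(x) <= f* + tol stands in for the equality constraint (22)
        cons = cons + [{"type": "ineq",
                        "fun": lambda z, f=f, s=f_star: s + tol - f(z)}]
    return x

f1 = lambda x: (x[0] - 1)**2            # most important objective
f2 = lambda x: (x[0] - 2)**2 + x[1]**2  # second most important
print(np.round(lexicographic([f1, f2], x0=[0.0, 1.0]), 3))  # close to [1, 0]
```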

The greatest strength of this method lies in its simplicity, and its greatest weakness comes from the strong dependence of its performance on the order of importance chosen for the objective functions. Because this method takes into account one objective at a time, it tends to promote only certain goals when there are others in the problem, making the process converge to a particular area of the Pareto front.

### **3.3 Pareto based approaches**

As the reader may have observed, all techniques discussed so far produce Pareto front members only implicitly, because they do not use the Pareto-optimality concept as a search mechanism; nevertheless, there is also a set of methods that employ the definition of Pareto-optimality to conduct the search for solutions. In 1989 Goldberg suggested the use of a fitness function based on the concept of Pareto-optimality to deal with the problem of speciation identified by Schaffer. Goldberg's proposal was to find the set of individuals that are non-dominated by the rest of the population and assign them rank 1, then remove them from contention, find a new set of non-dominated individuals and rank them 2, and so forth. This technique is named Pareto ranking.
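
A minimal Python sketch of this ranking procedure (our illustration; all names are assumed, and every objective is taken as a function to be minimized):

```python
def pareto_ranking(population, objectives):
    """Goldberg's Pareto ranking: the non-dominated set gets rank 1, is removed
    from contention, the next non-dominated set gets rank 2, and so forth."""
    def dominates(a, b):
        fa = [f(a) for f in objectives]
        fb = [f(b) for f in objectives]
        return all(u <= v for u, v in zip(fa, fb)) and fa != fb

    ranks = {}
    remaining = list(range(len(population)))
    rank = 1
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(population[j], population[i])
                            for j in remaining if j != i)]
        for i in front:
            ranks[i] = rank
        remaining = [i for i in remaining if i not in front]
        rank += 1
    return ranks  # individual index -> Pareto rank
```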

The main weakness of this method is that there is not yet an efficient algorithm to check non-dominance in a set of feasible solutions (Coello, 1996). As the population size and the number of objective functions grow, the efficiency of these algorithms degrades; however, Pareto ranking is the most appropriate method to generate an entire Pareto front in a single run of the GA (Coello, 1999). Several algorithms that use Pareto based approaches have been developed; the next subsections discuss some of them.

### **3.3.1 Multiple Objective Genetic Algorithm (MOGA)**

A scheme in which the rank of an individual depends on the number of individuals in the current population by which it is dominated was proposed by Fonseca and Fleming (Fonseca and Fleming, 1993). In a given generation *t*, all non-dominated individuals are assigned rank 1, while each dominated one is assigned a rank of $1 + p\_i^{(t)}$, where $p\_i^{(t)}$ is the number of solutions that dominate the solution $x\_i$. The individual $x\_i$ in generation t can thus be assigned the following rank:

$$rank(x\_i, t) = 1 + p\_i^{(t)} \tag{23}$$

Fitness assignment is performed in the following way (Fonseca and Fleming, 1993):

1. The population is sorted according to the assigned rank.
2. Fitness is assigned to each individual by interpolating from the best to the worst individual.
3. The fitness values of individuals with the same rank are averaged, so that all of them are sampled at the same rate.

A potential weakness of this algorithm is premature convergence produced by a large selection pressure (Goldberg and Deb, 1991). To avoid this, Fonseca and Fleming used a niche-formation method to distribute the population over the Pareto-optimal region; however, instead of performing sharing on the parameter values, they used sharing on the objective function values.
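
The rank of equation (23) and the three-step fitness assignment can be sketched as follows (a hedged illustration; the interpolation of step 2 is implemented here as a simple best-to-worst position score, and all objectives are taken as minimized):

```python
def moga_fitness(population, objectives):
    """MOGA-style fitness: rank = 1 + number of dominators (eq. 23), then sort
    by rank, interpolate fitness from best to worst, and average equal ranks."""
    def dominates(a, b):
        fa = [f(a) for f in objectives]
        fb = [f(b) for f in objectives]
        return all(u <= v for u, v in zip(fa, fb)) and fa != fb

    n = len(population)
    ranks = [1 + sum(dominates(q, p) for q in population) for p in population]
    order = sorted(range(n), key=lambda i: ranks[i])       # step 1: sort by rank
    raw = {i: n - pos for pos, i in enumerate(order)}      # step 2: best n, worst 1
    fitness = [0.0] * n
    for r in set(ranks):                                   # step 3: average equal ranks
        idx = [i for i in range(n) if ranks[i] == r]
        avg = sum(raw[i] for i in idx) / len(idx)
        for i in idx:
            fitness[i] = avg
    return fitness
```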


This algorithm has been widely accepted and used because of its efficiency and relatively easy implementation. As with other Pareto ranking techniques, it is highly dependent on an appropriate selection of the sharing factor, but Fonseca and Fleming developed a methodology to compute this factor for their approach (Fonseca and Fleming, 1993).

### **3.3.2 Non-dominated Sorting Genetic Algorithm (NSGA)**

The NSGA was proposed by Srinivas and Deb (Srinivas and Deb, 1993). This method is characterized by performing fitness assignment through a rank of dominance: it does not work with a functional value, but with a dummy fitness.

In the first step of this method, the population is ranked based on non-domination. All non-dominated individuals are put into a category with a dummy fitness proportional to the population size. Then, this group of classified individuals is ignored and another layer of non-dominated individuals is considered. This process continues until all individuals in the population have been classified. Because individuals of the first front have the highest fitness value, they are copied more times than the rest of the population. This method allows the search of non-dominated regions and yields quick convergence. The efficiency of this method lies in the way a group of objectives is replaced by a dummy function using a non-dominated sorting procedure. According to Srinivas and Deb, this approach can handle both maximization and minimization with any number of objectives (Srinivas and Deb, 1994). Among other researchers, Coello has reported that this approach is less efficient than the MOGA, and more sensitive to the value of the sharing factor.

### **3.3.3 Niched Pareto Genetic Algorithm (NPGA)**

A tournament selection scheme based on Pareto dominance was proposed by Horn and Nafpliotis (Horn and Nafpliotis, 1993). The main idea of this approach is to use tournament selection based on Pareto dominance with respect to a subset of the population (typically around 10 individuals). In case of ties (when both competitors are either dominated or non-dominated), the decision is made by fitness sharing in both the fitness function space and the decision variable space.
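
A sketch of this tournament scheme (our illustration; the tie-breaking by fitness sharing is only indicated, not implemented, and all objectives are taken as minimized):

```python
import random

def npga_tournament(pop, objectives, t_dom=10):
    """NPGA-style selection: two candidates are compared against a random
    comparison set; ties would be resolved by fitness sharing (omitted here)."""
    def dominates(a, b):
        fa = [f(a) for f in objectives]
        fb = [f(b) for f in objectives]
        return all(u <= v for u, v in zip(fa, fb)) and fa != fb

    c1, c2 = random.sample(pop, 2)
    comparison = random.sample(pop, min(t_dom, len(pop)))
    c1_dominated = any(dominates(s, c1) for s in comparison)
    c2_dominated = any(dominates(s, c2) for s in comparison)
    if c1_dominated and not c2_dominated:
        return c2
    if c2_dominated and not c1_dominated:
        return c1
    return random.choice([c1, c2])  # tie: real NPGA applies fitness sharing here
```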

### **3.4 Other approaches**

Evolutionary algorithms have proved to be very efficient in solving several multi-objective optimization problems, because they have a good ability for global exploration and a fast convergence speed, thanks to the use of nature-inspired operators (crossover, mutation, selection). However, they have also been criticized for making little use of information about the problem, for their high random component, and for the large number of problem evaluations they require. Some of these problems are being addressed through proposals such as EDAs and Scatter Search, in which operators are deterministic or employ techniques that reduce the number of evaluations.

Another recent trend to address the weaknesses of evolutionary algorithms is combining them with classical optimization methods or other metaheuristics. This type of technique has been used successfully in single-objective optimization, leading to what is called "*memetic algorithms*" (Moscato, 1999).


In this section, the general idea behind the EDA is discussed, because it is the technique used in solving the graph drawing problem. Section 4.2 of this chapter describes the algorithm used, called "*Hybrid multi-objective optimization estimation of distribution algorithm*". This algorithm is an EDA hybridized with Hill Climbing.

The main idea behind EDAs is to use the probability distribution of the population in the reproduction of the new offspring. EDAs are a natural outgrowth of the GA in which statistical information of the population is used to build a probability distribution. Then, this distribution is used to generate new individuals by sampling. Because the probability distribution replaces the Darwinian operators, this kind of algorithm is classified as a non-Darwinian evolutionary algorithm.

The general procedure of the EDA can be sketched as shown in figure 8.

```
Template of the EDA algorithm

Generate randomly a population of n individuals
Initialize a probability model Q(x)
t = 1
While Termination criteria are not met Do
    Create a population of n individuals by sampling from Q(x)
    Evaluate the objective function for each individual
    Select m individuals according to a selection method
    Update the probabilistic model Q(x) using selected population and f() values
    t = t + 1
End While
Output: Best found solution or set of solutions
```

Fig. 8. Estimation of the Distribution Algorithm (Talbi, 2009)
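
As a concrete toy instance of this template, the following sketch runs a univariate EDA on the OneMax problem; the problem, the truncation selection and all names are illustrative assumptions, not from the chapter:

```python
import random

def eda_onemax(n=20, pop_size=100, m=50, generations=50):
    """A minimal univariate EDA on OneMax: the model Q(x) is simply a vector
    of per-bit probabilities, re-estimated from the selected individuals."""
    q = [0.5] * n                                    # initialize probability model Q(x)
    for _ in range(generations):
        pop = [[int(random.random() < q[i]) for i in range(n)]
               for _ in range(pop_size)]             # sample population from Q(x)
        pop.sort(key=sum, reverse=True)              # evaluate f(x) = number of ones
        selected = pop[:m]                           # truncation selection
        q = [sum(ind[i] for ind in selected) / m for i in range(n)]  # update Q(x)
    return max(pop, key=sum)

print(sum(eda_onemax()))  # should be close to 20
```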

EDAs are classified according to the level of variable-interaction they use in their probabilistic model:

• Univariate: this class of EDAs supposes that there is no interaction among problem variables.

• Bivariate: this class of EDAs supposes that there is interaction between two variables.

• Multivariate: in this class of EDAs, the probabilistic distribution models the interaction among more than two variables.

Although initially EDAs were intended for combinatorial optimization, they have now been extended to the continuous domain. Nowadays the application field of EDAs not only addresses mono-objective optimization issues; a discipline has also been created around their application to multi-objective problems. The group of EDAs applied to multi-objective optimization is called "*multi-objective optimization EDAs*" (MOEDAs) (Marti, 2008). Most current MOEDAs are modified single-objective EDAs whose fitness assignments are replaced by multi-objective assignments.

According to some researchers, there are several aspects that are crucial in the implementation of multi-objective solutions when MOEDAs are used; some of them are:

• Fitness assignment: since several objectives have to be taken into account, this aspect is very important and more complex than in single-objective optimization.

• Diversity preservation: in order to reach a good coverage of the Pareto front, population diversity is critical.


• Elitism: elitism is the mechanism used to preserve non-dominated solutions through successive generations of the algorithm.

With these aspects in mind, the next section discusses the implementation of the proposed solution to the graph drawing problem.

### **4. An application of a multi-objective optimization hybrid estimation of distribution algorithm for graph drawing problem**

Graph drawing problems are a particular class of combinatorial optimization problems whose goal is to find a plane layout of an input graph in such a way that certain objective functions are optimized. A large number of relevant problems in different domains can be formulated as graph layout problems. Among these problems are optimization of networks for parallel computer architectures, VLSI circuit design, information retrieval, numerical analysis, computational biology, graph theory, graphical model visualization, scheduling and archaeology. Most interesting graph drawing problems are NP-hard and their decisional versions are NP-complete (Garey and Johnson, 1983), but, for most of their applications, feasible solutions with an almost optimal cost are sufficient. As a consequence, approximation algorithms and effective heuristics are welcome in practice (Díaz et al., 2002).

Visualization of complex conceptual structures is a support tool used in several engineering and scientific applications. A graph is an abstract structure used to model information. Graphs represent information that can be modeled as connections between variables, and drawing graphs puts this information in an understandable form. The usefulness of graph visualization systems depends on how easy it is to grasp the meaning of a drawing and how quickly and clearly it can be interpreted. This characteristic can be expressed through aesthetic criteria (Sugiyama, 2002) such as minimization of edge crossings, reduction of the drawing area, minimization of the aspect ratio, and minimization of the maximum edge length, among others.

In our approach the first three objectives are used, so we can state a multi-objective optimization formulation of the graph drawing problem. On the one hand, to enhance the legibility of the graph drawing it is very important to keep the number of crossings as low as possible, as well as to keep a good aspect ratio in the drawing. Another point is to keep the drawing region symmetric (same drawing height and width). It is also very desirable to keep the drawing area small; this last requirement avoids wasting screen space. These objectives are in conflict with each other. Reaching the minimum number of crossing edges frequently requires a bigger area. At the same time, minimizing the aspect ratio of the graph requires drawing the nodes in a symmetrically delimited region. Reducing the used area increases the number of crossings, because the closer the edges are, the less space there is to minimize edge crossings. Besides, reducing the area of the sketch also affects the symmetrical delimitation of the region used by the graph. The aspect ratio minimization is affected by the crossing edges minimization, because merely moving a node outside the defined area upsets the symmetry reached until that moment. So, reducing the drawing area directly affects the aspect ratio of the graph, because generally this kind of reduction is not symmetric. A first approach to the multi-objective optimization problem with these three objectives for graph drawing can be found in (Enriquez et al., 2011).


#### **4.1 Formulation of the multi-objective optimization for graph drawing problem**

At the beginning, we have a graph given by its edges, that is, by pairs of vertices. Each vertex is assigned a pair of coordinates. All coordinates of the vertices of the graph are randomly generated in the Cartesian plane; if any two vertices have the same coordinates, new coordinates are randomly generated for one of them. The candidate solution is represented as a vector of pairs of coordinates. The input information, i.e., the list of edges of the graph, is used by the algorithm to draw the edges in the best manner in order to fulfill a tradeoff between all considered objective functions.

In this chapter the following conflicting objectives have been considered:

• Minimization of the number of crossing edges in the graph: the total number of crossing edges of the graph has to be minimized (f1).

• Minimization of the graph area: the total space used by the graph has to be minimized (f2).

• Minimization of the graph aspect ratio: the graph has to be visualized in an approximately square area (f3).

The vector of the objective functions is denoted by F=(f1,f2,f3). The first function f1 is calculated as follows:

To draw a line between two vertices, $v\_1(x\_1, y\_1)$ and $v\_2(x\_2, y\_2)$, we use the following equation:

$$\mathbf{y} - \mathbf{y}\_1 = \frac{\mathbf{y}\_2 - \mathbf{y}\_1}{\mathbf{x}\_2 - \mathbf{x}\_1} (\mathbf{x} - \mathbf{x}\_1) \tag{24}$$

Writing the lines of two edges in this form, we solve the resulting system of equations to determine whether the two lines corresponding to a pair of edges have an intersection point. The function f1 sums the number of intersection points between edges of the drawing.

$$\mathbf{a}\_1 \mathbf{x} + \mathbf{b}\_1 \mathbf{y} = \mathbf{c}\_1 \tag{25}$$

$$\mathbf{a}\_2 \mathbf{x} + \mathbf{b}\_2 \mathbf{y} = \mathbf{c}\_2 \tag{26}$$

The second function f2 is defined as the area of the rectangle containing the graph drawing. The following formula is used:

$$\mathbf{S} = (\mathbf{x}\_{\max} - \mathbf{x}\_{\min}) \cdot (\mathbf{y}\_{\max} - \mathbf{y}\_{\min}) \tag{27}$$

where $x\_{min}$ and $x\_{max}$ are the least and greatest values on the abscissa axis, and $y\_{min}$ and $y\_{max}$ are the least and greatest values on the vertical axis. S is the value of the function f2.

Finally, the function f3 is obtained as the ratio of $(x\_{max} - x\_{min})$ to $(y\_{max} - y\_{min})$, or vice versa, so that the larger extent is divided by the smaller one. f3 is the value of this ratio, which is known as the aspect ratio.
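
The three objective functions can be sketched in Python as follows (our illustration, with assumed helper names; the crossing test uses the standard orientation predicate, which decides the same question as the system (25)-(26) while avoiding special cases such as vertical lines):

```python
import random
from itertools import combinations

def random_layout(n_vertices, width=600, height=600):
    """Candidate solution: one (x, y) pair per vertex, all distinct."""
    coords = set()
    while len(coords) < n_vertices:
        coords.add((random.randrange(width), random.randrange(height)))
    return list(coords)

def segments_cross(p1, p2, p3, p4):
    """True if segments p1-p2 and p3-p4 properly intersect."""
    def orient(a, b, c):
        return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
    d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
    d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
    return d1*d2 < 0 and d3*d4 < 0

def f1_crossings(coords, edges):
    """f1: number of intersection points between pairs of edges."""
    count = 0
    for (a, b), (c, d) in combinations(edges, 2):
        if len({a, b, c, d}) == 4:  # ignore edges sharing a vertex
            if segments_cross(coords[a], coords[b], coords[c], coords[d]):
                count += 1
    return count

def f2_area(coords):
    """f2: area of the bounding rectangle, equation (27)."""
    xs, ys = [p[0] for p in coords], [p[1] for p in coords]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

def f3_aspect_ratio(coords):
    """f3: larger extent over smaller extent; the ideal value 1 is a square."""
    xs, ys = [p[0] for p in coords], [p[1] for p in coords]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    return max(w, h) / max(min(w, h), 1)
```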

We use the Pareto front approach to the multi-objective optimization problem (Coello and López, 2009; Deb, 2001): we report the final Pareto front and also single out, as the most promising solution, the one closest to the origin, because it summarizes all objective trade-offs. The distance to the origin is calculated as the Euclidean distance over the standardized values of the objectives of the problem.
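
A small sketch of this selection rule, assuming the standardization is done with z-scores (the chapter does not specify the standardization scheme):

```python
import statistics

def closest_to_origin(front):
    """Pick, from a Pareto front given as a list of (f1, f2, f3) tuples, the
    solution nearest the origin in standardized objective space."""
    cols = list(zip(*front))
    mu = [statistics.mean(c) for c in cols]
    sd = [statistics.pstdev(c) or 1.0 for c in cols]  # guard zero spread
    def distance(sol):
        return sum(((v - m) / s) ** 2 for v, m, s in zip(sol, mu, sd)) ** 0.5
    return min(front, key=distance)
```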


### **4.2 Hybrid multi-objective optimization estimation of distribution algorithm**

This section presents a description of the components of the proposed algorithm, which is built from three main components. The first, the Univariate Marginal Distribution Algorithm (UMDA) (Mühlenbein et al., 1998) adapted for multi-objective optimization problems, is used for exploration of the search space; the second, the Random Mutation Hill Climbing (RMHC) algorithm, is used for exploitation. Finally, a component for calculating the Pareto front is used.

The pseudocode of the multi-objective optimization evolutionary hill climbing estimation of distribution algorithm (MOEA-HCEDA) is shown in figure 9.

```
Pseudocode MOEA-HCEDA
    ParetoInitialPopulation( );
    Repeat for l = 1, 2, . . . until stop criterion is verified
        Obtain estimate p_l(x) of the joint probability distribution
        D_l <- Sample M individuals (new population) from p_l(x)
        RandomMutationHillClimbing_RMHC( );
        CalculateParetoPopulation( );
    End repeat
End MOEA-HCEDA
```

Fig. 9. Pseudocode of MOEA-HCEDA

```
Pseudocode RandomMutationHillClimbing_RMHC
    1. Choose a binary string at random. Call this string best-evaluated.
    2. Mutate a bit chosen at random in best-evaluated.
    3. Compute the fitness of the mutated string. If the fitness is greater
       than the fitness of best-evaluated, then set best-evaluated to the
       mutated string.
    4. If the maximum number of function evaluations has been performed,
       return best-evaluated; otherwise, go to step 2.
End RandomMutationHillClimbing_RMHC
```

Fig. 10. Pseudocode of RMHC

ParetoInitialPopulation( ): In the first step a random population of size 2 × the population size is generated. After that, the first Pareto front is obtained using solution dominance. This first approximated Pareto front is saved in $D\_1$.

RandomMutationHillClimbing( ): In Random Mutation Hill Climbing (Mitchell et al., 1994), a string is chosen randomly and its fitness is evaluated. The string solution is mutated randomly choosing a single locus, and the new solution is evaluated. If mutation leads to an equal or higher fitness, the new string solution replaces the old. This procedure is iterated until the optimum has been found or a maximum number of function evaluations have been performed. The algorithm RMHC works as figure 10 shows.
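
A runnable Python version of this procedure (our illustration; maximization of a user-supplied fitness function is assumed):

```python
import random

def rmhc(fitness, n_bits, max_evals):
    """Random Mutation Hill Climbing on bit strings (Mitchell et al., 1994)."""
    best = [random.randint(0, 1) for _ in range(n_bits)]
    best_f = fitness(best)
    for _ in range(max_evals - 1):
        cand = best[:]
        cand[random.randrange(n_bits)] ^= 1   # mutate a single random locus
        cand_f = fitness(cand)
        if cand_f >= best_f:                  # equal or higher fitness replaces
            best, best_f = cand, cand_f
    return best, best_f
```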

CalculateParetoPopulation( ): In the first step, the last approximated Pareto front, saved in $D\_{l-1}$, is joined with the recently generated population and saved in $D\_l$. In the second step, the new approximated Pareto front is calculated from $D\_{l-1} \cup D\_l$ and saved as the current front $D\_l$.
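
A sketch of this front-update step (our illustration; all objectives are taken as minimized and names are assumed):

```python
def update_pareto_front(front_prev, new_pop, objectives):
    """Merge the previous front with the new population and keep only the
    non-dominated solutions of the union."""
    def dominates(a, b):
        fa = [f(a) for f in objectives]
        fb = [f(b) for f in objectives]
        return all(u <= v for u, v in zip(fa, fb)) and fa != fb

    merged = front_prev + new_pop
    return [s for s in merged if not any(dominates(t, s) for t in merged)]
```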

UMDA is a particular case of EDAs, introduced by Mühlenbein (Mühlenbein et al., 1998), where the variables are totally independent. The n-dimensional joint probability is a product of n univariate probability distributions (Larrañaga & Lozano, 2002).


$$p\_l(\mathbf{x}) = \prod\_{i=1}^{n} p\_l(x\_i) \tag{28}$$

The joint probability distribution $p\_l(\mathbf{x})$ of each generation is estimated using the selected individuals. The joint probability distribution factorizes as the product of independent univariate distributions.
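
The factorization of equation (28) can be sketched as follows (an illustration with assumed names; discrete variables over a common domain are assumed):

```python
from collections import Counter

def estimate_univariate_model(selected, domain):
    """Estimate the factorized model of equation (28) from selected
    individuals: one independent marginal p(x_i) per position."""
    n = len(selected[0])
    marginals = []
    for i in range(n):
        counts = Counter(ind[i] for ind in selected)
        marginals.append({v: counts[v] / len(selected) for v in domain})
    return marginals

def probability(individual, marginals):
    """p(x) = product of the univariate marginals, as in equation (28)."""
    p = 1.0
    for x_i, m in zip(individual, marginals):
        p *= m.get(x_i, 0.0)
    return p
```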

### **4.3 Dominance index to evaluate solutions in Pareto front**

This section describes how to define a measure of quality (dominance index) for each solution stored in the Pareto front. The objective of this dominance index is to order the elements of the Pareto front.

Definition. Dominance index of a solution $x\_{1i}$: let $F\_1$ and $F\_2$ be two approximate Pareto fronts, let $r(F\_1)$ be the number of elements of $F\_1$ and $r(F\_2)$ the number of elements of $F\_2$. The dominance index of a solution $x\_{1i}$ is defined as the number of times $n(x\_{1i})$ that the solution $x\_{1i} \in F\_1$ dominates solutions $x\_{2j} \in F\_2$, divided by $r(F\_2)$.

#### **4.4 Quality index to evaluate Pareto front performance**

Based on the definition of the dominance index of a solution $x\_{1i}$, the quality index of a Pareto front is constructed. Given two Pareto fronts, a relative evaluation of the first front $F\_1$ with respect to the second $F\_2$ can be given as follows:

Let $x\_{1i}^{(1)}$ be one solution of the first Pareto front and let $n\_i = n(x\_{1i}^{(1)})$ be the number of times $x\_{1i}^{(1)}$ dominates elements of the second Pareto front $F\_2$. To normalize this quantity, as in the dominance index definition, it is divided by the number of solutions of the second front, $r(F\_2)$. The quantity obtained is the quality index of the solution $x\_{1i}^{(1)}$.

Definition. Quality index of the first Pareto front with respect to the second: let now $\sum\_i n\_i$ be the sum of the number of times all the solutions of the first Pareto front dominate the solutions of the second front. To normalize this quantity, it is divided by the number of solutions in the $F\_1$ front. This last quantity can be considered a relative quality index of the first Pareto front with respect to the second.
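
Both indices can be sketched as follows (our illustration; the normalization of the quality index follows the reading given above, which is one plausible interpretation, and all objectives are taken as minimized):

```python
def dominance_index(x, front2, objectives):
    """Dominance index of a solution x from F1: the fraction of F2 it dominates."""
    def dominates(a, b):
        fa = [f(a) for f in objectives]
        fb = [f(b) for f in objectives]
        return all(u <= v for u, v in zip(fa, fb)) and fa != fb
    return sum(dominates(x, y) for y in front2) / len(front2)

def quality_index(front1, front2, objectives):
    """Relative quality index of F1 with respect to F2: the per-solution
    dominance indices averaged over the solutions of F1."""
    return sum(dominance_index(x, front2, objectives) for x in front1) / len(front1)
```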

#### **4.5 Experimental design**

In a previous paper a factorial experiment was performed (Enriquez et al., 2011), where the best combination of factors found was a number of generations equal to 500 and a population size equal to 150. These parameters were the ones that reached the best results of the algorithm. Seven graphs were selected from the papers (Rossete, 2000), (Branke et al., 1997), (Eleoranta and Mäkinen, 2001), (Hobbs and Rodgers, 1998), (Rossete and Ochoa, 1998) to use as benchmarks, but only the results for the composite graph (Enriquez et al., 2010) are discussed in this chapter because this graph is the biggest one. It is a non-planar graph with a total of 40 vertices and 69 edges. A total of ten runs for the combination of factors (500, 150) were executed; each run has an output that is an approximation to the Pareto front. The evaluation of the convergence to the Pareto front was performed with the quality index.

### **4.6 Results and discussion**

The results of this experiment appear in table 1 and figures 11 to 17. Table 1 shows the best graphs obtained over ten repetitions of the MOEA-HCEDA algorithm. For each best solution, the table shows the run, graph number, total number of edge crossings, area size and aspect ratio. A distance to the origin is used to evaluate the best solution obtained on each repetition; it is calculated as the Euclidean distance over the standardized values of the three objectives of the problem. The Pareto-optimal value is obtained by graph 267 of the 5th repetition. The results show an average number of crossings of 16.1, an average area of 106318.6, and an average aspect ratio of 1.0632.


| **RUN** | **GRAPH NUMBER** | **NCROSS** | **AREA** | **ASPECT RATIO** | **DISTANCE TO ORIGIN** |
|---|---|---|---|---|---|
| 1 | 55 | 17 | 106446 | 1.079617834 | 0.679992707 |
| 2 | 115 | 18 | 89951 | 1.04778157 | 0.489841704 |
| 3 | 130 | 22 | 111132 | 1.058641975 | 0.702735094 |
| 4 | 230 | 10 | 128520 | 1.111764706 | 0.465940815 |
| 5 | 267 | 9 | 91506 | 1.003311258 | 0.343328661 |
| 6 | 303 | 12 | 155298 | 1.185082873 | 0.815347591 |
| 7 | 317 | 18 | 79520 | 1.014285714 | 0.368875792 |
| 8 | 420 | 16 | 93852 | 1.063973064 | 0.458813972 |
| 9 | 500 | 22 | 101661 | 1.064724919 | 0.559167084 |
| 10 | 520 | 17 | 105300 | 1.00308642 | 0.44348268 |
| **Total average** | | **16.1** | **106318.6** | **1.063227033** | |

Table 1. Best solution on each run. NCROSS, AREA and ASPECT RATIO are the three objective functions; the best solution obtained is graph 267 of run 5.

Figure 11 shows the average, over ten runs, of the Pareto front quality index printed at each generation of the algorithm; a convergent curve is shown. The results of the experiments showed that the algorithm converges to an optimal Pareto front.

Figures 12, 13, 14, 15, 16, and 17 show the evolution of the graphs corresponding to run 5. Figure 12 shows graph 16 of generation 1. This graph has 412 edge crossings, a total area of 285270 and an aspect ratio of 1.01698. Figure 13 shows graph 2555 of generation 100. This graph is better than graph 16 because the edge crossings decrease to 29, the total area decreases to 116620 and the aspect ratio decreases to 1.0088. Figure 14 shows graph 5822 of generation 200. This graph is better in two objectives compared to graphs 16 and 2555, because the edge crossings decrease to 24 and the total area decreases to 110500, but the aspect ratio increases to 1.0461. Figure 15 shows graph 10028 of generation 300. This graph is better in two objectives than the previous three graphs because the edge crossings decrease to 14, the total area decreases to 109525 and the aspect ratio decreases again to 1.0369. Figure 16 shows graph 13924 of generation 400. This graph is better in two objectives than the previous four graphs because the total area decreases to 102700 and the aspect ratio decreases to 1.0284, while the edge crossings are maintained at 14. Figure 17 shows graph 17470 of generation 500. This graph is the best in all objectives because the edge crossings decrease to 9, the total area decreases to 91506 and the aspect ratio decreases to 1.0033.

Fig. 11. Quality index for Pareto front comparison.

Fig. 12. Generation 1, graph 16.

Fig. 13. Generation 100, graph 2555.

Fig. 14. Generation 200, graph 5822.


Fig. 15. Generation 300, graph 10028.

Fig. 16. Generation 400, graph 13924.

Fig. 17. Generation 500, graph 17470.

### **4.7 Conclusions and future work**

The main contributions of this application are the test of the hybrid MOEA-HCEDA algorithm and the quality index based on the Pareto front, as applied to the graph drawing problem. The Pareto front quality index obtained on each generation of the algorithm showed a convergent curve; the results of the experiments showed that the algorithm converges. A graphical user interface was constructed, providing users with a friendly and easy-to-use graph display tool. The automatic drawing of optimized graphs makes it easier for the user to compare results appearing in separate windows, giving the user the opportunity to choose the graph design which best fits their needs.

To continue this research, hybridizing MOEA-HCEDA with other algorithms, for example with other types of EDAs, is a next objective. Testing the algorithms on other, more complex benchmarks, and comparing the results between different variants, is a very challenging and interesting task for future work. The graphical presentation can be made friendlier and offer other facilities, for example printing the results.


### **5. Future directions for research**

Although there are many versions of evolutionary algorithms that are tailored to multi-objective optimization, theoretical results are apparently not yet available. Rudolph (1999) has shown that results known from the theory of evolutionary algorithms in the case of single-objective optimization do not carry over to the multi-objective case.

Assuming that the evolutionary algorithms are Markov processes, and that the fitness functions are partially ordered, Rudolph presented some theoretical results about the convergence of multi-objective algorithms. In particular, some properties of the operators have to be checked to establish the algorithm's convergence. This theoretical analysis shows that a special version of an evolutionary algorithm converges with probability 1 to the Pareto set for the test problem under consideration, but these tools are not used frequently.

Although there exist a number of multi-objective GA implementations and a number of GA applications to multi-objective optimization problems, there exists no systematic study of what problem features may cause a multi-objective GA to face difficulties. Systematic testing, in a controlled manner, of the various aspects of problem difficulty has not been deeply addressed. Specifically, multi-modal multi-objective problems, deceptive multi-objective problems, multi-objective problems having convex, non-convex, and discrete Pareto-optimal fronts, and non-uniformly represented Pareto-optimal fronts have not been presented and systematically analyzed.

Although some studies have compared different GA implementations (Zitzler and Thiele, 1998), they all have presented a specific problem without an analysis about the complexity of the test problems. The test functions suggested until now in the literature provide various degrees of complexity but are not enough. The construction of test problems has been done without enough knowledge of how multi-objective GAs work. Thus, it will be worthwhile to investigate how existing multi-objective GA implementations work in the context of different test problems. It is intuitive that as the number of objectives increase, the Paretooptimal region is represented by multi-dimensional surfaces. With more objectives, multiobjective GAs must have to maintain more diverse solutions in the non-dominated front in each iteration. Whether GAs are able to find and maintain diverse solutions, as demanded by the search space of the problem with many objectives would be a matter of interesting study. Whether population size alone can solve this scalability issue or a major structural change (implementing a better niching method) is imminent would be the outcome of such a study. Constraints can introduce additional complexity in the search space by inducing infeasible regions in the search space, thereby obstructing the progress of an algorithm towards the global Pareto-optimal front. Thus, creation of constrained test problems is an interesting area which should get emphasis in the near future. With the development of such complex test problems, there is also a need to develop efficient constraint handling techniques that would be able to help GAs to overcome hurdles caused by constraints. Some such methods are in progress in the context of single-objective GAs and with proper implementations they should also work in multi-objective GAs. Most multi-objective GAs that exist to date, work with the non-domination principle. It is a question if all solutions in a non-dominated set need not be members of the true Pareto optimal front, although some of them could be. This means that all non-dominated solutions found by a multi-objective optimization algorithm may not necessarily be Pareto-optimal solutions. Thus, while working with such algorithms, it is wise to check the Pareto-optimality of each of such

Evolutionary Multi-Objective Algorithms 77

artificial constraints (which are, in some sense, user-dependent). Moreover, a single run of a multi-objective GA may provide a number of Pareto-optimal solutions, each of which is optimal in one objective with a constrained upper limit on other objectives (such as optimal in cost for a particular upper bound on reliability). Thus, the advantages of using a multiobjective GA in real-world problems are many and there is the need for some interesting application case studies which would clearly show the advantages and flexibilities in using a

We believe that more such mentioned studies are needed to understand better the working principles of a multi-objective GA. An obvious outcome of such studies would be the


In this regard, it would be interesting to introduce special features (such as elitism, mutation, or other diversity-preserving operators) whose presence may help us to prove the convergence of a GA population to the global Pareto-optimal front. Some such proofs exist for single-objective GAs (Davis and Principe, 1991; Rudolph, 1994), and a similar proof may also be attempted for multi-objective GAs. Elitism is a useful and popular mechanism used in single-objective GAs. Elitism ensures that the best solutions in each generation will not be lost. They are directly carried over from one generation to the next and, importantly, these good solutions get a chance to participate in recombination with other solutions in the hope of creating better solutions. In the context of single-objective optimization, there is only one best solution in a population. But in multi-objective optimization, all non-dominated solutions of the first level are the best solutions in the population; there is no way to distinguish one solution from another in the non-dominated set. Then, if we want to introduce elitism in multi-objective GAs, should we carry over all solutions in the first non-dominated set to the next generation? This may mean copying many good solutions from one generation to the next, a process which may lead to premature convergence to non-Pareto-optimal solutions. How elitism should be defined in this context is an interesting research topic. In this context, an issue related to the comparison of two populations also raises some interesting questions.
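To make the elitism dilemma concrete, the following minimal Python sketch (with illustrative helper names, not an implementation from the literature) extracts the first non-dominated set of a population, i.e., exactly the set an elitist multi-objective GA would have to decide whether to copy over in full:

```python
def dominates(a, b):
    """Pareto dominance for minimization: a is nowhere worse than b
    and strictly better in at least one objective."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_nondominated_set(population, objectives):
    """The level-1 non-dominated solutions of a population, given a
    function mapping a solution to its tuple of objective values."""
    scores = [objectives(p) for p in population]
    return [p for p, s in zip(population, scores)
            if not any(dominates(t, s) for t in scores if t is not s)]
```

In a many-objective problem this set can be a large fraction of the population, which is precisely why copying it wholesale risks the premature convergence described above.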

There are two goals in multi-objective optimization: convergence to the true Pareto-optimal front and maintenance of diversity among Pareto-optimal solutions. A multi-objective GA may have found a population which has many Pareto-optimal solutions, but with little diversity among them. How would such a population be compared with another which has fewer Pareto-optimal solutions but wide diversity? The practitioners of multi-objective GAs must settle on an answer to these questions before they are able to compare different GA implementations, or before they are able to mimic operators from other single-objective GAs, such as CHC (Eshelman, 1990) or steady-state GAs (Syswerda, 1989). As is often suggested and used in single-objective GAs, a hybrid strategy of either implementing problem-specific knowledge in GA operators or using a two-stage optimization process of first finding good solutions with GAs and then improving these good solutions with a domain-specific algorithm would make multi-objective optimization much faster than GAs alone.

Test functions test an algorithm's capability to overcome a specific aspect that a real-world problem may have. In this respect, an algorithm which can overcome more aspects of problem difficulty is naturally a better algorithm. This is precisely the reason why so much effort is spent on research in test function development. As important as it is to develop better algorithms by applying them to test problems with known complexity, it is equally important that the algorithms are tested on real-world problems with unknown complexity. Fortunately, most interesting engineering design problems are naturally posed as finding trade-offs among a number of objectives. Among them, cost and reliability are two objectives which are often the priorities of designers. This is because, often in a design, a solution which is less costly is likely to be less reliable, and vice versa. In handling such real-world applications using single-objective GAs, an artificial scenario is often created: only one objective is retained and all other objectives are used as constraints. For example, if cost is retained as an objective, then an extra constraint restricting the reliability to be greater than 0.9 (or some other value) is used. With the availability of efficient multi-objective GAs, there is no need for such artificial constraints (which are, in some sense, user-dependent). Moreover, a single run of a multi-objective GA may provide a number of Pareto-optimal solutions, each of which is optimal in one objective with a constrained upper limit on other objectives (such as optimal in cost for a particular upper bound on reliability). Thus, the advantages of using a multi-objective GA in real-world problems are many, and there is a need for interesting application case studies which would clearly show the advantages and flexibility of using a multi-objective GA, as opposed to a single-objective GA.

We believe that more studies such as those mentioned are needed to better understand the working principles of a multi-objective GA. An obvious outcome of such studies would be the development of new and improved multi-objective GAs.

### **6. References**

Bäck, T. (1996). *Evolutionary Algorithms in Theory and Practice*. Oxford University Press, ISBN 0-19-509971-0, New York, New York.

Branke, J., Bucher, F. and Schmeck, H. (1997). Using Genetic Algorithms for Drawing Undirected Graphs. In Allen, J. (Ed.), *Proceedings of the 3rd Nordic Workshop on Genetic Algorithms and their Applications*, pp. 193-205.

Coello, C. (1996). *An Empirical Study of Evolutionary Techniques for Multiobjective Optimization in Engineering Design*. Doctoral thesis, Department of Computer Science, Tulane University, New Orleans, Louisiana, USA, April 1996.

Coello, C. (1999). A Comprehensive Survey of Evolutionary-Based Multiobjective Optimization Techniques. *Knowledge and Information Systems*, Vol. 1, No. 3, pp. 269-308, August 1999.

Coello, C. (2011). Evolutionary Multiobjective Optimization. In: *Data Mining and Knowledge Discovery*, W. Pedrycz (Ed.), pp. 444-447, John Wiley & Sons, ISSN 1942-4795, N.J., USA.

Coello, C. and López, A. (2009). Multi-Objective Evolutionary Algorithms: A Review of the State-of-the-Art and some of their Applications in Chemical Engineering. In *Multi-Objective Optimization Techniques and Applications in Chemical Engineering*, Rangaiah, G. (Ed.), pp. 61-90, World Scientific, ISBN 9812836519, Singapore.

Coello, C., Van Veldhuizen, D. and Lamont, G. (2002). *Evolutionary Algorithms for Solving Multi-Objective Problems*. Kluwer Academic Publishers, ISBN 0-3064-6762-3, New York, New York.

Collette, Y. and Siarry, P. (2002). *Multiobjective Optimization: Principles and Case Studies*. Groupe Eyrolles, ISBN 978-3-540-40182-7, France.

Davis, T. and Principe, J. (1991). A simulated annealing like convergence theory for the simple genetic algorithm. In *Proceedings of the Fourth International Conference on Genetic Algorithms*, pp. 174-181, ISBN 1-55860-208-9, San Diego, CA, USA, July 1991.

Deb, K. (2001). *Multi-Objective Optimization Using Evolutionary Algorithms*. John Wiley & Sons, Ltd, ISBN 0-471-87339-X, New York, N.Y.

Deb, K. (2008). Introduction to Evolutionary Multiobjective Optimization. In *Multiobjective Optimization: Interactive and Evolutionary Approaches*, Branke, J., Deb, K., Miettinen, K. and Slowinski, R. (Eds.), Springer-Verlag, Berlin Heidelberg, Germany.

Díaz, J., Petit, J. and Serna, M. (2002). A Survey of Graph Layout Problems. *ACM Computing Surveys*, Vol. 34, No. 3, (September 2002), pp. 313-356, New York, N.Y., USA.

Eloranta, T. and Mäkinen, E. (2001). TimGA: A Genetic Algorithm for Drawing Undirected Graphs. *Divulgaciones Matemáticas*, Vol. 9, No. 2, (October 2001), pp. 155-171, ISSN 1315-2068.

Enríquez, S., Ponce de León, E., Díaz, E. and Padilla, A. (2010). A Hybrid Evolutionary Algorithm for the Edge Crossing Minimization Problem in Graph Drawing. *Advances in Computer Science and Engineering*, Vol. 45, pp. 27-40, ISSN 1870-4069.

Enríquez, S., Ponce de León, E., Díaz, E., Padilla, A., Torres, D., Torres, A. and Ochoa, A. (2011). An Evolutionary Algorithm for Graph Drawing with a Multiobjective Approach. In *Logistics Management and Optimization through Hybrid Artificial Intelligent Systems*, Carlos A. Ochoa, Carmen Chira, Arturo Hernández and Miguel Basurto (Eds.), IGI Global, ISBN 9781466602977.

Eshelman, L. (1991). The CHC Adaptive Search Algorithm: How to Have Safe Search When Engaging in Nontraditional Genetic Recombination. In *Foundations of Genetic Algorithms*, Rawlins, G. (Ed.), pp. 265-283, ISBN 1-55860-170-8, San Mateo, CA, USA.

Fonseca, C. and Fleming, P. (1993). Genetic Algorithms for Multiobjective Optimization: Formulation, Discussion and Generalization. In *Proceedings of the Fifth International Conference on Genetic Algorithms*, Forrest, S. (Ed.), Morgan Kaufmann Publishers, pp. 416-423, San Mateo, California, 1993.

Garey, M. and Johnson, D. (1983). Crossing Number is NP-complete. *SIAM Journal on Algebraic and Discrete Methods*, Vol. 4, pp. 312-316, ISSN 0895-4798.

Goldberg, D. (1989). *Genetic Algorithms in Search, Optimization & Machine Learning*. Addison-Wesley.

Goldberg, D. and Deb, K. (1991). A comparison of selection schemes used in genetic algorithms. In *Foundations of Genetic Algorithms*, Rawlins, G. (Ed.), Morgan Kaufmann, pp. 69-93, San Mateo, California, 1991.

Haimes, Y., Lasdon, L. and Wismer, D. (1971). On a Bicriterion Formulation of the Problems of Integrated System Identification and System Optimization. *IEEE Transactions on Systems, Man, and Cybernetics*, Vol. 1, pp. 296-297.

Hobbs, M. and Rodgers, P. (1998). Representing Space: A Hybrid Genetic Algorithm for Aesthetic Graph Layout. In *FEA'98, Proceedings of the 4th Joint Conference on Information Sciences, JCIS'98*, Vol. 2, pp. 415-418.

Horn, J. and Nafpliotis, N. (1993). *Multiobjective Optimization using the Niched Pareto Genetic Algorithm*. Technical Report IlliGAl 93005, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA, 1993.

Kuhn, H. and Tucker, A. (1951). Nonlinear programming. In *Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability*, Neyman, J. (Ed.), pp. 481-492, University of California Press, Berkeley, California.

Larrañaga, P. and Lozano, J. (2002). *Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation*. Kluwer Academic Publishers, ISBN 0-7923-7466-5, Norwell, Massachusetts, USA.

Marler, R. and Arora, J. (2004). Survey of Multi-Objective Optimization Methods for Engineering. *Structural and Multidisciplinary Optimization*, Vol. 26, No. 6, pp. 369-395.

Marti, L., Garcia, J., Berlanga, A. and Molina, J. (2008). Model-building algorithms for multiobjective EDAs: Directions for improvement. In *Evolutionary Computation, 2008. CEC 2008 (IEEE World Congress on Computational Intelligence)*, ISBN 978-1-4244-1823-7, June 2008.

Miettinen, K. (2008). Introduction to Noninteractive Approaches. In *Multiobjective Optimization: Interactive and Evolutionary Approaches*, Branke, J., Deb, K., Miettinen, K. and Slowinski, R. (Eds.), Springer-Verlag, Berlin Heidelberg, Germany.

Mitchell, M., Holland, J. and Forrest, S. (1994). When Will a Genetic Algorithm Outperform Hill Climbing? *Advances in Neural Information Processing Systems*, Vol. 6, pp. 51-58.

Moscato, P. (1999). Memetic algorithms: a short introduction. In *New Ideas in Optimization*, Corne, D., Dorigo, M. and Glover, F. (Eds.), pp. 219-234, McGraw-Hill, ISBN 0-07-709506-5, Maidenhead, UK, England.

Mühlenbein, H., Mahnig, T. and Ochoa, A. (1998). Schemata, Distributions and Graphical Models in Evolutionary Optimization. *Journal of Heuristics*, Vol. 5, No. 2, pp. 215-247.

Ramírez, M. (2007). *Técnicas Evolutivas Multiobjetivo Aplicadas en el Diseño de Rutas en Vehículos Espaciales*. Doctoral thesis, México, D.F.

Richardson, J., Palmer, M., Liepins, G. and Hilliard, M. (1989). Some guidelines for genetic algorithms with penalty functions. In *Proceedings of the Third International Conference on Genetic Algorithms*, Schaffer, J. (Ed.), Morgan Kaufmann Publishers, pp. 191-197, George Mason University.

Ritzel, B., Eheart, W. and Ranjithan, S. (1994). Using genetic algorithms to solve a multiobjective groundwater pollution containment problem. *Water Resources Research*, Vol. 30, No. 5, pp. 1589-1606, May 1994, doi:10.1029/93WR03511.

Rosenberg, R. (1967). *Simulation of genetic populations with biochemical properties*. PhD thesis, University of Michigan, Ann Arbor, Michigan, 1967.

Rossete, A. (2000). *Un Enfoque General y Flexible para el Trazado de Grafos*. Doctoral thesis, Faculty of Industrial Engineering, CEIS, La Habana, Cuba.

Rossete, A. and Ochoa, A. (1998). Genetic Graph Drawing. In *Proceedings of the 13th International Conference of Applications of Artificial Intelligence in Engineering, AIENG'98*, Adey, R. A., Rzevski, G. and Nolan, P. (Eds.), Computational Mechanics Publications, Galway, pp. 37-40.

Rudolph, G. (1994). Convergence Analysis of Canonical Genetic Algorithms. *IEEE Transactions on Neural Networks*, Vol. 5, No. 1, (January 1994), pp. 96-101, ISSN 1045-9227.

Rudolph, G. (1999). *On a multi-objective evolutionary algorithm and its convergence to the Pareto set*. Technical Report No. CI-17/98, Department of Computer Science/XI, University of Dortmund, Dortmund, Germany.

Schaffer, J. (1984). *Some Experiments in Machine Learning Using Vector Evaluated Genetic Algorithms*. Doctoral thesis, Vanderbilt University, Nashville, TN.

Schaffer, J. (1985). Multiple Objective Optimization with Vector Evaluated Genetic Algorithms. In *Proceedings of the First International Conference on Genetic Algorithms and their Applications*, pp. 93-100, July 1985, Pittsburgh, PA.

Srinivas, N. and Deb, K. (1993). *Multiobjective Optimization using Nondominated Sorting in Genetic Algorithms*. Technical report, Department of Mechanical Engineering, Indian Institute of Technology, Kanpur, India, 1993.

Srinivas, N. and Deb, K. (1994). Multiobjective Optimization using Nondominated Sorting in Genetic Algorithms. *Evolutionary Computation*, Vol. 2, No. 3, pp. 221-248.

Sugiyama, K. (2002). *Graph Drawing and Applications for Software and Knowledge Engineers*. Japan Advanced Institute of Science and Technology, Vol. 11, p. 218.

Syswerda, G. (1989). Uniform Crossover in Genetic Algorithms. In *Proceedings of the 3rd International Conference on Genetic Algorithms*, pp. 2-9, ISBN 1-55860-066-3, San Francisco, CA, USA, 1989.

Talbi, E. (2009). *Metaheuristics: From Design to Implementation*. John Wiley & Sons, Inc., ISBN 978-0-470-27858-1, New Jersey.

Zeleny, M. (1997). Towards the Tradeoffs-Free Optimality in MCDM. In *Multicriteria Analysis*, J. Climaco (Ed.), pp. 596-601, Springer-Verlag, ISBN 9783540620747, Berlin, Heidelberg.

Zitzler, E. and Thiele, L. (1998). Multiobjective optimization using evolutionary algorithms: A comparative case study. In *Proceedings of the 5th International Conference on Parallel Problem Solving from Nature*, pp. 292-301, ISBN 3-540-65078-4, Springer-Verlag, London, UK, 1998.







## **Evolutionary Algorithms Based on the Automata Theory for the Multi-Objective Optimization of Combinatorial Problems**

Elias D. Niño *Universidad del Norte, Colombia*

### **1. Introduction**


In this chapter we study metaheuristics based on Automata Theory for the Multi-objective Optimization of Combinatorial Problems. As is well known, Combinatorial Optimization is a branch of optimization. Its domain is optimization problems where the set of feasible solutions is discrete or can be reduced to a discrete one, and the goal is to find the best possible solution (Yong-Fa & Ming-Yang, 2004). In this field it is possible to find many problems classified as NP-Hard, which means that no polynomial-time algorithm is known to solve them. For instance, problems such as the Multi-depot vehicle routing problem (Lim & Wang, 2005), the delivery and pickup vehicle routing problem with time windows (Wang & Lang, 2008), the multi-depot vehicle routing problem with weight-related costs (Fung et al., 2009), the Railway Traveling Salesman Problem (Hu & Raidl, 2008), the Heterogeneous, Multiple Depot, Multiple Traveling Salesman Problem (Oberlin et al., 2009) and the Traveling Salesman with Multi-agent (Wang & Xu, 2009) are categorized as NP-Hard problems.

One of the most classical problems in the Combinatorial Optimization field is the Traveling Salesman Problem (TSP); it has been analyzed for years (Sauer & Coelho, 2008), in both mono- and multi-objective versions. It is defined as follows: "Given a set of cities and a departure city, visit each city only once and go back to the departure city with the minimum cost". Basically, this means finding an optimal tour over a set of cities, visiting each city exactly once. An instance of the TSP can be seen in figure 1. Formally, the TSP is defined as follows:

$$\min \sum\_{i=1}^{n} \sum\_{j=1}^{n} C\_{ij} \cdot X\_{ij} \tag{1}$$

Subject to:

$$\sum\_{j=1}^{n} X\_{ij} = 1, \forall i = 1, \dots, n \tag{2}$$

$$\sum\_{j=1}^{n} X\_{ij} = 1, \forall j = 1, \dots, n \tag{3}$$

$$\sum\_{i \in \kappa} \sum\_{j \in \kappa} X\_{ij} \le |\kappa| - 1, \forall \kappa \subset \{1, \dots, n\} \tag{4}$$

$$X\_{ij} \in \{0, 1\}, \forall i, j \tag{5}$$


Where *Cij* is the cost of the path *Xij* and *κ* is any nonempty proper subset of the cities 1, . . . , *n*. (1) is the objective function; the goal is the optimization of the overall cost of the tour. (2), (3) and (5) enforce the constraint of visiting each city exactly once. Lastly, equation (4) eliminates subtours, avoiding cycles in the tour.

Fig. 1. TSP instance of ten cities
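As a side note on implementation (a minimal Python sketch, not part of the formal model): representing a tour as a permutation of city indices satisfies constraints (2), (3) and (5) by construction and leaves no subtours, so only objective (1) needs to be evaluated. `C` is an assumed cost matrix.

```python
def tour_cost(tour, C):
    """Objective (1): total cost of the tour, including the closing
    arc from the last city back to the departure city."""
    n = len(tour)
    return sum(C[tour[k]][tour[(k + 1) % n]] for k in range(n))
```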

The TSP has an important impact on different sciences and fields, for instance Operations Research and Theoretical Computer Science, and many problems in those fields are based on the TSP definition. For instance, problems such as Heterogeneous Machine Scheduling (Kim & Lee, 1998), Hybrid Scheduling and Dual Queue Scheduling (Shah et al., 2009), Project Management (de Pablo, 2009), Scheduling for Multichannel EPONs (McGarry et al., 2008), Single Machine Scheduling (Chunyue et al., 2009), Distributed Scheduling Systems (Yu et al., 1999), Relaxing Scheduling Loop Constraints (Kim & Lipasti, 2003), Distributed Parallel Scheduling (Liu et al., 2003), Scheduling for Grids (Huang et al., 2010), Parallel Scheduling for Dependent Task Graphs (Mingsheng et al., 2003), Dynamic Scheduling on Multiprocessor Architectures (Hamidzadeh & Atif, 1996), Advanced Planning and Scheduling Systems (Chua et al., 2006), Tasks and Messages in Distributed Real-Time Systems (Manimaran et al., 1997), Production Scheduling (You-xin et al., 2009), Cellular Networks for Quality of Service Assurance (Wu & Negi, 2003), Net Based Scheduling (Wei et al., 2007), the Spring Scheduling Co-processor (Niehaus et al., 1993), Multiple-resource Periodic Scheduling (Zhu et al., 2003), Real-Time Query Scheduling for Wireless Sensor Networks (Chipara et al., 2007), Multimedia Computing with Real-time Constraints (Chen et al., 2003), Pattern Driven Dynamic Scheduling (Yingzi et al., 2009), Security-assured Grid Job Scheduling (Song et al., 2006), Cost Reduction and Customer Satisfaction (Grobler & Engelbrecht, 2007), MPEG-2 TS Multiplexers in CATV Networks (Jianghong et al., 2000), Contention Awareness (Shanmugapriya et al., 2009) and Hard Scheduling Optimization (Niño, Ardila, Perez & Donoso, 2010) have been derived from the TSP. Although several algorithms have been implemented to solve the TSP, none of them solves it optimally. For this reason, this chapter discusses novel metaheuristics based on Automata Theory to solve the Multi-objective Traveling Salesman Problem.

This chapter is structured as follows: Section 2 presents important definitions for understanding Multi-objective Combinatorial Optimization and the metaheuristic approximation. Sections 3, 4 and 5 discuss Evolutionary Metaheuristics based on Automata Theory for the Multi-objective Optimization of Combinatorial Problems. Finally, Sections 6 and 7 discuss the experimental results of each proposed algorithm using multi-objective metrics from the specialized literature.

### **2. Preliminaries**


#### **2.1 Multi-objective optimization**

Multi-objective optimization involves two or more objective functions to optimize and a set of constraints. Mathematically, the Multi-objective Optimization model is defined as follows:

$$\text{optimize}\quad F(\mathbf{X}) = \{f\_1(\mathbf{X}), f\_2(\mathbf{X}), \dots, f\_n(\mathbf{X})\}\tag{6}$$

Subject to:

$$H(X) = 0\tag{7}$$

$$G(X) \le 0 \tag{8}$$

$$X\_l \le X \le X\_u \tag{9}$$

Where *F*(*X*) is the set of objective functions, *H*(*X*) and *G*(*X*) are the constraints of the problem. Lastly, *Xl* and *Xu* are the bounds for the set of variables *X*.

Unlike Mono-objective Optimization, Multi-objective Optimization deals with searching for a set of Optimal Solutions instead of a single Optimal Solution. For instance, table 1 shows three solutions for a particular Mono-objective Problem. If we suppose that they belong to a maximization problem, then the Optimal Solution (found) is solution 1; otherwise (minimization) it is solution 2. On the other hand, table 2 shows three solutions for a particular Tri-objective Problem. If we suppose that all the components of the solutions belong to a minimization problem, solution 2 is a *dominated solution* because all its components (0.8, 0.9 and 1.0) are the largest values. On the other hand, solutions 0 and 1 are *non-dominated solutions*: in the first and second components (0.6 and 0.4) solution 0 is larger than the corresponding components of solution 1, but in the third component (0.5) solution 0 is smaller than the same component of solution 1.


Table 1. Solutions for a particular Mono-objective Problem

Both examples show the difference between Mono-objective and Multi-objective Optimization. While the former deals with finding the Optimal Solution, the latter deals with finding a set of Optimal Solutions. In Combinatorial Optimization, the set of Optimal Solutions is called the *Pareto Front*. It contains all the non-dominated solutions of a Multi-objective Problem. Figure 2 shows a Pareto Front for a particular Tri-objective Problem.


| *k* | *F*(*Xk*) = {*f*0(*Xk*), *f*1(*Xk*), *f*2(*Xk*)} |
|-----|----------------------------------------------------|
| 0   | {0.6, 0.4, 0.5}                                    |
| 1   | {0.2, 0.3, 0.8}                                    |
| 2   | {0.8, 0.9, 1.0}                                    |

Table 2. Solutions for a particular Tri-objective Problem

Lastly, some Multi-objective Problems may have an infinite Pareto Front; in those cases it is necessary to determine how many solutions are required, for instance by setting a maximum number of solutions permitted in the Pareto Front.
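The dominance test used in these examples is easy to state in code; the Python sketch below (with illustrative helper names) filters the three solutions of table 2 under minimization:

```python
def dominates(a, b):
    """a dominates b (minimization): a is no worse in every component
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated objective vectors."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

table2 = [(0.6, 0.4, 0.5), (0.2, 0.3, 0.8), (0.8, 0.9, 1.0)]
print(pareto_front(table2))  # solutions 0 and 1 remain; solution 2 is dominated
```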


Fig. 2. Pareto Front for a particular Tri-objective Problem

#### **2.2 Tabu search**

Tabu Search (Glover & Laguna, 1997) is a basic local search strategy for the Optimization of Combinatorial Problems. Given *S* as the Initial Solutions Set, it is defined as follows:

*Step 1. Selection.* Select a solution *x* ∈ *S*.

*Step 2. Perturbation.* Perturb the solution *x* for the purpose of knowing its Neighborhood (*N*(*x*)). Perturbing a solution means modifying the solution *x* in order to obtain a new solution (*x*′*i*). The solutions found are called Neighbors, and together they represent the Neighborhood. For instance, figure 3 shows three perturbations of a solution *x* and the new solutions *x*′1, *x*′2 and *x*′3 found. The perturbation can be done according to the representation of the solutions. Regularly, the representations of the solutions in Combinatorial Problems are based on discrete structures such as Vectors, Matrices, Queues and Lists. Lastly, good solutions are added to *S*.

*Step 3. Check Stop Condition.* The stop condition can be delimited using rules such as a number of iterations without improvement or a maximum number of iterations exceeded.
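A minimal Python sketch of this scheme (note that a full Tabu Search would also keep a tabu list of recently visited solutions; `neighbors` and `cost` are assumed problem-specific callables):

```python
import random

def local_search(initial, neighbors, cost, iterations=1000):
    S = [initial]                              # the initial solutions set S
    for _ in range(iterations):                # Step 3: stop condition
        x = random.choice(S)                   # Step 1: selection of x in S
        for neighbor in neighbors(x):          # Step 2: perturbation -> N(x)
            if cost(neighbor) < cost(x) and neighbor not in S:
                S.append(neighbor)             # good solutions are added to S
    return min(S, key=cost)                    # best solution found
```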

Recently, novel Tabu Search-inspired algorithms have been developed in order to solve Combinatorial Problems such as Permutation Flow Shop Scheduling (Ren et al., 2011), Displacement based on Support Vectors (Fei et al., 2011), the Examination Timetabling Problem (Malik et al., 2011), Partial Transmit Sequences for PAPR Reduction (Taspinar et al., 2011), Inverse Problems (An et al., 2011), Fuzzy PD Controllers (Talbi & Belarbi, 2011b), Intrusion Detection (Jian-guang et al., 2011), Tel-Home Care Problems (Lee et al., 2011), Ant Colony inspired Problems (Zhang-liang & Yue-guang, 2011), Steelmaking-Continuous Casting Production Scheduling (Zhao et al., 2011), Fuzzy Inference Systems (Talbi & Belarbi, 2011a) and the Coordination of Dispatchable Distributed Generation and Voltage Control Devices (Ausavanop & Chaitusaney, 2011).

Fig. 3. The Neighborhood of a solution *x* is known after being perturbed

#### **2.3 Genetic algorithms**


Genetic Algorithms are algorithms based on the Theory of Natural Selection (Wijkman, 1996). Thus, Genetic Algorithms mimic the behavior of natural genetics (Fisher, 1930) through three basic steps. Given a set of Initial Solutions *S*:

*Step 1. Selection.* Select solutions from the population. In pairs, select two solutions *x*, *y* ∈ *S*.

*Step 2. Crossover.* Cross the selected solutions avoiding local optimums.

*Step 3. Mutation.* Perturb the new solutions found in order to increase the population. The perturbation can be done according to the representation of the solution. In this step, good solutions are added to *S*.
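The three steps translate into a short Python sketch (`crossover`, `mutate` and `fitness` are assumed problem-specific callables, with `fitness` minimized here):

```python
import random

def genetic_algorithm(S, crossover, mutate, fitness, generations=100):
    pop_size = len(S)
    for _ in range(generations):
        x, y = random.sample(S, 2)             # Step 1: selection, in pairs
        child = crossover(x, y)                # Step 2: crossover
        S.append(mutate(child))                # Step 3: mutation grows S
        S = sorted(S, key=fitness)[:pop_size]  # keep the good solutions in S
    return S
```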

Figure 4 shows the basic steps of a Genetic Algorithm.

Fig. 4. Basic steps of a Genetic Algorithm

The most well-known Genetic Algorithms from the literature (Dukkipati & Narasimha Murty, 2002) are the Non-Dominated Sorting Genetic Algorithm (Deb et al., 2002) (NSGA-II) and the Strength Pareto Evolutionary Algorithm 2 (Zitzler et al., 2001; 2002) (SPEA 2). NSGA-II uses a non-dominated sort for sorting the solutions into different Pareto Sets. Consequently, it demands a lot of time, but it allows a global verification of the solutions, avoiding Local Optimums. On the other hand, SPEA 2 is an improvement of SPEA. The difference with the first version is that SPEA 2 assigns a strength to every solution according to the number of solutions that it dominates. Consequently, at the end of the iterations, SPEA 2 keeps the strongest non-dominated solutions, avoiding Local Optimums. SPEA 2 and NSGA-II have been implemented to solve many problems in the Multi-objective and Combinatorial Optimization fields. For instance, problems such as Pattern-recognition based Machine Translation Systems (Sofianopoulos & Tambouratzis, 2011), Tuning of Fuzzy Logic controllers for heating systems (Gacto et al., 2011), Real-coded Quantum Clones (Xiawen & Yu, 2011), Optimization Problems with Correlated Objectives (Ishibuchi et al., 2011), Production Planning (Yu et al., 2011), Optical and Dynamic Network Designs (Araujo et al., 2011; Wismans et al., 2011), Benchmark multi-objective optimization (McClymont & Keedwell, 2011) and Vendor-managed Inventory (Azuma et al., 2011) have been solved using SPEA and NSGA-II.


#### **2.4 Simulated Annealing algorithms**

Simulated Annealing (Kirkpatrick et al., 1983) is a generic probabilistic metaheuristic based on annealing in metallurgy. Similar to Tabu Search, Simulated Annealing explores the neighborhood of solutions, being flexible with non-good solutions; that is, it accepts bad solutions as well as good solutions, but only in the first iterations. The acceptance of a bad solution is based on the Boltzmann probability distribution:

$$P(x) = e^{-\left(\frac{E}{T\_i}\right)} \tag{10}$$

Where *E* is the change in energy and *Ti* is the temperature at moment *i*. At the first temperature levels bad solutions are accepted as well; however, when the temperature goes down, Simulated Annealing behaves similarly to Tabu Search (it only accepts good solutions).
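The acceptance rule in (10) can be coded directly; a minimal sketch assuming minimization, where `delta_e` is the energy change *E* and `t` the current temperature *Ti*:

```python
import math
import random

def accept(delta_e, t):
    """Boltzmann acceptance: improvements always pass, while worse
    moves pass with probability e^(-E/T), which shrinks as T cools."""
    if delta_e <= 0:
        return True
    return random.random() < math.exp(-delta_e / t)
```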

Recently, similar to Genetic Algorithms and Tabu Search, many problems have been solved using the Simulated Annealing metaheuristic. For instance, Neuro-Fuzzy Systems (Czabaski, 2006), Contrast Functions for BSS (Garriz et al., 2005), Cryptanalysis of Transposition Ciphers (Song et al., 2008), Transmitter-Receiver Collaborative-Relay Beamforming (Zheng et al., 2011) and the Two-Dimensional Strip Packing Problem (Dereli & Sena Da, 2007) have been solved through Simulated Annealing inspired algorithms.

#### **2.5 Deterministic Finite Automata**

Formally, a Deterministic Finite Automata (DFA) is a quintuple defined as follows:

$$A = (Q, \Sigma, \delta, q\_0, F) \tag{11}$$

Where *Q* is the set of states, Σ is the input alphabet, *q*0 is the initial state and *F* is the set of accepting states. The set of transitions (*δ*) describes the behavior of the automata. Let *a* ∈ Σ and *q*, *r* ∈ *Q*; then the transition function is defined as follows:

$$
\delta(q, a) = r \tag{12}
$$

*Example 1.* Let *A* = (*Q*, Σ, *δ*, *q*0, *F*), where *Q* = {*q*0, *q*1, *q*2}, Σ = {0, 1}, *F* = {*q*1} and the set of transitions *δ* is defined in table 3. The representation of *A* using a state diagram can be derived as shown in figure 5. Notice that each state of the DFA has transitions for all the elements of Σ.


| State | 0 | 1 |
|-------|-----|-----|
| *q*0 | *q*2 | *q*0 |
| *q*1 | *q*1 | *q*1 |
| *q*2 | *q*2 | *q*1 |

Table 3. Set of transitions for the DFA of example 1

Fig. 5. Automata state diagram for example 1
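The DFA of example 1 can be encoded directly from table 3; a small Python sketch (the dictionary encoding is an illustrative choice):

```python
# delta from table 3: (state, symbol) -> next state
delta = {('q0', '0'): 'q2', ('q0', '1'): 'q0',
         ('q1', '0'): 'q1', ('q1', '1'): 'q1',
         ('q2', '0'): 'q2', ('q2', '1'): 'q1'}

def accepts(word, start='q0', accepting=frozenset({'q1'})):
    """Run the DFA over the input word and check the final state."""
    state = start
    for symbol in word:
        state = delta[(state, symbol)]
    return state in accepting

print(accepts('01'))  # q0 -0-> q2 -1-> q1, an accepting state: True
```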

#### **2.6 Metaheuristic Of Deterministic Swapping (MODS)**

Metaheuristic Of Deterministic Swapping (MODS) (Niño et al., 2011) is a local search strategy that explores the Feasible Solution Space of a Combinatorial Problem, supported by a data structure named Multi-Objective Deterministic Finite Automata (MDFA) (Niño, Ardila, Donoso & Jabba, 2010). A MDFA is a Deterministic Finite Automata that allows the representation of the feasible solution space of a Combinatorial Problem. Formally, a MDFA is defined as follows:


$$M = (Q, \Sigma, \delta, Q\_0, F(X)) \tag{13}$$

Where *Q* represents the set of states of the automata (the feasible solution space), Σ is the input alphabet that is used by *δ* (the transition function) to explore the feasible solution space of a combinatorial problem, *Q*<sup>0</sup> contains the initial set of states (initial solutions) and *F*(*X*) contains the objectives to optimize.

*Example 2.* MDFA for a Scheduling Parallel Machine Problem:

A company has three machines. It is necessary to schedule three processes in parallel, *P*1, *P*2 and *P*3. Each process has a duration of 5, 10 and 50 minutes, respectively. If the processes can be executed on any of the machines, in how many ways can the machines be assigned to the processes? Given the Bi-objective function in (14), what is the optimal Pareto Front?

$$F(X) = \left\{ f\_1(X) = \sum\_{i=1}^{3} i \cdot X\_i, \quad f\_2(X) = \sum\_{i=1}^{3} \frac{1}{i} \cdot X\_i \right\} \tag{14}$$

First of all, we need to build the MDFA. For doing this, we must define the states of the MDFA by setting the structure of the solution for each state. Therefore, if we state that *Xq* = (*Pk*, *Pi*, *Pj*) represents the solution for the state *q* (machine 1 executes the process *Pk*, machine 2 executes the process *Pi* and machine 3 executes the process *Pj*), then the solution arrays for each state will be *Xq*0 = (*P*1, *P*2, *P*3), *Xq*1 = (*P*1, *P*3, *P*2), *Xq*2 = (*P*2, *P*1, *P*3), *Xq*3 = (*P*2, *P*3, *P*1), *Xq*4 = (*P*3, *P*1, *P*2) and *Xq*5 = (*P*3, *P*2, *P*1). Now we have six states, *q*0, *q*1, *q*2, *q*3, *q*4 and *q*5, which represent the feasible solution space of the proposed Scheduling problem. The set of states for the MDFA of this problem can be seen in figure 6. Once the set of states is defined, the Input Alphabet (Σ) and the Transition Function (*δ*) must be defined.

Fig. 6. Set of states for the MDFA of example 2

It is very important to take into account that the combination of both allows us to perturb the solutions in all possible manners; in other words, we can change state using the combination of Σ and *δ*. Obviously, in doing this, we avoid unfeasible solutions. Regarding the proposed problem, we propose the set Σ as follows:


$$
\Sigma = \{ (P\_1, P\_2), (P\_1, P\_3), (P\_2, P\_3) \} \tag{15}
$$

Hence, it follows that *δ*(*q*0,(*P*1, *P*2)) = *q*2, *δ*(*q*0,(*P*1, *P*3)) = *q*5, ... , *δ*(*q*5,(*P*2, *P*3)) = *q*3. At this point the transitions have been defined, therefore the MDFA can be seen in figure 7.

Finally, the solution of each state is replaced in (14). The results can be seen in table 4 and the optimal Pareto Front is shown in figure 8.


| State *qi* | Assignments (*M*1, *M*2, *M*3) | Times (*M*1, *M*2, *M*3) | *f*1(*X*) | *f*2(*X*) |
|------------|-------------------------------|--------------------------|-----------|-----------|
| *q*0 | *P*1, *P*2, *P*3 | 10, 50, 5 | 125 | 36.66 |
| *q*1 | *P*1, *P*3, *P*2 | 10, 5, 50 | 170 | 29.16 |
| *q*2 | *P*2, *P*1, *P*3 | 50, 10, 5 | 85 | 56.66 |
| *q*3 | *P*2, *P*3, *P*1 | 50, 5, 10 | 90 | 55.83 |
| *q*4 | *P*3, *P*1, *P*2 | 5, 10, 50 | 175 | 26.66 |
| *q*5 | *P*3, *P*2, *P*1 | 5, 50, 10 | 135 | 33.33 |

Table 4. Values of *F*(*X*) for the states of example 2

Fig. 7. MDFA for example 2, Parallel execution of processes
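Table 4 can be reproduced by brute-force enumeration of the six states; a Python sketch, assuming the per-process execution times exactly as they appear in the table:

```python
from itertools import permutations

times = {'P1': 10, 'P2': 50, 'P3': 5}  # times as used in table 4

# permutations() yields the assignments in the same order q0..q5
for k, assign in enumerate(permutations(('P1', 'P2', 'P3'))):
    X = [times[p] for p in assign]              # X_i: time on machine i
    f1 = sum((i + 1) * X[i] for i in range(3))  # f1(X) = sum of i * X_i
    f2 = sum(X[i] / (i + 1) for i in range(3))  # f2(X) = sum of (1/i) * X_i
    print(f'q{k}', assign, f1, round(f2, 2))
```

Applying the dominance check of section 2.1 to the resulting (*f*1, *f*2) pairs then yields the Pareto Front shown in figure 8.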

As can be seen in figure 7, the feasible solution space for this problem was described using a MDFA. Also, unfeasible solutions are not allowed because of the definition of Σ. Nevertheless, the general problem was not solved; only a particular case of three variables (machines) was addressed. For this reason, it was easy to draw the entire MDFA. However, problems like this are intractable for a large number of variables; in other words, when the number of variables grows, the feasible solution space grows exponentially. Therefore, it is not a good idea to draw the entire feasible solution space and pick the best solutions. Thus, what should we do in order to solve any combinatorial problem, without taking its size into account, using a MDFA? Looking for an answer to this question, MODS was proposed.

Fig. 8. Pareto Front for the MDFA of example 2, Parallel execution of processes

MODS explores the feasible solution space represented through a MDFA using a search direction given by an elitist set of solutions (*Q*<sup>∗</sup>). The elitist solutions are states that, when visited, dominated at least one solution of an element in *Q<sup>φ</sup>*. *Q<sup>φ</sup>* contains all the states with non-dominated solutions. Due to this, it can be inferred that the elements of *Q*<sup>∗</sup> are contained in *Q<sup>φ</sup>*; for this reason it holds that:

$$Q\_{\phi} = Q\_{\phi} \cup Q\_{\*} \tag{16}$$

Lastly, the template algorithm of MODS is defined as follows:

*Step 1.* Create the initial set of solutions *Q*<sup>0</sup> using a heuristic relative to the problem to solve.

*Step 2.* Set *Q<sup>φ</sup>* as *Q*<sup>0</sup> and *Q*<sup>∗</sup> as *φ*.


*Step 3.* Select a random state *q* ∈ *Q<sup>φ</sup>* or *q* ∈ *Q*<sup>∗</sup>.

*Step 4.* Explore the neighborhood of *q* using *δ* and Σ. Add to *Q<sup>φ</sup>* the solutions found that are not dominated by elements of *Q<sup>φ</sup>*. In addition, add to *Q*<sup>∗</sup> those solutions found that dominate at least one element from *Qφ*.

*Step 5.* Check stop condition, go to 3.
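For concreteness, the following is a minimal sketch of this template in Python. The solution encoding, the `neighbors` function (standing in for the exploration via *δ* and Σ) and the objective vector `F` are placeholders to be supplied by the concrete problem; this is an illustration of steps 1–5, not the authors' implementation:

```python
import random

def dominates(a, b):
    """Pareto dominance for minimization: a dominates b if a is no worse in
    every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def mods(Q0, neighbors, F, iterations=1000):
    """Skeleton of the MODS template; Q0, neighbors and F are problem-specific."""
    Q_phi = list(Q0)                 # step 2: Q_phi starts as Q0
    Q_star = []                      # step 2: Q* starts empty
    for _ in range(iterations):      # step 5: stop condition
        q = random.choice(Q_phi + Q_star)        # step 3
        for r in neighbors(q):                   # step 4: explore via delta/Sigma
            if not any(dominates(F(s), F(r)) for s in Q_phi):
                Q_phi.append(r)                  # r is not dominated by Q_phi
            if any(dominates(F(r), F(s)) for s in Q_phi):
                Q_star.append(r)                 # r dominates someone: elitist
        # keep only non-dominated solutions in both sets
        Q_phi = [s for s in Q_phi if not any(dominates(F(t), F(s)) for t in Q_phi)]
        Q_star = [s for s in Q_star if s in Q_phi]
    return Q_phi
```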

### **3. Simulated Annealing Metaheuristic Of Deterministic Swapping (SAMODS)**

Simulated Annealing & Metaheuristic Of Deterministic Swapping (Niño, 2012) (SAMODS) is a hybrid local search strategy based on the MODS theory and the Simulated Annealing algorithm for the Multiobjective Optimization of combinatorial problems. Its main purpose is to optimize a combinatorial problem using a Search Direction and an Angle Improvement. SAMODS is based on the following automaton:

$$M = (Q, Q\_0, P(q), F(X), A(n)) \tag{17}$$

Like MODS, *Q*<sup>0</sup> is the set of initial solutions, *Q* is the feasible solution space and *F*(*X*) are the functions of the combinatorial problem. *P*(*q*) and *A*(*n*) are defined as follows:

*P*(*q*) is the *Permutation Function*; formally, it is defined as follows:

$$P(q): \mathbb{Q} \to \mathbb{Q} \tag{18}$$

*P* receives a solution *q* ∈ *Q* and perturbs it, returning a new solution *ri* ∈ *Q*. The perturbation is done based on the representation of the solutions; an example of some perturbations of this kind can be seen in figure 9.


Fig. 9. Different representation and perturbation of solutions.
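For concreteness, here is a minimal sketch of two such perturbations, assuming a hypothetical permutation encoding (a list of integers or labels). Neither is prescribed by the chapter; they are simply common choices that respect the representation:

```python
import random

def perturb_swap(q):
    """P(q): return a copy of q with two randomly chosen positions swapped."""
    r = list(q)
    i, j = random.sample(range(len(r)), 2)   # requires len(q) >= 2
    r[i], r[j] = r[j], r[i]
    return r

def perturb_insertion(q):
    """P(q): remove a random element and reinsert it at a random position."""
    r = list(q)
    x = r.pop(random.randrange(len(r)))
    r.insert(random.randrange(len(r) + 1), x)
    return r

# e.g. perturb_swap([0, 1, 2, 3]) might return [0, 3, 2, 1]
```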

*A*(*n*) is the *Weight Function*. Formally, it is defined as follows:

$$A(n) : \mathbb{N} \to \mathbb{R}^n \tag{19}$$

Where *n* is the number of objectives of the problem.

Function *A* receives a natural number as parameter and returns a vector of weights. The weight values are randomly generated with a uniform distribution. They represent the weight to assign to each function of the combinatorial problem. The weight values returned by the function fulfill the following constraint:

$$\sum\_{i=1}^{n} \alpha\_i = 1, 0 \le \alpha\_i \le 1 \tag{20}$$

Where *α<sup>i</sup>* is the weight assigned to function *i*. Table 5 shows some vectors randomly generated by *A*(*n*).


| Input Parameter (Function) | Vector of Weights |
|---|---|
| *A*(2) | {0.6, 0.4} |
| *A*(3) | {0.2, 0.4, 0.4} |
| *A*(4) | {0.3, 0.8, 0.1, 0.0} |

Table 5. Some weight vectors generated by A(n)
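A direct way to realize *A*(*n*) is to draw *n* uniform variates and normalize them so that constraint (20) holds. The sketch below is one possible implementation; the normalization step is our assumption, since the chapter only requires uniform random weights summing to 1:

```python
import random

def A(n):
    """Weight function A(n): n random weights fulfilling eq. (20)."""
    raw = [random.random() for _ in range(n)]
    total = sum(raw)
    return [w / total for w in raw]

# e.g. A(3) might return [0.21, 0.44, 0.35]
```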

But, what is the importance of those weights? The weights, in an implicit manner, allow setting the angle direction of the solutions. The angle direction is the course followed by the solutions for optimizing F(X). Hence, when the weight values are changed, the angle of optimization changes and a new search direction is obtained. For instance, different search directions for different weight values are shown in figure 10 for a Bi-objective combinatorial problem. Due to this, (6) is rewritten as follows:

$$F(X) = \sum\_{i=1}^{n} \alpha\_i \cdot f\_i(X) \tag{21}$$

Where *n* is the number of objectives of the problem and *α<sup>i</sup>* is the weight assigned to the function *i*. The weights fulfill the constraint established in (20).

Fig. 10. Different angles given by different weights for a Bi-objective Problem.

The SAMODS main idea is simple: it takes advantage of the search directions given by MODS and it proposes an angle direction given by the function *A*(*n*). Thus, there are two directions; the first helps in the convergence of the Pareto Front and the second helps the solutions to find neighborhoods where *F*(*X*) is optimized. Due to this, the SAMODS template is defined as follows:

*Step 1.* Setting sets. Set *Q*<sup>0</sup> as the set of Initial Solutions. Set *Q<sup>φ</sup>* and *Q*<sup>∗</sup> as *Q*0.



*Step 2.* Setting parameters. Set *T* as the initial temperature, *n* as the number of objectives of the problem and *ρ* as the cooling factor.

*Step 3.* Setting Angle. If *T* is equal to 0 then go to 8; else set *Ti*+<sup>1</sup> = *ρ* × *Ti*, randomly select *s* ∈ *Qφ*, set *W* = *A*(*n*) = {*w*1, *w*2, ··· , *wn*} and go to step 4.

*Step 4.* Perturbing Solutions. Set *s*′ = *P*(*s*), and add to *Q<sup>φ</sup>* and *Q*<sup>∗</sup> according to the following rules:

$$Q\_{\Phi} = Q\_{\Phi} \cup \{s'\} \Leftrightarrow (\nexists r \in Q\_{\Phi})(r \ \text{dominates} \ s') \tag{22}$$

$$Q\_\* = Q\_\* \cup \{s'\} \Leftrightarrow (\exists r \in Q\_\*)(s' \ \text{dominates} \ r) \tag{23}$$

If *Q<sup>φ</sup>* has at least one element that dominates *s*′, go to step 5; otherwise go to step 7.

*Step 5.* Guess with dominated solutions. Randomly generate a number *n* ∈ [0, 1]. Set *z* as follows:

$$z = e^{-(\gamma/T\_i)} \tag{24}$$

Where *Ti* is the temperature value at moment *i* and *γ* is defined as follows:

$$\gamma = \sum\_{i=1}^{n} w\_i \cdot f\_i(s\_X) - \sum\_{i=1}^{n} w\_i \cdot f\_i(s'\_X) \tag{25}$$

Where *sX* is the vector *X* of solution *s*, *s*′*<sup>X</sup>* is the vector *X* of solution *s*′, *wi* is the weight assigned to the function *i* and *n* is the number of objectives of the problem. If *n* < *z* then set *s* as *s*′ and go to step 4; else go to step 6.

*Step 6.* Change the search direction. Randomly select a solution *s* ∈ *Q*<sup>∗</sup> and go to step 4.

*Step 7.* Removing dominated solutions. Remove the dominated solutions from each set (*Q*<sup>∗</sup> and *Qφ*). Go to step 3.

*Step 8.* Finishing. *Q<sup>φ</sup>* has the non-dominated solutions.

As can be seen in figure 11, like MODS, SAMODS removes the dominated solutions when the new solution found is non-dominated. Besides, if the new solution found dominates at least one element of the solution set (*Qφ*), it is added to the elitist set (*Q*∗), which works as a search direction for the Pareto Front. So far, SAMODS may sound like a simple local search strategy, but it is not: when a new solution found is dominated, SAMODS tries to improve it using guessing. Guessing is done by accepting dominated solutions as good solutions. As in Simulated Annealing inspired algorithms, the dominated solutions are accepted under the Boltzmann distribution probability, assigning weights to the objectives of the problem. It is probable that perturbing a dominated solution yields a non-dominated one, as can be seen in figure 12; due to this, local optima are avoided. When the temperature is low, the bad solutions are avoided because the *z* value is low, so SAMODS accepts only non-dominated solutions. However, by that time, *Q<sup>φ</sup>* will be led by *Q*∗.

Fig. 11. Behavior of SAMODS when the new solution found is not dominated. Once a new solution found is non-dominated, it is added to the elitism set *Q*<sup>∗</sup> and the dominated solutions from *Q<sup>φ</sup>* are removed.

Fig. 12. Behavior of SAMODS when the new solution found is dominated. In this case, guessing gives a new solution non-dominated.
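To illustrate the guessing step, here is a small sketch of the acceptance test of equations (24) and (25) as stated in step 5; `F`, `weights` and the temperature `T_i` are placeholders for the objective vector, the output of *A*(*n*) and the schedule of step 3:

```python
import math
import random

def weighted_sum(F, s, weights):
    """Linear combination of the objectives, as in eq. (21)."""
    return sum(w * f for w, f in zip(weights, F(s)))

def guess_accept(F, s, s_new, weights, T_i):
    """Step 5: Boltzmann acceptance of a dominated solution.

    gamma follows eq. (25) and z follows eq. (24); the dominated
    solution s_new replaces s when a uniform draw n is below z.
    """
    gamma = weighted_sum(F, s, weights) - weighted_sum(F, s_new, weights)
    z = math.exp(-gamma / T_i)
    n = random.random()
    return n < z
```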

### **4. Genetic Simulated Annealing Metaheuristic Of Deterministic Swapping (SAGAMODS)**

Simulated Annealing, Genetic Algorithm & Metaheuristic Of Deterministic Swapping (Niño, 2012) (SAGAMODS) is a hybrid search strategy based on the Automata Theory, Simulated Annealing and Genetic Algorithms. SAGAMODS is an extension of the SAMODS theory. It comes up as a result of the following question: could SAMODS avoid local optima quickly? Although SAMODS avoids local optima by guessing, it can take a long time accepting dominated solutions before finding non-dominated ones. Thus, the answer to this question is based on the Evolutionary Theory: SAGAMODS introduces a crossover step before the SAMODS template is executed. Due to this, SAGAMODS supports SAMODS in exploring distant regions of the solution space.

Formally, SAGAMODS is based on the following automaton:

$$M = (Q, Q\_S, C(q, r, k), F(X)) \tag{26}$$

Where *Q* is the feasible solution space, *QS* is the set of initial solutions and *F*(*X*) are the objectives of the problem. *C*(*q*,*r*, *k*), the *Cross Function*, is formally defined as follows:

$$C(q, r, k) : \mathbb{Q} \times \mathbb{Q} \times \mathbb{N} \to \mathbb{Q} \tag{27}$$

Where *q*,*r* ∈ *Q* and *k* ∈ **N**. *q* and *r* are named parent solutions and *k* is the cross point. The main idea of this function is to cross two solutions at the same point and return a new solution. For instance, two solutions of 4 variables are crossed in figure 13. Obviously, the crossover is made according to the representation of the solutions.

Fig. 13. Crossover between two solutions. Solutions of the states *qk* and *qj* are crossed in order to get state *qi*

Lastly, the SAGAMODS template is defined as follows:


*Step 1.* Setting parameters. Set *QS* as the solution set, *x* as the number of solutions to cross for each iteration.

*Step 2. Selection.* Set *QC* (crossover set) as a selection of *x* solutions from *QS*, *QM* (mutation set) as *φ* and *k* as a random value.

*Step 3. Crossover.* For each *si*,*si*<sup>+</sup><sup>1</sup> ∈ *QC*/1 ≤ *i <* |*QC*|:

$$Q\_M = Q\_M \cup \{ C(s\_i, s\_{i+1}, k) \} \tag{28}$$

*Step 4. Mutation.* Set *Q*<sup>0</sup> as *QM*. Execute SAMODS as a local search strategy.

*Step 5.* Check stop conditions. Go to 2.
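To make the cross function concrete, here is a minimal sketch of a one-point crossover. The plain cut-and-splice corresponds to the operation of figure 13; the permutation-safe variant is our own addition for encodings, such as TSP tours, where duplicated elements must be repaired:

```python
def cross(q, r, k):
    """C(q, r, k): head of q up to the cross point k, followed by the tail of r."""
    return q[:k] + r[k:]

def cross_permutation(q, r, k):
    """Variant for permutation encodings: keep q[:k], then append the missing
    elements in the order they appear in r (order-crossover style repair)."""
    head = q[:k]
    return head + [x for x in r if x not in head]

# e.g. cross_permutation([0, 1, 2, 3], [3, 2, 1, 0], 2) returns [0, 1, 3, 2]
```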

### **5. Evolutionary Metaheuristic Of Deterministic Swapping (EMODS)**

Evolutionary Metaheuristic of Deterministic Swapping (EMODS) is a novel framework for the Multiobjective Optimization of Combinatorial Problems. Its framework is based on the MODS template, therefore its steps are the same: create Initial Solutions, Improve the Solutions (Optional) and Execute the Core Algorithm. Unlike SAMODS and SAGAMODS, EMODS avoids the slow convergence of the Simulated Annealing method. EMODS explores different regions of the feasible solution space and searches for non-dominated solutions using Tabu Search.

The Core Algorithm is defined as follows:

*Step 1.* Set *θ* as the maximum number of iterations, *β* as the maximum number of states selected in each iteration, *ρ* as the maximum number of perturbations by state and *Q<sup>φ</sup>* as *Q*<sup>0</sup>.

*Step 2. Selection.* Randomly select a state *q* ∈ *Q<sup>φ</sup>* or *q* ∈ *Q*<sup>∗</sup>.

*Step 3. Mutation - Tabu Search.* Set *N* as the new solutions found as a result of perturbing *q*. Add to *Q<sup>φ</sup>* and *Q*<sup>∗</sup> according to the following equations:

$$\left(Q\_{\Phi} = Q\_{\Phi} \cup \{q\}\right) \Longleftrightarrow \left(\nexists r \in Q\_{\Phi}/q \ \text{is dominated by} \ r\right) \tag{29}$$

$$\left(Q\_\* = Q\_\* \cup \{q\}\right) \Longleftrightarrow \left(\exists r \in Q\_{\Phi}/r \ \text{is dominated by} \ q\right) \tag{30}$$

Remove the states with dominated solutions for each set.

*Step 4. Crossover.* Randomly select states from *Q<sup>φ</sup>* and *Q*∗. Generate a random cross point.

*Step 5.* Check stop condition, go to 3.

Steps 2 and 3 support the algorithm in removing dominated solutions from the set of solutions *Q<sup>φ</sup>*, as can be seen in figure 3. However, one of the most important steps in the EMODS algorithm is step 4. There, similarly to SAGAMODS, the algorithm applies an Evolutionary Strategy based on the crossover step of Genetic Algorithms to avoid local optima. Because the crossover is not always made at the same point (the k-value is randomly generated for each state analyzed), the solutions found are diverse, which avoids local optima. An overview of EMODS behavior for a Tri-objective Combinatorial Optimization problem can be seen in figure 14.

Fig. 14. An overview of EMODS behavior for a Tri-objective Problem.
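The following is a minimal sketch of the mutation and archive bookkeeping of step 3, assuming solutions are encoded as sequences and a perturbation function like the one of section 3 is supplied. The short-term tabu memory is our simplification of the Tabu Search component; the archive update follows equations (29) and (30):

```python
from collections import deque

def dominates(a, b):
    """Pareto dominance for minimization."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archives(s, Q_phi, Q_star, F):
    """Archive update of step 3, eqs. (29) and (30)."""
    fs = F(s)
    if not any(dominates(F(r), fs) for r in Q_phi):   # eq. (29): no r dominates s
        Q_phi.append(s)
    if any(dominates(fs, F(r)) for r in Q_phi):       # eq. (30): s dominates some r
        Q_star.append(s)
    # remove the states with dominated solutions from each set
    Q_phi[:] = [a for a in Q_phi if not any(dominates(F(b), F(a)) for b in Q_phi)]
    Q_star[:] = [a for a in Q_star if not any(dominates(F(b), F(a)) for b in Q_star)]

def tabu_mutation(q, perturb, F, Q_phi, Q_star, rho, tabu_size=64):
    """Mutation - Tabu Search: explore rho perturbations of q, skipping
    solutions kept in a short-term tabu memory."""
    tabu = deque(maxlen=tabu_size)
    for _ in range(rho):
        s = perturb(q)
        key = tuple(s)
        if key in tabu:
            continue          # recently visited: tabu
        tabu.append(key)
        update_archives(s, Q_phi, Q_star, F)
```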

### **6. Experimental analysis**

#### **6.1 Experimental settings**

The algorithms were tested using well-known instances of the Multi-objective Traveling Salesman Problem taken from TSPLIB (Heidelberg, n.d.). The instances worked are shown in table 6 and the input parameters for the algorithms are shown in table 7. The tests were run on a Dual Core computer with 2 GB of RAM. The optimal solutions were constructed from the best non-dominated solutions of all compared algorithms for each instance worked.

| Combinatorial Problem | Instances | Number of Objectives |
|---|---|---|
| TSP | KROAB100, KROAC100, KROAD100, KROAE100, KROBC100, KROBD100, KROBE100, KROCD100, KROCE100, KRODE100 | 2 |
| TSP | KROABC100, KROABD100, KROABE100, KROACD100, KROACE100, KROADE100, KROBCD100, KROBCE100, KROBDE100, KROCDE100 | 3 |
| TSP | KROABCD100, KROABCE100, KROABDE100, KROACDE100, KROBCDE100 | 4 |
| TSP | KROABCDE100 | 5 |

Table 6. Instances worked for testing the proposed algorithms.

| Algorithm | Max. Iterations | Max. Perturbations | Initial Temperature | Cooler Value | Crossover Rate |
|---|---|---|---|---|---|
| MODS | 100 | 80 | NA | NA | NA |
| SAMODS | 100 | 80 | 1000 | 0.95 | NA |
| SAGAMODS | 100 | 80 | 1000 | 0.95 | 0.6 |
| EMODS | 100 | 80 | NA | NA | 0.6 |

Table 7. Parameters setting for each compared algorithm.

#### **6.2 Performance metrics**

There are metrics that allow measuring the quality of a set of optimal solutions and the performance of an Algorithm (Corne & Knowles, 2003). Most of them use two Pareto Fronts. The first one is *PFtrue* and it refers to the real optimal solutions of a combinatorial problem. The second is *PFknow* and it represents the optimal solutions found by an algorithm.

*Generation of Non-dominated Vectors (GNDV)* It measures the number of non-dominated solutions generated by an algorithm.

$$GNDV = |PF\_{known}|\tag{31}$$

A higher value for this metric is desired. *Rate of Generation of Non-dominated Vectors (RGNDV)* This metric measures the proportion of the non-dominated solutions (31) generated by an algorithm relative to the Real Solutions.


$$RGNDV = \left(\frac{GNDV}{|PF\_{true}|}\right) \cdot 100\% \tag{32}$$

A value closer to 100% for this metric is desired. *Real Generation of Non-dominated Vectors (ReGNDV)* This metric measures the number of Real Solutions found by an algorithm.

$$\text{ReGNDV} = |\{y | y \in PF\_{known} \land y \in PF\_{true}\}|\tag{33}$$


A value closer to |*PFtrue*| for this metric is desired.

*Generational Distance (GD)* This metric measures the distance between *PFknow* and *PFtrue*. It allows determining the error rate in terms of the distance of a set of solutions relative to the real solutions.

$$GD = \left(\frac{1}{|PF\_{known}|}\right) \cdot \left(\sum\_{i=1}^{|PF\_{known}|} d\_i\right)^{(1/p)}\tag{34}$$

Where *di* is the smallest Euclidean distance between the solution *i* of *PFknow* and the solutions of *PFtrue*. *p* is the dimension of the combinatorial problem, that is, the number of objective functions. *Inverse Generational Distance (IGD)* This is another distance measurement between *PFknow* and *PFtrue*:

$$IGD = \left(\frac{1}{|PF\_{true}|}\right) \cdot \left(\sum\_{i=1}^{|PF\_{known}|} d\_i\right) \tag{35}$$

Where *di* is the smallest Euclidean distance between the solution *i* of *PFknow* and the solutions of *PFtrue*. *Spacing (S)* It measures the range variance of neighboring solutions in *PFknow*:

$$S = \left(\frac{1}{|PF\_{known}| - 1}\right)^2 \cdot \left(\sum\_{i=1}^{|PF\_{known}|} \left(\overline{d} - d\_i\right)^2\right)^{(1/p)}\tag{36}$$

Where *di* is the smallest Euclidean distance between the solution *i* of *PFknow* and the rest of the solutions of *PFknow*, *d̄* is the mean of all *di* and *p* is the dimension of the combinatorial problem.

A value closer to 0 for this metric is desired. A value of 0 means that all the solutions are equidistant.

*Error Rate (ε)* It estimates the error rate with respect to the precision of the Real Solutions found by the algorithm (33), as follows:

$$\varepsilon = \left(1 - \frac{ReGNDV}{|PF\_{true}|}\right) \cdot 100\% \tag{37}$$

A value of 0% in this metric means that the values of the Real Pareto Front are constructed from the values of the Algorithm Pareto Front.

Lastly, notice that no metric makes sense by itself. It is necessary to rely on the other metrics for a real judgment about the quality of the solutions. For instance, if a Pareto Front has a high value of *GNDV* but a low value of *ReGNDV*, then the solutions have poor quality.
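The following sketch computes several of these metrics exactly as defined above (equations (31)–(34) and (37)). Fronts are assumed to be lists of objective vectors, with `pf_known` the front found by an algorithm and `pf_true` the reference front:

```python
import numpy as np

def gndv(pf_known):
    """Eq. (31): number of non-dominated vectors found by the algorithm."""
    return len(pf_known)

def rgndv(pf_known, pf_true):
    """Eq. (32): GNDV as a percentage of |PF_true|."""
    return 100.0 * len(pf_known) / len(pf_true)

def regndv(pf_known, pf_true):
    """Eq. (33): solutions of PF_known that also belong to PF_true."""
    true_set = {tuple(y) for y in pf_true}
    return sum(1 for y in pf_known if tuple(y) in true_set)

def gd(pf_known, pf_true, p):
    """Eq. (34): generational distance, with d_i the smallest Euclidean
    distance from solution i of PF_known to PF_true."""
    known = np.asarray(pf_known, dtype=float)
    true = np.asarray(pf_true, dtype=float)
    d = np.array([np.linalg.norm(true - y, axis=1).min() for y in known])
    return (1.0 / len(known)) * d.sum() ** (1.0 / p)

def error_rate(pf_known, pf_true):
    """Eq. (37): percentage of PF_true not contributed by the algorithm."""
    return 100.0 * (1.0 - regndv(pf_known, pf_true) / len(pf_true))
```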

#### **6.3 Experimental results**

The tests made with Bi-objective, Tri-objective, Quad-objective and Quin-objective TSP instances are shown in tables 8, 9, 10 and 11 respectively. The averages of the measurements are shown in table 12. Furthermore, a graphical comparison for the bi-objective and tri-objective instances worked is shown in figures 15 and 16 respectively.

#### **6.4 Analysis**


It can be concluded that, in the case of two and three objectives, metrics such as *S*, *IGD*, *GD* and *ε* determine the best algorithm. In these cases, the measurements of the metrics are similar for SAMODS and SAGAMODS. On the other hand, MODS has the poorest-quality measurements for the metrics used and EMODS has the best-quality measurements for the same metrics.

Lastly, why are the results of the metrics similar for the quint-objective instance? In this case, all the solutions of each solution set are in the optimal set. The answer to this question is based on the angle improvement. MODS, as a local search strategy, explores a part of the feasible solution space using its search direction (*Q*∗). However, SAMODS and SAGAMODS, in addition, use a search direction given by the change of the search angle. While SAMODS was looking in a part of the feasible solution space, SAGAMODS was doing the same in another. The same reasoning applies to EMODS. This is possible because of the large size of the feasible solution space (5); the probability of different algorithms exploring the same part of the solution space is low.

| Instance | Algorithm | GNDV | RGNDV | ReGNDV | ReGNDV/GNDV | S | GD | IGD | ε |
|---|---|---|---|---|---|---|---|---|---|
| AB | MODS | 289 | 0.0189 | 0 | 0% | 0.0193 | 21.2731 | 2473.4576 | 100% |
| AB | SAMODS | 7787 | 0.5096 | 1247 | 16.01% | 0.001 | 0.2404 | 229.2593 | 91.84% |
| AB | SAGAMODS | 8479 | 0.5549 | 2974 | 35.07% | 0.0007 | 0.1837 | 158.8229 | 80.54% |
| AB | EMODS | 26125 | 1.7096 | 11060 | 42.33% | 0.0002 | 0.0412 | 75.8814 | 27.62% |
| AC | MODS | 217 | 0.0155 | 0 | 0% | 0.034 | 28.575 | 2751.3232 | 100% |
| AC | SAMODS | 6885 | 0.4927 | 2303 | 33.45% | 0.0008 | 0.2297 | 179.0354 | 83.52% |
| AC | SAGAMODS | 7023 | 0.5025 | 2431 | 34.61% | 0.0008 | 0.2628 | 243.7536 | 82.6% |
| AC | EMODS | 20990 | 1.502 | 9241 | 44.03% | 0.0002 | 0.0617 | 119.8825 | 33.87% |
| AD | MODS | 281 | 0.0187 | 0 | 0% | 0.0139 | 20.4429 | 2198.883 | 100% |
| AD | SAMODS | 6383 | 0.4253 | 1464 | 22.94% | 0.0029 | 0.3188 | 275.9153 | 90.24% |
| AD | SAGAMODS | 6289 | 0.4191 | 764 | 12.15% | 0.0016 | 0.2835 | 211.7935 | 94.91% |
| AD | EMODS | 17195 | 1.1458 | 12779 | 74.32% | 0.0005 | 0.0521 | 53.5314 | 14.85% |
| AE | MODS | 283 | 0.0189 | 0 | 0% | 0.0533 | 21.3238 | 2433.63 | 100% |
| AE | SAMODS | 5693 | 0.3804 | 1433 | 25.17% | 0.0016 | 0.4308 | 402.0483 | 90.42% |
| AE | SAGAMODS | 6440 | 0.4304 | 1515 | 23.52% | 0.0013 | 0.3906 | 422.7859 | 89.88% |
| AE | EMODS | 20695 | 1.383 | 12016 | 58.06% | 0.0002 | 0.066 | 124.7197 | 19.7% |
| BC | MODS | 298 | 0.0212 | 0 | 0% | 0.0158 | 19.537 | 2411.8365 | 100% |
| BC | SAMODS | 6858 | 0.488 | 789 | 11.5% | 0.0024 | 0.3597 | 433.0882 | 94.39% |
| BC | SAGAMODS | 6919 | 0.4923 | 2201 | 31.81% | 0.0015 | 0.2378 | 192.5601 | 84.34% |
| BC | EMODS | 21902 | 1.5584 | 11064 | 50.52% | 0.0003 | 0.0582 | 115.5673 | 21.28% |
| BD | MODS | 241 | 0.0198 | 0 | 0% | 0.0239 | 21.7441 | 2251.0972 | 100% |
| BD | SAMODS | 6844 | 0.561 | 2054 | 30.01% | 0.0021 | 0.2542 | 248.0971 | 83.16% |
| BD | SAGAMODS | 5934 | 0.4864 | 1971 | 33.22% | 0.0018 | 0.2818 | 229.2093 | 83.84% |
| BD | EMODS | 19420 | 1.5919 | 8174 | 42.09% | 0.0003 | 0.0432 | 57.6434 | 32.99% |
| BE | MODS | 280 | 0.0259 | 0 | 0% | 0.0309 | 19.0193 | 2622.5243 | 100% |
| BE | SAMODS | 6260 | 0.5789 | 952 | 15.21% | 0.001 | 0.2601 | 245.1433 | 91.2% |
| BE | SAGAMODS | 5802 | 0.5365 | 1622 | 27.96% | 0.0025 | 0.3912 | 476.4848 | 85% |
| BE | EMODS | 17362 | 1.6055 | 8240 | 47.46% | 0.0004 | 0.0631 | 111.0209 | 23.8% |
| CD | MODS | 286 | 0.022 | 0 | 0% | 0.0184 | 18.0035 | 2040.9722 | 100% |
| CD | SAMODS | 6171 | 0.4751 | 1912 | 30.98% | 0.0007 | 0.2588 | 196.3394 | 85.28% |
| CD | SAGAMODS | 6301 | 0.4851 | 994 | 15.78% | 0.0014 | 0.2852 | 248.5855 | 92.35% |
| CD | EMODS | 18628 | 1.434 | 10084 | 54.13% | 0.0002 | 0.0426 | 48.3785 | 22.37% |
| CE | MODS | 224 | 0.0187 | 0 | 0% | 0.0285 | 23.1312 | 2245.6542 | 100% |
| CE | SAMODS | 5881 | 0.4919 | 946 | 16.09% | 0.0017 | 0.2894 | 242.2535 | 92.09% |
| CE | SAGAMODS | 4613 | 0.3859 | 939 | 20.36% | 0.0028 | 0.481 | 411.854 | 92.15% |
| CE | EMODS | 15211 | 1.2724 | 10070 | 66.2% | 0.0003 | 0.0339 | 22.2645 | 15.77% |
| DE | MODS | 228 | 0.0147 | 0 | 0% | 0.0477 | 23.6222 | 1864.9602 | 100% |
| DE | SAMODS | 6110 | 0.3928 | 1157 | 18.94% | 0.0022 | 0.2942 | 207.7947 | 92.56% |
| DE | SAGAMODS | 7745 | 0.4979 | 407 | 5.26% | 0.0012 | 0.2644 | 269.6204 | 97.38% |
| DE | EMODS | 20058 | 1.2896 | 13990 | 69.75% | 0.0005 | 0.0304 | 23.8829 | 10.06% |

Table 8. Measuring algorithms performance for Bi-objective instances of the Traveling Salesman Problem with Multi-objective optimization metrics.

Fig. 15. Graphical comparison between MODS, SAMODS, SAGAMODS and EMODS for Bi-objective TSP instances.

Fig. 16. Graphical comparison between MODS, SAMODS, SAGAMODS and EMODS for Tri-objective TSP instances.


| Instance | Algorithm | GNDV | RGNDV | ReGNDV | ReGNDV/GNDV | S | GD | IGD | ε |
|---|---|---|---|---|---|---|---|---|---|
| ABC | MODS | 2115 | 0.0307 | 83 | 3.92% | 0.1567 | 0.2819 | 3075.4309 | 99.88% |
| ABC | SAMODS | 12768 | 0.1853 | 227 | 1.78% | 0.0722 | 0.0421 | 2256.4593 | 99.67% |
| ABC | SAGAMODS | 12523 | 0.1818 | 328 | 2.62% | 0.073 | 0.0427 | 2220.5614 | 99.52% |
| ABC | EMODS | 70474 | 1.023 | 68254 | 96.85% | 0.0477 | 0.001 | 5.6388 | 0.93% |
| ABD | MODS | 1951 | 0.0292 | 74 | 3.79% | 0.1524 | 0.305 | 3153.9212 | 99.89% |
| ABD | SAMODS | 12094 | 0.1811 | 317 | 2.62% | 0.0746 | 0.0441 | 2270.3475 | 99.53% |
| ABD | SAGAMODS | 12132 | 0.1817 | 250 | 2.06% | 0.0726 | 0.0441 | 2286.7374 | 99.63% |
| ABD | EMODS | 68001 | 1.0184 | 66133 | 97.25% | 0.0471 | 0.0011 | 6.3212 | 0.96% |
| ABE | MODS | 1931 | 0.0281 | 63 | 3.26% | 0.1496 | 0.315 | 3278.1554 | 99.91% |
| ABE | SAMODS | 12461 | 0.1815 | 373 | 2.99% | 0.0743 | 0.0438 | 2371.7641 | 99.46% |
| ABE | SAGAMODS | 12391 | 0.1805 | 370 | 2.99% | 0.0745 | 0.0436 | 2304.3277 | 99.46% |
| ABE | EMODS | 70411 | 1.0257 | 67839 | 96.35% | 0.0474 | 0.0012 | 8.0639 | 1.17% |
| ACD | MODS | 2031 | 0.0305 | 66 | 3.25% | 0.1425 | 0.2945 | 3213.7378 | 99.9% |
| ACD | SAMODS | 12004 | 0.1802 | 241 | 2.01% | 0.0734 | 0.0444 | 2277.0343 | 99.64% |
| ACD | SAGAMODS | 12123 | 0.182 | 206 | 1.7% | 0.0735 | 0.0442 | 2310.3683 | 99.69% |
| ACD | EMODS | 67451 | 1.0127 | 66090 | 97.98% | 0.0468 | 0.001 | 4.5012 | 0.77% |
| ACE | MODS | 1950 | 0.0306 | 57 | 2.92% | 0.1628 | 0.3024 | 3215.8357 | 99.91% |
| ACE | SAMODS | 11382 | 0.1785 | 263 | 2.31% | 0.074 | 0.0461 | 2271.6542 | 99.59% |
| ACE | SAGAMODS | 11476 | 0.18 | 303 | 2.64% | 0.0734 | 0.0456 | 2241.9933 | 99.52% |
| ACE | EMODS | 64804 | 1.0162 | 63145 | 97.44% | 0.048 | 0.0012 | 7.3103 | 0.98% |
| ADE | MODS | 1824 | 0.0274 | 67 | 3.67% | 0.1487 | 0.3289 | 3248.597 | 99.9% |
| ADE | SAMODS | 12149 | 0.1827 | 179 | 1.47% | 0.0733 | 0.0442 | 2336.2798 | 99.73% |
| ADE | SAGAMODS | 11773 | 0.1771 | 258 | 2.19% | 0.0771 | 0.0457 | 2346.6414 | 99.61% |
| ADE | EMODS | 67767 | 1.0193 | 65981 | 97.36% | 0.0468 | 0.0011 | 5.7824 | 0.76% |
| BCD | MODS | 2065 | 0.03 | 43 | 2.08% | 0.1451 | 0.2927 | 3206.9305 | 99.94% |
| BCD | SAMODS | 13129 | 0.1908 | 260 | 1.98% | 0.0712 | 0.0417 | 2387.8219 | 99.62% |
| BCD | SAGAMODS | 12889 | 0.1873 | 253 | 1.96% | 0.0786 | 0.042 | 2308.1811 | 99.63% |
| BCD | EMODS | 70035 | 1.0176 | 68270 | 97.48% | 0.0452 | 0.001 | 5.6235 | 0.81% |
| BCE | MODS | 2009 | 0.0286 | 58 | 2.89% | 0.1505 | 0.3065 | 3327.787 | 99.92% |
| BCE | SAMODS | 12992 | 0.1852 | 229 | 1.76% | 0.0701 | 0.0428 | 2448.3577 | 99.67% |
| BCE | SAGAMODS | 12582 | 0.1794 | 201 | 1.6% | 0.0736 | 0.0445 | 2503.0421 | 99.71% |
| BCE | EMODS | 71176 | 1.0147 | 69654 | 97.86% | 0.0464 | 0.0011 | 7.8122 | 0.7% |
| BDE | MODS | 2039 | 0.0316 | 45 | 2.21% | 0.1532 | 0.2914 | 3252.7813 | 99.93% |
| BDE | SAMODS | 12379 | 0.192 | 205 | 1.66% | 0.0728 | 0.0434 | 2401.8804 | 99.68% |
| BDE | SAGAMODS | 12427 | 0.1928 | 195 | 1.57% | 0.0742 | 0.0431 | 2377.0621 | 99.7% |
| BDE | EMODS | 65509 | 1.0163 | 64015 | 97.72% | 0.0476 | 0.0011 | 5.6322 | 0.69% |
| CDE | MODS | 2010 | 0.0278 | 83 | 4.13% | 0.1463 | 0.3022 | 3094.2824 | 99.89% |
| CDE | SAMODS | 13084 | 0.1807 | 399 | 3.05% | 0.0712 | 0.0414 | 2193.6586 | 99.45% |
| CDE | SAGAMODS | 13009 | 0.1796 | 347 | 2.67% | 0.0719 | 0.0418 | 2224.4738 | 99.52% |
| CDE | EMODS | 74063 | 1.0227 | 71589 | 96.66% | 0.0453 | 0.0011 | 7.2279 | 1.14% |

Table 9. Measuring algorithms performance for Tri-objective instances of TSP with Multi-objective optimization metrics.

| Instance | Algorithm | GNDV | RGNDV | ReGNDV | ReGNDV/GNDV | S | GD | IGD | ε |
|---|---|---|---|---|---|---|---|---|---|
| ABCD | MODS | 5333 | 0.0925 | 3303 | 61.94% | 0.3497 | 0.0256 | 6030.5288 | 94.27% |
| ABCD | SAMODS | 28523 | 0.4947 | 12178 | 42.7% | 0.231 | 0.0042 | 3454.1238 | 78.88% |
| ABCD | SAGAMODS | 36802 | 0.6382 | 14967 | 40.67% | 0.2203 | 0.0031 | 3092.5232 | 74.04% |
| ABCD | EMODS | 201934 | 3.502 | 27214 | 13.48% | 0.1754 | 0.0005 | 1991.148 | 52.8% |
| ABCE | MODS | 5533 | 0.0973 | 3439 | 62.15% | 0.3452 | 0.0244 | 5861.8605 | 93.95% |
| ABCE | SAMODS | 27684 | 0.4868 | 11471 | 41.44% | 0.2331 | 0.0043 | 3444.6454 | 79.83% |
| ABCE | SAGAMODS | 35766 | 0.6289 | 14552 | 40.69% | 0.2232 | 0.0032 | 3118.6397 | 74.41% |
| ABCE | EMODS | 204596 | 3.5976 | 27408 | 13.4% | 0.1754 | 0.0005 | 1885.4464 | 51.81% |
| ABDE | MODS | 5259 | 0.0942 | 3142 | 59.75% | 0.3487 | 0.0256 | 5864.5036 | 94.37% |
| ABDE | SAMODS | 27180 | 0.4869 | 11247 | 41.38% | 0.232 | 0.0043 | 3398.9429 | 79.85% |
| ABDE | SAGAMODS | 34930 | 0.6257 | 14472 | 41.43% | 0.2236 | 0.0033 | 2986.5319 | 74.08% |
| ABDE | EMODS | 195756 | 3.5067 | 26963 | 13.77% | 0.1775 | 0.0005 | 1916.7141 | 51.7% |
| ACDE | MODS | 5466 | 0.094 | 3400 | 62.2% | 0.3405 | 0.0246 | 5617.5202 | 94.15% |
| ACDE | SAMODS | 26757 | 0.4602 | 11336 | 42.37% | 0.235 | 0.0044 | 3394.8396 | 80.5% |
| ACDE | SAGAMODS | 34492 | 0.5932 | 14638 | 42.44% | 0.2265 | 0.0033 | 2965.4482 | 74.83% |
| ACDE | EMODS | 196800 | 3.3845 | 28774 | 14.62% | 0.1764 | 0.0005 | 1793.4489 | 50.52% |
| BCDE | MODS | 5233 | 0.0879 | 3082 | 58.9% | 0.3499 | 0.0259 | 5677.9988 | 94.83% |
| BCDE | SAMODS | 28054 | 0.471 | 11739 | 41.84% | 0.2315 | 0.0042 | 3296.196 | 80.29% |
| BCDE | SAGAMODS | 36258 | 0.6087 | 15145 | 41.77% | 0.2218 | 0.0032 | 2902.8041 | 74.57% |
| BCDE | EMODS | 203017 | 3.4083 | 29599 | 14.58% | 0.1752 | 0.0005 | 1873.1748 | 50.31% |

Table 10. Measuring algorithms performance for Quad-objective instances of TSP with Multi-objective optimization metrics.

| Instance | Algorithm | GNDV | RGNDV | ReGNDV | ReGNDV/GNDV | S | GD | IGD | ε |
|---|---|---|---|---|---|---|---|---|---|
| ABCDE | MODS | 7517 | 0.0159 | 7517 | 100% | 0.5728 | 0.0125 | 15705.6864 | 98.41% |
| ABCDE | SAMODS | 26140 | 0.0554 | 26140 | 100% | 0.4101 | 0.0033 | 10801.6382 | 94.46% |
| ABCDE | SAGAMODS | 26611 | 0.0564 | 26611 | 100% | 0.4097 | 0.0033 | 10544.8901 | 94.36% |
| ABCDE | EMODS | 411822 | 0.8723 | 411822 | 100% | 0.3136 | 0.0001 | 950.4252 | 12.77% |

Table 11. Measuring algorithms performance for Quint-objective instances of TSP with Multi-objective optimization metrics.



| Objectives | Algorithm | GNDV | RGNDV | ReGNDV | ReGNDV/GNDV | S | GD | IGD | ε |
|---|---|---|---|---|---|---|---|---|---|
| 2 | MODS | 262.7 | 0.0194 | 0 | 0% | 0.0286 | 21.6672 | 2329.4338 | 100% |
| 2 | SAMODS | 6487.2 | 0.4796 | 1425.7 | 22.03% | 0.0016 | 0.2936 | 265.8974 | 89.47% |
| 2 | SAGAMODS | 6554.5 | 0.4791 | 1581.8 | 23.97% | 0.0015 | 0.3062 | 286.547 | 88.3% |
| 2 | EMODS | 19758.6 | 1.4492 | 10671.8 | 54.89% | 0.0003 | 0.0492 | 75.2773 | 22.23% |
| 3 | MODS | 1992.5 | 0.0295 | 63.9 | 3.21% | 0.1508 | 0.302 | 3206.7459 | 99.91% |
| 3 | SAMODS | 12444.2 | 0.1838 | 269.3 | 2.16% | 0.0727 | 0.0434 | 2321.5258 | 99.6% |
| 3 | SAGAMODS | 12332.5 | 0.1822 | 271.1 | 2.2% | 0.0743 | 0.0437 | 2312.3389 | 99.6% |
| 3 | EMODS | 68969.1 | 1.0187 | 67097 | 97.3% | 0.0468 | 0.0011 | 6.3914 | 0.89% |
| 4 | MODS | 5364.8 | 0.0932 | 3273.2 | 60.99% | 0.3468 | 0.0252 | 5810.4824 | 94.31% |
| 4 | SAMODS | 27639.6 | 0.4799 | 11594.2 | 41.94% | 0.2325 | 0.0043 | 3397.7495 | 79.87% |
| 4 | SAGAMODS | 35649.6 | 0.619 | 14754.8 | 41.4% | 0.2231 | 0.0032 | 3013.1894 | 74.39% |
| 4 | EMODS | 200420.6 | 3.4798 | 27991.6 | 13.97% | 0.176 | 0.0005 | 1891.9864 | 51.43% |
| 5 | MODS | 7517 | 0.0159 | 7517 | 100% | 0.5728 | 0.0125 | 15705.6864 | 98.41% |
| 5 | SAMODS | 26140 | 0.0554 | 26140 | 100% | 0.4101 | 0.0033 | 10801.6382 | 94.46% |
| 5 | SAGAMODS | 26611 | 0.0564 | 26611 | 100% | 0.4097 | 0.0033 | 10544.8901 | 94.36% |
| 5 | EMODS | 411822 | 0.8723 | 411822 | 100% | 0.3136 | 0.0001 | 950.4252 | 12.77% |

Table 12. Measuring algorithms performance for Multi-objective instances of TSP with Multi-objective optimization metrics.

### **7. Conclusion**

SAMODS, SAGAMODS and EMODS are algorithms based on the Automata Theory for the Multi-objective Optimization of Combinatorial Problems. All of them are derived from the MODS metaheuristic, which is inspired by the Theory of Deterministic Finite Swapping. SAMODS is a Simulated Annealing inspired algorithm. It uses a search direction in order to optimize a set of solutions (Pareto Front) through a linear combination of the objective functions. On the other hand, SAGAMODS, in addition to the advantages of SAMODS, is an Evolutionary inspired algorithm. It implements a crossover step for exploring far regions of a solution space. Due to this, SAGAMODS tries to avoid local optima because it takes a general look at the solution space. Lastly, in order to avoid slow convergence, EMODS is proposed. Unlike SAMODS and SAGAMODS, EMODS does not explore the neighborhood of a solution using Simulated Annealing; this step is done using Tabu Search. Thus, EMODS obtains optimal solutions faster than SAGAMODS and SAMODS. The algorithms were tested using well-known instances from TSPLIB and metrics from the specialized literature. The results show that for instances of two, three and four objectives the proposed algorithms have the best performance, as the metric values corroborate. For the last instance worked, the quint-objective one, the behavior of MODS, SAMODS and SAGAMODS tends to be the same: they have similar error rates, but EMODS has the best performance. In all the cases, EMODS shows the best performance. However, for the last test, all the algorithms have different sets of non-dominated solutions, and those together form the optimal solution set.

### **8. Acknowledgment**


First of all, I want to thank God for being with me my entire life; He made this possible. Secondly, I want to thank my parents Elias Niño and Arely Ruiz and my sister Carmen Niño for their enormous love and support. Finally, and no less important, I thank my beautiful wife Maria Padron and our baby for being my inspiration.

### **9. References**


An, S., Yang, S., Ho, S., Li, T. & Fu, W. (2011). A modified tabu search method applied to inverse problems, *Magnetics, IEEE Transactions on* 47(5): 1234–1237.

Araujo, D., Bastos-Filho, C., Barboza, E., Chaves, D. & Martins-Filho, J. (2011). A performance comparison of multi-objective optimization evolutionary algorithms for all-optical networks design, *Computational Intelligence in Multicriteria Decision-Making (MDCM), 2011 IEEE Symposium on*, pp. 89–96.

Ausavanop, O. & Chaitusaney, S. (2011). Coordination of dispatchable distributed generation and voltage control devices for improving voltage profile by tabu search, *Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), 2011 8th International Conference on*, pp. 869–872.

Azuma, R. M., Coelho, G. P. & Von Zuben, F. J. (2011). Evolutionary multi-objective optimization for the vendor-managed inventory routing problem, *Evolutionary Computation (CEC), 2011 IEEE Congress on*, pp. 1457–1464.

Chen, K.-Y., Liu, A. & Lee, C.-H. (2003). A multiprocessor real-time process scheduling method, *Multimedia Software Engineering, 2003. Proceedings. Fifth International Symposium on*, pp. 29–36.

Chipara, O., Lu, C. & Roman, G.-C. (2007). Real-time query scheduling for wireless sensor networks, *Real-Time Systems Symposium, 2007. RTSS 2007. 28th IEEE International*, pp. 389–399.

Chua, T., Wang, F., Cai, T. & Yin, X. (2006). A heuristics-based advanced planning and scheduling system with bottleneck scheduling algorithm, *Emerging Technologies and Factory Automation, 2006. ETFA '06. IEEE Conference on*, pp. 240–247.

Chunyue, Y., Meirong, X. & Ruiguo, Z. (2009). Single-machine scheduling problem in plate hot rolling production, *Control and Decision Conference, 2009. CCDC '09. Chinese*, pp. 2500–2503.

Corne, D. & Knowles, J. (2003). Some multiobjective optimizers are better than others, *Evolutionary Computation, 2003. CEC '03. The 2003 Congress on*, Vol. 4, pp. 2506–2512.

Czabaski, R. (2006). Deterministic annealing integrated with insensitive learning in neuro-fuzzy systems, *in* L. Rutkowski, R. Tadeusiewicz, L. Zadeh & J. Zurada (eds), *Artificial Intelligence and Soft Computing ICAISC 2006*, Vol. 4029 of *Lecture Notes in Computer Science*, Springer Berlin Heidelberg, pp. 220–229.

de Pablo, D. (2009). On scheduling models: An overview, *Computers Industrial Engineering, 2009. CIE 2009. International Conference on*, pp. 153–158.

Deb, K., Pratap, A., Agarwal, S. & Meyarivan, T. (2002). A fast and elitist multiobjective genetic algorithm: NSGA-II, *Evolutionary Computation, IEEE Transactions on* 6(2): 182–197.

Dereli, T. & Sena Da, G. (2007). A hybrid simulated-annealing algorithm for two-dimensional strip packing problem, *in* B. Beliczynski, A. Dzielinski, M. Iwanowski & B. Ribeiro (eds), *Adaptive and Natural Computing Algorithms*, Vol. 4431 of *Lecture Notes in Computer Science*, Springer Berlin Heidelberg, pp. 508–516.

Dukkipati, A. & Narasimha Murty, M. (2002). Selection by parts: 'selection in two episodes' in evolutionary algorithms, *Evolutionary Computation, 2002. CEC '02. Proceedings of the 2002 Congress on*, Vol. 1, pp. 657–662.

Fei, X., Ke, W., Jidong, S., Zheng, X. & Guilan, L. (2011). Back analysis of displacement based on support vector machine and continuous tabu search, *Electric Technology and Civil Engineering (ICETCE), 2011 International Conference on*, pp. 2016–2019.

Fisher, R. (1930). *The genetical theory of natural selection*, Clarendon Press, Oxford.

Fung, R., Tang, J. & Zhang, J. (2009). A multi-depot vehicle routing problem with weight-related costs, *Computers Industrial Engineering, 2009. CIE 2009. International Conference on*, pp. 1028–1033.

Gacto, M., Alcala, R. & Herrera, F. (2011). Evolutionary multi-objective algorithm to effectively improve the performance of the classic tuning of fuzzy logic controllers for a heating, ventilating and air conditioning system, *Genetic and Evolutionary Fuzzy Systems (GEFS), 2011 IEEE 5th International Workshop on*, pp. 73–80.

Garriz, J., Puntonet, C., Morales, J. & delaRosa, J. (2005). Simulated annealing based-ga using injective contrast functions for bss, *in* V. Sunderam, G. van Albada, P. Sloot & J. Dongarra (eds), *Computational Science ICCS 2005*, Vol. 3514 of *Lecture Notes in Computer Science*, Springer Berlin Heidelberg, pp. 505–600.

Glover, F. & Laguna, M. (1997). *Tabu Search*, Kluwer Academic Publishers, Norwell, MA, USA.

Grobler, J. & Engelbrecht, A. (2007). A scheduling-specific modeling approach for real world scheduling, *Industrial Engineering and Engineering Management, 2007 IEEE International Conference on*, pp. 85–89.

Hamidzadeh, B. & Atif, Y. (1996). Dynamic scheduling of real-time aperiodic tasks on multiprocessor architectures, *System Sciences, 1996. Proceedings of the Twenty-Ninth Hawaii International Conference on*, Vol. 1, pp. 469–478.

Heidelberg, U. O. (n.d.). TSPLIB - Office Research Group Discrete Optimization - University of Heidelberg, URL: *http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/*.

Hu, B. & Raidl, G. (2008). Solving the railway traveling salesman problem via a transformation into the classical traveling salesman problem, *Hybrid Intelligent Systems, 2008. HIS '08. Eighth International Conference on*, pp. 73–77.

Huang, Y., Brocco, A., Bessis, N., Kuonen, P. & Hirsbrunner, B. (2010). Community-aware scheduling protocol for grids, *Advanced Information Networking and Applications (AINA), 2010 24th IEEE International Conference on*, pp. 334–341.

Ishibuchi, H., Akedo, N., Ohyanagi, H. & Nojima, Y. (2011). Behavior of EMO algorithms on many-objective optimization problems with correlated objectives, *Evolutionary Computation (CEC), 2011 IEEE Congress on*, pp. 1465–1472.

Jian-guang, W., Ran, T. & Zhi-Yong, L. (2011). An improving tabu search algorithm for intrusion detection, *Measuring Technology and Mechatronics Automation (ICMTMA), 2011 Third International Conference on*, Vol. 1, pp. 435–439.

Jianghong, D., Zhongyang, X., Hao, C. & Hui, D. (2000). Scheduling algorithm for MPEG-2 TS multiplexers in CATV networks, *Broadcasting, IEEE Transactions on* 46(4): 249–255.

Kim, G. H. & Lee, C. (1998). Genetic reinforcement learning approach to the heterogeneous machine scheduling problem, *Robotics and Automation, IEEE Transactions on* 14(6): 879–893.

Kim, I. & Lipasti, M. (2003). Macro-op scheduling: relaxing scheduling loop constraints, *Microarchitecture, 2003. MICRO-36. Proceedings. 36th Annual IEEE/ACM International Symposium on*, pp. 277–288.

Kirkpatrick, S., Gelatt, C. D. & Vecchi, M. P. (1983). Optimization by Simulated Annealing, *Science* 220(4598): 671–680. URL: *http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.18.4175*

Lee, H.-C., Keh, H.-C., Huang, N.-C. & Chang, W.-H. (2011). An application of google map and tabu-search algorithm for traveling salesman problem on tel-home care, *Electric Information and Control Engineering (ICEICE), 2011 International Conference on*, pp. 4764–4767.

Lim, A. & Wang, F. (2005). Multi-depot vehicle routing problem: a one-stage approach, *Automation Science and Engineering, IEEE Transactions on* 2(4): 397–402.

Liu, J., Hamdi, M. & Hu, Q. (2003). Distributed parallel scheduling algorithms for high speed virtual output queuing switches, *Computer Systems and Applications, 2003. Book of Abstracts. ACS/IEEE International Conference on*, p. 27.

Malik, A. M. A., Othman, A. K., Ayob, M. & Hamdan, A. R. (2011). Hybrid integrated two-stage multi-neighbourhood tabu search-EMCQ technique for examination timetabling problem, *Data Mining and Optimization (DMO), 2011 3rd Conference on*, pp. 232–236.

Manimaran, G., Shashidhar, M., Manikutty, A. & Murthy, C. (1997). Integrated scheduling of tasks and messages in distributed real-time systems, *Parallel and Distributed Real-Time Systems, 1997. Proceedings of the Joint Workshop on*, pp. 64–71.

McClymont, K. & Keedwell, E. (2011). Benchmark multi-objective optimisation test problems with mixed encodings, *Evolutionary Computation (CEC), 2011 IEEE Congress on*, pp. 2131–2138.

McGarry, M., Reisslein, M., Colbourn, C., Maier, M., Aurzada, F. & Scheutzow, M. (2008). Just-in-time scheduling for multichannel EPONs, *Lightwave Technology, Journal of* 26(10): 1204–1216.

Mingsheng, S., Shixin, S. & Qingxian, W. (2003). An efficient parallel scheduling algorithm of dependent task graphs, *Parallel and Distributed Computing, Applications and Technologies, 2003. PDCAT'2003. Proceedings of the Fourth International Conference on*, pp. 595–598.

Niño, E. D. (2012). SAMODS and SAGAMODS: Novel algorithms based on the automata theory for the multi-objective optimization of combinatorial problems, *International Journal of Artificial Intelligence - Special issue of IJAI on Metaheuristics in Artificial Intelligence* (in press).

Niño, E. D., Ardila, C., Donoso, Y. & Jabba, D. (2010). A novel algorithm based on deterministic finite automaton for solving the mono-objective symmetric traveling salesman problem, *International Journal of Artificial Intelligence* 5(A10): 101–108.

Niño, E. D., Ardila, C., Donoso, Y., Jabba, D. & Barrios, A. (2011). MODS: A novel metaheuristic of deterministic swapping for the multi-objective optimization of combinatorial problems, *Computer Technology and Application* 2(4): 280–292.


Wei, Y., Gu, K., Liu, H. & Li, D. (2007). Contract net based scheduling approach using

<sup>107</sup> Evolutionary Algorithms Based on the Automata

Wijkman, P. (1996). Evolutionary computation and the principle of natural selection, *Intelligent Information Systems, 1996., Australian and New Zealand Conference on*, pp. 292 –297. Wismans, L., Van Berkum, E. & Bliemer, M. (2011). Comparison of evolutionary multi

*and Control (ICNSC), 2011 IEEE International Conference on*, pp. 275 –280. Wu, D. & Negi, R. (2003). Downlink scheduling in a cellular network for quality of service

Xiawen, Y. & Yu, S. (2011). A real-coded quantum clone multi-objective evolutionary

Yingzi, W., Xinli, J., Pingbo, H. & Kanfeng, G. (2009). Pattern driven dynamic scheduling

Yong-Fa, Q. & Ming-Yang, Z. (2004). Research on a new multiobjective combinatorial

You-xin, M., Jie, Z. & Zhuo, C. (2009). An overview of ant colony optimization algorithm

Yu, G., Chai, T. & Luo, X. (2011). Multiobjective production planning optimization using

Yu, L., Ohsato, A., Kawakami, T. & Sekiguchi, T. (1999). Corba-based design and development

Zhang-liang, W. & Yue-guang, L. (2011). An ant colony algorithm with tabu search and

Zhao, Y., Jia, F., Wang, G. & Wang, L. (2011). A hybrid tabu search for steelmaking-continuous

Zheng, D., Liu, J., Chen, L., Liu, Y. & Guo, W. (2011). Transmitter-receiver collaborative-relay

Zhu, D., Mosse, D. & Melhem, R. (2003). Multiple-resource periodic scheduling problem:

Zitzler, E., Laumanns, M. & Thiele, L. (2001). Spea2: Improving the strength pareto

*1999 IEEE International Conference on*, Vol. 4, pp. 522 –527 vol.4.

*(ADCONIP), 2011 International Symposium on*, pp. 535 –540.

*ICIT '07. IEEE International Conference on*, pp. 281 –286.

Theory for the Multi-Objective Optimization of Combinatorial Problems

pp. 1391 – 1395 Vol.3.

*International Conference on*, pp. 4683 –4687.

*International Conference on*, pp. 514 –519.

*International Conference on*, pp. 187 –191.

*International Conference on*, pp. 135 –138.

*International Conference on*, Vol. 2, pp. 412 –416.

*IEEE Transactions on* 15(4): 487 –514.

Berlin Heidelberg, pp. 411–418.

*IEEE*, pp. 142 – 151.

interactive bidding for dynamic job shop scheduling, *Integration Technology, 2007.*

objective algorithms for the dynamic network design problem, *Networking, Sensing*

assurance, *Vehicular Technology Conference, 2003. VTC 2003-Fall. 2003 IEEE 58th*, Vol. 3,

algorithm, *Consumer Electronics, Communications and Networks (CECNet), 2011*

approach using reinforcement learning, *Automation and Logistics, 2009. ICAL '09. IEEE*

optimization algorithm, *Robotics and Biomimetics, 2004. ROBIO 2004. IEEE*

and its application on production scheduling, *Innovation Management, 2009. ICIM '09.*

hybrid evolutionary algorithms for mineral processing, *Evolutionary Computation,*

of distributed scheduling systems: an application to flexible flow shop scheduling systems, *Systems, Man, and Cybernetics, 1999. IEEE SMC '99 Conference Proceedings.*

its application, *Intelligent Computation Technology and Automation (ICICTA), 2011*

casting production scheduling problem, *Advanced Control of Industrial Processes*

beamforming by simulated annealing, *in* Y. Tan, Y. Shi, Y. Chai & G. Wang (eds), *Advances in Swarm Intelligence*, Vol. 6729 of *Lecture Notes in Computer Science*, Springer

how much fairness is necessary?, *Real-Time Systems Symposium, 2003. RTSS 2003. 24th*

evolutionary algorithm, *Technical Report 103*, Computer Engineering and Networks


26 Will-be-set-by-IN-TECH

Niño, E. D., Ardila, C., Perez, A. & Donoso, Y. (2010). A genetic algorithm for multiobjective

Niehaus, D., Ramamritham, K., Stankovic, J., Wallace, G., Weems, C., Burleson, W. & Ko, J.

Oberlin, P., Rathinam, S. & Darbha, S. (2009). A transformation for a heterogeneous, multiple

Ren, W.-J., Duan, J.-H., rong Zhang, F., yan Han, H. & Zhang, M. (2011). Hybrid tabu search

Sauer, J. & Coelho, L. (2008). Discrete differential evolution with local search to solve the

Shanmugapriya, R., Padmavathi, S. & Shalinie, S. (2009). Contention awareness in task

Sofianopoulos, S. & Tambouratzis, G. (2011). Studying the spea2 algorithm for optimising

*Multicriteria Decision-Making (MDCM), 2011 IEEE Symposium on*, pp. 97 –104. Song, J., Yang, F., Wang, M. & Zhang, H. (2008). Cryptanalysis of transposition cipher using

Song, S., Hwang, K. & Kwok, Y.-K. (2006). Risk-resilient heuristics and genetic algorithms for

Talbi, N. & Belarbi, K. (2011a). Evolving fuzzy inference system by tabu search algorithm

Talbi, N. & Belarbi, K. (2011b). A self organized fuzzy pd controller using tabu

Taspinar, N., Kalinli, A. & Yildirim, M. (2011). Partial transmit sequences for papr reduction

Wang, S.-Q. & Xu, Z.-Y. (2009). Ant colony algorithm approach for solving traveling

Wang, Y. & Lang, M. (2008). Study on the model and tabu search algorithm for delivery and

*Systems, 2008. CIS 2008. 7th IEEE International Conference on*, pp. 1 –6. Shah, S., Mahmood, A. & Oxley, A. (2009). Hybrid scheduling and dual queue scheduling,

*Systems Symposium, 1993., Proceedings.*, pp. 106 –111.

*and Decision Conference (CCDC), 2011 Chinese*, pp. 1699 –1702.

*Control* 5(5): 825–836.

*'09.*, pp. 1292 –1297.

*Conference on*, pp. 539 –543.

*International*, pp. 272 –277.

Springer Berlin Heidelberg, pp. 795–802.

*International Conference on*, pp. 1 –6.

*Conference on*, Vol. 1, pp. 381 –384.

*Symposium on*, pp. 460 –464.

PP(99): 1 –3.

–1469.

hard scheduling optimization, *International Journal of Computers Communications &*

(1993). The spring scheduling co-processor: Design, use, and performance, *Real-Time*

depot, multiple traveling salesman problem, *American Control Conference, 2009. ACC*

algorithm for bi-criteria no-idle permutation flow shop scheduling problem, *Control*

traveling salesman problem: Fundamentals and case studies, *Cybernetic Intelligent*

*Computer Science and Information Technology, 2009. ICCSIT 2009. 2nd IEEE International*

scheduling using tabu search, *Advance Computing Conference, 2009. IACC 2009. IEEE*

a pattern-recognition based machine translation system, *Computational Intelligence in*

simulated annealing genetic algorithm, *in* L. Kang, Z. Cai, X. Yan & Y. Liu (eds), *Advances in Computation and Intelligence*, Vol. 5370 of *Lecture Notes in Computer Science*,

security-assured grid job scheduling, *Computers, IEEE Transactions on* 55(6): 703 –719.

and its application to control, *Multimedia Computing and Systems (ICMCS), 2011*

search, *Innovations in Intelligent Systems and Applications (INISTA), 2011 International*

using parallel tabu search algorithm in ofdm systems, *Communications Letters, IEEE*

salesman with multi-agent, *Information Engineering, 2009. ICIE '09. WASE International*

pickup vehicle routing problem with time windows, *Service Operations and Logistics, and Informatics, 2008. IEEE/SOLI 2008. IEEE International Conference on*, Vol. 1, pp. 1464


**5**

**Evolutionary Techniques in Multi-Objective Optimization Problems in Non-Standardized Production Processes**

Mariano Frutos1, Ana C. Olivera2 and Fernando Tohmé3

*1Department of Engineering,*
*2Department of Computer Science & Engineering,*
*3Department of Economics,*
*Universidad Nacional del Sur and CONICET,*
*Argentina*

**1. Introduction**

To schedule production in a Job-Shop environment means to allocate the available resources adequately, a task that requires efficient optimization procedures. In fact, the Job-Shop Scheduling Problem (JSSP) is NP-Hard (Ullman, 1975), so ad-hoc algorithms have to be applied to solve it (Frutos et al., 2010), as with other combinatorial programming problems (Olivera et al., 2006), (Cortés et al., 2004). Most instances of the JSSP involve the simultaneous optimization of two usually conflicting goals and, like most multi-objective problems, tend to have many solutions. The Pareto frontier reached by an optimization procedure should contain a uniformly distributed set of solutions close to those on the true Pareto frontier, a feature that facilitates the task of the expert who interprets the solutions (Kacem et al., 2002). In this chapter we present a Genetic Algorithm linked to a Simulated Annealing procedure that is able to schedule production in a Job-Shop manufacturing system (Cortés et al., 2004), (Tsai & Lin, 2003), (Wu et al., 2004), (Chao-Hsien & Han-Chiang, 2009).
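To fix ideas, here is a minimal sketch of the Pareto-dominance test behind this notion of frontier. It is written in Python with hypothetical objective vectors (e.g., makespan and tardiness pairs); it is not the authors' implementation, only an illustration of the concept.

```python
# Minimal sketch: Pareto dominance for minimization objectives.
# The objective vectors below are illustrative; the chapter's JSSP
# objectives (e.g., makespan and total tardiness) would play this role.

def dominates(a, b):
    """True if solution a dominates b: a is no worse in every
    objective and strictly better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

if __name__ == "__main__":
    # Hypothetical (makespan, tardiness) pairs for five candidate schedules.
    solutions = [(10, 7), (9, 9), (12, 4), (11, 8), (9, 6)]
    print(pareto_front(solutions))  # -> [(12, 4), (9, 6)]
```

A multi-objective optimizer reports this non-dominated set rather than a single best schedule, which is why the uniformity and closeness of the frontier matter to the decision maker.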

**1.1 JSSP treatments: State of the art**

The huge literature on the topic presents a variety of solution strategies, ranging from simple priority rules to sophisticated parallel branch-and-bound algorithms. One particular variety of scheduling problem is the JSSP. Muth and Thompson's book *Industrial Scheduling* (Muth & Thompson, 1964) presented the JSSP essentially in its currently known form. Even before that, Jackson (1956) generalized the flow-shop algorithm of Johnson (1954) to yield a job-shop algorithm. In 1955, Akers and Friedman (Akers & Friedman, 1955) gave a Boolean representation of the procedure, which Roy and Sussman (1964) later described by means of a disjunctive graph, while Balas (1969) applied an enumerative approach that is best understood in terms of this graph. Giffler and Thompson (1960) presented an algorithm based on priority rules to guide the search. For these reasons,
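As a concrete illustration of the priority-rule approach just mentioned, the sketch below implements a simple greedy dispatcher for the JSSP using the shortest-processing-time (SPT) rule. The instance data and function names are hypothetical, and the procedure is deliberately simpler than Giffler and Thompson's active-schedule generator; it only shows how a priority rule drives the choice of the next operation to schedule.

```python
# Illustrative priority-rule dispatching for the JSSP (simpler than
# Giffler & Thompson's active-schedule procedure).
# Each job is an ordered list of (machine, processing_time) operations.

def dispatch(jobs, rule=lambda op: op[1]):  # default rule: SPT
    """Greedy dispatcher: repeatedly schedule the ready operation with the
    best priority value. Returns (schedule, makespan), where the schedule
    lists (job, op_index, machine, start, end) tuples."""
    next_op = [0] * len(jobs)    # index of each job's next unscheduled op
    job_free = [0] * len(jobs)   # time at which each job becomes available
    mach_free = {}               # time at which each machine becomes available
    schedule = []
    while any(next_op[j] < len(jobs[j]) for j in range(len(jobs))):
        ready = [j for j in range(len(jobs)) if next_op[j] < len(jobs[j])]
        j = min(ready, key=lambda k: rule(jobs[k][next_op[k]]))
        machine, ptime = jobs[j][next_op[j]]
        start = max(job_free[j], mach_free.get(machine, 0))
        end = start + ptime
        schedule.append((j, next_op[j], machine, start, end))
        job_free[j] = mach_free[machine] = end
        next_op[j] += 1
    return schedule, max(job_free)

if __name__ == "__main__":
    # Hypothetical 3-job, 3-machine instance: (machine, processing_time).
    jobs = [[(0, 3), (1, 2), (2, 2)],
            [(0, 2), (2, 1), (1, 4)],
            [(1, 4), (2, 3), (0, 1)]]
    sched, makespan = dispatch(jobs)
    print("makespan:", makespan)
```

Such rules are cheap but myopic, which is precisely why the metaheuristic approaches surveyed in this chapter, including the genetic algorithm with simulated annealing presented here, were developed.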

