**Evolutionary Techniques in Multi-Objective Optimization Problems in Non-Standardized Production Processes**

Mariano Frutos1, Ana C. Olivera2 and Fernando Tohmé3 *1Department of Engineering, 2Department of Computer Science & Engineering, 3Department of Economics, Universidad Nacional del Sur and CONICET, Argentina* 

### **1. Introduction**


To schedule production in a Job-Shop environment means to allocate the available resources adequately, which requires efficient optimization procedures. In fact, the Job-Shop Scheduling Problem (JSSP) is NP-Hard (Ullman, 1975), so ad-hoc algorithms have to be applied to solve it (Frutos et al., 2010), as with other combinatorial programming problems (Olivera et al., 2006), (Cortés et al., 2004). Most instances of the JSSP involve the simultaneous optimization of two usually conflicting goals and, like most multi-objective problems, tend to have many solutions. The Pareto frontier reached by an optimization procedure should contain a uniformly distributed set of solutions close to those on the true Pareto frontier; this feature facilitates the task of the expert who interprets the solutions (Kacem et al., 2002). In this paper we present a Genetic Algorithm linked to a Simulated Annealing procedure that is able to schedule production in a Job-Shop manufacturing system (Cortés et al., 2004), (Tsai & Lin, 2003), (Wu et al., 2004), (Chao-Hsien & Han-Chiang, 2009).

### **1.1 JSSP treatments: State of the art**

The huge literature on the topic presents a variety of solution strategies, ranging from simple priority rules to sophisticated parallel branch-and-bound algorithms. The JSSP is a particular variety of scheduling problem. Muth and Thompson's 1964 book *Industrial Scheduling* (Muth & Thompson, 1964) presented the JSSP basically in its currently known form. Even before, Jackson (1956) generalized Johnson's (1954) flow-shop algorithm to yield a job-shop algorithm, and in 1955 Akers and Friedman (Akers & Friedman, 1955) gave a Boolean representation of the procedure, which Roy and Sussman (1964) later described by means of a disjunctive graph, while Balas (1969) applied an enumerative approach that is best understood in terms of this graph. Giffler and Thompson (1960) presented an algorithm based on priority rules to guide the search. For these reasons, the problem was already part of the folklore in Operations Research years before its official inception. The JSSP has generated a huge literature: its resiliency made it an ideal problem for further study, and its usefulness made it worth scrutinizing. Due to its complexity, several alternative presentations of the problem have been tried (Cheng & Smith, 1997), (Sadeh & Fox, 1995), (Crawford & Baker, 1994), (De Giovanni & Pezzella, 2010), in order to apply particular algorithms like Clonal Selection (Cortés Rivera et al., 2003), Taboo Search (Armentano & Scrich, 2000), Ant Colony Optimization (Merkle & Middendorf, 2001), Genetic Algorithms (Zalzala & Flemming, 1997), Priority Rules (Panwalker & Iskander, 1977), Shifting Bottlenecks (Adams et al., 1998), etc. The performance of these meta-heuristic procedures varies, and some seem fitter than others (Chinyao & Yuling, 2009).

### **1.2 Multi-objective optimization: Basic concepts**

Our goal in this section is to characterize the general framework in which we will state the Job-Shop problem. We assume, without loss of generality, that there are several goals (objectives) to be minimized. We seek a vector $x^* = [x_1^*,\ldots,x_n^*]^T$ of decision variables, satisfying $q$ inequalities $g_i(x) \ge 0$, $i = 1,\ldots,q$, as well as $p$ equations $h_i(x) = 0$, $i = 1,\ldots,p$, such that $f(x) = [f_1(x),\ldots,f_k(x)]^T$, a vector of $k$ functions, each one corresponding to an objective defined over the decision variables, attains its minimum. The class of decision vectors satisfying the $q$ inequalities and the $p$ equations is denoted by $\Omega$, and each $x \in \Omega$ is a feasible alternative. To simplify the notation, we say that a vector $u = [u_1,\ldots,u_k]^T$ dominates another $v = [v_1,\ldots,v_k]^T$ (denoted $u \preceq v$) if and only if $u_i \le v_i$ for all $i \in \{1,\ldots,k\}$ and $u_i < v_i$ for some $i \in \{1,\ldots,k\}$. A point $x^* \in \Omega$ is Pareto optimal if there is no $x \in \Omega$ whose objective vector dominates that of $x^*$; that is, if no feasible $x$ improves some objectives without worsening the others. The set of Pareto optima is then $P^* = \{x \in \Omega : \neg\exists\, x' \in \Omega,\ f(x') \preceq f(x)\}$, while the corresponding Pareto frontier is $FP^* = \{f(x) : x \in P^*\}$. The search for the Pareto frontier is the main goal of Multi-Objective Optimization.
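The dominance test and the extraction of the non-dominated set can be sketched in a few lines (an illustrative Python fragment, not part of the original study; the sample objective vectors are arbitrary):

```python
def dominates(u, v):
    """u dominates v iff u_i <= v_i for all i and u_i < v_i for some i."""
    return all(ui <= vi for ui, vi in zip(u, v)) and \
           any(ui < vi for ui, vi in zip(u, v))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# sample (f1, f2) vectors for a minimization problem
points = [(6, 66), (7, 62), (7, 65), (8, 55), (9, 57), (6, 70)]
print(pareto_front(points))  # (7, 65), (9, 57) and (6, 70) are dominated
```

This brute-force filter is quadratic in the number of points; it is adequate for the small fronts shown later, while NSGAII uses a faster non-dominated sorting scheme internally.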

### **2. Flexible job-shop scheduling problem**

The JSSP can be described as the problem of organizing the execution of $n$ jobs on $m$ machines. We assume a finite number of tasks $\{J_j\}_{j=1}^{n}$, which must be processed by a finite number of machines $\{M_k\}_{k=1}^{m}$. Processing a task $J_j$ on a machine $M_k$ is denoted by $O^i_{jk}$, where $i$ indicates the order in which a class of operations $\{S_j\}_{j=1}^{n}$ is applied to the task $J_j$. $O^i_{jk}$ requires the uninterrupted use of machine $M_k$ for a period $\tau^i_{jk}$ (the processing time) at a cost $\upsilon^i_{jk}$ (see Table 1). A particular case is the Flexible JSSP, in which the allocation of $O^i_{jk}$ to $M_k$ is undifferentiated, meaning that each $O^i_{jk}$ can be processed by any of the machines in $\{M_k\}_{k=1}^{m}$.

After allocating the operations, we obtain a finite class $E$ of groupings of the $O^i_{jk}$s on the same machine. We denote each of these groupings by $E_k$, for $k = 1,\ldots,m$. A key issue here is the scheduling of activities, i.e. the determination of the starting time $t^i_{jk}$ of each $O^i_{jk}$. The Flexible JSSP demands a procedure that handles its two sub-problems, the allocation of the $O^i_{jk}$s to the different $M_k$s and their sequencing, guided by the goals to reach: to find optimal levels of Makespan (Processing Time, Eq. 1) and Total Operation Costs (Eq. 2).

MF01 / Problem 3 × 4 with 8 operations (flexible)

| $J_j$ | $O^i_{jk}$ | $\tau^i_{j1}$ | $\upsilon^i_{j1}$ | $\tau^i_{j2}$ | $\upsilon^i_{j2}$ | $\tau^i_{j3}$ | $\upsilon^i_{j3}$ | $\tau^i_{j4}$ | $\upsilon^i_{j4}$ |
|---|---|---|---|---|---|---|---|---|---|
| $J_1$ | $O^1_{1k}$ | 1 | 10 | 3 | 8 | 4 | 6 | 1 | 9 |
| | $O^2_{1k}$ | 3 | 4 | 8 | 2 | 2 | 10 | 1 | 12 |
| | $O^3_{1k}$ | 3 | 8 | 5 | 4 | 4 | 6 | 7 | 3 |
| $J_2$ | $O^1_{2k}$ | 4 | 7 | 1 | 16 | 1 | 14 | 4 | 6 |
| | $O^2_{2k}$ | 2 | 10 | 3 | 8 | 9 | 3 | 3 | 8 |
| | $O^3_{2k}$ | 9 | 3 | 1 | 15 | 2 | 10 | 2 | 13 |
| $J_3$ | $O^1_{3k}$ | 8 | 6 | 6 | 8 | 3 | 12 | 5 | 10 |
| | $O^2_{3k}$ | 4 | 11 | 5 | 8 | 8 | 6 | 1 | 18 |

Table 1. Flexible Job-Shop Scheduling Problem (for each operation $O^i_{jk}$, the processing time $\tau^i_{jk}$ and the cost $\upsilon^i_{jk}$ on machines $M_1$–$M_4$)

$$f_1:\; C_{\max} = \max_{j \in J}\ \max_{i \in O_j,\, k \in M} \left( t^i_{jk} + \tau^i_{jk} \right) \tag{1}$$

$$f_2:\; \sum_{j} \sum_{i} \sum_{k} x^i_{jk}\, \upsilon^i_{jk} \tag{2}$$

Here $x^i_{jk} = 1$ if $O^i_{jk} \in E_k$ and $0$ otherwise, with $\sum_k x^i_{jk} = 1$. Besides, $t^i_{jk} = \max\!\left(t^{i-1}_{jh} + \tau^{i-1}_{jh},\; t^s_{pk} + \tau^s_{pk},\; 0\right)$ for each pair $O^{i-1}_{jh}$, $O^s_{pk} \in E_k$ and all machines $M_k$, $M_h$ and operations $S_i$, $S_s$: an operation may start only after both the preceding operation of its job and the preceding operation on its machine have finished.
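Once an allocation and a feasible processing order are fixed, the start times, the makespan and the total cost follow mechanically from this recursion. A minimal sketch, assuming (this representation is ours, for illustration only) that the schedule is given as a precedence-feasible list of (job, machine, τ, υ) tuples:

```python
def evaluate(schedule):
    """schedule: operations listed in a precedence-feasible order, each as
    (job, machine, tau, upsilon); returns (makespan, total cost)."""
    job_end = {}      # completion time of each job's latest operation
    machine_end = {}  # completion time of each machine's latest operation
    total_cost = 0
    for job, machine, tau, upsilon in schedule:
        # t = max(end of the job's previous op, end of the machine's previous op, 0)
        start = max(job_end.get(job, 0), machine_end.get(machine, 0))
        job_end[job] = machine_end[machine] = start + tau
        total_cost += upsilon
    return max(job_end.values()), total_cost

# toy instance: J1 visits M1 then M2, J2 visits M2 then M1
sched = [("J1", "M1", 3, 5), ("J2", "M2", 2, 4),
         ("J1", "M2", 4, 6), ("J2", "M1", 5, 3)]
print(evaluate(sched))  # (8, 18)
```

The two dictionaries implement exactly the two arguments of the max in the timing constraint; the third argument, 0, is the `get(…, 0)` default.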

### **3. Hybrid genetic algorithm**

Due to their many advantages, evolutionary algorithms have become very popular for solving multi-objective optimization problems (Zitzler et al., 2001), (Coello Coello et al., 2002). Among them, some of the most interesting are Genetic Algorithms (GA) (Goldberg, 1989). To represent the individuals, we use a variant of the encoding of (Wu et al., 2004). Since the Flexible JSSP has two subproblems, the Hybrid Genetic Algorithm (HGA) presented here operates on two chromosomes. The first one represents the allocation $A^i_{jk}$ of each $O^i_{jk}$ to a machine $M_k$: values between 0 and $(m-1)$ encode the chosen machine, so for $m = 4$ we might have 0→M1, 1→M2, 2→M3 and 3→M4. The second chromosome represents the sequencing of the $O^i_{jk}$ already assigned to each $M_k$ ($\forall\, O^i_{jk} \in E_k$): values between 0 and $(n!-1)$ encode the sequence of the jobs $J_j$ on each $M_k$, so for $n = 3$ we may have 0→J1J2J3, 1→J1J3J2, 2→J2J1J3, 3→J2J3J1, 4→J3J1J2 and 5→J3J2J1 (see Table 2).
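The mapping from an integer in $[0, n!-1]$ to a job order shown above is the lexicographic ranking of permutations, which can be decoded with the factorial number system. A small decoding sketch (illustrative, not the authors' code):

```python
from math import factorial

def decode_sequence(index, jobs):
    """Decode an integer in [0, n!-1] into the permutation of `jobs` at
    that position in lexicographic order (factorial number system)."""
    jobs = list(jobs)
    perm = []
    for i in range(len(jobs), 0, -1):
        f = factorial(i - 1)
        perm.append(jobs.pop(index // f))  # pick the (index // (i-1)!)-th remaining job
        index %= f
    return perm

for idx in range(6):
    print(idx, "".join(decode_sequence(idx, ["J1", "J2", "J3"])))
# reproduces the table above: 0→J1J2J3, 1→J1J3J2, ..., 5→J3J2J1
```

The allocation chromosome needs no decoding step, since each gene directly names a machine index in $\{0,\ldots,m-1\}$.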

The NSGAII algorithm (Non-Dominated Sorting Genetic Algorithm II) (Deb et al., 2002) creates an initial population, be it random or otherwise, and uses an elitist strategy jointly with an explicit diversity mechanism. Each individual candidate solution $i$ has an associated non-dominance rank $r_i$ and a distance $d_i$ indicating the radius of the area of the search space around $i$ not occupied by another solution (see Eq. 3). A solution $i$ is preferred over $j$ if $r_i < r_j$; when $i$ and $j$ have the same rank, $i$ is preferred if $d_i > d_j$. Let $Y_i$ be an ordered class of individuals with the same rank as $i$, let $f_j^{i+1}$ be the value of objective $j$ for the individual after $i$ and $f_j^{i-1}$ the value for the individual before $i$, and let $f_j^{\max}$ and $f_j^{\min}$ be the maximal and minimal values of objective $j$ over $Y_i$. The distances consider all the objective functions and attach an infinite value to the extreme solutions in $Y_i$, since these yield the best values for one of the objective functions on the frontier; the resulting distance is the sum of the distances over the $N$ objective functions.


MF01 / Problem 3 × 4 with 8 operations (flexible)

Chromosome of Allocations — random initialization $O^i_{jk}$ → value in {0, 1, …, m−1}, relating 0→M1, 1→M2, 2→M3, 3→M4:

| $J_j$ | $O^i_{jk}$ | Chr. |
|---|---|---|
| $J_1$ | $O^1_{1k}$ | 2 |
| | $O^2_{1k}$ | 1 |
| | $O^3_{1k}$ | 0 |
| $J_2$ | $O^1_{2k}$ | 1 |
| | $O^2_{2k}$ | 2 |
| | $O^3_{2k}$ | 3 |
| $J_3$ | $O^1_{3k}$ | 0 |
| | $O^2_{3k}$ | 3 |

Chromosome of Sequences — random initialization $M_k$ → value in {0, 1, …, n!−1}, relating 0→J1J2J3, 1→J1J3J2, 2→J2J1J3, 3→J2J3J1, 4→J3J1J2, 5→J3J2J1:

| $M_k$ | $M_1$ | $M_2$ | $M_3$ | $M_4$ |
|---|---|---|---|---|
| Chr. | 3 | 3 | 0 | 5 |

Table 2. Chromosome encoding process

$$d_i = \sum_{j=1}^{N} \left( f_j^{i+1} - f_j^{i-1} \right) \Big/ \left( f_j^{\max} - f_j^{\min} \right) \tag{3}$$
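Eq. (3) can be computed front by front as follows (an illustrative sketch of NSGAII's crowding distance, with objective vectors standing in for individuals; the sample front is arbitrary):

```python
def crowding_distances(front):
    """Crowding distance per Eq. (3): for each solution of a same-rank
    front, sum over objectives the gap between its two neighbours,
    normalized by that objective's range; extremes get infinity."""
    n, n_obj = len(front), len(front[0])
    d = [0.0] * n
    for j in range(n_obj):
        order = sorted(range(n), key=lambda i: front[i][j])
        fmin, fmax = front[order[0]][j], front[order[-1]][j]
        d[order[0]] = d[order[-1]] = float("inf")  # extreme solutions
        if fmax == fmin:
            continue
        for pos in range(1, n - 1):
            i = order[pos]
            d[i] += (front[order[pos + 1]][j] -
                     front[order[pos - 1]][j]) / (fmax - fmin)
    return d

front = [(6, 66), (7, 62), (8, 55), (9, 51)]
print(crowding_distances(front))  # [inf, 1.4, 1.4, inf]
```

Interior solutions with larger neighbour gaps score higher and are therefore kept preferentially, which is what spreads the population along the frontier.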

Starting from a population $P_t$, a new population of descendants $Q_t$ is obtained. These two populations are mixed to yield a new one, $R_t$, of size $2N$ ($N$ being the original size of $P_t$). The individuals in $R_t$ are ranked with respect to the frontier, and a new population $P_{t+1}$ is obtained by applying tournament selection to $R_t$. After experimenting with several genetic operators we have chosen uniform crossover for the crossover and two-swap for the mutation (Fonseca & Fleming, 1995). After the individuals have been affected by these operators, and before allowing them to become part of a new population, we apply an improvement operator (Frutos & Tohmé, 2009) designed following the guidelines of Simulated Annealing (Dowsland, 1993), which complements the genetic procedure. To change the structure of both chromosomes we select a gene at random and change its value. This is repeated $M = (T + 1)\,\omega$ times, where $T$ corresponds to the actual temperature, determined from a cooling coefficient ($\alpha$), while $\omega$ is a control parameter ensuring sufficiently many permutations, particularly when the temperature is high. Summarizing, the relevant parameters for this phase of the procedure are the initial temperature ($T_i$), the final one ($T_f$), the cooling coefficient ($\alpha$) and the control parameter ($\omega$). The general layout of the whole procedure is depicted in Fig. 1.
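The improvement operator can be sketched as follows. This is a single-objective surrogate written for illustration: the `fitness` function, the `gene_values` pool and the toy objective are our assumptions, not the chapter's implementation, which operates on the NSGAII fitness of both chromosomes.

```python
import math
import random

def sa_improve(chromosome, fitness, gene_values,
               t_i=850.0, t_f=0.01, alpha=0.95, omega=10):
    """Simulated Annealing sketch: at each temperature T, try
    M = (T + 1) * omega single-gene changes, accepting a worse candidate
    with probability exp(-delta / T); cool geometrically by alpha."""
    current = list(chromosome)
    f_cur = fitness(current)
    best, f_best = list(current), f_cur
    T = t_i
    while T > t_f:
        for _ in range(int((T + 1) * omega)):  # M = (T + 1) * omega trials
            candidate = list(current)
            g = random.randrange(len(candidate))       # select a gene at random...
            candidate[g] = random.choice(gene_values)  # ...and change its value
            delta = fitness(candidate) - f_cur
            if delta <= 0 or random.random() < math.exp(-delta / T):
                current, f_cur = candidate, f_cur + delta
                if f_cur < f_best:
                    best, f_best = list(current), f_cur
        T *= alpha  # cooling coefficient
    return best, f_best

# toy run: minimize the sum of an allocation chromosome (small T_i for speed)
random.seed(1)
improved, value = sa_improve([3, 1, 2, 0], fitness=sum,
                             gene_values=range(4), t_i=5.0)
print(improved, value)
```

Note how $\omega$ scales the number of trial moves at every temperature level, so exploration is widest while $T$ is high, exactly as described above.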

Fig. 1. Lay-out of the Hybrid Genetic Algorithm

### **4. Practical experiences**

The parameters and the characteristics of the computing equipment used in these experiments were as follows: population size: 200; number of generations: 500; crossover: uniform, with probability 0.90; mutation: two-swap, with probability 0.01; local search: simulated annealing ($T_i$: 850, $T_f$: 0.01, α: 0.95, ω: 10), with probability 0.01; CPU: 3.00 GHz; RAM: 1.00 GB. We worked with the PISA tool (A Platform and Programming Language Independent Interface for Search Algorithms) (Bleuler et al., 2003). The results obtained by means of the HGA were compared to those yielded by Greedy Randomized Adaptive Search Procedures (GRASP) (Binato et al., 2001), Taboo Search (TS) (Armentano & Scrich, 2000) and Ant Colony Optimization (ACO) (Heinonen & Pettersson, 2007). For the problems MF01, MF02, MF03, MF04 and MF05 (Frutos et al., 2010), we show the results of the multi-objective analysis based on Makespan (f1, Eq. 1) and Total Operation Costs (f2, Eq. 2), obtained by running each algorithm 10 times.

For each algorithm the sets of non-dominated solutions $P_1, P_2, \ldots, P_{10}$ were obtained, as well as the superpopulation $P_T = P_1 \cup P_2 \cup \ldots \cup P_{10}$. From each superpopulation a class of non-dominated solutions was extracted, constituting the Pareto frontier for each algorithm. To obtain an approximation to the true Pareto frontier (Approximate Pareto Frontier), we take the fronts of all the algorithms and eliminate all the dominated solutions. These results are detailed in Table 3 (MF01), Table 4 (MF02), Table 5 (MF03), Table 6 (MF04) and Table 7 (MF05), and are shown in Fig. 2 (MF01), Fig. 3 (MF02), Fig. 4 (MF03), Fig. 5 (MF04) and Fig. 6 (MF05).

**MF01 / Problem 3 × 4 with 8 operations (flexible)**

| Solutions | HGA (1) f1 | f2 | GRASP (2) f1 | f2 | TS (3) f1 | f2 | ACO (4) f1 | f2 | Approach f1 | f2 |
|---|---|---|---|---|---|---|---|---|---|---|
| 3x4\_1 | 6 | 66 | 6 | 70 | 6 | 66 | 6 | 66 | 6 | 66 |
| 3x4\_2 | 7 | 62 | 7 | 65 | 7 | 62 | 7 | 62 | 7 | 62 |
| 3x4\_3 | 8 | 55 | 8 | 61 | 8 | 55 | 8 | 57 | 8 | 55 |
| 3x4\_4 | 9 | 51 | 9 | 57 | 9 | 51 | 9 | 51 | 9 | 51 |
| 3x4\_5 | 10 | 47 | 10 | 50 | 10 | 48 | 10 | 47 | 10 | 47 |
| 3x4\_6 | 11 | 43 | 11 | 47 | 11 | 44 | 11 | 43 | 11 | 43 |
| 3x4\_7 | 13 | 42 | 13 | 43 | 13 | 43 | 13 | 42 | 13 | 42 |
| 3x4\_8 | - | - | - | - | 15 | 41 | - | - | 15 | 41 |
| 3x4\_9 | 17 | 40 | 17 | 40 | - | - | 17 | 40 | 17 | 40 |
| 3x4\_10 | - | - | - | - | 20 | 39 | - | - | 20 | 39 |
| 3x4\_11 | 22 | 38 | 22 | 38 | - | - | 22 | 38 | 22 | 38 |
| 3x4\_12 | - | - | - | - | 25 | 37 | - | - | 25 | 37 |
| 3x4\_13 | 27 | 36 | 27 | 37 | - | - | 27 | 36 | 27 | 36 |
| 3x4\_14 | 28 | 35 | 28 | 35 | 28 | 35 | 28 | 35 | 28 | 35 |
| 3x4\_15 | 30 | 34 | 30 | 34 | 30 | 34 | 30 | 34 | 30 | 34 |
| 3x4\_16 | 31 | 33 | - | - | 31 | 33 | 31 | 33 | 31 | 33 |
| 3x4\_17 | 32 | 31 | 32 | 32 | 32 | 32 | 32 | 32 | 32 | 31 |
| 3x4\_18 | 35 | 29 | 35 | 29 | 35 | 29 | 35 | 29 | 35 | 29 |
| Mean Time | 5,325 sec. | | 2,147 sec. | | 4,673 sec. | | 3,218 sec. | | - | |

Table 3. Solutions for MF01

(1)(Frutos et al., 2010), (2)(Binato et al., 2001), (3)(Armentano & Scrich, 2000) and (4)(Heinonen & Pettersson, 2007)

Table 4. Solutions for MF02 — Mean Time: HGA (1) 15,885 sec.; GRASP (2) 6,405 sec.; TS (3) 13,940 sec.; ACO (4) 9,602 sec.

(1)(Frutos et al., 2010), (2)(Binato et al., 2001), (3)(Armentano & Scrich, 2000) and (4)(Heinonen & Pettersson, 2007)

Table 5. Solutions for MF03 — Mean Time: HGA (1) 21,502 sec.; GRASP (2) 7,669 sec.; TS (3) 18,869 sec.; ACO (4) 15,994 sec.

(1)(Frutos et al., 2010), (2)(Binato et al., 2001), (3)(Armentano & Scrich, 2000) and (4)(Heinonen & Pettersson, 2007)

Table 6. Solutions for MF04 — Mean Time: HGA (1) 31,439 sec.; GRASP (2) 11,214 sec.; TS (3) 27,590 sec.; ACO (4) 22,999 sec.

(1)(Frutos et al., 2010), (2)(Binato et al., 2001), (3)(Armentano & Scrich, 2000) and (4)(Heinonen & Pettersson, 2007)

Fig. 2. Makespan *vs.* Total Operation Costs (MF01)

Fig. 3. Makespan *vs.* Total Operation Costs (MF02)

Fig. 4. Makespan *vs.* Total Operation Costs (MF03)

Table 7. Solutions for MF05

(1)(Frutos et al., 2010), (2)(Binato et al., 2001), (3)(Armentano & Scrich, 2000) and (4)(Heinonen & Pettersson, 2007)

Test for Problem MF01:

Ranktest

| | HGA | GRASP | TS | ACO |
|---|---|---|---|---|
| HGA | - | 0,4668807 | 0,5248382 | 0,5000000 |
| GRASP | 0,5331193 | - | 0,5578024 | 0,5331193 |
| TS | 0,4751618 | 0,4421976 | - | 0,4751618 |
| ACO | 0,5000000 | 0,4668807 | 0,5248382 | - |

$I_H$

| | HGA | GRASP | TS | ACO |
|---|---|---|---|---|
| HGA | - | 0,4849375 | 0,5193377 | 0,5451365 |
| GRASP | 0,5150625 | - | 0,5537379 | 0,5782757 |
| TS | 0,4806623 | 0,4462621 | - | 0,5793756 |
| ACO | 0,4548635 | 0,4217243 | 0,4206244 | - |

$I_e^1$

| | HGA | GRASP | TS | ACO |
|---|---|---|---|---|
| HGA | - | 0,4668807 | 0,5000000 | 0,5248382 |
| GRASP | 0,5331193 | - | 0,5331193 | 0,5567435 |
| TS | 0,5000000 | 0,4668807 | - | 0,5578024 |
| ACO | 0,4751618 | 0,4421976 | 0,4751618 | - |

$I_{R21}$

| | HGA | GRASP | TS | ACO |
|---|---|---|---|---|
| HGA | - | 0,4560385 | 0,4883887 | 0,5126501 |
| GRASP | 0,5439615 | - | 0,5207389 | 0,5438144 |
| TS | 0,5116113 | 0,4792611 | - | 0,5448488 |
| ACO | 0,4873499 | 0,4561856 | 0,4551512 | - |

Table 8. Comparing HGA, GRASP, TS and ACO (MF01)

Test for Problem MF02:

Ranktest

| | HGA | GRASP | TS | ACO |
|---|---|---|---|---|
| HGA | - | 0,4507347 | 0,4364051 | 0,5123441 |
| GRASP | 0,5492653 | - | 0,4920168 | 0,5614884 |
| TS | 0,5635949 | 0,5079832 | - | 0,5792996 |
| ACO | 0,4876559 | 0,4385116 | 0,4207004 | - |

$I_H$

| | HGA | GRASP | TS | ACO |
|---|---|---|---|---|
| HGA | - | 0,4554712 | 0,5065161 | 0,4369711 |
| GRASP | 0,5445288 | - | 0,5705083 | 0,5110457 |
| TS | 0,4934839 | 0,4294917 | - | 0,4532833 |
| ACO | 0,5630289 | 0,4889543 | 0,5467167 | - |

$I_e^1$

| | HGA | GRASP | TS | ACO |
|---|---|---|---|---|
| HGA | - | 0,4385116 | 0,4876559 | 0,4207004 |
| GRASP | 0,5614884 | - | 0,5492653 | 0,4920168 |
| TS | 0,5123441 | 0,4507347 | - | 0,4364051 |

Table 9. Comparing HGA, GRASP, TS and ACO (MF02)

Fig. 5. Makespan *vs.* Total Operation Costs (MF04)

Fig. 6. Makespan *vs.* Total Operation Costs (MF05)

In order to compare the results of the algorithms and establish the better option for the Flexible JSSP, several tests were applied to the solutions. First, we considered a dominance ranking among the different algorithms: one-tailed Mann-Whitney rank-sum tests (Conover, 1999) were run over the results (Ranktest, Table 8 (MF01), Table 9 (MF02), Table 10 (MF03), Table 11 (MF04) and Table 12 (MF05)). None of the results for MF01, MF02, MF03, MF04 and MF05 is statistically significant at an overall significance level α = 0.05, indicating that no algorithm generates approximation sets that are significantly better than the others'. Next, we considered unary quality indicators computed on normalized approximation sets: we applied the unary hypervolume indicator $I_H$, the unary epsilon indicator $I_e^1$ and the R indicator $I_{R21}$ to the normalized approximation sets as well as to the reference set generated by PISA ($I_H$, $I_e^1$ and $I_{R21}$, Table 8 (MF01), Table 9 (MF02), Table 10 (MF03), Table 11 (MF04) and Table 12 (MF05)). Again, no significant differences were found at the 0.05 level.
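For two minimization objectives the unary hypervolume indicator $I_H$ reduces to a simple sweep-line area computation. A sketch for illustration only: the chapter computes its indicators on PISA-normalized sets, and the reference point used here is an arbitrary assumption.

```python
def hypervolume_2d(front, ref):
    """Unary hypervolume I_H for two minimization objectives: the area of
    objective space dominated by `front` and bounded by reference point
    `ref` (ref must be weakly dominated by every point in the front)."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(set(front)):  # sweep in increasing f1
        if f2 < prev_f2:               # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# the first three MF01 front points, with an assumed reference point
print(hypervolume_2d([(6, 66), (7, 62), (8, 55)], ref=(40, 80)))  # 832.0
```

A larger hypervolume means the front dominates more of the objective space relative to the reference point, which is why $I_H$ is a common basis for pairwise comparisons like those in Tables 8–12.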




**Test for Problem MF03**

| Indicator | Algorithm | HGA | GRASP | TS | ACO |
|---|---|---|---|---|---|
| Ranktest | HGA | – | 0.6430292 | 0.5250930 | 0.5052068 |
| | GRASP | 0.3569708 | – | 0.3782149 | 0.3627307 |
| | TS | 0.4749070 | 0.6217851 | – | 0.4791809 |
| | ACO | 0.4947932 | 0.6372693 | 0.5208191 | – |
| *IH* | HGA | – | 0.6619159 | 0.5139296 | 0.5409620 |
| | GRASP | 0.3380841 | – | 0.3707768 | 0.3928425 |
| | TS | 0.4860704 | 0.6292232 | – | 0.5454012 |
| | ACO | 0.4590380 | 0.6071575 | 0.4545988 | – |
| *Ie1* | HGA | – | 0.6372693 | 0.4947932 | 0.5208191 |
| | GRASP | 0.3627307 | – | 0.3569708 | 0.3782149 |
| | TS | 0.5052068 | 0.6430292 | – | 0.5250930 |
| | ACO | 0.4791809 | 0.6217851 | 0.4749070 | – |
| *IR21* | HGA | – | 0.6224702 | 0.4833028 | 0.5087244 |
| | GRASP | 0.3775298 | – | 0.3486810 | 0.3694317 |
| | TS | 0.5166972 | 0.6513190 | – | 0.5128990 |
| | ACO | 0.4912756 | 0.6305683 | 0.4871010 | – |

Table 10. Comparing HGA, GRASP, TS and ACO (MF03)

**Test for Problem MF04**

| Indicator | Algorithm | HGA | GRASP | TS | ACO |
|---|---|---|---|---|---|
| Ranktest | HGA | – | 0.4840368 | 0.5910066 | 0.4641226 |
| | GRASP | 0.5159632 | – | 0.6017605 | 0.4794311 |
| | TS | 0.4089934 | 0.3982395 | – | 0.3824045 |
| | ACO | 0.5358774 | 0.5205689 | 0.6175955 | – |
| *IH* | HGA | – | 0.5407021 | 0.5566027 | 0.6414813 |
| | GRASP | 0.4592979 | – | 0.5359183 | 0.6250339 |
| | TS | 0.4433973 | 0.4640817 | – | 0.6138640 |
| | ACO | 0.3585187 | 0.3749661 | 0.3861360 | – |
| *Ie1* | HGA | – | 0.5205689 | 0.5358774 | 0.6175955 |
| | GRASP | 0.4794311 | – | 0.5159632 | 0.6017605 |
| | TS | 0.4641226 | 0.4840368 | – | 0.5910066 |
| | ACO | 0.3824045 | 0.3982395 | 0.4089934 | – |
| *IR21* | HGA | – | 0.5084799 | 0.5234329 | 0.6032533 |
| | GRASP | 0.4915201 | – | 0.5039812 | 0.5877861 |
| | TS | 0.4765671 | 0.4960188 | – | 0.5772819 |
| | ACO | 0.3967467 | 0.4122139 | 0.4227181 | – |

Table 11. Comparing HGA, GRASP, TS and ACO (MF04)

**Test for Problem MF05**

| Indicator | Algorithm | HGA | GRASP | TS | ACO |
|---|---|---|---|---|---|
| Ranktest | HGA | – | 0.4551815 | 0.4673350 | 0.4713557 |
| | GRASP | 0.5448185 | – | 0.5207202 | 0.5201772 |
| | TS | 0.5326650 | 0.4792798 | – | 0.5031851 |
| | ACO | 0.5286443 | 0.4798228 | 0.4968146 | – |
| *IH* | HGA | – | 0.4983801 | 0.5501285 | 0.5160291 |
| | GRASP | 0.5016199 | – | 0.5658896 | 0.5408593 |
| | TS | 0.4498715 | 0.4341104 | – | 0.4854094 |
| | ACO | 0.4839709 | 0.4591407 | 0.5145906 | – |
| *Ie1* | HGA | – | 0.4798228 | 0.5296443 | 0.4968146 |
| | GRASP | 0.5201772 | – | 0.5448185 | 0.5207202 |
| | TS | 0.4713557 | 0.4551815 | – | 0.4673350 |
| | ACO | 0.5031557 | 0.4792798 | 0.5326650 | – |
| *IR21* | HGA | – | 0.4686801 | 0.5173445 | 0.4852773 |
| | GRASP | 0.5313199 | – | 0.5321664 | 0.5086278 |
| | TS | 0.4826555 | 0.4678336 | – | 0.4564823 |
| | ACO | 0.5147227 | 0.4913722 | 0.5435177 | – |

Table 12. Comparing HGA, GRASP, TS and ACO (MF05)


Finally, we note that there are no major differences between the Pareto frontiers generated by the four algorithms. Therefore, we calculated the percentage of solutions provided by each algorithm that belong to the Approximate Pareto Frontier (see Table 13).

**Percentage of solutions in the Approximate Pareto Frontier**

| | HGA (1) | GRASP (2) | TS (3) | ACO (4) |
|---|---|---|---|---|
| MF01 | 83.33% | 27.78% | 61.11% | 72.22% |
| MF02 | 85.71% | 25.00% | 75.00% | 71.43% |
| MF03 | 95.45% | 31.82% | 77.27% | 77.27% |
| MF04 | 92.59% | 51.85% | 81.48% | 74.07% |
| MF05 | 90.32% | 45.16% | 70.97% | 80.65% |

(1) (Frutos et al., 2010), (2) (Binato et al., 2001), (3) (Armentano & Scrich, 2000) and (4) (Heinonen & Pettersson, 2007)

Table 13. Comparing HGA, GRASP, TS and ACO (MF01, MF02, MF03, MF04 and MF05)

### **5. Conclusions**

We presented a Hybrid Genetic Algorithm (HGA) intended to solve the Flexible Job-Shop Scheduling Problem (Flexible JSSP). The application of the HGA required the calibration of its parameters in order to yield valid values for the problem at hand, which also constitute a reference for similar problems. We have shown that this HGA yields more solutions in the Approximate Pareto Frontier than the other algorithms. As said above, PISA was used here as a guide for the implementation of our HGA. Nevertheless, PISA has features that we tried to overcome, since they make the understanding and extension of its outcomes somewhat difficult. jMetal (Meta-heuristic Algorithms in Java) (Durillo et al., 2006) is an alternative to PISA implemented in Java. We are currently experimenting with other local search techniques in order to achieve a more aggressive exploration. We are also interested in evaluating the performance of the procedure on other kinds of problems, to see whether it saves resources without sacrificing precision in convergence.

### **6. Acknowledgments**

We would like to thank the Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) and the Universidad Nacional del Sur (UNS) for their economic support through Grant PGI 24/JO56.

### **7. References**

Adams, J.; Balas, E. & Zawack, D. (1988). The Shifting Bottleneck Procedure for job shop scheduling, *Management Science*, Vol. 34 (3), pp 391-401.

Akers, S. B. & Friedman, J. (1955). A Non-Numerical Approach to Production Scheduling Problems, *Operations Research*, Vol. 3 (4), pp 429-442.

Armentano, V. & Scrich, C. (2000). Tabu search for minimizing total tardiness in a job-shop, *International Journal of Production Economics*, Vol. 63, pp 131-140.

Balas, E. (1969). Duality in Discrete Programming: The Quadratic Case, *Management Science*, Vol. 16 (1), pp 14-32.

Binato, S.; Hery, W. J.; Loewenstern, D. M. & Resende, M. G. C. (2001). A GRASP for job shop scheduling, *Essays and Surveys in Meta-heuristics*, pp. 59-80.

Bleuler, S.; Laumanns, M.; Thiele, L. & Zitzler, E. (2003). PISA: A Platform and Programming Language Independent Interface for Search Algorithms, *Proceedings of Evolutionary Multi-Criterion Optimization*, pp. 494-508.

Chao-Hsien, J. & Han-Chiang, H. (2009). A hybrid genetic algorithm for no-wait job shop scheduling problems, *Expert Systems with Applications*, Vol. 36 (3), pp 5800-5806.

Cheng, C. C. & Smith, S. F. (1997). Applying constraint satisfaction techniques to job shop scheduling, *Annals of Operations Research*, Vol. 70, pp 327-357.

Chinyao, L. & Yuling, Y. (2009). Genetic algorithm-based heuristics for an open shop scheduling problem with setup, processing, and removal times separated, *Robotics and Computer-Integrated Manufacturing*, Vol. 25 (2), pp 314-322.

Coello Coello, C. A.; Van Veldhuizen, D. A. & Lamont, G. B. (2002). Evolutionary Algorithms for Solving Multi-Objective Problems, *Kluwer Academic Publishers*, New York.

Conover, W. (1999). Practical Nonparametric Statistics, *John Wiley & Sons*, New York.

Cortés Rivera, D.; Coello Coello, C. A. & Cortés, N. C. (2003). Use of an Artificial Immune System for Job Shop Scheduling, *Proceedings of the Second International Conference on Artificial Immune Systems*, Edinburgh, Scotland, Springer-Verlag, Lecture Notes in Computer Science, Vol. 2787, pp 1-10.

Cortés Rivera, D.; Coello Coello, C. A. & Cortés, N. C. (2004). Job shop scheduling using the clonal selection principle, *ACDM'2004*, UK.

Crawford, J. M. & Baker, A. B. (1994). Experimental Results on the Application of Satisfiability Algorithms to Scheduling Problems, *Computational Intelligence Research Laboratory*.

De Giovanni, L. & Pezzella, F. (2010). An Improved Genetic Algorithm for the Distributed and Flexible Job-shop Scheduling problem, *European Journal of Operational Research*, Vol. 200 (2), pp 395-408.

Deb, K.; Pratap, A.; Agarwal, S. & Meyarivan, T. (2002). A Fast and Elitist Multi-objective Genetic Algorithm: NSGA-II, *IEEE Transactions on Evolutionary Computation*, Vol. 6 (2), pp 182-197.

Dowsland, K. A. (1993). Simulated Annealing, in C. R. Reeves (ed.), Modern Heuristic Techniques for Combinatorial Problems, *Blackwell Scientific Pub*, Oxford.

Durillo, J. J.; Nebro, A. J.; Luna, F.; Dorronsoro, B. & Alba, E. (2006). jMetal: A Java Framework for Developing Multi-Objective Optimization Metaheuristics, Departamento de Lenguajes y Ciencias de la Computación, University of Málaga, *E.T.S.I. Informática*, Campus de Teatinos.

Fonseca, C. M. & Fleming, P. J. (1995). Multi-objective genetic algorithms made easy: Selection, sharing and mating restriction, *GALESIA*, pp 45-52.

Frutos, M.; Olivera, A. C. & Tohmé, F. (2010). A Memetic Algorithm based on a NSGA-II Scheme for the Flexible Job-Shop Scheduling Problem, *Annals of Operations Research*, Vol. 181, pp 745-765.

Frutos, M. & Tohmé, F. (2009). Desarrollo de un procedimiento genético diseñado para programar la producción en un sistema de manufactura tipo job-shop, *Proceedings of VI Congreso Español sobre Meta-heurísticas, Algoritmos Evolutivos y Bioinspirados*, España.


**6**

**A Hybrid Parallel Genetic Algorithm for Reliability Optimization**

Ki Tae Kim and Geonwook Jeon
*Korea National Defense University, Republic of Korea*

**1. Introduction**

Reliability engineering was first applied to communication and transportation systems in the late 1940s and early 1950s. Reliability is the probability that an item will perform a required function without failure, under stated conditions, for a stated period of time; a system with high reliability can therefore be regarded as a system of superior quality. Reliability is one of the most important design factors in the successful and effective operation of complex technological systems. As explained by Tzafestas (1980), one of the essential steps in the design of multiple-component systems is to use the available resources in the most effective way, either to maximize the system reliability or to minimize the consumption of resources while achieving specific reliability goals. System reliability can be improved by the following methods: reduction of the system complexity, the allocation of highly reliable components, the allocation of component redundancy (alone or combined with high component reliability), and the practice of a planned maintenance and repair schedule. This study deals with reliability optimization that maximizes the system reliability subject to resource constraints.

This study suggests mathematical programming models and a hybrid parallel genetic algorithm (HPGA). The suggested algorithm includes different heuristics, such as swap, 2-opt, and interchange (except for the reliability allocation problem with component choices (RAPCC)), for solution improvement. The component structure, reliability, cost, and weight were computed using HPGA, and the experimental results of HPGA were compared with the results of existing meta-heuristics and CPLEX.

**2. Literature review**

The goal of reliability optimization is to maximize the reliability of a system considering some constraints such as cost, weight, and so on. In general, reliability optimization divides into two categories: the reliability-redundancy allocation problem (RRAP) and the reliability allocation problem with component choices (RAPCC).

**2.1 The reliability-redundancy allocation problem (RRAP)**

The RRAP is the determination of both optimal component reliability and the number of component redundancy allowing mixed components to maximize the system reliability
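The trade-off at the heart of redundancy allocation can be illustrated with the standard series-parallel reliability model, in which subsystem *i* carries *n_i* redundant components of reliability *r_i* and the subsystems are connected in series. A minimal sketch; the function names and figures are illustrative, not from this chapter:

```python
# Series-parallel reliability model: each subsystem has n parallel components
# of reliability r; the system works only if every subsystem works.

def subsystem_reliability(r, n):
    """Reliability of n parallel components, each with reliability r."""
    return 1 - (1 - r) ** n

def system_reliability(components):
    """Product over subsystems in series; components = [(r_i, n_i), ...]."""
    total = 1.0
    for r, n in components:
        total *= subsystem_reliability(r, n)
    return total

# Adding redundancy raises system reliability, at extra cost and weight:
print(round(system_reliability([(0.9, 1), (0.8, 1)]), 4))  # 0.72
print(round(system_reliability([(0.9, 2), (0.8, 3)]), 4))  # 0.9821
```

The RRAP then searches over the component reliabilities and redundancy levels to maximize this product subject to the resource constraints.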

