**Using a Genetic Algorithm to Solve the Benders' Master Problem for Capacitated Plant Location**

Ming-Che Lai<sup>1</sup> and Han-suk Sohn<sup>2,\*</sup>, <sup>1</sup>*Yu Da University, Taiwan*; <sup>2</sup>*New Mexico State University, USA*

#### **1. Introduction**

The capacitated plant location problem (CPL) consists of locating a set of potential plants with given capacities and assigning a set of customers to these plants. The objective is to minimize the total fixed and shipping costs while satisfying the demand of all the customers without violating the capacity restrictions of the plants. The CPL is a well-known combinatorial optimization problem, and a number of decision problems can be obtained as special cases of it. A substantial number of heuristic solution algorithms have been proposed in the literature (see Rolland et al., 1996; Holmberg & Ling, 1997; Delmaire et al., 1999; Kratica et al., 2001; He et al., 2003; Uno et al., 2005). Exact solution methods have also been studied by many authors. These include branch-and-bound procedures, typically with linear programming relaxation (Van Roy & Erlenkotter, 1982; Geoffrion & Graves, 1974) or Lagrangian relaxation (Cortinhal & Captivo, 2003). Van Roy (1986) used cross decomposition, a hybrid of primal and dual decomposition, and Geoffrion & Graves (1974) considered Benders' decomposition to solve the CPL problem. Unlike in many other mixed-integer linear programming applications, however, Benders' decomposition was not successful in this problem domain because of the difficulty of solving the master system. In mixed-integer linear programming problems, where Benders' algorithm is most often applied, the master problem selects values for the integer variables (the more difficult decisions) and the subproblem is a linear programming problem which selects values for the continuous variables (the easier decisions). If the constraints are explicit only in the subproblem, then the master problem is free of explicit constraints, making it more amenable to solution by a genetic algorithm (GA). The fitness function of the GA is, in this case, evaluated quickly and simply by evaluating a set of linear functions.
In this chapter, therefore, we discuss a hybrid algorithm (Lai et al., 2010) and its implementation to overcome this difficulty with Benders' decomposition. The hybrid algorithm is based on the solution framework of Benders' decomposition, together with the use of a GA to effectively reduce the computational difficulty. The rest of

<sup>\*</sup> Corresponding Author

this chapter is organized as follows. In Section 2 the classical capacitated plant location problem is presented. The applications of Benders' decomposition and the genetic algorithm are described in Sections 3 and 4, respectively. In Section 5 the hybrid Benders/genetic algorithm to solve the addressed problem is illustrated. A numerical example is described in Section 6. Finally, some concluding remarks are presented in Section 7, followed by an acknowledgment and a list of references in Sections 8 and 9, respectively.

#### **2. Problem formulation**

The classical capacitated plant location problem with *n* potential plants and *m* customers can be formulated as a mixed integer program:

$$\text{CPL: Min} \sum_{i=1}^{m} F_i Y_i + \sum_{i=1}^{m} \sum_{j=1}^{n} C_{ij} X_{ij} \tag{1}$$

$$\text{Subject to } \sum_{i=1}^{m} X_{ij} \ge D_j, \quad j = 1, \dots, n \tag{2}$$

$$\sum_{j=1}^{n} X_{ij} \le S_i Y_i, \quad i = 1, \dots, m \tag{3}$$

$$X_{ij} \ge 0, \quad i = 1, \dots, m; \; j = 1, \dots, n \tag{4}$$

$$Y_i \in \{0, 1\}, \quad i = 1, \dots, m \tag{5}$$

Here, *Y* is a vector of binary variables which selects the plants to be opened, while *X* is an array of continuous variables which indicate the shipments from the plants to the customers. *Fi* is the fixed cost of operating plant *i* and *Si* its capacity if it is opened. *Cij* is the shipping cost of all of customer *j*'s demand *Dj* from plant *i*. The first constraint ensures that all the demand of each customer must be satisfied. The second constraint ensures that the total demand supplied from each plant does not exceed its capacity. As well, it ensures that no customer can be supplied from a closed plant.
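To make the formulation concrete, the objective (1) and constraints (2)-(3) can be checked directly for a candidate solution. The following Python sketch is not part of the original chapter; the function names (`cpl_cost`, `cpl_feasible`) and the two-plant, two-customer instance are purely hypothetical.

```python
def cpl_cost(Y, X, F, C):
    """Objective (1): fixed cost of open plants plus total shipping cost."""
    fixed = sum(f * y for f, y in zip(F, Y))
    shipping = sum(C[i][j] * X[i][j]
                   for i in range(len(F)) for j in range(len(C[0])))
    return fixed + shipping

def cpl_feasible(Y, X, S, D):
    """Constraints (2)-(3): every demand D_j is met, and no plant ships
    more than its capacity S_i Y_i (zero if plant i is closed)."""
    m, n = len(S), len(D)
    demand_met = all(sum(X[i][j] for i in range(m)) >= D[j] for j in range(n))
    capacity_ok = all(sum(X[i][j] for j in range(n)) <= S[i] * Y[i]
                      for i in range(m))
    return demand_met and capacity_ok

# Hypothetical instance: m = 2 plants, n = 2 customers.
F = [10, 12]          # fixed operating costs F_i
S = [15, 20]          # capacities S_i
C = [[2, 3], [4, 1]]  # shipping costs C_ij
D = [8, 7]            # demands D_j
Y = [1, 1]            # both plants open
X = [[8, 0], [0, 7]]  # shipments X_ij
```

Here the candidate ships each customer's whole demand from one plant, so both constraints hold and the cost is 10 + 12 + 2·8 + 1·7 = 45.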

#### **3. Benders' decomposition algorithm**

Benders' decomposition algorithm was initially developed to solve mixed-integer linear programming problems (Benders, 1962), i.e., linear optimization problems which involve a mixture of different types of variables or different types of functions. A successful implementation of the method to design a large-scale multi-commodity distribution system has been described in the paper of Geoffrion & Graves (1974). Since then, Benders' decomposition algorithm has been successfully applied in many other areas, for example, in vehicle assignment (Cordeau et al., 2000, 2001), cellular manufacturing systems (Heragu, 1998), local access network design (Randazzo et al., 2001), spare capacity allocation (Kennington, 1999), multi-commodity multi-mode distribution planning (Cakir, 2009), and generation expansion planning (Kim et al., 2011). Benders' algorithm projects the problem onto the *Y*-space by defining the function

$$v(Y) = \sum_{i=1}^{m} F_i Y_i + \text{Min} \sum_{i=1}^{m} \sum_{j=1}^{n} C_{ij} X_{ij} \tag{6}$$

$$\text{Subject to } \sum_{i=1}^{m} X_{ij} \ge D_j, \quad j = 1, \dots, n \tag{7}$$

$$\sum_{j=1}^{n} X_{ij} \le S_i Y_i, \quad i = 1, \dots, m \tag{8}$$

$$X_{ij} \ge 0, \quad i = 1, \dots, m; \; j = 1, \dots, n \tag{9}$$

and restating the problem (CPL) as


$$\underset{Y \in \{0,1\}^{m}}{\text{Min}}\; v(Y) \tag{10}$$

We will refer to the evaluation of *v(Y)* as the (primal) subproblem, a transportation LP whose dual LP problem is

$$v(Y) = \sum_{i=1}^{m} F_i Y_i + \text{Max}\left[ -\sum_{i=1}^{m} S_i Y_i U_i + \sum_{j=1}^{n} D_j V_j \right] \tag{11}$$

$$\text{Subject to } -U_i + V_j \le C_{ij}, \quad i = 1, \dots, m; \; j = 1, \dots, n \tag{12}$$

$$U_i \ge 0, \quad i = 1, \dots, m; \quad V_j \ge 0, \quad j = 1, \dots, n \tag{13}$$

If $\psi = \{(\hat{U}^k, \hat{V}^k),\; k = 1, \dots, K\}$ is the set of basic feasible solutions to the dual subproblem, then in principle *v(Y)* could be evaluated by a complete enumeration of the *K* basic feasible solutions. (The motivation for using the dual problem is, of course, that $\psi$ is independent of *Y*.) That is,

$$v(Y) = \sum_{i=1}^{m} F_i Y_i + \max_{k=1,2,\dots,K} \left\{ -\sum_{i=1}^{m} S_i \hat{U}_i^k Y_i + \sum_{j=1}^{n} D_j \hat{V}_j^k \right\} = \max_{k=1,2,\dots,K} \left\{ \alpha^k Y + \beta^k \right\} \tag{14}$$

where $\alpha_i^k \equiv F_i - S_i \hat{U}_i^k$ and $\beta^k \equiv \sum_{j=1}^{n} D_j \hat{V}_j^k$.

The function *v(Y)* may be approximated by the underestimate

$$\underline{v}_T(Y) \equiv \max_{k=1,2,\dots,T} \left\{ \alpha^k Y + \beta^k \right\} \tag{15}$$

where *T*≤*K*. Benders' decomposition alternates between a master problem

$$\underset{Y \in \{0,1\}^{m}}{\text{Min}}\; \underline{v}_T(Y) \tag{16}$$
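As a small illustration (not from the chapter), the underestimate (15) and the master problem (16) can be computed directly from a list of cuts $(\alpha^k, \beta^k)$ by brute-force enumeration when *m* is tiny; the cut data below are hypothetical.

```python
from itertools import product

def v_underestimate(Y, cuts):
    """Eq. (15): v_T(Y) = max over the T cuts of alpha^k . Y + beta^k."""
    return max(sum(a * y for a, y in zip(alpha, Y)) + beta
               for alpha, beta in cuts)

def solve_master_by_enumeration(cuts, m):
    """Eq. (16): minimize v_T(Y) over Y in {0,1}^m (viable only for small m)."""
    return min((tuple(Y) for Y in product((0, 1), repeat=m)),
               key=lambda Y: v_underestimate(Y, cuts))

# Two hypothetical cuts for m = 2 plants.
cuts = [((3, -1), 5), ((-2, 4), 1)]
```

For these cuts, $\underline{v}_T((1,0)) = \max(3+5,\,-2+1) = 8$, and enumeration yields a master optimum with value 5.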

which selects a trial *Y^k*, and the subproblem, which evaluates *v(Y^k)* and computes a new linear support $\alpha^k Y + \beta^k$ using the dual solution of the transportation subproblem. The major effort required by Benders' algorithm is the repeated solution of the master problem, or its mixed-integer LP equivalent,

$$\text{Min } Z \tag{17}$$

$$\text{Subject to } Z \ge \alpha^k Y + \beta^k, \quad k = 1, \dots, T \tag{18}$$

$$Y_i \in \{0, 1\}, \quad i = 1, \dots, m \tag{19}$$

One approach to avoiding some of this effort is by suboptimizing the master problem, i.e., finding a feasible solution of the linear system

$$\hat{Z} > \alpha^k Y + \beta^k, \quad k = 1, \dots, T \tag{18}$$

$$Y_i \in \{0, 1\}, \quad i = 1, \dots, m \tag{19}$$

i.e., *Y* such that $\underline{v}_T(Y) < \hat{Z}$, where $\hat{Z}$ is the value of the incumbent at the current iteration, i.e., the least upper bound provided by the subproblems. (By using implicit enumeration to suboptimize the master problem, and restarting the enumeration when solving the following master problem, this modification of Benders' algorithm allows a single search of the enumeration tree, interrupted repeatedly to solve subproblems.) For more information on the problem and the application of Benders' algorithm for its solution, refer to Salkin et al. (1989).
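The suboptimization test amounts to accepting any *Y* whose underestimate beats the incumbent. A minimal sketch with a hypothetical function name and hypothetical cut data, not taken from the chapter:

```python
def improves_incumbent(Y, cuts, z_hat):
    """Accept Y if Z_hat > alpha^k . Y + beta^k for every cut k,
    i.e., if the master underestimate v_T(Y) is below the incumbent."""
    v_T = max(sum(a * y for a, y in zip(alpha, Y)) + beta
              for alpha, beta in cuts)
    return v_T < z_hat

# Hypothetical cuts (alpha^k, beta^k) and incumbent value.
cuts = [((3, -1), 5), ((-2, 4), 1)]
```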

#### **4. Genetic algorithm**

Genetic algorithms (GAs) have been effective and have been employed for solving a variety of difficult optimization problems. Much of the basic groundwork in implementing and adapting GAs has been developed by Holland (1992). Since then, a large number of papers have appeared in the literature, proposing variations to the basic algorithm or describing different applications. In many cases, the GA can produce excellent solutions in a reasonable amount of time. For certain cases, however, the GA can fail to perform for a variety of reasons. Liepins & Hilliard (1989) have pointed out three of these reasons: (1) choice of a representation that is not consistent with the crossover operator; (2) failure to represent problem-specific information such as constraints; and (3) convergence to local optima (premature convergence). The first reason for failure, a representation inconsistent with the crossover operator, is most easily illustrated by an example of the traveling salesman problem, in which the crossover operator simply fails to preserve the feasible permutation in most cases. The second reason for failure is the inability to represent problem-specific information such as constraints in an optimization problem. In general, for constrained problems, there is no guarantee that feasibility will be preserved by crossover or mutation, or even that a randomly-generated initial population is feasible. A broad range of approaches have been used in the literature to remedy this situation. However, there is no single mechanism that has performed consistently well in handling constrained problems


with genetic algorithms (Reeves, 1997). The most direct solution is simply to ignore this problem. If an infeasible solution is encountered, it may be assigned a very low fitness value to increase the chance that it will "die off" soon. But sometimes, infeasible solutions are close to the optimum by any reasonable distance measure. Another direct solution is to modify the objective function by incorporating a penalty function which reduces the fitness by an amount which varies as the degree of infeasibility. Unfortunately, not all penalty functions work equally well, and care must be exercised in their choice (Liepins & Hilliard, 1989). If the penalty is too small, many infeasible solutions are allowed to enter the population pool; if it is too large, the search is confined to a very small portion of the search space. Another increasingly popular technique for coping with infeasibility is the use of repair algorithms. These heuristic algorithms accept infeasible solutions but repair them in order to make them feasible before inserting them into the population. We can find various repair algorithms in the context of the traveling salesman problem in the literature (Goldberg & Lingle, 1985; Oliver et al., 1987; Chatterjee et al., 1996). Several practical questions arise, such as whether it should be the original offspring or the repaired version that is used in the next generation, and whether randomness should be entirely sacrificed by adopting the repair methods. The third reason for failure is convergence to local optima (premature convergence). This condition occurs when most strings in the population have similar allele values. In this case, applying crossover to similar strings results in another similar string, and no new areas of the search space are explored (Levine, 1997).
Many improvements to genetic algorithms help to avoid premature convergence, such as thorough randomization of initial populations, multiple restarts of problems, and appropriate parameter settings, i.e., careful adjustment of the mutation rate and a suitable population size.
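As a generic illustration of the penalty idea discussed above (not specific to the CPL master problem, which is free of explicit constraints), one hypothetical scheme reduces the fitness by an amount proportional to the degree of infeasibility; the function name `penalized_fitness` and the weight are illustrative assumptions.

```python
def penalized_fitness(raw_fitness, violation, weight=10.0):
    """Reduce fitness by an amount that grows with the degree of
    infeasibility; 'weight' trades search breadth against feasibility
    pressure (too small admits infeasible strings, too large confines
    the search)."""
    return raw_fitness - weight * max(0.0, violation)

# Example: a string exceeding a capacity by 3 units, raw fitness 50.
```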

Most researchers agree that, to guarantee success of an application of genetic algorithms, the representation system is of crucial importance. The difference between a successful application and an unsuccessful one often lies in the encoding. Kershenbaum (1997) pointed out that an ideal encoding would have the following properties: (a) It should be able to represent all feasible solutions; (b) It should be able to represent only feasible solutions. (An encoding that represents fewer infeasible solutions is generally better than one that represents a large number of infeasible solutions. The larger the number of representable infeasible solutions, the more likely it is that crossover and mutation will produce infeasible offspring, and the less effective the GA will become.); (c) All (feasible) solutions should have an equal probability of being represented; (d) It should represent useful schemata using a small number of genes that are close to one another in the chromosome. (It is generally very difficult to create an encoding with this property a priori, since we do not know in advance what the useful schemata are. It is, however, possible to recognize the presence of short, compact schemata in solutions with high fitness and thus to validate the encoding after the fact. This is important for recognizing successful GA applications.); and (e) The encoding itself should possess locality, in the sense that small changes to the chromosome make small changes in the solution. Kershenbaum also pointed out that although some of these properties conflict (often making tradeoffs), to the extent that those properties can be achieved, genetic algorithms are likely to work well. In this section, we focus on the design of the GA approach for the master problem of the CPL problem. More discussion of some of these as well as definitions and some of the basic GA terminology that is used in

Using a Genetic Algorithm to

2001).

**4.4 Replacement** 

**4.5 Termination** 

Solve the Benders' Master Problem for Capacitated Plant Location 411

mutation rate. Typically, but not always, mutation will flip a single bit. In fact, GenJam's mutation operators, on the other hand, are more complex than flipping a bit. They adopt several standard melodic development techniques, such as transposition, retrograde, rotation, inversion, sorting, and retrograde-inversion. Because these operators are all musically meaningful, they operate at the event level rather than on individual bits (Biles,

After the process of selection, crossover, and mutation, the current population is replaced by the new population. Those successful individuals of the each generation are more likely to survive in the next generation and those unsuccessful individuals are less likely to survive. In our GA, we use the incremental replacement method (See Beasley et al., 1993), i.e., only the new individuals whose fitness values are better than those of the current will be

In general, a genetic algorithm is terminated after a specified number of generations or when fitness values have converged. Our GA terminates when there has been no

The basic idea of Benders' partitioning algorithm for mixed-integer linear problems is to decompose the original problem into a pure integer master problem and one or more subproblems in the continuous variables, and then to iterate between these two problems. If the objective function value of the optimal solution to the master problem is equal to that of the subproblem, then the algorithm terminates with the optimal solution of the original mixed-integer problem. Otherwise, we add constraints, termed Benders' cuts, one at a time to the master problem, and solve it repeatedly until the termination criteria are met. A major difficulty with this decomposition lies in the solution of the master problem, which is a

For the addressed CPL problem, however, the constraints are explicit only in the subproblem and the master problem is free of explicit constraints. Thus, the master problem

Lai et al. (2010) introduced a hybrid Benders/Genetic algorithm which is a variation of Benders' algorithm that uses a genetic algorithm to obtain "good" subproblem solutions to the master problem. Lai and Sohn (2011) conducted a study applying the hybrid Benders/Genetic algorithm to the vehicle routing problem. Below is a detailed description

**Step 1.** Initialization. We initialize the iteration counter *k* to zero, select initial trial values for the vector of binary variables *Y* which selects the plants to be opened. **Step 2.** Primal Subsystem. We evaluate the value of *v(Y)* by solving a tranportation linear

programming problem whose fesible region is independent of *Y.*

replaced. Thus, the individuals with the best fitness are always in the population.

improvement in the best solution found for 100 iterations.

of the hybrid algorithm and it is illlustrated in Fig. 1 as well.

**5. Hybrid Benders/Genetic algorithm** 

"hard" problem, costly to compute.

is more amenable to solution by GA.

this section can be found in Goldberg (1989) and Davis (1991). The implementation of GA is a step-by-step procedure:

#### **4.1 Initialization**

Initialization generates an initial population. The population size and the length of each "chromosome" depend on the user's choice and the requirements of the specific problem. To start, we usually have a totally random population. Each random string (or "chromosome") of the population, representing a possible solution to the problem, is then evaluated using an objective function. The selection of this objective function is important because it practically encompasses all the knowledge of the problem to be solved; the user should choose the combination of desirable attributes best suited to his purposes. In the CPL problem, the variable *Y* is a vector of binary integers, so it is easily coded as a binary string with position *i* corresponding to plant *i*. For example, *Y* = (0 1 1 0 1 0 0) means that plants 1, 4, 6 and 7 are not open and plants 2, 3 and 5 are open. In our GA, a population size of 50 was used, and the fitness function is evaluated quickly and simply from a set of linear functions, i.e., $v_T(Y) \equiv \max_{k=1,2,\ldots,T} \{\alpha_k + \beta_k Y\}$.
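Under this encoding, decoding a chromosome and scoring it against the accumulated cuts each take only a few lines. A minimal sketch (the cut data in the test are hypothetical):

```python
def open_plants(Y):
    # Position i of the bit string (1-based) corresponds to plant i;
    # a 1 means the plant is open.
    return [i + 1 for i, bit in enumerate(Y) if bit]

def fitness(Y, cuts):
    # v_T(Y) = max over k of (alpha_k + beta_k . Y): a handful of dot
    # products, so each chromosome is scored without solving an LP.
    return max(alpha + sum(b * y for b, y in zip(beta, Y))
               for alpha, beta in cuts)

Y = [0, 1, 1, 0, 1, 0, 0]   # plants 2, 3 and 5 open; 1, 4, 6 and 7 closed
```

Because the fitness is just the maximum of a few linear functions, the GA can afford to evaluate every chromosome in every generation.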

#### **4.2 Selection**

Selection (called "reproduction" by Goldberg) starts with the current population and is applied to create an intermediate population, or mating pool. The chromosomes in the mating pool then undergo other operations, such as crossover and/or mutation, to create the next population. In the canonical genetic algorithm, selection is made according to fitness, which can be determined in many ways. For example, fitness can be assigned according to a string's selection probability in the current population (Goldberg, 1989), a string's rank in the population (Baker, 1985; Whitley, 1989), or simply by its raw score. In our GA, the last approach is used, i.e., a string with an average score is given one mating; a string scoring one standard deviation above the average is given two matings; and a string scoring one standard deviation below the average is given no mating (Michalewicz, 1998).
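The standard-deviation rule above can be sketched as follows; the exact thresholding at the boundaries is our own assumption.

```python
import statistics

def mating_counts(scores):
    # Two matings for a string at least one standard deviation above the
    # mean, no mating for one at least a standard deviation below, and
    # one mating otherwise (average strings).
    mu = statistics.mean(scores)
    sigma = statistics.pstdev(scores)
    counts = []
    for s in scores:
        if sigma == 0 or abs(s - mu) < sigma:
            counts.append(1)            # near-average string: one mating
        elif s > mu:
            counts.append(2)            # well above average: two matings
        else:
            counts.append(0)            # well below average: no mating
    return counts
```

Summing the counts keeps the mating pool roughly the same size as the population, since each above-average extra mating is offset by a below-average string receiving none.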

#### **4.3 Crossover and mutation**

We use a standard single-point crossover method. The duplicated strings in the mating pool are randomly paired off to produce two offspring per mating. The crossover location is generally chosen at random, but this is not always the case. For example, in the GenJam system, an interactive genetic algorithm jazz improviser built on Dannenberg's Carnegie Mellon MIDI Toolkit, the distribution for selecting the crossover point is biased toward the center of the chromosome to promote diversity in the population: if a crossover point is too near either end of the chromosome, the resulting children are more likely to resemble their parents. That would lead GenJam to repeat itself when two nearly identical phrases happen to be played close to one another in the same solo, which is not desirable behavior. The role of mutation is to guarantee the diversity of the population. In most cases, mutation alters one or more genes (positions in a chromosome) with a probability equal to the mutation rate. Typically, but not always, mutation flips a single bit. GenJam's mutation operators, in contrast, are more complex than flipping a bit: they adopt several standard melodic development techniques, such as transposition, retrograde, rotation, inversion, sorting, and retrograde-inversion. Because these operators are all musically meaningful, they operate at the event level rather than on individual bits (Biles, 2001).
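A minimal sketch of the single-point crossover and bit-flip mutation operators described above (function and parameter names are our own):

```python
import random

def crossover(p1, p2, rng):
    # Single-point crossover: cut both parents at the same random
    # position and swap the tails, giving two offspring per mating.
    point = rng.randrange(1, len(p1))
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(chrom, rate, rng):
    # Bit-flip mutation: each gene flips independently with
    # probability equal to the mutation rate.
    return [1 - g if rng.random() < rate else g for g in chrom]
```

With complementary parents such as all-zeros and all-ones, the two offspring remain complementary bit by bit, which makes the tail-swap easy to check.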

#### **4.4 Replacement**


After selection, crossover, and mutation, the current population is replaced by the new population. Successful individuals of each generation are more likely to survive into the next generation, and unsuccessful individuals are less likely to survive. In our GA, we use the incremental replacement method (see Beasley et al., 1993), i.e., a new individual replaces a member of the current population only if its fitness value is better. Thus, the individuals with the best fitness are always in the population.
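The incremental replacement step can be sketched as follows (maximization is assumed, and the choice to always evict the current worst member is our own reading of the method):

```python
def incremental_replace(population, offspring, fitness):
    # A new individual enters only if its fitness beats the worst current
    # member's, so the best strings found so far are never lost.
    pop = sorted(population, key=fitness)   # pop[0] is the worst member
    for child in offspring:
        if fitness(child) > fitness(pop[0]):
            pop[0] = child                  # evict the worst, keep the size
            pop.sort(key=fitness)
    return pop
```

Because an offspring can only displace a worse individual, the population's best fitness is non-decreasing from generation to generation, which is exactly the elitist property the text describes.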

#### **4.5 Termination**

In general, a genetic algorithm is terminated after a specified number of generations or when fitness values have converged. Our GA terminates when there has been no improvement in the best solution found for 100 iterations.
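The stopping rule amounts to a stall counter wrapped around the generation loop. In this sketch, `step` is a hypothetical hook that runs one generation and returns its best fitness; the name and calling convention are our own.

```python
def run_until_stalled(step, best=float("-inf"), patience=100):
    # Stop once `patience` consecutive generations pass without any
    # improvement in the best solution found (maximization assumed).
    stall = 0
    while stall < patience:
        candidate = step()      # run one generation, get its best fitness
        if candidate > best:
            best, stall = candidate, 0   # improvement: reset the counter
        else:
            stall += 1
    return best
```

With `patience=100` this reproduces the criterion in the text: the GA stops only after 100 consecutive generations fail to improve the incumbent.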
