**The Successive Zooming Genetic Algorithm and Its Applications**
(*Bio-Inspired Computational Algorithms and Their Applications*)

**1. Introduction**

Optimization techniques range widely, from the early gradient techniques [1] to the latest random techniques [16, 18, 19], including ant colony optimization [13, 17]. Gradient techniques are very powerful when applied to smooth, well-behaved objective functions, especially a monotonic function with a single optimum. They encounter difficulties in problems with multiple optima and in those with a sharp gradient, such as problems with constraints or jumps. The solution may converge to a local optimum, or fail to converge at all and diverge near a jump.

To remedy these difficulties, several different techniques based on random searching have been developed: fully random methods, simulated annealing methods, and genetic algorithms. The fully random methods, like the Monte Carlo method, are perfectly global but exhibit very slow convergence. The simulated annealing methods are modified versions of the hill-climbing technique; they have enhanced global search ability, but they too have slow convergence rates.

Genetic algorithms [2-5] have good global search ability with a relatively fast convergence rate. The global search ability comes from the crossover and mutation of chromosomes in the reproduced pool. The fast convergence comes from the selection, which accounts for fitness through the roulette or tournament operation. The micro-GA [3] does not need mutation, for it introduces into the mating pool completely new individuals that have no relation to the evolved, similar individuals. Its pool size is smaller than that used by the simple GA, which needs a big pool to generate a variety of individuals.

Versatile as they are, genetic algorithms have some difficulty in identifying the optimal solution correct to several significant digits. They can quickly approach the vicinity of the global optimum but, in many cases, thereafter march toward it too slowly. To enhance the convergence rate, hybrid methods have been developed. A typical one obtains a rough optimum using the GA first and then approaches the exact optimum by using a gradient method. Another finds the rough optimum using the GA first and then searches for the exact optimum by using the GA again in a local domain selected based on certain logic [7].

The SZGA (Successive Zooming Genetic Algorithm) [6, 8-12] zooms the search domain for a specified number of steps to obtain the optimal solution. The tentative optimum solutions are corrected up to several significant digits according to the number of zooms and the zooming rate. The SZGA can predict the possibility that the solution found is the exact optimum solution. The zooming factor, the number of sub-iteration populations, the number of zooms, and the dimension of a given problem affect the possibility and accuracy of the solution. In this chapter, we examine these parameters and propose a method for selecting the optimal values of the parameters in SZGA.

#### **2. The Successive Zooming Genetic Algorithm**

This section briefly introduces the successive zooming genetic algorithm [6] and provides the basis for the selection of the parameters used. The algorithm has been applied successfully to many optimization problems. The successive zooming genetic algorithm involves the successive reduction of the search space around the candidate optimum point. Although this method can also be applied to a general Genetic Algorithm (GA), in the current study it is applied to the Micro-Genetic Algorithm (MGA). The working procedure of the SZGA is as follows. First, the initial solution population is generated and the MGA is applied. Thereafter, for every 100 generations, the elitist point with the best fitness is identified. Next, the search domain is reduced to (XOPT − α^k/2, XOPT + α^k/2), and the optimization procedure is continued on the reduced domain (Fig. 1). This reduction of the search domain increases the resolution of the solution, and the procedure is repeated until a satisfactory solution is identified.
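The domain-reduction step above can be sketched in a few lines. The following is a minimal illustration only (the function name, the normalized domain [0, 1], and the clipping to the original bounds are our own assumptions, not the authors' code):

```python
def zoomed_interval(x_opt, alpha, k, lo=0.0, hi=1.0):
    """Search interval for zoom step k: width alpha**k centered at the
    current elitist x_opt, clipped to the original domain [lo, hi]."""
    half = alpha ** k / 2.0
    return max(lo, x_opt - half), min(hi, x_opt + half)

# Each zoom shrinks the interval by a factor alpha around the elitist:
print(zoomed_interval(0.5, 0.5, 1))  # (0.25, 0.75)
print(zoomed_interval(0.5, 0.5, 2))  # (0.375, 0.625)
```

Each zoom step thus multiplies the attainable resolution by 1/α, which is why the accuracy after NZOOM steps scales like a power of α.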

Fig. 1. Flowchart of SZGA and schematics of successive zooming algorithm

The SZGA can assess the reliability of the obtained optimal solution by the reliability equation expressed with three parameters and the dimension of the solution NVAR.

$$R_{SZGA} = \left[1 - \left(1 - (\alpha/2)^{N_{VAR}} \times \beta_{AVG}\right)^{N_{SP}}\right]^{N_{ZOOM}-1} \tag{1}$$

where,


α: zooming factor; β: improvement factor
NVAR: dimension of the solution; NZOOM: number of zooms
NSUB: number of sub-iterations; NPOP: number of populations
NSP: total number of individuals during the sub-iterations (NSP = NSUB × NPOP)
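Equation (1) is straightforward to evaluate numerically. The sketch below is illustrative only: βAVG is problem-dependent and must come from the regression mentioned later, so the numbers used here are assumed values, not results from the chapter.

```python
def reliability_szga(alpha, beta_avg, n_var, n_sp, n_zoom):
    """Eq. (1): probability that every zoom step keeps the true optimum
    inside the zoomed domain."""
    p_hit = (alpha / 2.0) ** n_var * beta_avg  # per-individual capture probability
    per_step = 1.0 - (1.0 - p_hit) ** n_sp     # at least one capture per sub-iteration
    return per_step ** (n_zoom - 1)

# Reliability grows monotonically with the sub-iteration population NSP
# (alpha=0.1, beta_avg=10, NVAR=2, NZOOM=5 are assumed, illustrative values):
r_small = reliability_szga(0.1, 10.0, 2, 100, 5)
r_large = reliability_szga(0.1, 10.0, 2, 500, 5)
assert 0.0 <= r_small <= r_large <= 1.0
```

This monotonicity is exactly why a target reliability imposes a lower bound on NSP for given α and NZOOM.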

Three parameters control the performance of the SZGA: the zooming factor α, number of zooming operations NZOOM, and sub-iteration population number NSP. According to previous research, the optimal parameters for SZGA, such as the zooming factor, number of zooming operations, and sub-iteration population number, are closely related to the number of variables used in the optimization problem.

#### **2.1 Selection of parameters in the SZGA**

The zooming factor α, the sub-iteration population number NSP, and the number of zooms NZOOM of the SZGA greatly affect the possibility of finding an optimal solution and the accuracy of the found solution. These parameters have so far been selected empirically or by trial and error. The values assigned to these parameters determine the reliability and accuracy of the solution. Improper values might result in the loss of the global optimum, or may necessitate a further search because of the low accuracy of the optimum solution found with them. We shall optimize the SZGA itself by investigating the relation among these parameters and by finding their optimal values. A standard way of selecting the values of these parameters in SZGA, considering the dimension of the solution, will be provided.

The SZGA is optimized with respect to the zooming factor α, the sub-iteration population number NSP, and the number of zooms NZOOM, for a target reliability of 99.9999% and a target accuracy of 10⁻⁶. The objective of the current optimization is to minimize the computational load while meeting the target reliability and target accuracy. Instead of using empirical values, we suggest a standard way of finding, by any optimization technique, the optimal values of these parameters, which optimize the SZGA itself. Thus, before trying to solve any given optimization problem using the SZGA, we shall first optimize the SZGA itself to find the optimal values of its parameters, and then solve the original optimization problem to find the optimal solution by using these parameters.

After analyzing the relation among the parameters, we shall formulate the problem for the optimization of the SZGA itself. The solution vector comprises the zooming factor α, the sub-iteration population number NSP, and the number of zooms NZOOM. The objective function is composed of the difference of the actual reliability from the target reliability, the difference of the actual accuracy from the target accuracy, the difference of the actual NSP from the proposed NSP, and the number of total population generated as well.

$$F(\alpha, N_{SP}, N_{ZOOM}) = \Delta R_{SZGA} + \Delta A + \Delta N_{SP} + (N_{SP} \times N_{ZOOM}) \tag{2}$$

where,

Δ*RSZGA* : difference to the target reliability

Δ*A* : difference to the target accuracy

Δ*NSP* : difference to the proposed NSP

The problem for optimization of the SZGA itself can be formulated by using this objective function as follows:

$$\text{Minimize F(X)}\tag{3}$$

where,

$$\begin{aligned} \mathbf{X} &= \{\alpha,\ N_{SP},\ N_{ZOOM}\}^{T} \\ 0 &< \alpha < 1 \\ N_{SP} &\sim 100 \\ N_{ZOOM} &\ge 1 \end{aligned}$$

The difference of the actual reliability to the target reliability is the difference between RSZGA and 99.9999%, where reliability RSZGA is rewritten with an average improvement factor as

$$R_{SZGA} = \left[1 - \left(1 - (\alpha/2)^{N_{VAR}} \times \beta_{AVG}\right)^{N_{SP}}\right]^{N_{ZOOM}-1} \tag{4}$$

Here we can see the average improvement factor βAVG, which is to be regressed later on. The difference of the realized accuracy from the target accuracy is the difference between the accuracy A and 10⁻⁶, where the accuracy A is actually an upper limit and may be written as

$$A = \alpha^{N_{ZOOM}-1} \tag{5}$$

The difference of the actual NSP from the proposed NSP is the difference between NSP and 100 [7]. In organizing the optimization algorithm, each element in the objective function is given a different weight according to its importance. Thus, the target reliability and target accuracy are met first, and then the number of total population generated is minimized. Although any optimization technique could have been used to solve eq. (3), one can adopt the SZGA in optimizing the SZGA itself to obtain a solution quickly and accurately.
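Equation (5) also gives a quick way to find the smallest NZOOM that meets a target accuracy for a given α, namely the smallest integer with α^(NZOOM−1) ≤ A_target. A short sketch (ours, not the authors' code), cross-checked against the (α, NZOOM) pairs listed in Table 1:

```python
import math

def accuracy_upper_bound(alpha, n_zoom):
    """Eq. (5): upper bound on the solution error after n_zoom zooms."""
    return alpha ** (n_zoom - 1)

def min_zooms(alpha, target):
    """Smallest NZOOM such that alpha**(NZOOM - 1) <= target."""
    return 1 + math.ceil(math.log(target) / math.log(alpha))

# The optimized (alpha, NZOOM) pairs of Table 1 satisfy the 1e-6 target exactly:
assert min_zooms(0.02573, 1e-6) == 5    # 2 design variables
assert min_zooms(0.1303, 1e-6) == 8     # 4 design variables
assert min_zooms(0.4216, 1e-6) == 17    # 8 design variables
assert min_zooms(0.5176, 1e-6) == 22    # 16 design variables
```

Each optimized NZOOM in Table 1 is precisely the minimum number of zooms that achieves the 10⁻⁶ accuracy target for the corresponding zooming factor.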

The parameters in the SZGA have been optimized by using this objective function and the improvement factor averaged after regression for a test function [9]. The target reliability is 99.9999% and the target accuracy of the solution is 10⁻⁶. The proposed sub-iteration population number NSP is 100. Table 1 shows the optimized values of the SZGA parameters for four cases with different numbers of design variables.

We found a tendency similar to Table 1 for test functions with various numbers of design variables. We also found that the recommended sub-iteration population number NSP would no longer be acceptable to assure reliability and accuracy for cases in which the number of design variables is over 1. A much greater sub-iteration population is needed to obtain an optimal solution with the proper reliability (99.9999%) and accuracy (10⁻⁶).

To confirm our optimized result, we fixed two parameters in the feasible domain that satisfy the target reliability and target accuracy, and checked the change in the objective function as a function of the remaining parameter. Examples of the change in the objective function for the case of four design variables showed the validity of the obtained optimal values of the parameters.

Although these values may not be valid for all the other cases, they can be used as a good reference for new problems. Some other ways of choosing the values of these parameters will be given later on.


| No. of variables | 2 | 4 | 8 | 16 |
|---|---|---|---|---|
| Zooming factor α | 0.02573 | 0.1303 | 0.4216 | 0.5176 |
| NSP | 1,000 | 2,000 | 9,510 | 1,479,230 |
| NZOOM | 5 | 8 | 17 | 22 |
| No. of function evaluations | 5,000 | 16,000 | 161,670 | 32,543,060 |

Table 1. Result of optimized parameters in SZGA for different numbers of design variables

#### **2.2 Programming for successive zooming and pre-zoning algorithms**

Programming the SZGA is simple, as explained below. This zooming philosophy is not confined to the GA; it can be applied to most other global search algorithms. Let Y(I) be the global variables ranging over YMIN(I) ~ YMAX(I), where I is the design variable number. Z(I) consists of local normalized variables ranging over 0~1. Thus, the relation between them is as follows in FORTRAN;

```
DO 10 I=1,NVAR ! NVAR=NO. of VARIABLES
10 Y(I)=YMIN(I)+(YMAX(I)-YMIN(I))*Z(I)
```

The relation between the local variable Z(I) and the local variable X(I) (0~1) in the zoomed region is as follows;

```
DO 12 I=1,NVAR 
12 Z(I)=ZOPT(I,JWIN)+ALP**(JWIN-1)*(X(I)-0.5)
```
where ZOPT(I,JWIN) is the elitist of zoom step (JWIN-1) and ALP is the zooming factor. Note that ZOPT(I,JWIN-1) would be more logical; however, the argument is increased by one to suit old versions of FORTRAN, which require a positive integer as a dimension argument. Based on the elitist of step (JWIN-1), we are seeking the variables of step JWIN. Please note that ZOPT(I,1)=0.

A pre-zoning algorithm adjusts the guessed initial zone to a very reasonable zone after one set of generations.

```
DO 14 I=1,NVAR 
 YMIN(I)=YINP(I)-BTA*ABS(YINP(I)) 
14 YMAX(I)=YINP(I)+BTA*ABS(YINP(I))
```
where YINP(I) is the elitist obtained after one set of generations. Thus, we eliminate the assumed initial boundary and establish a new, reasonable boundary. The coefficient BTA may be selected appropriately, say 0.5.
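For readers less familiar with fixed-form FORTRAN, the three loops above can be transcribed directly to Python as follows. This is a hedged, line-for-line transcription; the function names are ours, and z_opt here is simply the elitist vector of the previous zoom step (the FORTRAN off-by-one in the ZOPT index disappears):

```python
def local_to_global(z, ymin, ymax):
    """DO 10: map the normalized Z(I) in [0, 1] to the physical Y(I)."""
    return [ymin[i] + (ymax[i] - ymin[i]) * z[i] for i in range(len(z))]

def zoom_map(x, z_opt, alp, jwin):
    """DO 12: map X(I) in [0, 1] into the zoomed region of width
    alp**(jwin - 1) centered at the previous step's elitist z_opt."""
    return [z_opt[i] + alp ** (jwin - 1) * (x[i] - 0.5) for i in range(len(x))]

def pre_zone(yinp, bta=0.5):
    """DO 14: re-center the search boundary around the elitist YINP(I)."""
    ymin = [y - bta * abs(y) for y in yinp]
    ymax = [y + bta * abs(y) for y in yinp]
    return ymin, ymax

# x = 0.5 maps onto the current elitist; pre-zoning brackets the elitist:
assert zoom_map([0.5], [0.3], 0.1, 2) == [0.3]
assert pre_zone([2.0]) == ([1.0], [3.0])
```

Note that `zoom_map` with `jwin = 1` and `z_opt = [0.5, ...]` (the analogue of ZOPT(I,1)=0 shifted to the domain center) reduces to searching the full normalized domain.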


#### **2.3 Hybrid genetic algorithm**

Genetic algorithms are stochastic global search methods based on the mechanisms of natural selection and natural reproduction. GAs have been applied to structural optimization problems because they can solve optimization problems that involve a mix of continuous, discontinuous, and non-convex regions. The SGA (simple GA) has been improved into the MGA by techniques like tournament selection as well as the elitist strategy. Yet, GAs have some difficulty in quickly locating the exact optimum point at a later stage. The DPE (Dynamic Parameter Encoding) GA [4] uses a digital zooming technique, which does not change a digit of a higher rank further after a certain stage. The SZGA (Successive Zooming GA) zooms the search area successively, and thus the convergence rate is greatly increased. A new hybrid GA technique, which guarantees finding the optimum point, has been proposed [7, 14].

The hybrid GA first identifies a quasi-optimal point using an MGA, which has better search ability than the simple genetic algorithm. To solve the convergence problem at the later stage, we employed hybrid algorithms that combine the global GA with local search algorithms (DFP [1] or MGA). The hybrid algorithm using the DFP (Davidon-Fletcher-Powell) method incorporates the advantages of both a genetic algorithm and the gradient search technique. The other hybrid algorithm, which combines the global GA with a local GA in the zoomed area and is called the LGA (Locally zoomed GA), checks the concavity condition near the quasi-minimum point. The enhancement provided by the above hybrid algorithms is verified by applying them to the gate optimization problem.

In this hybrid algorithm for a minimization problem, an MGA is performed generation-by-generation until there is no further change of the objective function, and then the approximate optimum solution is found at **Z**MGA. The gradients of the objective function with respect to the design variables are checked to determine whether the concavity condition [1] is satisfied at the boundary of a small zoomed area (Fig. 2). If the condition is not satisfied, the small zoomed area is enlarged by δ. After several iterations, the concavity conditions are finally achieved at the boundary of the final zoomed area (κδ × κδ) centered at **Z**MGA. With the elitist solution from the global GA (the approximate optimum solution **Z**MGA) and the concavity condition, the optimum point is found within the final zoomed area [Z(i) : (ZMGA(i) − κδ) ~ (ZMGA(i) + κδ)]. From this point, a local GA is performed over the small, finally zoomed area, which probably contains the optimum point. Usually, this area is much smaller than the original area, so the convergence rate increases considerably (note that the first approximate solution prematurely converged to an inexact but near-optimum point).
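The expand-until-concave step can be sketched as follows. This is our own minimal illustration, not the authors' code: the concavity condition is approximated by requiring the objective to increase in the outward direction at every face midpoint of the box, using central differences.

```python
def concave_zoom_halfwidth(f, z_mca, delta, eps=1e-6, max_expand=50):
    """Grow the box around the quasi-optimum z_mca in steps of delta until,
    at every face midpoint, f increases in the outward direction -- a
    simple numerical stand-in for the concavity condition.  Returns the
    final half-width (the kappa*delta of the text), or None on failure."""
    n = len(z_mca)
    for k in range(1, max_expand + 1):
        half = k * delta
        ok = True
        for i in range(n):
            for sign in (-1.0, 1.0):
                z = list(z_mca)
                z[i] += sign * half           # face midpoint of the box
                zp = list(z); zp[i] += eps
                zm = list(z); zm[i] -= eps
                outward = sign * (f(zp) - f(zm)) / (2.0 * eps)
                if outward <= 0.0:            # not yet increasing outward
                    ok = False
        if ok:
            return half
    return None

# Quadratic bowl whose minimum sits slightly off the quasi-optimum:
f = lambda z: (z[0] - 0.31) ** 2 + (z[1] - 0.69) ** 2
assert concave_zoom_halfwidth(f, [0.30, 0.70], 0.05) == 0.05
```

Once such a box is found, the local GA only has to search a region whose volume is a tiny fraction of the original domain, which is the source of the speed-up reported in Table 2.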

Water gates need to be installed in dams to regulate the flow rate and to ensure the containing function of dams. Among these gates, the radial gate is widely used to regulate the flow rate of huge dams because of its accuracy, easy opening and closing, and endurance. Moreover, the 3-arm type radial gate performs better than the 2-arm type with respect to the section size of the girders and the vibration characteristics during the discharging operation. Table 2 compares the optimized results for a 3-arm type radial gate, which considers the reactions to the minimized main weight of the structure including vertical girders, with or without arms. The hybrid algorithms (MGA+DFP, MGA+LGA) obtained the exact optimal solution of 0.690488E+10 after far fewer generations (4100) than the 9000 of the MGA, which resulted in a close but not exact solution of 0.690497E+10.


Fig. 2. Confirmed zoomed region after checking the concavity condition


| 3-arm type | Micro GA | MGA+DFP | MGA+LGA |
|---|---|---|---|
| Convergence generation | 9000 | 4000+α | 4100 |
| Objective function | 0.690497E+10 | 0.690488E+10 | 0.690488E+10 |

Table 2. Comparison of results: MGA, MGA+DFP, MGA+LGA

#### **3. Example of the SZGA**

The value of the zooming factor α as an optimal parameter was obtained in reference [8] and was found to match the empirical one well. Using this zooming factor in the SZGA, the displacement of a truss structure was derived by minimizing the total potential energy of the system. The capacity of the servomotor, which operates the wicket gate mounted in a Kaplan-type turbine of an electric power generator, was optimized using the SZGA with this value of the zooming factor [8].

This is just one parameter among the full set of optimal parameters discussed in Sec. 2.1 [9]; therefore, the analysis done with this factor [8] is a simplified analysis. As commented in Section 2.1, the parameter values of the well-behaved test model suggested in Table 1 can be used for an optimization, or the parameter values obtained in another way, as discussed in the next section, can be used.

Several additional examples of SZGA optimization are presented in the following sections to provide more insight into the SZGA and to find another way of choosing the values of the SZGA parameters. The first example finds the Mooney-Rivlin coefficients of a rubber material, to be compared with those from the least squares method. The second example is a damage detection problem in which the difference between the measured natural frequencies and those of the assumed damage in the structure is minimized. The third example finds the
