108 Bio-Inspired Computational Algorithms and Their Applications

Once the reproduction and the fitness function have been properly defined, a GA evolves according to the same basic structure (see the pseudocode in figure 2). It starts by generating an initial population of chromosomes, which is generated randomly to ensure genetic diversity. Then the GA loops over an iterative process to produce the next generation. Each iteration consists of fitness evaluation, selection, reproduction, evaluation of the offspring, and finally replacement in the population. The stopping criterion may be the number of iterations (called here generations) or the convergence of the best chromosome toward the optimal solution.

```
Procedure Genetic Algorithm
begin
    generate randomly the initial population of chromosomes;
    calculate the fitness of chromosomes in population;
    repeat
        repeat
            select 2 chromosomes as parents;
            apply crossover to the selected parents;
            apply mutation to the new chromosomes;
            calculate the fitness of new child chromosomes;
        until end of the number of new chromosomes
        update the population;
    until end of the number of generations
end
```
Fig. 2. Pseudocode description of the Procedure Genetic Algorithm

**4. Classification of Genetic Algorithms**

Sometimes the cost function is extremely complicated and time-consuming to evaluate, so some care must be taken to minimize the number of cost function evaluations. One idea is to execute several simple GAs in parallel; such algorithms are called Parallel Genetic Algorithms (PGAs). PGAs have been developed to reduce the large execution times associated with simple genetic algorithms when finding near-optimal solutions in large search spaces. They have also been used to solve larger problems and to find better solutions, and they show considerable gains in performance and scalability. There are many kinds of PGAs (Independent PGA, Migration PGA, Partition PGA, Segmentation PGA), which are fully described in (Sivanandam & Deepa, 2008).

Hybrid Genetic Algorithms (HGAs) form another important class of GAs. A hybrid GA combines the power of the GA with the speed of a local optimizer. The GA excels at gravitating toward the global minimum, but it is not especially fast at finding the minimum once in a locally quadratic region. Thus the GA finds the region of the optimum, and then the local optimizer takes over to find the minimum. Some examples of HGAs used in Digital Electronics Design will be presented in the next section.

Adaptive Genetic Algorithms (AGAs) are GAs whose parameters, such as the population size, the crossover probability, or the mutation probability, are varied while the GA is running. "The mutation rate may be changed according to changes in the population; the longer the population does not improve, the higher the mutation rate is chosen, and vice versa."

**5. Applications of Genetic Algorithms**

"GAs have been applied in science, engineering, business and social sciences. Many scientists have already solved engineering problems using genetic algorithms. GA concepts can be applied to engineering problems such as the optimization of gas pipeline systems. Another important current area is structure optimization, where the main objective is to minimize the weight of the structure subject to maximum and minimum stress constraints on each member. GAs are also used in medical imaging systems, for example to perform image registration as part of larger digital subtraction angiographies. It can be found that GAs can be used over a wide range of applications" (Sivanandam & Deepa, 2008). GAs can also be applied to production planning, air traffic problems, automobiles, signal processing, communication networks, environmental engineering and so on. In (Bentley & Corne, 2002), Evolutionary Creativity is discussed, using many examples from music, art in general, architecture and engineering design. Evolutionary Electronics, both Analog and Digital, has been investigated in many publications (Bentley & Corne, 2002; Popa, 2004; Popa et al., 2005). (Higuchi et al., 2006) is a very good book on Evolvable Hardware.
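Looping back to the generic structure of figure 2, the basic GA cycle can be sketched as a short runnable program. This is a minimal illustration, not the chapter's own code: the OneMax fitness (count of 1 bits), binary tournament selection, and the parameter values are illustrative choices.

```python
import random

def one_max(chrom):
    """Toy fitness (our choice, not from the chapter): count of 1 bits."""
    return sum(chrom)

def tournament(pop, scores):
    """Select one parent: the better of two randomly chosen individuals."""
    i, j = random.randrange(len(pop)), random.randrange(len(pop))
    return pop[i] if scores[i] >= scores[j] else pop[j]

def run_ga(fitness, n_bits=20, pop_size=30, pc=0.8, pm=0.02, generations=100):
    # generate randomly the initial population of chromosomes
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(c) for c in pop]      # calculate the fitness
        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = tournament(pop, scores), tournament(pop, scores)
            c1, c2 = p1[:], p2[:]
            if random.random() < pc:            # one-point crossover
                cut = random.randrange(1, n_bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for c in (c1, c2):                  # mutation, bit by bit
                for k in range(n_bits):
                    if random.random() < pm:
                        c[k] = 1 - c[k]
            offspring += [c1, c2]
        pop = offspring[:pop_size]              # update the population
    scores = [fitness(c) for c in pop]
    return max(zip(scores, pop))                # (best fitness, best chromosome)
```

Calling `run_ga(one_max)` returns the best chromosome found after the fixed number of generations, mirroring the stop criterion used throughout this section.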

Evolvable Hardware (EHW) is hardware built on software-reconfigurable Programmable Logic Devices (PLDs). In these circuits the logic design is compiled into a binary bit string and, by changing the bits, arbitrary hardware structures can be implemented instantly. The key idea is to regard such a bit string as a chromosome of a Genetic Algorithm (GA). Through genetic learning, EHW finds the best bit string and reconfigures itself according to rewards received from the environment (Iba et al., 1996).

In the rest of this section we present three applications of GAs in the evolutionary design of digital circuits, developed by the author. The first describes a method of synthesis of a Finite State Machine (FSM) in a Complex Programmable Logic Device (CPLD), using a standard GA. The other two applications use different techniques of hybridisation of a standard GA: the first with two other optimisation techniques (inductive search and simulated annealing), to solve Automatic Test Pattern Generation for digital circuits, a problem described in (Bilchev & Parmee, 1996), and the second to improve the convergence of the standard GA in the evolutionary design of digital circuits, using the new paradigm of Quantum Computation (Han & Kim, 2002).

Genetic Algorithms: An Overview with Applications in Evolvable Hardware 111


#### **5.1 Implementation of a FSM using a standard GA**

This first example uses extrinsic hardware evolution, that is, it uses a model of the hardware and evaluates it by simulation in software. The FSM represented in figure 3 is a computer interface for serial communication between two computers. A transition from one state to another depends on only one of the 4 inputs $x\_i$, $i = 1, \dots, 4$. The circuit has 4 outputs, each of them being at logic 1 in only a single state. The FSM has 6 states and has been presented in (Popa, 2004).

Fig. 3. The FSM described as a state transition graph, with the manual state assignment S0: 000, S1: 001, S2: 010, S3: 011, S4: 100, S5: 110

With the state assignment given in figure 3, the traditional design with D flip-flops gives the following equations for the excitation functions:

$$D\_2 = x\_3 \cdot Q\_1 \cdot Q\_0 + Q\_2 \cdot \overline{Q}\_1 \tag{1}$$

$$D\_1 = x\_2 \cdot \overline{Q}\_1 \cdot Q\_0 + x\_4 \cdot Q\_2 + Q\_1 \cdot \overline{Q}\_0 \tag{2}$$

$$D\_0 = x\_1 \cdot \overline{Q}\_2 \cdot \overline{Q}\_0 + \overline{x}\_2 \cdot \overline{Q}\_1 \cdot Q\_0 + Q\_1 \cdot \overline{Q}\_0 \tag{3}$$

The output functions are given by the following equations:

$$y\_1 = \overline{Q}\_1 \cdot Q\_0 \tag{4}$$

$$y\_2 = \overline{Q}\_2 \cdot Q\_1 \cdot \overline{Q}\_0 \tag{5}$$

$$y\_3 = Q\_2 \cdot \overline{Q}\_1 \tag{6}$$

$$y\_4 = Q\_2 \cdot Q\_1 \tag{7}$$



For the evolutionary design of this circuit we take into account that each boolean function has a maximum of 5 inputs and a maximum of 4 minterms. If we want to implement these functions in a PLD structure (an AND array and logic cells configurable as OR gates), then the number of fuse-array links is $2 \cdot 5 \cdot 4 = 40$, and we may consider this number as the total length of the chromosome (Iba et al., 1996).
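The 40-link encoding can be made concrete with a short sketch. The fuse-bit convention used here (one bit per literal, 1 meaning the literal is connected into the AND term) and the correct-answer-rate fitness over all 32 input rows are illustrative assumptions, not the chapter's exact implementation.

```python
from itertools import product

N_VARS, N_MINTERMS = 5, 4                 # as in the text: 5 inputs, 4 minterms
CHROM_LEN = 2 * N_VARS * N_MINTERMS       # 2 * 5 * 4 = 40 fuse-array links

def evaluate(chrom, inputs):
    """OR of the AND terms encoded by a 40-bit chromosome.
    Assumed convention: for each minterm and each variable there are two
    link bits, one for x_v and one for its complement; a 1 bit connects
    that literal into the AND term."""
    out = 0
    for m in range(N_MINTERMS):
        term = 1
        for v in range(N_VARS):
            true_link = chrom[2 * (m * N_VARS + v)]
            comp_link = chrom[2 * (m * N_VARS + v) + 1]
            if true_link and not inputs[v]:   # literal x_v fails for this row
                term = 0
            if comp_link and inputs[v]:       # literal x_v' fails for this row
                term = 0
        out |= term
    return out

def correct_answer_rate(chrom, target):
    """Fitness: percentage of the 2**5 = 32 input rows on which the
    evolved function matches the target truth table."""
    rows = list(product([0, 1], repeat=N_VARS))
    hits = sum(evaluate(chrom, r) == target[i] for i, r in enumerate(rows))
    return 100.0 * hits / len(rows)
```

A chromosome that connects only the literal $x\_1$ in the first minterm (and kills the unused minterms by connecting a variable together with its complement) scores 100% against the truth table of $f = x\_1$.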

Our GA is a standard one, with a population size of 30 chromosomes. One-point crossover is executed with a probability of 80% and the mutation rate is 2%. The six worst chromosomes are replaced each generation. The stop criterion is the number of generations.

Fig. 4. The evolution of the excitation functions of the computer interface

Figure 4 reflects the evolution of the circuit for the first 3 functions, called excitation functions, which form subcircuit A. This circuit is in fact built from 3 independent circuits, each generating one output bit; therefore, the evolution of a circuit with one output bit is repeated 3 times. The Y axis is the correct answer rate: if it reaches 100%, the hardware evolution succeeds.

In the same way, figure 5 reflects the evolution of the circuit for the output functions, which form subcircuit B. The evolution succeeds after fewer generations because the total search space is much smaller in this case than in the previous one (all the output functions have only 3 variables).

Evolution may provide non-minimal expressions for these boolean functions, but minimization is not necessary for PLD implementations. The length of the chromosomes is greater than the optimal one, and the evolved equations are much more complicated than the given equations (1-7). The total cost of the whole combinational circuit is 15 gates and 37 inputs for the traditional design, and 30 gates and 102 inputs for the evolutionary design.



Fig. 5. The evolution of the output functions of the computer interface

We have implemented both the traditional design and the evolved circuit in a real Xilinx XCR3064 CoolRunner CPLD by using the Xilinx ISE 6.1i software. The traditional design, that is, using equations (1-7), used only 7 of the 64 macrocells, 11 of the 224 product terms, and 7 of the 160 function block inputs. Surprisingly, although the evolutionary design, with the same state assignment, provides more complicated equations, its implementation in the XCR3064XL CPLD also used 7 of the 64 macrocells, 10 of the 224 product terms, and 7 of the 160 function block inputs. This is even a better result than in the preceding case, because it uses one product term fewer. Both implementations used the same number of flip-flops (3 of 64) and the same number of pins as inputs/outputs (9 of 32). We have preserved the state assignment of the FSM, and subcircuits A and B are in fact pure combinational circuits. The interesting fact is that our GA has supplied a better solution than the one given by the minimization tool used for this purpose by the CAD software.

#### **5.2 Multiple hybridization of a GA**

Hybrid Genetic Algorithms (HGAs) combine the power of the GA with the speed of a local optimizer. Usually the GA finds the region of the optimum, and then the local optimizer takes over to find the minimum. (Bilchev & Parmee, 1996) developed a search space reduction methodology called Inductive Search: the problem of global optimisation is partitioned into a sequence of subproblems, which are solved by searching for partial solutions in subspaces of smaller dimension.
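The inductive-search idea can be sketched in a few lines: fix one dimension at a time, searching only the newest dimension while the partial solution built so far stays frozen. This is a simplified stand-in, with an illustrative sample count and a [0, 1] search range; in the hybrid algorithms below, the per-dimension search is done by a GA rather than by plain random sampling.

```python
import random

def inductive_search(cost, n_dims, candidates_per_step=50):
    """Inductive-search sketch: partition the global optimisation problem
    into a sequence of one-dimensional subproblems. Each step extends the
    committed partial solution by one dimension."""
    partial = []
    for _ in range(n_dims):
        best_v, best_c = None, float("inf")
        for _ in range(candidates_per_step):
            v = random.uniform(0.0, 1.0)
            c = cost(partial + [v])     # evaluate the extended partial solution
            if c < best_c:
                best_v, best_c = v, c
        partial.append(best_v)          # commit this dimension, move to the next
    return partial
```

The trade-off is visible here: more candidates per step raise the cost-evaluation count but improve the expected quality of each committed dimension.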

This method has been used to solve the Automatic Test Pattern Generation problem in Programmable Logic Arrays (PLAs), that is, to find an effective set of input test vectors able to cover as many faults as possible in the circuit (we have taken into account two PLA structures with a total of 50 and respectively 200 possible stuck-at-0 faults).

(Wong & Wong, 1994) designed a HGA using the algorithm of Simulated Annealing as local optimizer. The optimisation process in Simulated Annealing is essentially a simulation of the annealing process of a molten particle. Starting from a high temperature, a molten particle is cooled slowly. As the temperature reduces, the energy level of the particle also reduces. When the temperature is sufficiently low, the molten particle becomes solidified. Analogous to the temperature level in the physical annealing process is the iteration number in Simulated Annealing. In each iteration, a candidate solution is generated. If this solution is a better one, it will be accepted and used to generate yet another candidate solution. If it is a deteriorated solution, the solution will be accepted with some probability.
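The acceptance rule described above is the core of Simulated Annealing, and can be sketched as follows. The cooling schedule, initial temperature, and iteration count here are illustrative choices, not values from (Wong & Wong, 1994).

```python
import math
import random

def sa_accept(delta, temperature):
    """Always take an improvement; accept a deteriorated solution with
    probability exp(-delta / T), which shrinks as the temperature falls."""
    if delta <= 0:
        return True
    return random.random() < math.exp(-delta / temperature)

def simulated_annealing(cost, neighbour, x0, t0=10.0, alpha=0.95, iters=500):
    """Minimal SA loop: the iteration number plays the role of the
    temperature level in the physical annealing process."""
    x, t = x0, t0
    best = x
    for _ in range(iters):
        cand = neighbour(x)                    # generate a candidate solution
        if sa_accept(cost(cand) - cost(x), t):
            x = cand
            if cost(x) < cost(best):
                best = x
        t *= alpha                             # geometric cooling schedule
    return best
```

For example, minimising `(x - 3)**2` with a small random-step neighbour drives `x` close to 3 as the temperature cools.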

Each of the two methods of hybridisation discussed above has some advantages.

```
Procedure MHGA 
begin
    Initialize a partial solution for N = 1 and establish the initial temperature T0 ; 
    for k = 2 to N , 
         Generate randomly the initial population of chromosomes; 
         repeat 
             append each chromosome to the partial solution, and evaluate it; 
             repeat 
                  select, proportional with fitness, 2 parents; 
                  apply crossover to obtain 2 offsprings; 
                  apply mutation to the new chromosomes; 
                  calculate the fitness of new chromosomes; 
                  the new chromosomes are accepted or not accepted; 
             until end of the number of chromosomes 
             update the population, according with the fitness; 
             the temperature is decreased; 
         until end of the number of generations 
         Update the partial solution; 
    end 
end
```
Fig. 6. The structure of the MHGA

The inductive search effort at each inductive step controls the trade-off between computational complexity and the expected quality of results, while Simulated Annealing avoids premature convergence and reduces the adverse effects of the mutation operation. In (Popa et al., 2002) we proposed a HGA that cumulates all these advantages in a single algorithm, through a double hybridisation of the Genetic Algorithm: with Inductive Search on the one hand, and with the Simulated Annealing technique on the other. The structure of the Multiple Hybridated Genetic Algorithm (MHGA) is presented in figure 6.

We have conducted experiments with all three HGAs described above, with the purpose of finding the maximum fault coverage with a limited number of test vectors. We first tested a PLA structure with 50 potential "stuck-at 0" faults, trying to achieve the maximum


coverage of the faults with only 6 test vectors; the results may be seen in figure 7. Then we repeated the same algorithm for a more complicated PLA structure, with 200 potential "stuck-at 0" faults, and tried to cover the maximum number of faults with 24 test vectors. The evolution of these three algorithms may be seen in figure 8.

Fig. 7. Fault Coverage Problem of 50 possible faults solved with three HGAs in 500 iterations

If *n* is the number of covered faults and *N* is the number of all faults in the fault population, the associated fitness function is $f = \frac{n}{N} \cdot 100\%$. There may also be a number of constraints concerning the possible combinations of input signals: the designers of the circuit define the set of legal combinations in terms of the legal states of a number of channels, and the set of all legal templates defines the feasible region. The main genetic parameters used in these algorithms are: a population size of 20 chromosomes, uniform crossover with a 100% rate, and uniform mutation with a 1% rate. The maximum fault coverage achieved with the Multiple Hybridated Genetic Algorithm after 500 iterations was about 69%, while the maximum fault coverage achieved with the Inductive Genetic Algorithm, the better of the two singly hybridated genetic algorithms, was about 66%. These results are average values over 5 successive runs. We have tried 10 or more runs as well, but the results are basically the same.
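The coverage fitness $f = \frac{n}{N} \cdot 100\%$ is simple to compute once the faults detected by each test vector are known. In this sketch the `detects` mapping is a hypothetical stand-in for a fault simulator, and the vector names and fault ids are invented for illustration.

```python
def fault_coverage(test_vectors, detects, all_faults):
    """Fitness from the text: f = n / N * 100%, where n is the number of
    faults covered by the chosen test set and N is the size of the fault
    population. `detects` maps a test vector to the set of faults it
    detects (here a toy dictionary; in practice, a fault simulator)."""
    covered = set()
    for v in test_vectors:
        covered |= detects[v]
    return 100.0 * len(covered & all_faults) / len(all_faults)

# toy example: 3 candidate vectors over a population of 4 stuck-at faults
detects = {"v1": {1, 2}, "v2": {2, 3}, "v3": {4}}
```

With this toy data, `fault_coverage(["v1", "v2"], detects, {1, 2, 3, 4})` covers faults {1, 2, 3}, i.e. 75% of the population.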

Another set of experiments was made on a more complex digital structure of PLA type with 200 possible faults. Figure 8 shows the comparative performance of the three HGAs on this fault coverage problem. The number of input test vectors is 24. After 250 fitness function calls, that is, 25 iterations, each with 10 generations per inductive step, the fault coverage of the Multiple Hybridated Genetic Algorithm is about 1% better than the fault coverage of the Inductive Genetic Algorithm.


Fig. 8. Fault Coverage Problem of 200 possible faults solved with three HGAs

These experiments show that the proposed MHGA offers better performance than the other two HGAs: the Inductive Genetic Algorithm and the Genetic Algorithm hybridated with Simulated Annealing. We have shown, on two examples of different complexity, that the MHGA offers the greatest fault coverage in the Automatic Test Pattern Generation problem for digital circuits of PLA type.

#### **5.3 A Quantum Inspired GA for EHW**

The Quantum Inspired Genetic Algorithm (QIGA) proposed in (Popa et al., 2010) uses a single chromosome, represented as a string of qubits, as described in (Han & Kim, 2002; Zhou & Sun, 2005). A quantum chromosome containing *n* qubits may be represented as:

$$q = \begin{bmatrix} \alpha\_1 & \alpha\_2 & \dots & \alpha\_n \\ \beta\_1 & \beta\_2 & \dots & \beta\_n \end{bmatrix} \tag{8}$$

where each pair $\alpha\_i$, $\beta\_i$, for $i = 1, \dots, n$, gives the probability amplitudes associated with the 0 state and the 1 state, such that $\alpha\_i^2 + \beta\_i^2 = 1$; the values $\alpha\_i^2$ and $\beta\_i^2$ represent the probability of seeing a conventional gene, 0 or 1, when the qubit is measured.

A quantum chromosome can be in all the 2*<sup>n</sup>* states at the same time, that is:

$$|q\rangle = a\_0|00...0\rangle + a\_1|00...1\rangle + \dots + a\_{2^n - 1}|11...1\rangle \tag{9}$$

where $a\_i$ represents the quantum probability amplitude, and $a\_i^2$ is the probability of seeing the *i*-th chromosome among all the $2^n$ possible classic chromosomes (Zhou & Sun, 2005).
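Measuring a quantum chromosome to obtain one classic chromosome can be sketched as follows; the pair-per-qubit representation and the uniform initialisation are illustrative choices consistent with equation (8).

```python
import math
import random

def collapse(qchrom):
    """Measure each qubit (alpha_i, beta_i): the outcome is 0 with
    probability alpha_i**2 and 1 with probability beta_i**2, yielding
    one classic chromosome of conventional bits."""
    return [0 if random.random() < a * a else 1 for a, b in qchrom]

# A fresh quantum chromosome is typically initialised with
# alpha = beta = 1/sqrt(2), so every classic chromosome is equally likely.
n = 8
q = [(1 / math.sqrt(2), 1 / math.sqrt(2))] * n
population = [collapse(q) for _ in range(20)]  # one conventional population
```

Repeating the measurement builds a whole conventional population from the single quantum chromosome, which is exactly the step where fitness can then be evaluated.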


that is the truth table of the evolved function must be identical with the truth table of the specified function. We can see some similarities in these evolutions, but significant

generate multiple chromosomes in population *P*(*t*);


Due to this superposition of states in a quantum chromosome, we use a single chromosome in the population. In a Conventional Genetic Algorithm (CGA), or Simple Genetic Algorithm, with the structure given in figure 2, the population always contains a number of chromosomes, and the efficiency of the algorithm usually depends on the population size. But a quantum chromosome can represent all possible conventional chromosomes at the same time, so it may generate an arbitrary population of conventional chromosomes in each generation. The quantum population is transformed into a conventional population when the fitness is evaluated.

The Single Chromosome Quantum Genetic Algorithm (SCQGA) is described in (Zhou & Sun, 2005). In the first step, a quantum chromosome is generated using (8). A random number is compared with the probability of each qubit, which then collapses to 0 or to 1. The conventional population of *N* chromosomes is obtained by repeating this process *N* times. In the next step, the fitness value is calculated for each conventional chromosome. This evaluation is time-consuming and limits the speed of the algorithm. The same problem of costly fitness evaluation, and hence low speed, also exists in CGAs.
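The collapse step can be sketched as follows (a minimal illustration with our own helper names, assuming each qubit is stored as its amplitude α, so that α² is the probability of observing 0):

```python
import random

def collapse(quantum_chromosome):
    """Collapse a quantum chromosome (list of alpha amplitudes) into a
    conventional binary chromosome: each qubit yields 0 with probability
    alpha^2 and 1 with probability 1 - alpha^2."""
    return [0 if random.random() < alpha ** 2 else 1
            for alpha in quantum_chromosome]

def collapse_population(quantum_chromosome, n):
    """Repeat the collapse N times to obtain a conventional population."""
    return [collapse(quantum_chromosome) for _ in range(n)]

# A 4-qubit chromosome in the uniform superposition (alpha = 1/sqrt(2)):
q = [2 ** -0.5] * 4
population = collapse_population(q, 8)
```

Each call to `collapse` yields one conventional chromosome; repeating it *N* times reproduces the population-generation step of SCQGA.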

Our idea, implemented in QIGA, was to initiate the collapse of the quantum chromosome in each generation but to generate a whole population of conventional chromosomes only from time to time, producing just a single conventional chromosome in the remaining iterations. A new parameter, which we called the *probability of collapse*, establishes the rate at which a conventional population is generated during the evolution. The last important step of the algorithm is to establish a method of updating the quantum chromosome from the current generation to the next one. QIGA uses the method described in (Han, 2003). The idea is to modify the probabilities of each quantum gene (or qubit) in the quantum chromosome using a quantum rotation gate. This operator changes the probability amplitudes by altering the quantum phase θ to θ + Δθ. The rotation gate is constructed so that the entire population (the quantum chromosome) is shifted toward the best individual. Each bit of the best conventional chromosome is compared with the corresponding bit of the average version of the quantum chromosome (this version is built using a probability of 0.5 for each qubit). If the two bits are equal (both 0 or both 1), then Δθ = 0. If the bit of the best chromosome is 1 and the other one is 0, then Δθ = *a*; otherwise Δθ = −*a*. The angle parameter Δθ of the rotation gate may therefore be 0, *a*, or −*a*, depending on the position of each qubit in the chromosome. The parameter *a* is a small positive value which decides the evolving rate (Zhou & Sun, 2005).
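The rotation-gate update can be sketched as follows (a minimal illustration with our own helper names and an illustrative value of *a*; deriving the "average version" bit by thresholding the qubit's probability of 1 at 0.5 is our reading of the description above):

```python
import math

def delta_theta(best_bit, avg_bit, a=0.01 * math.pi):
    """Rotation angle for one qubit: 0 if the bits agree, +a if the best
    chromosome's bit is 1 while the average version's bit is 0, -a otherwise."""
    if best_bit == avg_bit:
        return 0.0
    return a if best_bit == 1 else -a

def rotate(alpha, beta, dtheta):
    """Apply the quantum rotation gate to the amplitudes (alpha, beta);
    the rotation preserves alpha^2 + beta^2 = 1."""
    return (alpha * math.cos(dtheta) - beta * math.sin(dtheta),
            alpha * math.sin(dtheta) + beta * math.cos(dtheta))

def update(quantum_chromosome, best_chromosome, a=0.01 * math.pi):
    """Rotate every qubit toward the best individual. Each qubit is a pair
    (alpha, beta); beta^2 is the probability of observing 1."""
    new_q = []
    for (alpha, beta), best_bit in zip(quantum_chromosome, best_chromosome):
        avg_bit = 1 if beta ** 2 >= 0.5 else 0
        new_q.append(rotate(alpha, beta, delta_theta(best_bit, avg_bit, a)))
    return new_q
```

A qubit whose bits already agree is left unchanged, while a disagreeing qubit has its probability amplitude nudged toward the best chromosome's bit by the angle *a*.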

The basic structure of QIGA is given in figure 9. *q*(*t*) is the quantum chromosome in iteration *t*, and *P*(*t*) is the population in the same iteration *t*. This population may contain many chromosomes, or only one, depending on the probability of collapse of *q*(*t*). The three algorithms, CGA, SCQGA and QIGA, have been compared on the same problem, which consists of the synthesis of a Boolean function of 4 variables using different logic gates. The chromosomes define the connections in the network between the primary inputs and the primary outputs of the gates, and decide the logic operators of the gates. The population of CGA has 64 chromosomes, 20 of them being replaced each generation, and the genetic operators are single-point crossover with 100% rate and mutation with 5% rate.

Figure 10 illustrates the average evolution of the three algorithms over 10 successful runs of 300 generations. A successful run presumes a fitness of 100%,


that is, the truth table of the evolved function must be identical to the truth table of the specified function. We can see some similarities in these evolutions, but significant differences appear in Table 1.

```
Procedure QIGA
begin
   t ← 0
   initialize a quantum chromosome q(t);
   if the collapse of q(t) is likely
      generate multiple chromosomes in population P(t);
   else
      generate a single chromosome in population P(t);
   end
   evaluate all the chromosomes in population P(t);
   store the best solution b among P(t);
   while (not termination condition) do
   begin
      t ← t + 1
      if the collapse of q(t-1) is likely
         generate multiple chromosomes in population P(t);
      else
         generate a single chromosome in population P(t);
      end
      evaluate all the chromosomes in population P(t);
      update q(t) using quantum gates;
      store the best result b among P(t);
   end
end
```

Fig. 9. The structure of the QIGA
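The structure in Fig. 9 can be turned into a runnable sketch. Everything here is illustrative: a toy OneMax fitness stands in for the real cost function, and the helper names and parameter values are our own, not those of the experiments reported below:

```python
import math
import random

def fitness(bits):
    """Toy objective (OneMax): number of 1-bits; replace with the real cost."""
    return sum(bits)

def collapse(q):
    """Observe each qubit (alpha, beta): bit 1 with probability beta^2."""
    return [1 if random.random() < b * b else 0 for _, b in q]

def update(q, best, a=0.05 * math.pi):
    """Rotate each qubit toward the best chromosome (Han, 2003 style)."""
    out = []
    for (alpha, beta), bit in zip(q, best):
        avg = 1 if beta * beta >= 0.5 else 0
        d = 0.0 if bit == avg else (a if bit == 1 else -a)
        out.append((alpha * math.cos(d) - beta * math.sin(d),
                    alpha * math.sin(d) + beta * math.cos(d)))
    return out

def qiga(n_bits=8, pop_size=16, generations=50, p_collapse=0.2):
    """QIGA main loop: a full population is generated only with
    probability p_collapse; otherwise a single chromosome is produced."""
    q = [(2 ** -0.5, 2 ** -0.5)] * n_bits        # uniform superposition
    best = max((collapse(q) for _ in range(pop_size)), key=fitness)
    evaluations = pop_size
    for _ in range(generations):
        size = pop_size if random.random() < p_collapse else 1
        population = [collapse(q) for _ in range(size)]
        evaluations += size
        gen_best = max(population, key=fitness)
        if fitness(gen_best) > fitness(best):
            best = gen_best
        q = update(q, best)
    return best, evaluations

best, evals = qiga()
```

Because most generations produce only one conventional chromosome, the evaluation count stays far below the `pop_size × generations` of a conventional GA.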
In CGA, the global time of a successful run is about 74 seconds; this value consists of both self time and the time spent on multiple evaluations of chromosomes in different populations. Self time is the time spent in an algorithm, excluding the time spent in its child functions. Self time also includes the overhead resulting from the profiling process, but this additional time is not important in our case. Evaluation time is almost 60 seconds, because the number of calls to the evaluation function is high (25200 calls, that is, evaluation of 64 plus 20 chromosomes over 300 generations).

In SCQGA, the global time is less than 40 seconds, because the number of calls to the evaluation function is smaller (only 19200 calls, that is, evaluation of 64 chromosomes over 300 generations), and this quantum algorithm no longer uses genetic operators such as crossover and mutation. Finally, our QIGA has a global time of less than 20 seconds, as a consequence of the small number of calls to the evaluation function (only 4836 calls, a random number determined by the probability of collapse). Self time is comparable with that of SCQGA, and evaluation time is less than 12 seconds. Taking all these times into account, QIGA has the best ratio between evaluation time and global time.
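As a rough cross-check, these call counts follow from the probability of collapse: each generation evaluates the whole population with probability *p* and a single chromosome otherwise. The value p ≈ 0.24 below is inferred from the 4836 figure, not stated in the text:

```python
def expected_calls(generations, n, p):
    """Expected evaluation calls: each generation evaluates the whole
    population of n chromosomes with probability p, and one chromosome
    otherwise."""
    return generations * (p * n + (1 - p) * 1)

cga = expected_calls(300, 84, 1.0)    # CGA: 64 + 20 chromosomes every generation
scqga = expected_calls(300, 64, 1.0)  # SCQGA: full population every generation
qiga = expected_calls(300, 64, 0.24)  # QIGA with an inferred p of about 0.24
```

With p ≈ 0.24, the expected count matches the 4836 calls observed for QIGA.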

Genetic Algorithms: An Overview with Applications in Evolvable Hardware 119

Fig. 10. The evolutions of CGA, SCQGA and QIGA

Table 1. A comparison between CGA, SCQGA and QIGA

Unfortunately, the number of successful runs in 300 generations is only on the order of 70% for CGA and 60% for the other two algorithms. This occurs due to the constraint that only a fitness of 100% is accepted. In other applications, this constraint may not be critical.

complexity of evolved circuits is so far small. In our opinion, the conclusion drawn in (Yao & Higuchi, 1999) is still valid: "EHW research needs to address issues, such as scalability, online adaptation, generalization, circuit correctness, and potential risk of evolving hardware in a real physical environment. It is argued that a theoretical foundation of EHW should be established before rushing to large-scale EHW implementations".

Recently, the idea of hybridizing a GA with elements of quantum computation has appeared (Han & Kim, 2002; Han, 2003). We have proposed a new quantum inspired genetic algorithm (QIGA) which is considerably faster than other similar algorithms, based on the idea of introducing a new parameter, which we called the probability of collapse, and initiating the collapse of the quantum chromosome so that a conventional population of chromosomes is generated only from time to time, and not in each generation, as is usually done. We believe that this method may be improved in future research by establishing a new method of updating the quantum chromosome from the current generation to the next one. Finally, some hybridization techniques may be useful for new quantum inspired evolutionary algorithms. (Rubinstein, 2001) used Genetic Programming to evolve quantum circuits with various properties, and (Moore & Venayagamoorthy, 2005) developed an algorithm inspired by quantum evolution and Particle Swarm to evolve conventional combinational logic circuits.

**7. References** 

Bentley, P. J. & Corne, D. W. (Eds.). (2002). *Creative Evolutionary Systems*, Academic Press, ISBN: 1-55860-673-4, San Francisco, USA

Bilchev, G. & Parmee, I. (1996). Constraint Handling for the Fault Coverage Code Generation Problem: An Inductive Evolutionary Approach, *Proceedings of 4-th Conference on Parallel Problem Solving from Nature (PPSN IV)*, pp. 880-889, Berlin, September 1996

Burda, I. (2005). *Introduction to Quantum Computation*, Universal Publishers, ISBN: 1-58112-466-X, Boca Raton, Florida, USA

Forbes, N. (2005). *Imitation of Life. How Biology Is Inspiring Computing*, MIT Press, ISBN: 0-262-06241-0, London, England

Han, K. H. & Kim, J. H. (2002). Quantum-Inspired Evolutionary Algorithm for a Class of Combinatorial Optimization, *IEEE Transactions on Evolutionary Computation*, vol. 6, no. 6, (December 2002), pp. 580-593, ISSN 1089-778X

Han, K. H. (2003). *Quantum-Inspired Evolutionary Algorithm*, Ph.D. dissertation, Department of Electrical Engineering and Computer Science, Korea Advanced Institute of Science and Technology, Korea, 2003

Haupt, R. L. & Haupt, S. E. (2004). *Practical Genetic Algorithms* (second edition), Wiley-Interscience, ISBN: 0-471-45565-2, New-Jersey, USA

Higuchi, T.; Liu, Y. & Yao, X. (Eds.). (2006). *Evolvable Hardware*, Springer-Verlag, ISBN-13: 978-0387-24386-3, New-York, USA

Iba, H.; Iwata, M. & Higuchi, T. (1996). Machine Learning Approach to Gate-Level Evolvable Hardware, *Proceedings of the First International Conference on Evolvable Systems ICES'96*, pp. 327-343, Tsukuba, Japan, October 1996
