

202 Bio-Inspired Computational Algorithms and Their Applications

Genetic algorithms (GA) are an optimization technique that seeks the optimum solution of a function based on the Darwinian principles of biological evolution. Even though there are several methods of performing and evaluating image fusion, there are still many open questions. In this section, a new measure of image fusion quality is provided and compared with many existing ones. The focus is on pixel-level image fusion (PLIF), and a new image fusion technique that uses a GA is proposed.

The GA is used to optimize the parameters of the fusion process to produce a fused image (FI) that contains more information than either of the individual images. The main purpose of this section is to find, with the help of the CGA, the optimum weights used to fuse the images. The techniques for GA and image fusion are given in Section 4.2. Section 4.3 describes the evaluation criteria. Section 4.4 describes the experimental results and compares them with other image fusion techniques. In Section 4.5, the conclusion is provided.

**4.2 The techniques of GA and image fusion**

**4.2.1 Genetic Algorithm**

As stated earlier, the GA is a non-linear optimization technique that seeks the optimum solution of a function via a non-exhaustive search among randomly generated solutions. GAs use multiple search points instead of searching one point at a time, and attempt to find global, near-optimal solutions without getting stuck at local optima. Because of these significant advantages, GAs reduce the search time and space. However, GAs also have disadvantages: they are generally not suitable for real-time applications, since the time needed to converge to an optimal solution cannot be predicted. The convergence time depends on the population size and on the GA crossover and mutation operators. In this fusion process, a continuous genetic algorithm has been selected.

**4.2.2 Continuous Genetic Algorithm (CGA)**

GAs typically operate on binary data. For many applications, however, it is more convenient to work in the analog, or continuous, data space rather than in the binary space of most GAs. Hence, the CGA is used: it has the advantage of requiring less storage and is faster than the binary GA. CGA inputs are represented by floating-point numbers over whatever range is deemed appropriate. Figure 6 shows the flowchart of a simple CGA [Randy L. Haupt & Sue Ellen Haupt, 2004]. The various elements in the flowchart are described below:

i. *Definition of the cost function and the variables:* The variable values are represented as floating-point numbers $p$. Each chromosome, the basic GA processing vector, contains $N\_{\text{var}}$ values, one per parameter: $(p\_1, \dots, p\_{N\_{\text{var}}})$. Each chromosome has a cost determined by evaluating the cost function [Randy L. Haupt & Sue Ellen Haupt, 2004].

ii. *Initial Population:* To begin the CGA process, an initial population must be defined. A matrix represents the population, with each row being a $1 \times N\_{\text{var}}$ chromosome of continuous values. The chromosomes are passed to the cost function for evaluation [Randy L. Haupt & Sue Ellen Haupt, 2004].
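Steps i–ii above can be sketched in code. The population size, the number of variables, and the cost function below are placeholders, not the chapter's actual settings; the GA is assumed to maximize the cost, as the entropy cost used later in this section is maximized:

```python
import numpy as np

# Sketch of CGA steps i-ii: build a random continuous population and
# rank it by cost (best cost first, assuming maximization).
def init_population(pop_size, n_var, cost_fn, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    pop = rng.random((pop_size, n_var))        # each row: a 1 x N_var chromosome
    costs = np.array([cost_fn(chrom) for chrom in pop])
    order = np.argsort(-costs)                 # rank, highest cost first
    return pop[order], costs[order]
```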


$$\begin{aligned} parent\_1 &= \left[p\_{m1}, \dots, p\_{mN\_{\text{var}}}\right] \\ parent\_2 &= \left[p\_{d1}, \dots, p\_{dN\_{\text{var}}}\right] \end{aligned} \tag{32}$$

where the subscripts *m* and *d* denote the mom and dad parents, respectively. A crossover variable at position *α* is then selected at random, and the selected variables are combined to form new variables that will appear in the children:

$$\begin{aligned} pnew\_1 &= p\_{m\alpha} - \beta\left[p\_{m\alpha} - p\_{d\alpha}\right] \\ pnew\_2 &= p\_{d\alpha} + \beta\left[p\_{m\alpha} - p\_{d\alpha}\right] \end{aligned} \tag{33}$$

where *β* is a random value between 0 and 1. The final step is to complete the crossover with the rest of the chromosome:

$$\begin{aligned} offspring\_1 &= \left[p\_{m1}, p\_{m2}, \dots, pnew\_1, \dots, p\_{dN\_{\text{var}}}\right] \\ offspring\_2 &= \left[p\_{d1}, p\_{d2}, \dots, pnew\_2, \dots, p\_{mN\_{\text{var}}}\right] \end{aligned} \tag{34}$$
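Equations (32)–(34) can be sketched as follows; the parent chromosomes in the usage note are illustrative:

```python
import numpy as np

# Sketch of the single-point blending crossover of Eqs. (32)-(34):
# the variable at a random position alpha is blended with a random
# beta in [0, 1], and the chromosome tails are swapped between parents.
def blend_crossover(parent_m, parent_d, alpha=None, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n_var = len(parent_m)
    if alpha is None:
        alpha = int(rng.integers(n_var))       # crossover variable index
    beta = rng.random()                        # random blend factor in [0, 1]

    # Eq. (33): combine the selected variables from mom and dad.
    pnew1 = parent_m[alpha] - beta * (parent_m[alpha] - parent_d[alpha])
    pnew2 = parent_d[alpha] + beta * (parent_m[alpha] - parent_d[alpha])

    # Eq. (34): head from one parent, blended value, tail from the other.
    off1 = np.concatenate([parent_m[:alpha], [pnew1], parent_d[alpha + 1:]])
    off2 = np.concatenate([parent_d[:alpha], [pnew2], parent_m[alpha + 1:]])
    return off1, off2
```

Note that each blended value stays between the two parent values, since it is a convex combination of them.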


#### **4.2.3 Image fusion**

A set of input images of a scene, captured at different times or captured by different kinds of sensors at the same time, reveals different information about the scene. The process of extracting and combining data from a set of input images to form a new composite image with extended information content is called image fusion.
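The classical point-rule PLIFs that the CGA method is later compared against (the highest, lowest, average, and threshold rules of Figure 5) can be sketched as follows. The chapter does not specify its exact threshold rule, so the variant below is an assumption:

```python
import numpy as np

def fuse_pointwise(a, b, rule="average", t=128):
    """Classical pixel-level fusion rules (cf. Figure 5)."""
    if rule == "highest":
        return np.maximum(a, b)                # brightest pixel wins
    if rule == "lowest":
        return np.minimum(a, b)                # darkest pixel wins
    if rule == "average":
        return (a.astype(np.float64) + b) / 2  # mean of the two pixels
    if rule == "threshold":
        # assumed variant: keep a's pixel where it exceeds t, else b's
        return np.where(a > t, a, b)
    raise ValueError(f"unknown rule: {rule}")
```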

Fusion of Visual and Thermal Images Using Genetic Algorithms 205


#### **4.3 Evaluation criteria**

In this section, the following criteria were defined to evaluate the performance of the image fusion algorithm.

#### **4.3.1 Image quality assessment**

This evaluation criterion was discussed in Section 3.2.3.

#### **4.3.2 Entropy**

Entropy is often defined as the amount of information contained in an image. Mathematically, entropy is usually given as:

$$E = -\sum\_{i=0}^{L-1} p\_i \log\_2 p\_i \tag{35}$$

where *L* is the total number of grey levels, and $p = \{p\_0, p\_1, \dots, p\_{L-1}\}$ is the probability of occurrence of each level. An increase in entropy after fusion can be interpreted as an overall increase in the information content. Hence, one can assess the quality of fusion by comparing the entropy of the original data with the entropy of the fused data.
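Equation (35) can be computed directly from a grey-level histogram; a minimal sketch for 8-bit images:

```python
import numpy as np

def image_entropy(img, levels=256):
    """Shannon entropy of a grey-level image, Eq. (35)."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()                      # probability of each grey level
    p = p[p > 0]                               # drop 0*log2(0) terms
    return float(-np.sum(p * np.log2(p)))
```

A constant image has entropy 0; an image whose pixels are spread uniformly over all 256 levels reaches the maximum of 8 bits.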

#### **4.3.3 Mutual information indices**

Mutual information indices are used to evaluate the correlative performance of the fused image and the source images. Let *A* and *B* be random variables with marginal probability distributions $p\_A(a)$ and $p\_B(b)$ and joint probability distribution $p\_{AB}(a,b)$. The mutual information is then defined as:

$$I\_{AB} = \sum\_{a,b} p\_{AB}(a,b)\, \log\!\left[p\_{AB}(a,b) / \left(p\_{A}(a)\,p\_{B}(b)\right)\right] \tag{36}$$

A higher value of Mutual Information (MI) indicates that the fused image *F* contains a fairly good quantity of the information present in both source images *A* and *B*. The MI can be defined as $M = I\_{AF} + I\_{BF}$.

A high value of MI does not imply that the information from both images is symmetrically fused. Therefore, information symmetry (IS) is introduced. IS indicates how symmetrically the information in the fused image is distributed with respect to the input images. The higher the value of IS, the better the fusion result. IS is given by:

$$IS = 2 - \left|\, I\_{AF} / \left(I\_{AF} + I\_{BF}\right) - 0.5 \,\right| \tag{37}$$
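Equations (36) and (37) can be estimated from image histograms. A sketch for 8-bit grey-level images follows; the natural logarithm is used here since Eq. (36) does not specify a base:

```python
import numpy as np

def mutual_information(a, b, levels=256):
    """Mutual information of two grey-level images, Eq. (36)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=levels,
                                 range=[[0, levels], [0, levels]])
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)      # marginal distribution of a
    p_b = p_ab.sum(axis=0, keepdims=True)      # marginal distribution of b
    nz = p_ab > 0                              # skip empty joint bins
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a * p_b)[nz])))

def information_symmetry(i_af, i_bf):
    """Information symmetry, Eq. (37); the maximum (best) value is 2."""
    return 2 - abs(i_af / (i_af + i_bf) - 0.5)
```

When the fused image shares information equally with both sources, $I\_{AF} = I\_{BF}$ and IS reaches its maximum of 2, consistent with the values near 2 reported in Table 2.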

#### **4.4 Experimental results**

The goal of this experiment is to fuse visual and IR images. To minimize registration issues, it is important that the visual and the thermal images are captured at the same time. Pinnacle software was used to capture the visual and the thermal images simultaneously. Although radiometric calibration is important, the thermal camera cannot always be calibrated in field conditions because of constraints on time. Figure 3 shows an example where the IR and visual images were captured at the same time. It is obvious from the figure that the images need to be registered before they can be fused, since the field-of-view and the pixel resolution are obviously different.

The performance of the proposed algorithm was tested and compared with different PLIF methods. The IR and visual images were not previously registered, as shown in Figure 3. The registered image, the base image (IR image), and the image fused with the CGA are shown in Figure 4. The cost function is very simple and is defined as:

$$cost = Entropy(F), \qquad F = w\_a V + w\_b IR \tag{38}$$

where *V* and *IR* are the visual and IR images, *wa* and *wb* are their respective associated weights, and *F* is the fused image. The initial population size is 100×3: the first and second columns of the population matrix hold the weights *wa* and *wb*, and the last column holds the cost, which is the entropy of *F*. The initial population is then ranked based on the cost. In each iteration of the GA, 20 of the 100 rows are kept for mating and the rest are discarded. Crossover is applied according to Equations 33 and 34. The mutation rate was set to 0.20; hence, the total number of mutated variables is 40. The value of a mutated variable is replaced by a new random value in the same range.
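Putting the pieces together, the weight search described above can be sketched as follows. The population size (100), selection of the best 20 rows, blending crossover, and the 0.20 mutation rate follow the text; the weight range [0, 1], blending both weights at once, and the other details are assumptions of this sketch:

```python
import numpy as np

def entropy_cost(w, v, ir):
    """Entropy of F = wa*V + wb*IR, the cost of Eq. (38)."""
    f = np.clip(w[0] * v + w[1] * ir, 0, 255).astype(np.uint8)
    p = np.bincount(f.ravel(), minlength=256) / f.size
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def cga_fuse(v, ir, pop_size=100, n_keep=20, mut_rate=0.20,
             iterations=50, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    pop = rng.random((pop_size, 2))                  # chromosomes [wa, wb] in [0, 1]
    for _ in range(iterations):
        costs = np.array([entropy_cost(w, v, ir) for w in pop])
        pop = pop[np.argsort(-costs)][:n_keep]       # keep the 20 best rows
        children = []
        while n_keep + len(children) < pop_size:
            m, d = pop[rng.integers(n_keep, size=2)]
            beta = rng.random()                      # Eq. (33) applied to both weights
            children.append(m - beta * (m - d))
            children.append(d + beta * (m - d))
        pop = np.vstack([pop, children[:pop_size - n_keep]])
        n_mut = int(mut_rate * pop.size)             # 0.20 * 200 = 40 variables
        rows = rng.integers(1, pop_size, size=n_mut) # spare the best row
        cols = rng.integers(0, 2, size=n_mut)
        pop[rows, cols] = rng.random(n_mut)          # new random values, same range
    best = max(pop, key=lambda w: entropy_cost(w, v, ir))
    return best, entropy_cost(best, v, ir)
```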

Fig. 3. Visual and IR Images: Left: Visual Image, Right: IR Image.


Fig. 4. The Result of Fusion: Left: Registered Image; Middle: IR Image; Right: Fused Image with GA.

The CGA ran for 50 iterations, maximizing the cost to find the optimum weights of the images. In the 2nd, 8th, and 25th iterations the cost increased, but was not yet associated with the global solution. The optimum solution was found in the 45th iteration and remained unchanged thereafter. Figure 4 shows the fusion results of the point-rule-based PLIF. After registering the IR and visual data, we determined that *wa* = 0.9931 and *wb* = 0.0940 are the optimum values for maximizing the entropy cost function for the *F* specified in Equation 38. The evaluation of these weights is shown in Table 2, which shows that the CGA-based fusion method gives better results (optimum weights for maximizing the entropy of *F*) for entropy and IS, from which it can be concluded that the CGA performs better than the other PLIFs.


| Image | Highest (Fig 5) | Lowest (Fig 5) | Average (Fig 5) | Threshold (Fig 5) | GA\_based (Fig 4) |
|---|---|---|---|---|---|
| Entropy | 6.91 | 3.14 | 6.56 | 6.93 | 7.28 |
| Quality | 100 | 70 | 100 | 100 | 100 |
| IS | 1.90 | 1.63 | 1.96 | 1.91 | 1.96 |

Table 2. Performance Comparison of Image Fusion Methods for Figure 4 and Figure 5.

Fig. 5. Fusion Results: (top-left) highest value from IR or Visual Images; (top-right) lowest value from IR or Visual Images; (bottom-left) average of IR and Visual Images; (bottom-right) threshold value.

## **4.5 Conclusion**

In this section, a CGA-based image fusion algorithm was introduced and compared with other classical PLIFs. The results show that CGA-based image fusion gives better results than the other PLIFs.

In this chapter, the focus is on visual image enhancement. Then the visual images will be registered with the IR images based on a landmark registration algorithm. Finally, the registered IR and visual images are fused for face recognition.

**5.2 Enhancement of visual images**

The ETNUD algorithm was applied to 16 visual images, as shown in Figure 6, under different illumination conditions. In all figures, besides the regular room lights, the four extra spotlights located in front of the person were turned off and on to create different illumination conditions. To enhance these visual images, the luminance is first balanced, then the image contrast is enhanced, and finally the enhanced image is obtained by a linear color restoration based on the chromatic information contained in the original image. The results of the luminance enhancement showed that the algorithms work well for dark images: all the details that cannot be seen in the original image become evident. The experimental results have shown that, for all color images, the proposed algorithms work sufficiently well.

**5.3 IR and visual images registration**

Experimental results have been obtained on the database, which was created by the research team. This algorithm is categorized into four steps, which are described respectively. In the first step, the IR and visual images taken from different sensors, viewpoints, times, and resolutions were resized to the same size. The correspondence between the features detected in the IR image and those detected in the visual image was then established. Control points were picked manually from the corners detected by the Harris corner detection algorithm in both images, where the corners were in the same positions in the two images.

In the second step, a spatial transformation was computed to map the selected corners in one image to those in the other image. Once the transformation was established, the image to be registered was resampled and interpolated to match the reference image. For RGB and intensity images, the bilinear or bicubic interpolation methods are recommended since they lead to better results. In the experiments, the bicubic interpolation method was used.

**5.4 Discussion**
