**4.1 Introduction**

Image fusion is defined as the process of combining information from two or more images of a scene to enhance the viewing or understanding of that scene. The images to be fused may come from different sensors, may have been acquired at different times, or may have been taken from different locations. Hence, the first step in any image fusion process is the accurate registration of the image data. This is relatively straightforward if parameters such as the instantaneous field-of-view (IFOV) and the locations and orientations from which the images are acquired are known, especially when the sensor modalities produce images that use the same coordinate space. It is more of a challenge when the sensor modalities differ significantly and registration can only be accomplished at the information level. The goal of the fusion process is to preserve all relevant information from the component images and place it in the fused image (FI), which requires that the process minimize noise and other artifacts in the FI. Because of this, the fusion process can also be regarded as an optimization problem [K. Kannan and S. Perumal, 2002]. In recent years, image fusion has been applied to a number of diverse areas such as remote sensing [T. A. Wilson and S. K. Rogers, 1997], medical imaging [C. S. Pattichis and M. S. Pattichis, 2001], and military applications [B. V. Dasarathy, 2002].

Fig. 2. Comparisons of Enhancement Techniques: (top-left) Original; (top-right) IRME; (middle-left) Gamma correction, g = 1.4; (middle-right) MSR; (bottom-left) AINDANE; (bottom-right) ETNUD.

Image fusion can be divided into three processing levels: pixel, feature, and decision, in increasing order of abstraction. In the pixel-level approach, simple arithmetic rules, such as averaging the individual pixel intensities, or more sophisticated combination schemes are used to construct the fused image. At the feature level, the image is classified into regions with known labels, and these labeled regions from the different sensor modalities are used to combine the data. At the decision level, each image is interpreted separately, and combination rules determine whether or not parts of the data are included in the result.
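As a concrete illustration of the pixel-level approach, the following sketch fuses two registered grayscale images by a simple weighted average. This is a minimal example, not the chapter's method; the weight `w` and the use of NumPy arrays are illustrative assumptions.

```python
import numpy as np

def fuse_pixel_level(img_a, img_b, w=0.5):
    """Pixel-level fusion of two registered grayscale images.

    Each fused pixel is a weighted average of the corresponding
    pixels in the two source images, with weight w in [0, 1].
    """
    a = np.asarray(img_a, dtype=np.float64)
    b = np.asarray(img_b, dtype=np.float64)
    if a.shape != b.shape:
        raise ValueError("images must be registered to the same size")
    fused = w * a + (1.0 - w) * b
    return np.clip(fused, 0, 255).astype(np.uint8)

# Example: fuse two tiny 2x2 images with equal weights.
visual = np.array([[100, 200], [50, 0]], dtype=np.uint8)
thermal = np.array([[200, 100], [150, 255]], dtype=np.uint8)
fused = fuse_pixel_level(visual, thermal)  # each pixel is the mean
```

More sophisticated pixel-level schemes replace the fixed weight with spatially varying or optimized weights, which is exactly where an optimizer such as a GA can enter.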

Fusion of Visual and Thermal Images Using Genetic Algorithms 203

iii. *Natural Selection:* The chromosomes are ranked from the lowest to the highest cost. Of the total chromosomes in a given generation, only the top *NKeep* are kept for mating, and the rest are discarded to make room for the new offspring.

iv. *Mating:* Many different approaches have been tried for crossover in continuous GAs. In crossover, all the genes to the right of the crossover point are swapped. A variable is randomly selected in the first pair of parents to be the crossover point:

$$\alpha = \lceil U(0,1) \cdot N_{var} \rceil \qquad (32)$$

where *U(0,1)* is the uniform distribution. The parents are given by [Randy L. Haupt & Sue Ellen Haupt, 2004]:

$$parent_1 = [p_{m1}, \ldots, p_{m\alpha}, \ldots, p_{mN_{var}}] = [m_1, \ldots, m_{N_{var}}]$$

$$parent_2 = [p_{d1}, \ldots, p_{d\alpha}, \ldots, p_{dN_{var}}] = [d_1, \ldots, d_{N_{var}}]$$

where subscripts *m* and *d* represent the mom and dad parent. Then the selected variables are combined to form new variables that will appear in the children:

$$pnew_1 = p_{m\alpha} - \beta\,[\,p_{m\alpha} - p_{d\alpha}\,] \qquad (33)$$

$$pnew_2 = p_{d\alpha} + \beta\,[\,p_{m\alpha} - p_{d\alpha}\,] \qquad (34)$$

where β is a random value between 0 and 1. The final step is to complete the crossover with the rest of the chromosome:

$$offspring_1 = [p_{m1}, p_{m2}, \ldots, pnew_1, \ldots, p_{dN_{var}}]$$

$$offspring_2 = [p_{d1}, p_{d2}, \ldots, pnew_2, \ldots, p_{mN_{var}}]$$

v. *Mutation:* If care is not taken, the GA can converge too quickly into one region of the cost surface. If this area is in the region of the global minimum, there is no problem. However, some functions have many local minima. To avoid overly fast convergence, other areas of the cost surface must be explored by randomly introducing changes, or mutations, in some of the variables. Multiplying the mutation rate by the total number of variables that can be mutated in the population gives the number of mutations. Random numbers are used to select the rows and columns of the variables that are to be mutated.

vi. *Next Generation:* After all these steps, the population for the next generation is formed: the bottom-ranked chromosomes are discarded and replaced by offspring from the top-ranked parents, and some variables in the bottom-ranked chromosomes are selected for mutation. The chromosomes are then ranked from lowest cost to highest cost, and the process is iterated until a global solution is achieved [Randy L. Haupt & Sue Ellen Haupt, 2004].

**4.2.3 Image fusion**

A set of input images of a scene, captured at different times or captured by different kinds of sensors at the same time, reveals different information about the scene. The process of extracting and combining data from a set of input images to form a new composite image with extended information content is called image fusion.

Genetic algorithms (GA) are an optimization technique that seeks the optimum solution of a function based on the Darwinian principles of biological evolution. Even though there are several methods of performing and evaluating image fusion, many open questions remain. In this section, a new measure of image fusion quality is provided and compared with several existing ones. The focus is on pixel-level image fusion (PLIF), and a new image fusion technique that uses a GA is proposed.
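As a concrete illustration of steps iii–vi, the following sketch implements a minimal continuous GA with the blend crossover and random mutation described above. The population size, cost function, mutation rate, and simple elitism scheme are illustrative assumptions, not the chapter's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def blend_crossover(mom, dad):
    """Continuous-GA crossover: pick a random crossover variable,
    blend it with a random beta in [0, 1], and swap the tail genes."""
    n_var = len(mom)
    alpha = rng.integers(0, n_var)                 # crossover point
    beta = rng.random()                            # blending factor in [0, 1]
    pnew1 = mom[alpha] - beta * (mom[alpha] - dad[alpha])
    pnew2 = dad[alpha] + beta * (mom[alpha] - dad[alpha])
    off1 = np.concatenate([mom[:alpha], [pnew1], dad[alpha + 1:]])
    off2 = np.concatenate([dad[:alpha], [pnew2], mom[alpha + 1:]])
    return off1, off2

def mutate(population, rate=0.05, low=0.0, high=1.0):
    """Replace a fraction `rate` of all variables with random values;
    rows and columns are chosen at random."""
    pop = population.copy()
    n_mut = int(rate * pop.size)
    rows = rng.integers(0, pop.shape[0], n_mut)
    cols = rng.integers(0, pop.shape[1], n_mut)
    pop[rows, cols] = rng.uniform(low, high, n_mut)
    return pop

def evolve(cost, n_pop=20, n_var=4, n_keep=10, n_gen=50):
    """Rank, keep the top n_keep, mate, mutate, and iterate."""
    pop = rng.uniform(0.0, 1.0, (n_pop, n_var))
    for _ in range(n_gen):
        pop = pop[np.argsort([cost(c) for c in pop])]   # rank low -> high
        parents, children = pop[:n_keep], []
        while len(children) < n_pop - n_keep:
            i, j = rng.integers(0, n_keep, 2)
            children.extend(blend_crossover(parents[i], parents[j]))
        pop = np.vstack([parents, children[: n_pop - n_keep]])
        pop[1:] = mutate(pop[1:])      # elitism: best chromosome untouched
    return min(pop, key=cost)

# Minimize a simple quadratic cost with its minimum at (0.5, 0.5, 0.5, 0.5).
best = evolve(lambda c: np.sum((c - 0.5) ** 2))
```

Keeping the best chromosome out of the mutation step is one common way to avoid losing the current optimum while still exploring the cost surface.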

The GA is used to optimize the parameters of the fusion process to produce an FI that contains more information than either of the individual images. The main purpose of this section is to find the optimum weights used to fuse images with the help of the continuous GA (CGA). The techniques for the GA and image fusion are given in Section 4.2, Section 4.3 describes the evaluation criteria, Section 4.4 presents the experimental results and compares them with other image fusion techniques, and Section 4.5 provides conclusions.
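To show how the two pieces fit together, the sketch below treats the fusion weight as the variable being optimized and an information measure of the fused image as the fitness. An exhaustive search over candidate weights stands in for the GA, and Shannon entropy stands in for the chapter's own quality measure; both substitutions are simplifying assumptions for illustration.

```python
import numpy as np

def entropy(img):
    """Shannon entropy (bits) of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def fuse(a, b, w):
    """Weighted-average pixel-level fusion with weight w in [0, 1]."""
    return np.clip(w * a + (1.0 - w) * b, 0, 255).astype(np.uint8)

def best_weight(a, b, candidates=np.linspace(0.0, 1.0, 21)):
    """Pick the candidate weight whose fused image has maximum entropy
    (an exhaustive stand-in for the GA search over fusion weights)."""
    return max(candidates, key=lambda w: entropy(fuse(a, b, w)))

# Synthetic stand-ins for registered visual and thermal images.
rng = np.random.default_rng(1)
visual = rng.integers(0, 128, (64, 64)).astype(np.float64)
thermal = rng.integers(128, 256, (64, 64)).astype(np.float64)
w = best_weight(visual, thermal)
```

With many weights (for example, one per region or per pixel block) the search space grows too large for exhaustive search, which is the point at which a GA like the one above becomes the practical optimizer.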
