### **2.4.1 Introduction**

Optimization problems can be divided into those with discrete variables and those with continuous variables. Discrete variables take only a finite number of possible values, whereas continuous variables can take infinitely many. Discrete-variable optimization is also known as combinatorial optimization, because the optimal solution is a particular combination of values from the finite pool of all possibilities. In contrast, when trying to find the minimum value of *f*(*x*) on a number line, it is more appropriate to view the problem as continuous [J. H. Holland, 1975; S. K. Mitra et al., 1998].

Genetic algorithms manipulate a population of potential solutions to the problem to be solved. Usually, each solution is coded as a binary string, analogous to the genetic material of individuals in nature. Each solution is associated with a *fitness value* that reflects how good it is compared with the other solutions in the population. The higher the fitness value of an individual, the higher its chances of survival and reproduction in the subsequent generation. Recombination of genetic material in genetic algorithms is simulated through a crossover mechanism that exchanges portions between strings.

Another operation, called mutation, causes sporadic and random alteration of the bits of the strings. Mutation has a direct analogy in nature and plays the role of regenerating lost genetic material [M. Srinivas & L. M. Patnaik, 1994]. GAs have found applications in many fields, including image processing [J. Zhang, 2008; L. Yu et al., 2008].

### **2.4.2 Continuous Genetic Algorithm (CGA)**

GAs typically represent solutions as binary strings. For many applications, however, it is more convenient to represent solutions directly as vectors of real numbers; GAs that do so are known as continuous genetic algorithms (CGAs). CGAs require less storage and are faster than their binary counterparts. Figure 1 shows the flowchart of a simple CGA [Randy L. Haupt & Sue Ellen Haupt, 2004].

#### **2.4.2.1 Components of a Continuous Genetic Algorithm**

The various elements in the flowchart are described below [D. Patnaik, 2006].

#### **2.4.2.1.1 Cost function**

The goal of a GA is to solve an optimization problem defined by a cost function of a set of parameters. In a CGA, the parameters are organized as a vector known as a chromosome. If the chromosome has $N\_{\text{var}}$ variables (an $N\_{\text{var}}$-dimensional optimization problem) given by $p\_1, p\_2, p\_3, \dots, p\_{N\_{\text{var}}}$, then the chromosome is written as an array with $1 \times N\_{\text{var}}$ elements as [Randy L. Haupt & Sue Ellen Haupt, 2004]:

$$\text{chromosome} = \left[p\_{1}, p\_{2}, p\_{3}, \dots, p\_{N\_{\text{var}}}\right] \tag{3}$$

Fig. 1. Flowchart of CGA

In this case, the variable values are represented as floating-point numbers. Each chromosome has a cost, found by evaluating the cost function $f$ at the variables $p\_1, p\_2, p\_3, \dots, p\_{N\_{\text{var}}}$:

$$\text{cost} = f\left(\text{chromosome}\right) = f\left(p\_{1}, p\_{2}, p\_{3}, \dots, p\_{N\_{\text{var}}}\right) \tag{4}$$

Equations (3) and (4) along with applicable constraints constitute the problem to be solved. Since the GA is a search technique, it must be limited to exploring a reasonable region of variable space. Sometimes this is done by imposing a constraint on the problem. If one does not know the initial search region, there must be enough diversity in the initial population to explore a reasonably sized variable space before focusing on the most promising regions.
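To make the chromosome and cost notation of Eqs. (3) and (4) concrete, here is a minimal Python/NumPy sketch. The two-variable cost function `cost` and the sample values are illustrative assumptions, not taken from the cited sources.

```python
import numpy as np

N_VAR = 2  # number of variables in the chromosome (N_var)

def cost(chromosome):
    """Hypothetical two-variable cost function f(p1, p2) to be minimized."""
    p1, p2 = chromosome
    return p1 * np.sin(4 * p1) + 1.1 * p2 * np.sin(2 * p2)

# Eq. (3): a chromosome is simply a 1 x N_var array of continuous values.
chromosome = np.array([5.0, 3.2])

# Eq. (4): cost = f(chromosome).
print(cost(chromosome))
```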

#### **2.4.2.1.2 Initial population**

To begin the CGA, an initial population of $N\_{pop}$ chromosomes must be defined. The population is represented as a matrix, with each row a $1 \times N\_{\text{var}}$ chromosome of continuous values [D. Patnaik, 2006]. Given an initial population of $N\_{pop}$ chromosomes, the full matrix of $N\_{pop} \times N\_{\text{var}}$ random values is generated by:

$$pop = rand(N\_{pop}, N\_{var}) \tag{5}$$

All variables are first generated as normalized values between 0 and 1. If a variable's range lies between $p\_{lo}$ and $p\_{hi}$, its continuous value is recovered from the normalized value by:

$$p = \left(p\_{hi} - p\_{lo}\right)p\_{norm} + p\_{lo} \tag{6}$$

where

$p\_{lo}$ = lowest number in the variable range

$p\_{hi}$ = highest number in the variable range

$p\_{norm}$ = normalized value of the variable
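A minimal sketch of Eqs. (5) and (6) in Python/NumPy, generating a normalized random population and mapping it onto assumed variable bounds (`p_lo`, `p_hi`, `N_pop`, and `N_var` are illustrative choices):

```python
import numpy as np

N_pop, N_var = 8, 2              # population size and number of variables (assumed)
p_lo = np.array([0.0, 0.0])      # lowest value of each variable (assumed)
p_hi = np.array([10.0, 10.0])    # highest value of each variable (assumed)

pop_norm = np.random.rand(N_pop, N_var)   # Eq. (5): random values in [0, 1]
pop = (p_hi - p_lo) * pop_norm + p_lo     # Eq. (6): map to the variable ranges
```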

This society of chromosomes is not a democracy: the individual chromosomes are not all created equal. Each one's worth is assessed by the cost function. So at this point, the chromosomes are passed to the cost function for evaluation [Randy L. Haupt & Sue Ellen Haupt, 2004].

#### **2.4.2.1.3 Natural selection**

Now is the time to decide which chromosomes in the initial population are good enough to survive and possibly reproduce offspring in the next generation. As in the binary version of the algorithm, the *Npop* costs and associated chromosomes are ranked from lowest cost to highest cost. This process of natural selection occurs in each iteration to allow the population of chromosomes to evolve. Of the *Npop* chromosomes in a given generation, only the top *Nkeep* are kept for mating and the rest are discarded to make room for the new offspring [Randy L. Haupt & Sue Ellen Haupt, 2004].
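A sketch of this ranking-and-truncation step, reusing the `pop` matrix and `cost` function from the sketches above and assuming, purely for illustration, that half the population is kept:

```python
import numpy as np

# Evaluate every chromosome and rank the population from lowest to highest cost.
costs = np.array([cost(chrom) for chrom in pop])
order = np.argsort(costs)
pop, costs = pop[order], costs[order]

N_keep = N_pop // 2      # assumed: keep the best half for mating
parents = pop[:N_keep]   # the rest are discarded and later replaced by offspring
```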

#### **2.4.2.1.4 Pairing**

A set of eligible chromosomes is randomly selected as parents to generate the next generation. Each pair produces two offspring that contain traits from each parent. The more similar the two parents are, the more likely the offspring are to carry their traits.

#### **2.4.2.1.5 Mating**

As in the binary algorithm, two parents are chosen to produce offspring. Many different approaches to crossover have been tried for continuous GAs. The simplest method is to mark one or more crossover points; the parents then exchange the elements lying between the marked crossover points of their chromosomes. Consider two parents:

$$\begin{aligned} \text{parent}\_1 &= \left[p\_{m1}, \dots, p\_{mN\_{\text{var}}}\right] \\ \text{parent}\_2 &= \left[p\_{d1}, \dots, p\_{dN\_{\text{var}}}\right] \end{aligned} \tag{7}$$

two offspring might be produced as:

$$\begin{aligned} \text{offspring}\_1 &= \left[p\_{m1}, p\_{m2}, p\_{d3}, p\_{d4}, p\_{m5}, p\_{m6}, \dots, p\_{mN\_{\text{var}}}\right] \\ \text{offspring}\_2 &= \left[p\_{d1}, p\_{d2}, p\_{m3}, p\_{m4}, p\_{d5}, p\_{d6}, \dots, p\_{dN\_{\text{var}}}\right] \end{aligned} \tag{8}$$
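A sketch of the marked-point exchange of Eqs. (7) and (8); the crossover points and parent values are illustrative:

```python
import numpy as np

def point_crossover(mom, dad, lo, hi):
    """Swap the variables with indices in [lo, hi) between the two parents."""
    off1, off2 = mom.copy(), dad.copy()
    off1[lo:hi], off2[lo:hi] = dad[lo:hi], mom[lo:hi]
    return off1, off2

mom = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
dad = np.array([9.0, 8.0, 7.0, 6.0, 5.0, 4.0])
off1, off2 = point_crossover(mom, dad, 2, 4)  # swaps positions 3 and 4, as in Eq. (8)
```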

The extreme case is selecting *N*var points and randomly choosing which of the two parents will contribute its variable at each position. Thus one goes down the line of the chromosomes and, at each variable, randomly chooses whether or not to swap information between the two parents. This method is called uniform crossover [Randy L. Haupt & Sue Ellen Haupt, 2004]:

$$\begin{aligned} \text{offspring}\_1 &= \left[p\_{m1}, p\_{d2}, p\_{d3}, p\_{d4}, p\_{d5}, p\_{m6}, \dots, p\_{dN\_{\text{var}}}\right] \\ \text{offspring}\_2 &= \left[p\_{d1}, p\_{m2}, p\_{m3}, p\_{m4}, p\_{m5}, p\_{d6}, \dots, p\_{mN\_{\text{var}}}\right] \end{aligned} \tag{9}$$
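A sketch of uniform crossover, Eq. (9): at every position a random coin flip decides whether the two parents swap that variable.

```python
import numpy as np

def uniform_crossover(mom, dad):
    """At each position, a coin flip decides which parent contributes to which offspring."""
    swap = np.random.rand(len(mom)) < 0.5
    off1 = np.where(swap, dad, mom)
    off2 = np.where(swap, mom, dad)
    return off1, off2
```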

The problem with these point-crossover methods is that no new information is introduced: each continuous value that was randomly generated in the initial population is merely propagated to the next generation in different combinations. Although this strategy works fine for binary representations, with continuous variables it simply interchanges existing data points, so these approaches rely entirely on mutation to introduce new genetic material. Blending methods remedy this problem by combining variable values from the two parents into new variable values in the offspring [Randy L. Haupt & Sue Ellen Haupt, 2004]. A single offspring variable value, *pnew*, comes from a combination of the two corresponding parent variable values:

$$\textit{pnew} = \beta p\_{mn} + (1 - \beta)p\_{dn} \tag{10}$$

where

$\beta$ = random number in the interval [0, 1]

$p\_{mn}$ = the *n*th variable in the mother chromosome

$p\_{dn}$ = the *n*th variable in the father chromosome

The same variable of the second offspring is merely the complement of the first (i.e., replacing $\beta$ by $1-\beta$). If $\beta = 1$, then $p\_{mn}$ propagates in its entirety and $p\_{dn}$ dies. In contrast, if $\beta = 0$, then $p\_{dn}$ propagates in its entirety and $p\_{mn}$ dies. When $\beta = 0.5$, the result is an average of the variables of the two parents. This method is demonstrated to work well on several interesting problems in [Randy L. Haupt & Sue Ellen Haupt, 2004].
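A sketch of the single-$\beta$ blend of Eq. (10) and its complement:

```python
import numpy as np

def blend(p_mn, p_dn):
    """Blend one mother/father variable pair into two offspring values, Eq. (10)."""
    beta = np.random.rand()                    # random number in [0, 1]
    p_new1 = beta * p_mn + (1 - beta) * p_dn   # Eq. (10)
    p_new2 = (1 - beta) * p_mn + beta * p_dn   # complement: beta replaced by 1 - beta
    return p_new1, p_new2
```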

Choosing which variables to blend is the next issue. Sometimes this linear combination is applied to all variables to the right or to the left of some crossover point. Any number of points can be chosen for blending, up to *N*var, in which case all variables are linear combinations of those of the two parents. The variables can be blended by using the same $\beta$ for each variable or by choosing a different $\beta$ for each variable. These blending methods effectively combine the information from the two parents and choose values of the variables between the values bracketed by the parents; however, they do not allow introduction of values beyond the extremes already represented in the population. The simplest way around this is linear crossover [Randy L. Haupt & Sue Ellen Haupt, 2004], where three offspring are generated from two parents by

$$\begin{aligned} \textit{pnew}\_1 &= 0.5p\_{mn} + 0.5p\_{dn}\\ \textit{pnew}\_2 &= 1.5p\_{mn} - 0.5p\_{dn}\\ \textit{pnew}\_3 &= -0.5p\_{mn} + 1.5p\_{dn} \end{aligned} \tag{11}$$
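A sketch of linear crossover, Eq. (11); discarding out-of-bounds values and keeping the best two of the three candidates are left to the caller:

```python
def linear_crossover(p_mn, p_dn):
    """Return the three candidate offspring values of Eq. (11)."""
    return (0.5 * p_mn + 0.5 * p_dn,
            1.5 * p_mn - 0.5 * p_dn,
            -0.5 * p_mn + 1.5 * p_dn)
```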

Any variable outside the bounds is discarded. Then the best two offspring are chosen to propagate. Of course, the factor 0.5 is not the only one that can be used in such a method. Heuristic crossover [Randy L. Haupt & Sue Ellen Haupt, 2004] is a variation where some random number, $\beta$, is chosen on the interval [0, 1] and the variables of the offspring are formed by:

$$\textit{pnew} = \beta\left(p\_{mn} - p\_{dn}\right) + p\_{mn} \tag{12}$$
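A sketch of heuristic crossover, Eq. (12), with a simple retry when the offspring falls outside assumed variable bounds (the bounds and retry limit are illustrative assumptions):

```python
import numpy as np

def heuristic_crossover(p_mn, p_dn, lo=0.0, hi=10.0, max_tries=10):
    """Eq. (12): extrapolate from the father's value toward and past the mother's."""
    for _ in range(max_tries):
        beta = np.random.rand()
        p_new = beta * (p_mn - p_dn) + p_mn
        if lo <= p_new <= hi:        # discard out-of-bounds offspring and retry
            return p_new
    return p_mn                      # illustrative fallback if every try fails
```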

Variations on this theme include choosing any number of variables to modify and generating a different $\beta$ for each variable. These methods can also generate offspring outside the value range of the two parent variables; if this happens, the offspring is discarded and the algorithm tries another $\beta$. The blend crossover (BLX-$\alpha$) method [Randy L. Haupt & Sue Ellen Haupt, 2004] instead begins by choosing a parameter that determines how far outside the bounds of the two parent variables the offspring variable may lie. This allows new values outside the range of the parents without letting the algorithm stray too far.
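A sketch of blend crossover in the BLX-$\alpha$ form commonly described in the GA literature: the offspring value is drawn uniformly from an interval extended a fraction $\alpha$ beyond the range spanned by the parents; $\alpha = 0.5$ is an assumed, typical choice.

```python
import numpy as np

def blx_alpha(p_mn, p_dn, alpha=0.5):
    """Blend crossover BLX-alpha for a single variable pair."""
    lo, hi = min(p_mn, p_dn), max(p_mn, p_dn)
    spread = hi - lo
    # Sample uniformly from an interval extended alpha*spread beyond the parents.
    return np.random.uniform(lo - alpha * spread, hi + alpha * spread)
```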

The algorithm is a combination of an extrapolation method with a crossover method. The goal was to find a way to closely mimic the advantages of the binary GA mating scheme. It begins by randomly selecting a variable in the first pair of parents to be the crossover point:

$$\alpha = \operatorname{roundup}\left\{\text{random} \ast N\_{\text{var}}\right\} \tag{13}$$

Let

$$\begin{aligned} \text{parent}\_1 &= \left[p\_{m1}, \dots, p\_{m\alpha}, \dots, p\_{mN\_{\text{var}}}\right] \\ \text{parent}\_2 &= \left[p\_{d1}, \dots, p\_{d\alpha}, \dots, p\_{dN\_{\text{var}}}\right] \end{aligned} \tag{14}$$

where the *m* and *d* subscripts discriminate between the *mom* and the *dad* parent. Then the selected variables are combined to form new variables that will appear in the children:

$$\begin{aligned} \textit{pnew}\_1 &= p\_{m\alpha} - \beta\left[p\_{m\alpha} - p\_{d\alpha}\right]\\ \textit{pnew}\_2 &= p\_{d\alpha} + \beta\left[p\_{m\alpha} - p\_{d\alpha}\right] \end{aligned} \tag{15}$$

where $\beta$ is also a random value between 0 and 1. The final step is to complete the crossover with the rest of the chromosome as before:

$$\begin{aligned} \text{offspring}\_1 &= \left[p\_{m1}, p\_{m2}, \dots, \textit{pnew}\_1, \dots, p\_{dN\_{\text{var}}}\right] \\ \text{offspring}\_2 &= \left[p\_{d1}, p\_{d2}, \dots, \textit{pnew}\_2, \dots, p\_{mN\_{\text{var}}}\right] \end{aligned} \tag{16}$$

If the first variable of the chromosomes is selected, then only the variables to the right of the selected variable are swapped. If the last variable of the chromosomes is selected, then only the variables to the left of the selected variable are swapped. This method does not allow offspring variables outside the bounds set by the parents unless $\beta > 1$.
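Putting Eqs. (13)-(16) together, a minimal sketch of the combined extrapolation/crossover method (0-based indexing replaces the roundup of Eq. (13)):

```python
import numpy as np

def combined_crossover(mom, dad):
    """Blend one randomly chosen variable (Eqs. 13-15) and swap the tail (Eq. 16)."""
    a = np.random.randint(len(mom))       # Eq. (13), with 0-based indexing
    beta = np.random.rand()               # random number in [0, 1]

    off1, off2 = mom.copy(), dad.copy()
    off1[a] = mom[a] - beta * (mom[a] - dad[a])   # pnew1, Eq. (15)
    off2[a] = dad[a] + beta * (mom[a] - dad[a])   # pnew2, Eq. (15)

    # Eq. (16): swap all variables to the right of the crossover variable.
    off1[a + 1:], off2[a + 1:] = dad[a + 1:], mom[a + 1:]
    return off1, off2
```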

#### **2.4.2.1.6 Mutation**

If care is not taken, the GA can converge too quickly on one region of the cost surface. If this region contains the global minimum, that is not a problem; however, many functions have numerous local minima. To avoid premature convergence, other areas of the cost surface must be explored by randomly introducing changes, or mutations, in some of the variables. Random numbers are used to select the rows and columns of the variables to be mutated [Randy L. Haupt & Sue Ellen Haupt, 2004].
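A sketch of this mutation step: a few randomly chosen (row, column) positions are overwritten with fresh random values inside the variable bounds. The mutation rate and the decision to spare the top-ranked rows are illustrative assumptions.

```python
import numpy as np

def mutate(pop, p_lo, p_hi, rate=0.05, n_elite=1):
    """Overwrite a fraction `rate` of the variables with new random values.

    The top `n_elite` (lowest-cost) rows are spared, mirroring the text's choice
    of mutating only lower-ranked chromosomes.
    """
    n_pop, n_var = pop.shape
    n_mut = int(np.ceil(rate * (n_pop - n_elite) * n_var))
    rows = np.random.randint(n_elite, n_pop, size=n_mut)
    cols = np.random.randint(0, n_var, size=n_mut)
    pop[rows, cols] = (p_hi[cols] - p_lo[cols]) * np.random.rand(n_mut) + p_lo[cols]
    return pop
```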

#### **2.4.2.1.7 Next generation**

After all these steps, the chromosomes in the population are ranked, and the bottom-ranked chromosomes are replaced by offspring from the top-ranked parents to produce the next generation. Some variables in the bottom-ranked chromosomes are selected at random for mutation. The chromosomes are then ranked again from lowest cost to highest cost, and the process is iterated until the population converges on a satisfactory (ideally global) solution.
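Tying the steps together, a compact generational loop that reuses the `combined_crossover` and `mutate` sketches above; the population size, number of survivors, and generation count are illustrative assumptions.

```python
import numpy as np

def run_cga(cost, p_lo, p_hi, n_pop=20, n_keep=10, n_gen=100):
    """Minimal continuous-GA loop: rank, pair, mate, mutate, repeat."""
    n_var = len(p_lo)
    pop = (p_hi - p_lo) * np.random.rand(n_pop, n_var) + p_lo   # initial population
    for _ in range(n_gen):
        pop = pop[np.argsort([cost(c) for c in pop])]           # natural selection
        offspring = []
        while len(offspring) < n_pop - n_keep:
            i, j = np.random.choice(n_keep, 2, replace=False)   # pairing of survivors
            offspring.extend(combined_crossover(pop[i], pop[j]))  # mating
        pop[n_keep:] = np.array(offspring)[:n_pop - n_keep]     # replace the bottom rows
        mutate(pop, p_lo, p_hi, n_elite=n_keep)                 # mutate bottom-ranked rows
    pop = pop[np.argsort([cost(c) for c in pop])]
    return pop[0], cost(pop[0])                                 # best chromosome and cost
```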

### **2.5 Image fusion**

In recent decades, rapid developments in image sensing technology have made multisensor systems popular in many applications, and researchers have applied them in fields such as medical imaging, remote sensing, and military applications [D. Patnaik, 2006]. The outcome is a great increase in the amount and diversity of available data. Multi-sensor image data often present complementary information about the surveyed region, so image fusion provides an effective method for comparing and analyzing such data [H. Wang, 2004]. Image fusion is defined as the process of combining the information in two or more images of a scene to enhance viewing or understanding of the scene. The fusion process must preserve all relevant information in the fused image [A. Mumtaz & A. Majid, 2008; S. Erkanli & Zia-Ur Rahman, 2010].

Image fusion can be performed at the pixel, feature, and decision levels. Of these, pixel-level fusion is the simplest technique: averages or weighted averages of individual pixel intensities are taken to construct the fused image [K. Kannan & S. Perumal, 2007]. Despite their simplicity, these methods are rarely used nowadays because of some serious disadvantages. For instance, the contrast of the fused information is reduced, and redundant information is introduced into the fused image, which may mask the useful information. These disadvantages are overcome by feature-level and decision-level fusion methods, which are based on the human visual system. Decision-level fusion combines the results from multiple algorithms to yield the final fused image. Several pyramid-transform methods for feature-level fusion have been suggested [A. Wang et al., 2006], and methods based on the wavelet transform have recently become popular [A. Wang et al., 2006]. In these methods, the source images are decomposed into subimages of different resolutions, and in each subimage different features become prominent. To fuse the original source images, the corresponding subimages of the different source images are combined according to some criteria to form composite subimages; the inverse transform of the composite subimages gives the final fused image.
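As a simple illustration of the pixel-level approach described above, a weighted average of two registered, same-size source images; the weight `w` and the synthetic input arrays are illustrative assumptions.

```python
import numpy as np

def pixel_fusion(img_a, img_b, w=0.5):
    """Pixel-level fusion: a weighted average of two registered, same-size images."""
    return w * img_a.astype(np.float64) + (1.0 - w) * img_b.astype(np.float64)

# Small synthetic stand-ins for a visual and a thermal image (8-bit grayscale).
visual = np.random.randint(0, 256, (4, 4))
thermal = np.random.randint(0, 256, (4, 4))
fused = pixel_fusion(visual, thermal, w=0.6)
```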
## **3. Enhancing poor visibility images**

### **3.1 Introduction**

The human visual system (HVS) allows individuals to assimilate information from their environment [S. Erkanli & Zia-Ur Rahman, 2010b; H. Kolb, 2003]. The HVS perceives colors and detail across a wide range of photometric intensity levels much better than electronic cameras. The perceived color of an object, additionally, is almost independent of the type of