### **5.3 IR and visual image registration**

First, the IR and visual images, taken with different sensors, viewpoints, times and resolutions, were resized to the same size. The correspondence between features detected in the IR image and those detected in the visual image was then established. Control points were picked manually from the corners detected in both images by the Harris corner detection algorithm, choosing corners that lie at the same positions in the two images.
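The corner-detection step can be sketched as follows: a minimal NumPy implementation of the Harris response (the gradient filter, window size and sensitivity `k` are illustrative assumptions, not the chapter's exact settings). Control points would then be picked from the strongest responses that appear in both images.

```python
import numpy as np

def harris_response(img, k=0.04, win=3):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is
    the structure tensor of image gradients summed over a square window."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)               # central-difference gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    pad = win // 2
    def box(a):                             # box filter via shifted sums
        out = np.zeros_like(a)
        for dy in range(-pad, pad + 1):
            for dx in range(-pad, pad + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out
    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace ** 2

# A bright square on a dark background: the four square corners give
# large positive responses, the edges give negative ones.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
R = harris_response(img)
```

Corners are where both eigenvalues of the structure tensor are large, which is exactly where the response `R` peaks, so thresholding `R` yields the corner candidates used as control points.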

In the second step, a spatial transformation was computed to map the selected corners in one image to those in the other. Once the transformation was established, the image to be registered was resampled and interpolated to match the reference image. For RGB and intensity images, bilinear or bicubic interpolation is recommended, since these methods lead to better results; the experiments used bicubic interpolation.
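In outline, the transformation and resampling steps might look like this: a least-squares affine fit to the manually picked control-point pairs, followed by resampling through the mapping. Bilinear interpolation is shown to keep the sketch short (the experiments used bicubic); all names and values are illustrative.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 3x2 affine transform T mapping src -> dst, where
    src and dst are (N, 2) arrays of matching control points, N >= 3."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])
    T, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return T

def warp_bilinear(img, T, shape):
    """Resample img onto a reference grid of the given shape: each
    reference pixel is mapped through T into img and interpolated
    bilinearly from its four neighbouring pixels."""
    H, W = shape
    ys, xs = np.mgrid[0:H, 0:W]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)], axis=1)
    x, y = (pts @ T).T                      # reference -> source coordinates
    x0 = np.clip(np.floor(x).astype(int), 0, img.shape[1] - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, img.shape[0] - 2)
    fx, fy = x - x0, y - y0
    out = (img[y0, x0] * (1 - fx) * (1 - fy) + img[y0, x0 + 1] * fx * (1 - fy) +
           img[y0 + 1, x0] * (1 - fx) * fy + img[y0 + 1, x0 + 1] * fx * fy)
    return out.reshape(H, W)

# Four matching corners, here related by a pure shift of (+1, +2):
ref_pts = np.array([[2, 3], [10, 3], [2, 12], [10, 12]], float)
mov_pts = ref_pts + np.array([1.0, 2.0])
T = fit_affine(ref_pts, mov_pts)            # reference -> moving coordinates
```

Note that the transform is fitted from reference coordinates to moving-image coordinates, so the warp can look up a source pixel for every reference pixel rather than scattering source pixels forward.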

### **5.4 Discussion**

The experiments were carried out on a database created by the research team. The algorithm consists of four steps, which are described in turn.

Fusion of Visual and Thermal Images Using Genetic Algorithms 209

| Database Images | Mean (A) | Mean (B) | Entropy (A) | Entropy (B) | PSNR (A) | PSNR (B) | IQ (A) | IQ (B) |
|---|---|---|---|---|---|---|---|---|
| 1 (Fig. 19) | 101.61 | 153.50 | 7.03 | 7.58 | 14.16 | 35.73 | 85 | 94 |
| 2 | 111.78 | 144.92 | 7.26 | 7.68 | 13.64 | 35.73 | 90 | 95 |
| 3 | 105.35 | 124.06 | 7.25 | 7.42 | 13.84 | 28.13 | 87 | 96 |
| 4 | 118.91 | 140.72 | 7.33 | 7.53 | 13.21 | 28.33 | 97 | 96 |
| 5 | 104.2 | 117.17 | 7.41 | 7.82 | 14.12 | 29.40 | 91 | 94 |
| 6 | 106.82 | 117.41 | 7.46 | 7.78 | 14.10 | 29.15 | 97 | 94 |
| 7 | 115.76 | 137.67 | 7.37 | 7.68 | 14.12 | 29.26 | 98 | 98 |
| 8 | 116.18 | 137.02 | 7.56 | 7.83 | 14.50 | 29.64 | 97 | 96 |
| 9 | 93.22 | 134.03 | 7.29 | 7.63 | 15.22 | 33.41 | 87 | 83 |
| 10 | 114.05 | 143.26 | 7.23 | 7.60 | 14.64 | 36.17 | 99 | 98 |
| 11 | 111.50 | 131.12 | 7.34 | 7.51 | 13.92 | 28.25 | 93 | 99 |
| 12 | 117.51 | 142.50 | 7.37 | 7.66 | 13.60 | 30.10 | 96 | 95 |
| 13 | 114.65 | 139.16 | 7.34 | 7.51 | 14.18 | 30.05 | 94 | 96 |
| 14 | 116.47 | 141.82 | 7.29 | 7.54 | 15.08 | 30.94 | 99 | 99 |
| 15 | 115.81 | 132.06 | 7.53 | 7.60 | 14.39 | 28.75 | 98 | 97 |
| 16 | 118.57 | 137.00 | 7.34 | 7.68 | 14.93 | 28.90 | 99 | 99 |

Table 3. The Statistics of the Database (A: averaging fusion; B: proposed approach).

Fig. 6. Fusion Results for Image 1: (top-left (a)) Original; (top-right (b)) Enhanced; (middle-left (c)) Original; (middle-right (d)) Enhanced; (bottom-left (e)) IR; (bottom-right (f)) Fused Images; Graph: Genetic Algorithm result after 100 iterations.

In the first step, the visual images are enhanced, as described in Section 3. The fused image should be more suitable for human visual perception and for computer-processing tasks. Experience with image processing led the research to consider the fundamental requirements of a good visual presentation: nonlinear enhancement of the recorded visual images, yielding an image that carries more information than the original. In the second step, corners were detected in the visual and IR images with the Harris corner detection algorithm and used as control points for registration. In the third step, the source images were registered; because they are obtained from different sensors, they differ in resolution, size and spectral characteristics, so correct registration is essential. In the last step, the image fusion process described in Section 4 is performed.

The registered images were overlapped at an appropriate transparency. Each pixel value in the fused image was a weighted sum of the corresponding pixels in the IR and visual images. In the next section, results from advanced image fusion approaches are presented.
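This weighted overlay amounts to a per-pixel weighted sum; a minimal sketch (the weights and toy images are illustrative, not the optimised values):

```python
import numpy as np

def fuse(ir, vis, wa, wb):
    """Per-pixel weighted sum of registered IR and visual images,
    rounded and clipped back to the 8-bit range."""
    f = wa * ir.astype(float) + wb * vis.astype(float)
    return np.clip(np.rint(f), 0, 255).astype(np.uint8)

ir  = np.full((4, 4), 200, np.uint8)   # toy registered IR image
vis = np.full((4, 4), 100, np.uint8)   # toy enhanced visual image
fused = fuse(ir, vis, 0.6, 0.4)        # illustrative weights
```

Because the weights need not sum to one, the rounding and clipping step keeps the fused result inside the valid 8-bit range.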

### **5.5 Fusion of visual and IR images**

The image fusion algorithm, driven by the genetic algorithm, was applied to the database. One open issue is how to judge the quality of image fusion results. Within the general theme of fusion evaluation there is growing interest in methods that score the performance of image fusion algorithms, as described in Section 4. Given the diversity of applications and of evaluation metrics, there are still open questions about when to perform image fusion. The metrics explored here are mean, standard deviation, entropy, mutual information, peak signal-to-noise ratio and image quality, as described in Section 4. Because the source images cover different spectra, they show quite distinct characteristics and carry complementary information. Figure 6 (a and c) shows that the visual image is very dark and does not carry enough information to see the faces. Figure 6 (b) shows that the luminance enhancement works well for dark images and that the technique adapts itself to the image; the contrast enhancement makes unseen or barely seen features of low-contrast images visible. These enhancement algorithms were developed to improve the images before the fusion process. After enhancement, corners were detected in the enhanced and IR images, and the enhanced image was registered as shown in Figure 6 (d). Finally, the enhanced image was fused with the IR image in Figure 6 (f).
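Two of the metrics listed above, entropy and PSNR, can be computed directly from the grey-level histogram and the pixel differences; a sketch for 8-bit images:

```python
import numpy as np

def entropy(img):
    """Shannon entropy (bits) of an 8-bit image's grey-level histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def psnr(ref, img):
    """Peak signal-to-noise ratio in dB between two 8-bit images."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return float(10 * np.log10(255.0 ** 2 / mse))
```

Entropy rewards fused images that spread intensity over many grey levels (at most 8 bits for 8-bit data), which is why it serves as the cost function for the weight search below.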

Figure 6 also shows the result of the CGA after 100 iterations. The optimum solution was found with a population size of 100×3 after 76 iterations: *wa* = 0.99 and *wb* = 0.47 are the optimum values for maximizing the entropy cost function, which reaches 7.58 for the *F* specified in Equation 38. The evaluation of these weights is shown in Table 3. By inspection, the faces and details in the fused image are clearer than in either the original IR image or the visual image.
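The weight search can be sketched as a small real-coded genetic algorithm that maximises the entropy of the fused image. This is not the chapter's exact CGA: the tournament selection, blend crossover, Gaussian mutation, their rates, and the two-gene chromosome (*wa*, *wb*) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fused_entropy(w, ir, vis):
    """Cost function: entropy of the fused image F = w[0]*IR + w[1]*Vis."""
    f = np.clip(w[0] * ir + w[1] * vis, 0, 255).astype(np.uint8)
    p = np.bincount(f.ravel(), minlength=256) / f.size
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def ga_maximize(cost, pop_size=100, gens=100, mut=0.1):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation and elitism, over two genes in [0, 1]."""
    pop = rng.random((pop_size, 2))
    for _ in range(gens):
        fit = np.array([cost(w) for w in pop])
        elite = pop[fit.argmax()].copy()
        idx = rng.integers(0, pop_size, (pop_size, 2))        # tournaments
        parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]],
                               idx[:, 0], idx[:, 1])]
        alpha = rng.random((pop_size, 1))                     # blend crossover
        children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
        children += rng.normal(0.0, mut, children.shape)      # mutation
        pop = np.clip(children, 0.0, 1.0)
        pop[0] = elite                                        # elitism
    fit = np.array([cost(w) for w in pop])
    return pop[fit.argmax()], fit.max()

# Illustrative run on random toy images:
ir  = rng.integers(0, 256, (64, 64)).astype(float)
vis = rng.integers(0, 256, (64, 64)).astype(float)
best_w, best_h = ga_maximize(lambda w: fused_entropy(w, ir, vis),
                             pop_size=20, gens=10)
```

Elitism makes the best entropy found monotonically non-decreasing across generations, mirroring the convergence behaviour plotted in the Figure 6 graph.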

Table 3 shows the detailed comparison of the fused images: A is the image fused by averaging the visual and IR images, and B is the image fused by the proposed approach. All images used in this experiment come from the created database. The results show that the proposed approach outperforms the averaging fusion.

208 Bio-Inspired Computational Algorithms and Their Applications



