**3. Enhancing poor visibility images**

#### **3.1 Introduction**

The human visual system (HVS) allows individuals to assimilate information from their environment [S. Erkanli & Zia-Ur Rahman, 2010b; H. Kolb, 2003]. The HVS perceives colors and detail across a wide range of photometric intensity levels much better than electronic cameras. The perceived color of an object, additionally, is almost independent of the type of

Fusion of Visual and Thermal Images Using Genetic Algorithms 197


illumination, i.e., the HVS is color constant. Electronic cameras, by comparison, suffer from limited dynamic range and a lack of color constancy. Current imaging and display devices such as CRT monitors and printers have a limited dynamic range of about two orders of magnitude, while the best photographic prints can provide contrast of up to 10<sup>3</sup>:1. Real-world scenes, however, can have a dynamic range of six orders of magnitude [S. Erkanli & Zia-Ur Rahman, 2010b; L. Tao et al., 2005]. This can result in overexposure that causes saturation in high-contrast images, or underexposure in dark images [Z. Rahman, 1996]. The idea behind enhancement techniques is to bring out details in images that are otherwise too dim to be perceived, due either to insufficient brightness or to insufficient contrast [Z. Rahman, 1997]. A large number of image enhancement methods have been developed, such as log transformations, power-law transformations, piecewise-linear transformations, and histogram equalization. However, these techniques are based on global processing, which results in a single mapping between the input and the output intensity space. They are thus not sufficiently powerful to handle images that have both very bright and very dark regions. Other image enhancement techniques are local in nature, i.e., the output value depends not only on the input pixel value but also on pixel values in the neighborhood of the pixel. These techniques are able to improve local contrast under various illumination conditions.

Single-Scale Retinex (SSR) is a modification of the Retinex algorithm introduced by Edwin Land [G. D. Hines et al., 2004; E. Land, 1986]. It provides dynamic range compression (DRC), color constancy, and tonal rendition. SSR gives good results for DRC or tonal rendition but does not provide both simultaneously. Therefore, the Multi-Scale Retinex (MSR) was developed by Rahman et al. The MSR combines several SSR outputs with different scale constants to produce a single output image with good DRC, color constancy, and tonal rendition. The outputs of MSR display most of the detail in the dark pixels, but at the cost of enhancing the noise in these pixels, and the tonal rendition is poor in large regions of slowly changing intensity. As a result, Multi-Scale Retinex with Color Restoration (MSRCR) was developed by Jobson et al. for synthesizing local contrast improvement, color constancy, and lightness/color rendition. Other nonlinear enhancement models include the Illuminance Reflectance Model for Enhancement (IRME) proposed by Tao et al. [L. Tao et al., 2005], and the Adaptive and Integrated Neighborhood-Dependent Approach for Nonlinear Enhancement (AINDANE) described by Tao [L. Tao, 2005]. Both use a nonlinear function for luminance enhancement and tune the intensity of each pixel based on its relative magnitude with respect to the neighboring pixels.

In this section, a new image enhancement approach is described: the Enhancement Technique for Nonuniform and Uniform-Dark Images (ETNUD). The details of the new algorithm are given in Section 3.2. Section 3.3 describes experimental results and compares them with other image enhancement techniques. Finally, conclusions are presented in Section 3.4.

#### **3.2 Enhancement Technique for Nonuniform and Uniform-Dark Images (ETNUD)**

The major innovation in ETNUD is in the selection of the transformation parameters for DRC, and the surround scale and color restoration parameters. The following sections describe the selection mechanisms.

196 Bio-Inspired Computational Algorithms and Their Applications


#### **3.2.1 Selection of transformation parameters for DRC**

The intensity *I* of the color image *I<sup>c</sup>* can be determined by:

$$I(m,n) = 0.2989r(m,n) + 0.587\,\mathrm{g}(m,n) + 0.114b(m,n)\tag{17}$$

where *r*, *g*, *b* are the red, green, and blue components of *I<sup>c</sup>* respectively, and *m* and *n* are the row and column pixel locations respectively. Assuming *I* to be 8 bits per pixel, *I<sub>n</sub>* is the normalized version of *I*, such that:

$$I_n(m,n) = I(m,n)/255 \tag{18}$$

Using linear input-output intensity relationships typically does not produce a good visual representation compared with direct viewing of the scene. Therefore, nonlinear transformation for DRC is used, which is based on some information extracted from the image histogram. To do this, the histogram of the intensity images is subdivided into four ranges:

*r*<sub>1</sub> = 0–63, *r*<sub>2</sub> = 64–127, *r*<sub>3</sub> = 128–191 and *r*<sub>4</sub> = 192–255. *I<sub>n</sub>* is mapped to *I<sub>n</sub><sup>drc</sup>* using the following:

$$I_n^{drc} = \begin{cases} (I_n)^x + \alpha & 0 < x < 1 \\ 0.5 + (0.5\,I_n)^x + \alpha & x \ge 1 \end{cases} \tag{19}$$

The first mapping pulls out the details in the dark regions, and the second suppresses the bright overshoots. The value of *x* is given by

$$x = \begin{cases} 0.2, & \text{if } \left( f(r_1+r_2) \ge f(r_3+r_4) \right) \wedge \left( f(r_1) \ge f(r_2) \right) \\ 0.5, & \text{if } \left( f(r_1+r_2) \ge f(r_3+r_4) \right) \wedge \left( f(r_1) < f(r_2) \right) \\ 3.0, & \text{if } \left( f(r_1+r_2) < f(r_3+r_4) \right) \wedge \left( f(r_3) \ge f(r_4) \right) \\ 5.0, & \text{if } \left( f(r_1+r_2) < f(r_3+r_4) \right) \wedge \left( f(r_3) < f(r_4) \right) \end{cases} \tag{20}$$

where *f*(*a*) refers to the number of pixels in the range *a*, *f*(*r*<sub>1</sub>+*r*<sub>2</sub>) = *f*(*r*<sub>1</sub>) + *f*(*r*<sub>2</sub>), and ∧ is the logical AND operator. α is an offset parameter that helps to adjust the brightness of the image. The determination of the *x* values and their association with the range relationships given in Equation 20 was done experimentally using a large number of non-uniform and uniform dark images; the *x* value can also be determined manually. The DRC mapping of the intensity image performs a visually dramatic transformation. However, it tends to have poor contrast, so a local, pixel-dependent contrast enhancement method is used to improve the contrast.
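The DRC mapping of Equations 17–20 can be sketched in NumPy as follows. This is a minimal sketch, not the authors' implementation: the function name, the clipping of the output to [0, 1], and the default α = 0 are our own choices.

```python
import numpy as np

def drc_map(rgb, alpha=0.0):
    """Dynamic range compression of a color image (Eqs. 17-20).

    rgb: uint8 array of shape (M, N, 3); alpha is the brightness offset.
    Returns the DRC-mapped normalized intensity and the chosen exponent x.
    """
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    I = 0.2989 * r + 0.587 * g + 0.114 * b      # Eq. 17: intensity
    In = I / 255.0                              # Eq. 18: normalization

    # Eq. 20: pick the exponent x from the histogram of I over four ranges.
    f = lambda lo, hi: np.count_nonzero((I >= lo) & (I <= hi))
    f1, f2, f3, f4 = f(0, 63), f(64, 127), f(128, 191), f(192, 255)
    if f1 + f2 >= f3 + f4:
        x = 0.2 if f1 >= f2 else 0.5            # dark-dominated image
    else:
        x = 3.0 if f3 >= f4 else 5.0            # bright-dominated image

    # Eq. 19: pull up dark detail (x < 1) or suppress bright overshoots (x >= 1).
    if x < 1:
        In_drc = In ** x + alpha
    else:
        In_drc = 0.5 + (0.5 * In) ** x + alpha
    return np.clip(In_drc, 0.0, 1.0), x
```

For an almost entirely dark image, the first range dominates the histogram, so x = 0.2 is selected and the root-like mapping stretches the dark pixels toward the middle of the intensity range.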

#### **3.2.2 Selection of surround parameter and color restoration**

Many local enhancement methods rely on center/surround ratios [L. Tao, 2005]. Hurlbert [A. C. Hulbert, 1989] investigated the Gaussian as the optimal surround function. Other


surround functions proposed by [E. Land, 1986] were compared with the performance of the Gaussian proposed by [D. J. Jobson, et al., 1997]. Both investigations determined that the Gaussian form produced good dynamic range compression over a range of space constants. Therefore, the luminance information of surrounding pixels is obtained by using 2D discrete spatial convolution with a Gaussian kernel, *G(m, n)* defined as:

$$G(m,n) = K \exp\left(-\frac{m^2 + n^2}{\sigma_s^2}\right) \tag{21}$$

where σ<sub>s</sub> is the surround space constant, equal to the standard deviation of *G(m, n)*, and *K* is determined under the constraint that $\sum_{m,n} G(m,n) = 1$.

The center-surround contrast enhancement is defined as:

$$I_{enh}(m,n) = 255\left(I_n^{drc}(m,n)\right)^{E(m,n)} \tag{22}$$

where, *E(m, n)* is given by:

$$E(m,n) = \left[\frac{I_{filt}(m,n)}{I(m,n)}\right]^{S} \tag{23}$$

where $$I_{filt}(m,n) = I(m,n) * G(m,n) \tag{24}$$

*S* is an adaptive contrast enhancement parameter related to the global standard deviation of the input intensity image *I(m, n)*, and '\*' is the convolution operator. *S* is defined by:

$$S = \begin{cases} 3 & \text{for} \quad \sigma \le 7 \\ 1.5 & \text{for} \quad 7 < \sigma \le 20 \\ 1 & \text{for} \quad \sigma > 20 \end{cases} \tag{25}$$

σ is the contrast—standard deviation—of the original intensity image. If σ < 7, the image has poor contrast and the contrast of the image will be increased. If σ ≥ 20, the image has sufficient contrast and the contrast will not be changed. Finally, the enhanced image can be obtained by linear color restoration based on chromatic information contained in the original image as:

$$S_j(x,y) = I_{enh}(x,y)\,\frac{I_j(x,y)}{I(x,y)}\,\lambda_j \tag{26}$$

where *j* ∈ {*r*, *g*, *b*} represents the RGB spectral band and λ<sub>j</sub> is a parameter which adjusts the color hue.
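The center/surround enhancement of Equations 21–25 and the color restoration of Equation 26 can be sketched as follows. This is a minimal NumPy sketch under stated assumptions, not the authors' code: the surround constant σ<sub>s</sub> = 15 and λ<sub>j</sub> = 1 are assumed values, the convolution is performed circularly via the FFT, and small epsilons guard against division by zero.

```python
import numpy as np

def center_surround_enhance(I, In_drc, sigma_s=15.0):
    """Center/surround contrast enhancement (Eqs. 21-25).

    I: intensity image (0-255 floats); In_drc: DRC-mapped image in [0, 1];
    sigma_s: Gaussian surround space constant (an assumed value).
    """
    M, N = I.shape
    m = np.arange(M) - M // 2
    n = np.arange(N) - N // 2
    mm, nn = np.meshgrid(m, n, indexing="ij")
    G = np.exp(-(mm**2 + nn**2) / sigma_s**2)   # Eq. 21 (K applied next line)
    G /= G.sum()                                # constraint: sum of G is 1

    # Eq. 24: surround luminance via circular convolution in the frequency domain.
    I_filt = np.real(np.fft.ifft2(np.fft.fft2(I) *
                                  np.fft.fft2(np.fft.ifftshift(G))))

    sigma = I.std()                             # Eq. 25: adaptive exponent S
    S = 3.0 if sigma <= 7 else (1.5 if sigma <= 20 else 1.0)

    E = (I_filt / np.maximum(I, 1e-6)) ** S     # Eq. 23
    return 255.0 * In_drc ** E                  # Eq. 22

def color_restore(I_enh, I, rgb, lam=(1.0, 1.0, 1.0)):
    """Linear color restoration (Eq. 26), band by band."""
    out = np.empty_like(rgb, dtype=float)
    for j in range(3):
        out[..., j] = I_enh * rgb[..., j].astype(float) / np.maximum(I, 1e-6) * lam[j]
    return np.clip(out, 0, 255).astype(np.uint8)
```

On a perfectly uniform image the surround equals the center, so E = 1 and the output is simply 255 times the DRC-mapped intensity, which is a quick sanity check for the implementation.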

#### **3.2.3 Evaluation criteria**

In this work, the following evaluation criteria were used.

#### **3.2.3.1 A new metric**


There are some metrics such as brightness and contrast to characterize an image. Another such metric is sharpness. Sharpness is directly proportional to the high-frequency content of an image. So the new metric is defined as [Z. Rahman, 2009]:

$$S = \sqrt{\left\| h \otimes I \right\|^2} = \sqrt{\sum_{v_1=0}^{M_1-1} \sum_{v_2=0}^{M_2-1} \left| \hat{h}[v_1, v_2]\,\hat{I}[v_1, v_2] \right|^2} \tag{27}$$

where *h* is a high-pass filter, periodic with period *M*<sub>1</sub>×*M*<sub>2</sub>, and *ĥ* is its Discrete Fourier Transform (DFT); *Î* is the DFT of the image *I*. The role of *ĥ* (or *h*) is to weight the energy at the high frequencies relative to the low frequencies, thereby emphasizing the contribution of the high frequencies to *S*. The larger the value of *S*, the greater the sharpness of *I*, and conversely.

Equation 27 defines how the sharpness is computed; the filter *ĥ* is defined as:

$$\hat{h}[v_1, v_2] = 1 - \exp\left(-\frac{v_1^2 + v_2^2}{\sigma^2}\right) \tag{28}$$

where σ is the parameter at which the attenuation coefficient equals $1 - e^{-1} \approx 2/3$. A smaller value of σ implies that fewer frequencies are attenuated, and vice versa. For this research, σ = 0.15.
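The sharpness metric of Equations 27–28 can be computed with the FFT, as in the sketch below. Two details are our assumptions rather than the text's: the DFT is normalized by the image size, and the frequency axes are normalized to [−0.5, 0.5), which is the scale on which a value like σ = 0.15 is meaningful.

```python
import numpy as np

def sharpness(I, sigma=0.15):
    """Sharpness metric of Eqs. 27-28 on a 2-D intensity image."""
    M1, M2 = I.shape
    # Normalized DFT frequency grids in [-0.5, 0.5).
    v1 = np.fft.fftfreq(M1)[:, None]
    v2 = np.fft.fftfreq(M2)[None, :]
    h_hat = 1.0 - np.exp(-(v1**2 + v2**2) / sigma**2)   # Eq. 28: high-pass weight
    I_hat = np.fft.fft2(I) / (M1 * M2)                  # size-normalized DFT
    return np.sqrt(np.sum(np.abs(h_hat * I_hat) ** 2))  # Eq. 27
```

Since *ĥ* is zero at DC, a constant image has sharpness 0, and an image with strong high-frequency content (e.g., a checkerboard) scores higher, which matches the interpretation given for *S*.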

#### **3.2.3.2 Image quality assessment**

The overall quality of images can be measured by using the brightness μ, contrast σ, and sharpness *S*, where brightness and contrast are taken to be the mean and the standard deviation. However, instead of global statistics, regional statistics are used. To do this [Z. Rahman, 2009]:

1. Divide the *M*<sub>1</sub>×*M*<sub>2</sub> image *I* into non-overlapping blocks *I<sub>i</sub>* of size (*M*<sub>1</sub>/10)×(*M*<sub>2</sub>/10), *i* = 1,…,100, such that *I* ≈ ∪<sub>i</sub> *I<sub>i</sub>* (the total number of regions is 100).
2. For each block, compute the measures μ, σ, and *S*.
3. Classify the block as either GOOD or POOR based on the computed measures (discussed below).
4. Classify the image as a whole as GOOD or POOR based upon the classification of the regions (discussed below).
The following criteria are used for brightness, contrast and sharpness [Z. Rahman, 2009]:

1. Let μ<sub>n</sub> be the normalized brightness parameter, such that:

$$\mu_n = \begin{cases} \mu/255 & \mu < 154 \\ 1 - \mu/255 & \text{otherwise} \end{cases} \tag{29}$$

A region is considered to have sufficient brightness when 0.4 ≤ μ<sub>n</sub> ≤ 0.6.

2. Let σ<sub>n</sub> be the normalized contrast parameter, such that:

$$\sigma_n = \begin{cases} \sigma/128 & \mu \le 64 \\ 1 - \sigma/128 & \text{otherwise} \end{cases} \tag{30}$$

A region is considered to have sufficient contrast when 0.25 ≤ σ<sub>n</sub> ≤ 0.5. When σ<sub>n</sub> < 0.25 the region has poor contrast, and when σ<sub>n</sub> > 0.5 the region has too much contrast.

3. Let *S<sub>n</sub>* be the normalized sharpness parameter, such that *S<sub>n</sub>* = min(2.0, *S*/100). When *S<sub>n</sub>* > 0.8, the region has sufficient sharpness. Image quality is evaluated using:

$$Q = 0.5\mu_n + \sigma_n + 0.1\,S_n \tag{31}$$

where 0 < *Q* < 1.0 is the quality factor. A region is classified as GOOD when *Q* > 0.55, and POOR when *Q* ≤ 0.5. An image is classified as GOOD when the total number of regions classified as GOOD satisfies *N<sub>G</sub>* > 0.6*N*.

#### **3.3 Experimental result**

The image samples for ETNUD were selected to be as diverse as possible so that the results would be as general as possible. MATLAB was used for the AINDANE and IRME algorithms, whose code was developed by the author and research team. MSRCR enhancement was done with the commercial software PhotoFlair. From visual experience, the following statements are made about the proposed algorithm:

Fig. 2. Comparisons of Enhancement Techniques: (top-left) Original; (top-right) IRME; (middle-left) Gamma correction, g = 1.4; (middle-right) MSR; (bottom-left) AINDANE; (bottom-right) ETNUD.

Table 1. The Results of Evaluation Criteria for Figure 2.

#### **3.4 Conclusion**

The ETNUD image enhancement algorithms provide high color accuracy and a better balance between the luminance and contrast in images.

**4. Entropy-based image fusion with Continuous Genetic Algorithm**

#### **4.1 Introduction**

Image fusion is defined as the process of combining information from two or more images of a scene to enhance the viewing or understanding of that scene. The images that are to be fused can come from different sensors, or have been acquired at different times, or from different locations. Hence, the first step in any image fusion process is the accurate registration of the image data. This is relatively straightforward if parameters such as the instantaneous field-of-view (IFOV), and the locations and orientations from which the images are acquired, are known, especially when the sensor modalities produce images that use the same coordinate space. It is more of a challenge when the sensor modalities differ significantly and registration can only be accomplished at the information level. The goal of the fusion process is to preserve all relevant information in the component images and place it in the fused image (FI). This requires that the process minimize the noise and other artifacts in the FI. Because of this, the fusion process can also be regarded as an optimization problem [K. Kannan and S. Perumal, 2002]. In recent years, image fusion has been applied to a number of diverse areas such as remote sensing [T. A. Wilson and S. K. Rogers, 1997], medical imaging [C. S. Pattichis and M. S. Pattichis, 2001], and military applications [B. V. Dasarathy, 2002].

Image fusion can be divided into three processing levels: pixel, feature, and decision. These methods increase in abstraction from the pixel to the feature to the decision level. In the pixel-level approach, simple arithmetic rules, like averaging of individual pixel intensities, or more sophisticated combination schemes are used to construct the fused image. At the feature level, the image is classified into regions with known labels, and these labeled regions from different sensor modalities are used to combine the data. At the decision level, a combination of rules can be used to include part of the data or not.
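As a minimal illustration of the pixel-level approach described in Section 4, two registered images can be fused by simple intensity averaging. This sketch is our own and is not the entropy-based genetic-algorithm fusion developed in this chapter; it only shows the simplest arithmetic combination rule.

```python
import numpy as np

def average_fusion(img_a, img_b):
    """Pixel-level fusion of two registered, same-size uint8 images by averaging."""
    assert img_a.shape == img_b.shape, "fusion requires registered images"
    # Average in float to avoid uint8 overflow, then convert back.
    return ((img_a.astype(np.float64) + img_b.astype(np.float64)) / 2.0).astype(np.uint8)
```

Registration is assumed to have been done beforehand; without it, averaging blends unrelated scene content and destroys detail rather than combining it.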
