Image Enhancement Methods for Remote Sensing: A Survey

*Nur Huseyin Kaplan, Isin Erer and Deniz Kumlu*

*In: Recent Remote Sensing Sensor Applications - Satellites and Unmanned Aerial Vehicles (UAVs). DOI: http://dx.doi.org/10.5772/intechopen.98527*

## **Abstract**

The quality of the images obtained from remote sensing devices is very important for many image processing applications. Most enhancement methods are based either on histogram modification or on transforms. Histogram modification based methods aim to modify the histogram of the input image to obtain a more uniform distribution. Transform based methods apply a certain transform to the input image, enhance the image in the transform domain, and then apply the inverse transform. In this work, both histogram modification and transform domain methods are considered, as well as hybrid methods, and a new hybrid algorithm is proposed for remote sensing image enhancement. Visual as well as quantitative comparisons have been carried out for the different enhancement methods. For objective comparison, quality metrics, namely Contrast Gain, Enhancement Measurement, Discrete Entropy and Absolute Mean Brightness Error, have been used. The comparisons show that the histogram modification methods achieve better contrast improvement, while the transform domain methods perform better in edge enhancement and color preservation. Moreover, hybrid methods, which combine the two approaches, have higher potential.

**Keywords:** Remote Sensing, Image Enhancement, Histogram Modification, Transform Domain Methods, Image Decomposition

#### **1. Introduction**

Widely used remote sensing applications, such as mapping, classification, soil moisture detection, and target detection and tracking, require high quality images. To meet the increasing need for higher quality images, image enhancement methods, which improve the contrast and edge information of the input image, are applied to the raw input images.

Images provided by remote sensing devices have to be enhanced by special methods instead of standard enhancement methods. Since applications like classification, target detection and target tracking are automated applications, the original reflectance values of the input image should be preserved as much as possible, which makes enhancing the remotely sensed image a challenging problem [1, 2]. Remote sensing image enhancement techniques should improve the visibility, contrast and edge information of the image while preserving the original reflectance values.

In recent years, many remote sensing image enhancement methods have been developed to increase the quality of these images. Image enhancement methods can be divided into two main groups, direct and indirect methods [3–5]. Direct methods aim to enhance the images by using a defined contrast measure [6–9], while indirect methods try to improve the dynamic range of the images without a contrast measurement [10–15].

In direct methods, contrast measurements can be global or local; in general, local measurements give better results [9]. Dhawan et al. [6] proposed a local contrast function based on the relative difference between a central region and a neighboring region for a given pixel. Beghdad and Negrate [7] introduced an improvement of [6] by defining the contrast with consideration of edge information. Laxmikant Dash and Chatterji [8] proposed an adaptive contrast enhancement method in which the contrast amplification is based on the brightness estimated from local image statistics. Cheng and Xu [9] proposed another adaptive enhancement method based on the fuzzy entropy principle and fuzzy set theory.

The direct methods have a low computational cost but, accordingly, a poor image enhancement performance. The state-of-the-art methods are generally indirect methods, which provide better enhancement performance than the direct methods. The indirect methods can be divided into two subcategories: histogram modification based methods [3, 4, 16–22] and transform domain methods [1, 2, 21, 23–25].

The simplest histogram modification method is Histogram Equalization (HE) [16], in which the histogram of the input image is mapped toward a uniform distribution. This method is able to improve the contrast; however, HE-enhanced images generally suffer from undersaturation or oversaturation, which results in poor quality images. To alleviate this problem, more efficient histogram modification methods have been proposed in recent years, such as Bi-Histogram Equalization (BHE) [17] and Recursive Mean-Separate Histogram Equalization (RMSHE) [18]. In both methods, the original histogram of the input image is divided into sub-histograms, separate histogram equalizations are applied to these sub-histograms, and the equalized sub-histograms are finally merged to obtain the enhanced image [17, 18]. The images obtained by these methods have higher quality compared to classical HE; however, the undersaturation and oversaturation problems are not resolved. 2-D histogram based methods have also been proposed for image enhancement [19, 20]. These methods provide better results than the aforementioned ones, but the computational cost of 2-D histogram creation is too high, which makes them unsuitable for automated applications; moreover, there are faster methods with higher enhancement performance. Another method in this subcategory is the Adaptive Gamma Correction with Weighting Distribution (AGCWD) method [4], in which a weighted distribution of the original histogram of the input image is obtained, followed by Gamma correction. The most important benefit of this method is its ability to preserve the original reflectance values, which is needed for remotely sensed image enhancement; however, this method too suffers from saturation artifacts, and the edge information is lost, especially in the brighter regions [2, 21].
Histogram modification methods perform well when the histogram of the input image is smooth. However, this group of methods eliminates the lower-scale details [22]. The histogram modification methods thus perform better on low resolution images and on images containing larger-scale details.

Transform domain based image enhancement methods use certain transformations to decompose the image into subbands and improve the contrast by modifying specific components [1, 2, 23–25]. The first method in this category uses a combination of discrete wavelet transform and singular value decomposition (DWT-SVD) [23]. In the DWT-SVD method, the discrete wavelet transform (DWT) is first applied both to the input image and to a version of the input image equalized by general histogram equalization. Since the details and edge information are kept in the high pass subbands, the method concentrates on the approximation subbands. A Singular Value Decomposition (SVD) is applied to the approximation subbands of the input and equalized images. The singular values calculated from the input image are weighted by the singular values of the equalized image to obtain enhanced singular values. Finally, inverse SVD followed by inverse DWT is applied to obtain the enhanced image.

A more recent transform domain method uses Bilateral Filtering (BF) for image enhancement [1]. The input image is decomposed into its approximation and detail layers by a multiscale BF, and the obtained detail layers are added to the original image in a weighted manner to obtain an edge enhanced image. Another method is Remote Sensing Image Enhancement based on the hazy image model (HIM) [2]. In this method, the commonly used hazy image model [26] is adapted for image enhancement: the two unknown parameters of the model, namely the airlight and the transmission, are estimated from simple statistical properties of the input image to obtain the enhanced image. A more recent work is based on Robust Guided Filtering [24]. Here, the robust guided filter described in [27] is applied to the input image, and the difference between the original and filtered images is considered a detail subband, as in DWT. The detail subbands are amplified and added to the original image to obtain the final enhanced image. Although they show a better performance, the methods in this group suffer from blocking artifacts or, in some cases, are unable to enhance the image globally [22]. The overall performance of transform domain methods is better than that of the histogram modification methods.
Moreover, the performance of this group of methods is significantly better for high resolution images and for images containing both low and high scale details.

There are also hybrid methods combining the histogram and transform approaches. One such method is based on Regularized Histogram Equalization and the Discrete Cosine Transform (RHE-DCT) [25]. In this technique, a global enhancement is first applied to the input image by Regularized Histogram Equalization (RHE), where the equalization uses the sigmoid function. After obtaining the equalized image, the Discrete Cosine Transform (DCT) is applied to it to obtain the DCT coefficients. These coefficients are then modified to locally improve the contrast of the image. Finally, the inverse DCT is applied to obtain the enhanced image.

In addition to all these methods, a hybrid algorithm combining the BF [1] and HIM [2] methods has been proposed. In this hybrid algorithm, the BF method described above is applied to the image to obtain a globally enhanced image. Then, the HIM method is applied block by block to this globally enhanced image to obtain a local enhancement.

## **2. Remote sensing image enhancement methods**

The quality of remote sensing images depends on numerous factors such as noise, illumination or equipment conditions during the acquisition procedure [28]. The data obtained by optic sensors (multispectral, hyperspectral, panchromatic sensors) are degraded by atmospheric effects and instrumental noises, namely thermal (Johnson) noise, quantization noise and shot (photon) noise, which corrupt the spectral bands to varying degrees [29]. On the other hand, SAR images (radar sensors), which offer many benefits such as operating day and night and in all weather conditions, suffer from multiplicative speckle noise [28].

These degradations reduce the contrast in the resulting images and can highly affect human perception or the accuracy of computer assisted applications [25].

Thus, contrast enhancement, besides noise removal, constitutes a primary step in various remote sensing image processing applications, for better information representation and visual perception.

#### **2.1 Adaptive gamma correction with weighting distribution (AGCWD)**

In this method, a weighting distribution of the original histogram of the input image is obtained, followed by Gamma correction.

First, an adaptive Gamma correction is applied to the input image as:

$$T(l) = l\_{\max} \left(\frac{l}{l\_{\max}}\right)^{\gamma} = l\_{\max} \left(\frac{l}{l\_{\max}}\right)^{1 - F(l)}\tag{1}$$

Here, *l* is the intensity value of the current pixel and *l*<sub>max</sub> is the maximum intensity value of the input image. *γ* is a varying adaptive parameter equal to 1 − *F*(*l*), where *F*(*l*) is the cumulative distribution function. The cumulative distribution function is used so that the Gamma parameter follows the changes between the pixels of the image.

In order to avoid the adverse effects, a weighting distribution function is used so as to slightly modify the histogram as follows:

$$f\_{\alpha}(l) = f\_{\max} \left( \frac{f(l) - f\_{\min}}{f\_{\max} - f\_{\min}} \right)^{\alpha} \tag{2}$$

Here, *α* is the adjustment parameter, *f* is the probability density function, and *f*<sub>max</sub> and *f*<sub>min</sub> are the maximum and minimum of *f*. Using (2), the modified cumulative distribution function *F<sub>α</sub>* is evaluated by:

$$F\_{\alpha}(k) = \frac{\sum\_{l=0}^{k} f\_{\alpha}(l)}{\sum f\_{\alpha}} \tag{3}$$

where

$$\sum f\_{\alpha} = \sum\_{l=0}^{l\_{\max}} f\_{\alpha}(l) \tag{4}$$

Finally, the Gamma parameter of (1) is modified as:

$$\gamma = 1 - F\_{\alpha}(l) \tag{5}$$

The modified Gamma parameter and Eq. (1) are used to obtain the enhanced image.
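The AGCWD steps above can be sketched as follows. This is a minimal NumPy illustration, not the reference implementation of [4]; the function name and the use of a lookup table are choices of this sketch, and a non-degenerate histogram (not all mass at one level) is assumed.

```python
import numpy as np

def agcwd(image, alpha=0.5):
    """Sketch of adaptive gamma correction with weighting distribution.

    `image` is a 2-D uint8 array; `alpha` is the adjustment parameter of Eq. (2).
    """
    l_max = 255
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    f = hist / hist.sum()                                  # probability density function
    f_min, f_max = f.min(), f.max()
    # Eq. (2): weighting distribution (assumes f_max > f_min)
    f_w = f_max * ((f - f_min) / (f_max - f_min)) ** alpha
    # Eqs. (3)-(4): modified cumulative distribution function
    F_w = np.cumsum(f_w) / f_w.sum()
    # Eqs. (1) and (5): adaptive gamma correction with gamma = 1 - F_w(l),
    # precomputed as a lookup table over all 256 gray levels
    levels = np.arange(256)
    lut = l_max * (levels / l_max) ** (1.0 - F_w)
    return lut[image].astype(np.uint8)
```

Because the exponent 1 − *F<sub>α</sub>*(*l*) lies in [0, 1], each output level is at least as bright as the input level, which matches the brightening behavior described above.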

#### **2.2 Discrete wavelet transform and singular value decomposition based method (DWT-SVD)**

In this method, a combination of the discrete wavelet transform (DWT) and singular value decomposition (SVD) is used for enhancement purposes. In the classical one-dimensional (1D) DWT, the input signal is decomposed into its low (*L*) and high (*H*) frequency components. To perform a two-dimensional (2D) transform, the 1D DWT is applied to the rows of the image followed by the columns, or vice versa. After applying the 2D DWT, four subbands are obtained, namely *LL*, *LH*, *HL*, and *HH*. The approximation subband *LL* contains the low frequency components, while the diagonal subband *HH* contains the high frequency components for both rows and columns of the image. The horizontal and vertical subbands *LH* and *HL* contain the low frequency components of the rows and the high frequency components of the columns, and vice versa, respectively.

SVD is used to decompose a matrix into two orthogonal square matrices (*U* and *V*) and a diagonal matrix containing the singular values (Σ), as shown:

$$I = U\_I \Sigma\_I V\_I^T \tag{6}$$

The enhancement method firstly applies a general histogram equalization to the input image *I* to obtain equalized image ~*I*. Then discrete wavelet transform is applied to both the input and equalized images so as to obtain the subbands *LLI*, *LHI*, *HLI*, *HHI* and *LL*~*I*, *LH*~*I*, *HL*~*I*, *HH*~*I*, respectively.

Since the rough information about the image is present in the *LL* subbands, SVD is applied to these subbands to obtain the singular values. As aforementioned, the singular values contain the intensity information of the image; therefore, equalization is performed on the singular values. Here, the Σ components of *LLI* and *LL*~*I* are weighted to obtain a correction coefficient (*ξ*):

$$\xi = \frac{\max\left(\Sigma\_{LL\_{\tilde{I}}}\right)}{\max\left(\Sigma\_{LL\_{I}}\right)}\tag{7}$$

where Σ*LL*~*<sup>I</sup>* is the singular value matrix of the equalized image derived from its *LL*<sup>~</sup>*<sup>I</sup>* subband and Σ*LLI* is the singular value matrix of the input image obtained from its *LLI* subband. After determining the correction coefficient, the corrected singular value matrix Σ is obtained as:

$$\Sigma = \xi \Sigma\_{LL\_I} \tag{8}$$

Here, Σ is the corrected singular value matrix. The new *LL* subband is constructed as:

$$LL = U\_{LL\_I} \Sigma V\_{LL\_I}^T \tag{9}$$

After constructing the new *LL* subband, the enhanced image is obtained by performing the inverse DWT on this new *LL* subband and the detail subbands of the original image.
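A compact sketch of the procedure is given below. To stay self-contained, a hand-rolled one-level Haar DWT stands in for a wavelet library; the function names are illustrative, not from [23], and `eq_img` is assumed to be the histogram-equalized version of `img`.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar transform (rows then columns); x has even dimensions."""
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)   # row low-pass
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)   # row high-pass
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2: undo the column step, then the row step."""
    lo = np.zeros((ll.shape[0] * 2, ll.shape[1]))
    hi = np.zeros_like(lo)
    lo[0::2], lo[1::2] = (ll + lh) / np.sqrt(2), (ll - lh) / np.sqrt(2)
    hi[0::2], hi[1::2] = (hl + hh) / np.sqrt(2), (hl - hh) / np.sqrt(2)
    x = np.zeros((lo.shape[0], lo.shape[1] * 2))
    x[:, 0::2], x[:, 1::2] = (lo + hi) / np.sqrt(2), (lo - hi) / np.sqrt(2)
    return x

def dwt_svd_enhance(img, eq_img):
    """DWT-SVD enhancement sketch: Eqs. (7)-(9) on the LL subbands."""
    ll, lh, hl, hh = haar_dwt2(img)
    ll_eq, *_ = haar_dwt2(eq_img)
    # Eq. (7): correction coefficient from the largest singular values
    s = np.linalg.svd(ll, compute_uv=False)
    s_eq = np.linalg.svd(ll_eq, compute_uv=False)
    xi = s_eq.max() / s.max()
    # Eqs. (8)-(9): rescale the singular values of the input LL subband
    u, sv, vt = np.linalg.svd(ll, full_matrices=False)
    ll_new = u @ np.diag(xi * sv) @ vt
    # Inverse DWT with the new LL and the original detail subbands
    return haar_idwt2(ll_new, lh, hl, hh)
```

Note that when the equalized image equals the input, *ξ* = 1 and the sketch returns the input unchanged, which is a useful sanity check.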

#### **2.3 Regularized histogram equalization and discrete cosine transform based method (RHE-DCT)**

This method basically consists of two steps: Regularized Histogram Equalization (RHE) followed by Discrete Cosine Transform (DCT). The first one performs a global contrast enhancement and the second one enhances the local contrast.

RHE performs a histogram equalization of the input image in a regularized manner as:

$$f(k) = s(k)(\mathbf{1} + h(k))\tag{10}$$

Here *f*(*k*) is the probability density function of the equalized histogram, *h*(*k*) is the normalized histogram of the input image, and *s*(*k*) is the sigmoid function defined as:


$$s(k) = \frac{1}{1 + e^{-(k-1)}} - \frac{1}{2} \tag{11}$$

By this modification, the minimum value of the equalized image is assured to be equal to 0. The obtained *f*(*k*) is normalized as:

$$f(k) \leftarrow \frac{f(k)}{\sum\_{t=1}^{K} f(t)}\tag{12}$$

Here *K* is the number of gray levels. The cumulative distribution function *F*(*k*) is obtained as:

$$F(k) = \sum\_{t=1}^{k} f(t) \tag{13}$$

and new gray levels are evaluated as:

$$y(k) = \left[ F(k) \left( y\_{\max} - y\_{\min} \right) + y\_{\min} \right] \tag{14}$$

Finally, the equalized image *Y<sub>eq</sub>* is obtained by applying a standard lookup table based HE procedure.

To perform a local enhancement, the DCT coefficients of the globally equalized image are used. For this purpose, the DCT is first applied to the equalized image as:

$$C(h,w) = c\_{h}c\_{w} \sum\_{k=0}^{M-1} \sum\_{l=0}^{N-1} Y\_{eq}(k,l) \cos\left[\frac{(2k+1)h\pi}{2M}\right] \cos\left[\frac{(2l+1)w\pi}{2N}\right] \tag{15}$$

*c<sub>h</sub>* and *c<sub>w</sub>* are computed by:

$$c\_h = \begin{cases} \sqrt{\frac{1}{M}}, & h = 0 \\ \sqrt{\frac{2}{M}}, & 1 \le h \le M - 1 \end{cases} \tag{16}$$

$$c\_w = \begin{cases} \sqrt{\frac{1}{N}}, & w = 0 \\ \sqrt{\frac{2}{N}}, & 1 \le w \le N - 1 \end{cases} \tag{17}$$

The lower absolute values of *C* should be adjusted to perform local enhancement, while the higher values should be maintained to avoid drastic changes. In this way, new DCT coefficients are obtained as:

$$C'(h, w) = \begin{cases} C(h, w), & |C(h, w)| > 0.01\,C(0, 0) \\ \alpha C(h, w), & |C(h, w)| \le 0.01\,C(0, 0) \end{cases} \tag{18}$$

Here *α* is the adjustment parameter and is automatically determined as:

$$\alpha = 1 + \sqrt{\frac{\text{std}\left(Y\_{global}\right) - \text{std}(X)}{2^B - 1}}\tag{19}$$

where *Y<sub>global</sub>* is the globally equalized image, *X* is the input image, and *B* is the bit depth of the image.

After obtaining the new DCT coefficients, inverse DCT is applied to obtain the final enhanced image.
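The local-contrast step of Eqs. (15)–(18) can be sketched as follows. The orthonormal DCT is built directly from the cosine matrices of (15)–(17); `boost_local_contrast` is an illustrative name, not from [25], and explicitly leaving the DC coefficient unscaled is a safeguard of this sketch.

```python
import numpy as np

def _dct_matrix(M):
    """Orthonormal DCT-II matrix D[h, k] = c_h * cos((2k+1) h pi / (2M))."""
    k = np.arange(M)[None, :]
    h = np.arange(M)[:, None]
    scale = np.where(h == 0, np.sqrt(1 / M), np.sqrt(2 / M))
    return scale * np.cos((2 * k + 1) * h * np.pi / (2 * M))

def dct2(y):
    """2-D DCT-II with the normalization of Eqs. (15)-(17)."""
    Dm, Dn = _dct_matrix(y.shape[0]), _dct_matrix(y.shape[1])
    return Dm @ y @ Dn.T

def idct2(c):
    """Inverse 2-D DCT (the DCT matrices are orthogonal)."""
    Dm, Dn = _dct_matrix(c.shape[0]), _dct_matrix(c.shape[1])
    return Dm.T @ c @ Dn

def boost_local_contrast(y_eq, alpha):
    """Eq. (18): amplify small DCT coefficients, keep large ones and the DC term."""
    c = dct2(y_eq)
    small = np.abs(c) <= 0.01 * np.abs(c[0, 0])
    small[0, 0] = False               # never scale the DC coefficient
    c = np.where(small, alpha * c, c)
    return idct2(c)
```

With *α* = 1 the image passes through unchanged, which matches Eq. (19) always giving *α* ≥ 1.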

## **2.4 Bilateral filtering based method (BF)**

This method is based on multiscale bilateral filtering. In classical bilateral filtering, the filter output is given by:

$$\text{BF}[I] = \frac{1}{W} \sum\_{q \in S} G\_{\sigma\_{s}}(\|p - q\|)\, G\_{\sigma\_{r}}(\|I\_p - I\_q\|)\, I\_q \tag{20}$$

where

$$W = \sum\_{q \in S} G\_{\sigma\_s}(\|p - q\|)\, G\_{\sigma\_r}\left(\|I\_p - I\_q\|\right) \tag{21}$$

Here *σ<sup>s</sup>* and *σ<sup>r</sup>* are the standard deviations of the Gaussian kernels controlling the spatial and range weighting, respectively. *I<sub>p</sub>* is the intensity value of the pixel at location *p*, and *I<sub>q</sub>* is the intensity value of a neighboring pixel at location *q* within the window *S*. The difference between the input image and the filter output gives the detail layer of the image.

$$D^1 = I - \mathbf{BF}[I] \tag{22}$$

Here, *D*<sup>1</sup> is the first detail layer of the image. To carry on the decomposition, bilateral filtering is applied again to the filter output; to guarantee shift invariance, *σ<sup>s</sup>* is doubled and *σ<sup>r</sup>* is halved at each level. The detail layer of a given level is obtained by subtracting two adjacent filter outputs:

$$D^j = \mathbf{BF}^{j-1}[I] - \mathbf{BF}^{j}[I] \tag{23}$$

Here, *j* corresponds to the decomposition level, with **BF**<sup>0</sup>[*I*] = *I*.

In order to reconstruct the input image from an *L* levels of decomposition, one can simply add all detail layers to the final filtering output as:

$$I = \sum\_{j=1}^{L} D^j[I] + \mathbf{BF}^L[I] \tag{24}$$

The bilateral filtering based method first decomposes the input image by (24). After obtaining the detail layers for *L* levels, the details are amplified and added to the final filtering output as:

$$I\_E = \mathbf{BF}^L[I] + \sum\_{j=1}^L \omega\_j D^j[I] \tag{25}$$

Here, *I<sub>E</sub>* is the enhanced image and *ω<sub>j</sub>* are the weighting factors for the corresponding detail subbands *D*<sup>*j*</sup>[*I*].

The parameter determination is very important to achieve a good enhancement result. Therefore, the *σ<sub>r</sub>*, *σ<sub>s</sub>*, and *S* parameters of the bilateral filter, as well as the decomposition level and the weights, have to be determined. To achieve this, the enhancement results obtained with differing parameters are compared. As a result of this comparison, *σ<sub>r</sub>* is chosen as 0.6, *σ<sub>s</sub>* is chosen as 1.8, *S* is chosen as a 5 × 5 window, the decomposition level is chosen as 4, and the weights (*ω*1, *ω*2, *ω*3, *ω*4) are all set to 2 [1].
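The decomposition and boosting steps can be sketched as below. The brute-force bilateral filter (fixed window radius, float image, illustrative names) is only practical for small images and is not the implementation of [1]; with all weights set to 1 the telescoping sum of Eq. (24) reconstructs the input exactly.

```python
import numpy as np

def bilateral(img, sigma_s, sigma_r, radius=2):
    """Brute-force bilateral filter (Eqs. (20)-(21)) over a fixed window."""
    pad = np.pad(img, radius, mode='reflect')
    out = np.zeros_like(img, dtype=float)
    wsum = np.zeros_like(img, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy: radius + dy + img.shape[0],
                          radius + dx: radius + dx + img.shape[1]]
            # spatial weight times range weight, Eq. (20)
            g = (np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                 * np.exp(-(shifted - img) ** 2 / (2 * sigma_r ** 2)))
            out += g * shifted
            wsum += g                  # normalization term W, Eq. (21)
    return out / wsum

def bf_enhance(img, levels=4, sigma_s=1.8, sigma_r=0.6, weights=(2, 2, 2, 2)):
    """Multiscale BF decomposition (Eqs. (22)-(24)) and detail boosting (Eq. (25))."""
    details, current = [], img.astype(float)
    for j in range(levels):
        # sigma_s doubled and sigma_r halved at each level
        filtered = bilateral(current, sigma_s * 2 ** j, sigma_r / 2 ** j)
        details.append(current - filtered)   # D^j = BF^{j-1}[I] - BF^j[I]
        current = filtered
    return current + sum(w * d for w, d in zip(weights, details))
```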

#### **2.5 Adaptive cuckoo search based enhancement algorithm (ACSEA)**

In this method, the image enhancement is performed by optimizing a predefined enhancement kernel [30]. The enhancement process of ACSEA is given below:

$$I\_{E(i,j)} = \left(\mu\_{(i,j)}^L\right)^a + F\_{(i,j)}^{e} \left(I\_{(i,j)} - c\,\mu\_{(i,j)}^L\right) \tag{26}$$

where

$$F\_{(i,j)}^{e} = k \frac{\mu^G}{\sigma\_{(i,j)}^L + b} \tag{27}$$

Here, *F*<sub>(*i*,*j*)</sub><sup>*e*</sup>, called the image enhancement function, is calculated from the mean value and standard deviation of the image. (*i*, *j*) is the location of the current pixel. *σ*<sub>(*i*,*j*)</sub><sup>*L*</sup> is the local standard deviation and *μ*<sub>(*i*,*j*)</sub><sup>*L*</sup> is the local mean value, both calculated in a window of size *N* × *N* centered at (*i*, *j*), while *μ<sup>G</sup>* is the global mean value. The method focuses on optimizing the parameters (*a*, *b*, *c*, *k*), where 0 ≤ *a* ≤ 1.5, 0 ≤ *b* ≤ 0.5, 0 ≤ *c* ≤ 1, and 0.5 ≤ *k* ≤ 1.5.

In order to optimize the enhancement formula given in (26), a chaotic initialization is made and an objective fitness function is used as given below:

$$F(I\_E) = \log\left(\log\left(E\left(I\_E^s\right)\right) + \varepsilon\right) \frac{N\_e\left(I\_E^s\right)}{MN} e^{H(I\_E)}\tag{28}$$

where

$$I\_E^s = \sqrt{\left(\nabla\_x I\_E\right)^2 + \left(\nabla\_y I\_E\right)^2} \tag{29}$$

In (28), *E*(·) is the expected value operator, *H*(·) is the entropy operator, and *N<sub>e</sub>*(·) counts the edge pixels of an *M* × *N* image. In (29), ∇*<sub>x</sub>* and ∇*<sub>y</sub>* are the gradients, and *I<sub>E</sub><sup>s</sup>* is the Sobel edge detected image.

To optimize the enhanced image *I<sub>E</sub>*, the objective function given in (28) is maximized with a chaotic initialization so as to obtain the best enhancement result.
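The enhancement kernel of Eqs. (26)–(27) for a single parameter candidate can be sketched as below. The cuckoo search loop that optimizes (*a*, *b*, *c*, *k*) against the fitness (28) is omitted, and the function names and the 3 × 3 local window are choices of this sketch, not from [30].

```python
import numpy as np

def local_stats(img, n=3):
    """Local mean and standard deviation over an n x n window (reflect padding)."""
    pad = np.pad(img, n // 2, mode='reflect')
    windows = np.lib.stride_tricks.sliding_window_view(pad, (n, n))
    return windows.mean(axis=(2, 3)), windows.std(axis=(2, 3))

def acsea_transform(img, a, b, c, k):
    """Enhancement kernel of Eqs. (26)-(27) for one candidate (a, b, c, k)."""
    mu_l, sigma_l = local_stats(img)
    mu_g = img.mean()
    F = k * mu_g / (sigma_l + b)              # Eq. (27)
    return mu_l ** a + F * (img - c * mu_l)   # Eq. (26)
```

A cuckoo search would repeatedly call `acsea_transform` with candidate parameter vectors inside the stated ranges and keep the candidate that maximizes the fitness of Eq. (28).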

#### **2.6 Hazy image model based enhancement (HIM)**

This method is based on the commonly used hazy image model [26, 31].

$$I = \mathcal{J}t + A(\mathbf{1} - t) \tag{30}$$

where *I* is the input image, *A* is the airlight coefficient, *t* is the transmission map and *J* is the haze free image. To obtain the haze free image *J*, *A* and *t* have to be estimated.

For dehazing purposes, the airlight coefficient is generally estimated from the brightest pixels of the input image. For enhancement, the mean of the image is instead assumed to be the airlight coefficient [2]:

$$A = \frac{1}{KL} \sum\_{k=1}^{K} \sum\_{l=1}^{L} I(k, l) \tag{31}$$


*I* is the input image with dimensions *K* × *L*, and *I*(*k*, *l*) is the intensity value of the pixel at location (*k*, *l*). In dehazing algorithms, the transmission map is generally estimated from the airlight coefficient and the input image normalized by it. In a similar manner, this method also normalizes the image with the estimated airlight and estimates the transmission as:

$$t = 1 - \omega\left(\frac{I}{A}\right) \tag{32}$$

Here, *ω* is an arbitrary coefficient, which can be determined as the standard deviation (*σ*) of the input image [2]. Finally, the enhanced image is obtained by solving (30) for *J*:

$$J = \frac{I - A(\mathbf{1} - t)}{t} \tag{33}$$
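A minimal sketch of the HIM enhancement, assuming a single-channel float image scaled to [0, 1]; the clipping of *t* is an added safeguard of this sketch against division by very small transmission values, not part of [2].

```python
import numpy as np

def him_enhance(img):
    """Hazy-image-model enhancement sketch: Eqs. (31)-(33)."""
    A = img.mean()                    # Eq. (31): airlight as the image mean
    omega = img.std()                 # weighting coefficient omega = sigma, per [2]
    t = 1 - omega * (img / A)         # Eq. (32): transmission estimate
    t = np.clip(t, 0.1, 1.0)          # safeguard: avoid division by tiny t
    return (img - A * (1 - t)) / t    # Eq. (33): solve the model for J
```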

#### **2.7 Robust guided filtering based method (SDF)**

This method uses the Robust Guided Filtering described in [27], which uses two guidance signals, namely a dynamic guidance and a static guidance. To perform Robust Guided Filtering, the following cost function is minimized:

$$\mathcal{E}(u) = \sum\_{i} c\_{i} \left( u\_{i} - f\_{i} \right)^{2} + \lambda \Omega(u, g) \tag{34}$$

Here, *f* is the input image, *u* is the dynamic guidance and *g* is the static guidance. *λ* is the regularization parameter and *c<sub>i</sub>* ≥ 0 is the confidence level. The regularizer Ω(*u*, *g*) can be defined as in [27]:

$$\Omega(u, \mathbf{g}) = \sum\_{i, j \in N} \phi\_{\mu} \left( \mathbf{g}\_{i} - \mathbf{g}\_{j} \right) \rho\_{v} \left( u\_{i} - u\_{j} \right) \tag{35}$$

where

$$\rho\_v(x) = \frac{1 - \phi\_v(x)}{v} \quad \text{and} \quad \phi\_\mu(x) = e^{-\mu x^2} \tag{36}$$

*N* is the neighborhood, of size 8 × 8, while *μ* and *v* are parameters controlling the smoothness level.

In order to perform image enhancement, a multi-scale decomposition based on Robust Guided Filtering similar to the multi-scale bilateral filtering is proposed in [1]. The filtering output is considered as the first approximation layer of the original image as:

$$A\_1 = \text{SDF}[I] \tag{37}$$

Here, *I* is the input image, *A*<sup>1</sup> is the first level approximation layer and SDF operator stands for Robust Guided Filtering. In order to obtain further levels of approximation layers, SDF is applied to previous approximation layer as:

$$A\_l = \text{SDF}[A\_{l-1}] \tag{38}$$

with the initial value *A*<sub>0</sub> = *I*. The difference between two adjacent approximation layers gives the detail layer of the corresponding level:

$$D\_l = A\_{l-1} - A\_l \tag{39}$$

One can obtain the original image by simply adding the detail layers to the final level approximation layer.

$$I = \sum\_{j=1}^{L} D\_j + A\_L \tag{40}$$

SDF based enhancement first decomposes the input image using (40). After obtaining the detail layers, the details are amplified and added to the final approximation layer as:

$$I\_E = \sum\_{j=1}^{L} w\_j D\_j + A\_L \tag{41}$$

The decomposition level and the weights are determined by comparing the results for different numbers of levels and different weights, and the settings giving the best results are applied to all images. Accordingly, the decomposition level is chosen as 4 and the weights (*ω*1, *ω*2, *ω*3, *ω*4) are all set to 2 [24].
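The multiscale decomposition and boosting of Eqs. (37)–(41) can be sketched as below. Since a full robust guided filter is beyond this sketch, a plain separable Gaussian smoother stands in for the SDF operator (so, unlike [24], the result is not edge-preserving), and all names are illustrative.

```python
import numpy as np

def smooth(img, sigma):
    """Stand-in smoother: a separable Gaussian replaces the robust guided filter."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    pad = np.pad(img, r, mode='reflect')
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode='valid'), 0, pad)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode='valid'), 1, tmp)

def sdf_enhance(img, levels=4, weights=(2, 2, 2, 2)):
    """Multiscale decomposition (Eqs. (37)-(40)) and detail boosting (Eq. (41))."""
    approx, details = img.astype(float), []
    for _ in range(levels):
        nxt = smooth(approx, sigma=1.0)        # Eq. (38): A_l = SDF[A_{l-1}]
        details.append(approx - nxt)           # Eq. (39): D_l = A_{l-1} - A_l
        approx = nxt
    return approx + sum(w * d for w, d in zip(weights, details))
```

With all weights equal to 1, the telescoping sum of Eq. (40) reconstructs the input exactly, regardless of which smoother is used.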

#### **2.8 Hybrid bilateral filtering and hazy image model method (BF-HIM)**

The BF based enhancement method [1] achieves good enhancement but introduces some color distortion, whereas the HIM method [2] has good color preservation but a lower enhancement performance. Therefore, a hybrid method combining these two is a good candidate for obtaining good enhancement along with good color preservation.

The hybrid method first applies the multi-scale bilateral filtering given in (24) to the input image to obtain the bilateral filtering outputs and detail layers. Since the HIM step follows, the decomposition level is chosen as 2. Then, the detail layers are amplified as in (25) to obtain a prior enhancement result. This prior enhanced image is divided into non-overlapping blocks, and the HIM method given above is applied to these blocks separately to perform a local enhancement. Finally, the enhanced blocks are combined to construct the final enhancement result.

Here, the choice of the block size is important: a smaller block size is expected to give a better local enhancement result. Therefore, the block size is chosen as 3 × 3.
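The block-wise local stage can be sketched as below. The global BF stage is omitted (its input is assumed to be the prior enhanced image already), `him_block` mirrors Section 2.6 with the same safeguard clipping, and blocks at the image border that do not fill a whole tile are left untouched in this illustrative version.

```python
import numpy as np

def him_block(block):
    """HIM local enhancement of one block (Eqs. (31)-(33)); see Section 2.6."""
    A = block.mean()
    t = np.clip(1 - block.std() * (block / A), 0.1, 1.0)
    return (block - A * (1 - t)) / t

def blockwise(img, size=3):
    """Apply HIM to non-overlapping size x size blocks of a globally enhanced image."""
    out = img.astype(float).copy()
    H, W = img.shape
    for i in range(0, H - H % size, size):
        for j in range(0, W - W % size, size):
            out[i:i + size, j:j + size] = him_block(out[i:i + size, j:j + size])
    return out
```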

#### **3. Evaluation criteria**

It is possible to judge the performance of an image enhancement method visually. However, a visual conclusion may not be objective; therefore, evaluation criteria have been developed for objective comparisons. The choice of criteria is also important. Each criterion gives an idea about one property of the resulting image, so criteria covering different properties of the image should be used, and all criteria should be considered together to form an overall assessment. The criteria presented below give an idea of the performance of the enhancement methods, but they should all be considered together and along with the visual results.

## **3.1 Contrast gain (CG)**

The first criterion used to measure the performance of an enhancement method is Contrast Gain (CG) [32]. This criterion focuses on the contrast improvement of the image as follows:

$$\text{CG} = \frac{\text{C}(Y)}{\text{C}(X)} \tag{42}$$

where *C* is the average local Michelson contrast, calculated over 3 × 3 windows within the image as:

$$C = \frac{\max - \min}{\max + \min} \tag{43}$$

A higher CG value indicates a better contrast improvement.
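A sketch of the metric, assuming single-channel images *X* (input) and *Y* (enhanced); the small constant in the denominator is an addition of this sketch that guards against division by zero in flat, zero-valued regions.

```python
import numpy as np

def contrast_gain(x, y, n=3):
    """Contrast gain (Eqs. (42)-(43)): ratio of mean local Michelson contrasts."""
    def mean_michelson(img):
        pad = np.pad(img.astype(float), n // 2, mode='reflect')
        w = np.lib.stride_tricks.sliding_window_view(pad, (n, n))
        mx, mn = w.max(axis=(2, 3)), w.min(axis=(2, 3))
        return np.mean((mx - mn) / (mx + mn + 1e-12))  # Eq. (43) per window
    return mean_michelson(y) / mean_michelson(x)       # Eq. (42)
```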

#### **3.2 Enhancement measurement (EME)**

This criterion also considers the contrast improvement within the enhanced image and is defined as follows [22]:

$$\text{EME}\_{\alpha,k\_1,k\_2}(\varphi) = \frac{1}{k\_1 k\_2} \sum\_{l=1}^{k\_1} \sum\_{k=1}^{k\_2} \alpha \left( \frac{I\_{\max}^{k,l}(\varphi)}{I\_{\min}^{k,l}(\varphi) + c} \right)^{\alpha} \ln \frac{I\_{\max}^{k,l}(\varphi)}{I\_{\min}^{k,l}(\varphi) + c} \tag{44}$$

Here the image *I* is split into *k*<sub>1</sub> × *k*<sub>2</sub> blocks. *I*<sub>max</sub><sup>*k*,*l*</sup> and *I*<sub>min</sub><sup>*k*,*l*</sup> are the maximum and minimum values within block (*k*, *l*), while *c* is a small constant to avoid division by zero. EME<sub>*α*,*k*1,*k*2</sub>(*φ*) is called the Enhancement Measurement of Entropy with respect to the transform *φ*.

A higher EME value indicates a better contrast improvement.
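Eq. (44) can be sketched as below, splitting the image into a *k*<sub>1</sub> × *k*<sub>2</sub> grid of blocks; the function name and default parameter values are illustrative, and strictly positive intensities are assumed so the logarithm is defined.

```python
import numpy as np

def eme(img, k1=8, k2=8, alpha=1.0, c=1e-4):
    """Enhancement measure of Eq. (44) over a k1 x k2 grid of blocks."""
    img = img.astype(float)
    H, W = img.shape
    total = 0.0
    for rows in np.array_split(np.arange(H), k1):
        for cols in np.array_split(np.arange(W), k2):
            block = img[np.ix_(rows, cols)]
            ratio = block.max() / (block.min() + c)     # I_max / (I_min + c)
            total += alpha * ratio ** alpha * np.log(ratio)
    return total / (k1 * k2)
```

For a constant image the ratio is close to 1 and the measure is close to 0; a higher-contrast image gives a larger value, matching the interpretation above.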

#### **3.3 Discrete entropy (DE)**

Discrete entropy of an image can be evaluated as:

$$\text{DE} = -\sum\_{k=1}^{K} p(\mathbf{x}\_k) \log p(\mathbf{x}\_k) \tag{45}$$

Here, *p*(*x<sub>k</sub>*) is the probability of gray level *x<sub>k</sub>*. A higher value of DE indicates a more smoothly distributed histogram, which may indicate a higher contrast.

#### **3.4 Absolute mean brightness error (AMBE)**

Absolute Mean Brightness Error (AMBE) [18] is an error function calculated between image *X* and image *Y* as:

$$\text{AMBE} = \frac{1}{\text{MN}} \sum\_{m=1}^{M} \sum\_{n=1}^{N} |X(m,n) - Y(m,n)| \tag{46}$$

Here, *M* and *N* are the dimensions of the images and (*m*, *n*) is the pixel location. A lower AMBE value indicates a better brightness preservation.
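Both metrics can be sketched in a few lines for 8-bit single-channel images. The entropy below uses base-2 logarithms (bits), an assumption of this sketch since Eq. (45) does not fix the base; the AMBE follows Eq. (46) literally as a per-pixel mean absolute difference.

```python
import numpy as np

def discrete_entropy(img):
    """Discrete entropy of Eq. (45) over the 256-bin gray-level histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # skip empty bins: 0 * log 0 -> 0
    return -np.sum(p * np.log2(p))     # base-2 logarithm assumed (bits)

def ambe(x, y):
    """Absolute mean brightness error of Eq. (46)."""
    return np.mean(np.abs(x.astype(float) - y.astype(float)))
```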

#### **4. Experimental setup**

The enhancement methods described above have been applied to several images, and comparisons of the methods are made both visually and quantitatively. Before applying the enhancement methods, the parameters for each method are determined.

#### **4.1 Visual comparison**

Visual comparisons are performed for different images and they are available online.<sup>1</sup>

The first image used for comparison is a tank image taken by a digital imaging system, as shown in **Figure 1(a)**. **Figure 1(b)**–**(i)** show the enhancement results obtained by the AGCWD, DWT-SVD, RHE-DCT, ACSEA, BF, HIM, SDF, and BF-HIM methods, respectively. For a closer look, a zoomed version of the area inside the red square is given inside the green square. As seen in **Figure 1(b)**, the AGCWD method has improved the contrast, but its color preservation is not good. The contrast improvement of DWT-SVD seems low, as seen in **Figure 1(c)**; even though its color preservation seems good, the edge information is lost, as seen in the zoomed area. The RHE-DCT method, shown in **Figure 1(d)**, has a good contrast improvement, but its color preservation is not good; its edge enhancement is better than that of the AGCWD and DWT-SVD methods. The ACSEA method in **Figure 1(e)** demonstrates a better color preservation, but its contrast improvement is not as good as that of the other methods, and its edge enhancement is lower than that of RHE-DCT. As seen in **Figure 1(f)**, the BF method preserves color like the ACSEA method and has a good edge enhancement performance. **Figure 1(g)** shows that the HIM method has a good color preservation capability, but its edge enhancement performance is not as good as that of the BF method. The SDF method, given in **Figure 1(h)**, has a very good edge enhancement performance, but its color preservation is lower than that of the ACSEA, HIM and BF methods. As demonstrated in **Figure 1(i)**, the hybrid BF-HIM method preserves the colors close to the ACSEA and BF methods and enhances the edge information better than the former methods.

The second image used for comparison is an aerial image taken by a digital imaging system mounted on an air vehicle, as shown in **Figure 2(a)**. **Figure 2(b)**–**(i)** show the enhancement results obtained by the AGCWD, DWT-SVD, RHE-DCT, ACSEA, BF, HIM, SDF, and BF-HIM methods, respectively. For a closer look, a zoomed version of the area inside the red square is given in the green square.

As seen in **Figure 2(b)**, the AGCWD method has improved the contrast; however, its color preservation is not good. The car within the zoomed area is visible. The contrast improvement of DWT-SVD is better than that of AGCWD, as seen in **Figure 2(c)**. Its color preservation is lower than that of AGCWD, and the

<sup>1</sup> http://sipi.usc.edu/database/

**Figure 1.**

*(a) Input image, enhancement results for (b) AGCWD (c) DWT-SVD, (d) RHE-DCT, (e) ACSEA, (f) BF, (g) HIM, (h) SDF, and (i) BF-HIM methods.*

visibility of the car in the zoomed area is not as good as with AGCWD. The RHE-DCT method, shown in **Figure 2(d)**, has good contrast improvement and better color preservation than the AGCWD and DWT-SVD methods. The edge enhancement of RHE-DCT is close to that of AGCWD, as seen in the zoomed area. The ACSEA method in **Figure 2(e)** demonstrates better color preservation, but its contrast improvement is not as good as that of the other methods, and its edge enhancement is lower than that of RHE-DCT. As seen in **Figure 2(f)**, the BF method preserves color similarly to ACSEA and has better edge enhancement performance than RHE-DCT, as seen in the zoomed area. **Figure 2(g)** shows that the HIM method has good color preservation capability, but its edge enhancement performance is weaker than that of BF. The SDF method, given in **Figure 2(h)**, has very good edge enhancement performance, but its color preservation is lower than that of the ACSEA, HIM and BF methods. As demonstrated in **Figure 2(i)**, the hybrid BF-HIM method preserves colors nearly as well as the HIM method; a closer look demonstrates that its edge improvement is better than that of the former methods.

**Figure 2.**

*(a) Input image, enhancement results for (b) AGCWD (c) DWT-SVD, (d) RHE-DCT, (e) ACSEA, (f) BF, (g) HIM, (h) SDF, and (i) BF-HIM methods.*

The final image used for comparison is an aerial image of an area containing a harbor and an airport, taken by a digital imaging system mounted on an air vehicle, as shown in **Figure 3(a)**. **Figure 3(b)**–**(i)** show the enhancement results obtained by the AGCWD, DWT-SVD, RHE-DCT, ACSEA, BF, HIM, SDF, and BF-HIM methods, respectively. For a closer look, the area shown in the red square is zoomed and given within the green square.

As seen in **Figure 3(b)**, the AGCWD method has improved the contrast; however, its color preservation is not good, and the edge information is lost, as seen in the zoomed area. The contrast improvement of DWT-SVD seems to be low, as seen in **Figure 3(c)**; even though its color preservation seems to be good, the edges have not been improved, as seen in the zoomed area. The RHE-DCT method, shown in **Figure 3(d)**, has good contrast improvement and good color preservation; its edge enhancement is better than that of the AGCWD and DWT-SVD methods. The ACSEA method in **Figure 3(e)** demonstrates good color preservation and a fine contrast improvement; moreover, its edge improvement seems to be better than that of RHE-DCT. As seen in **Figure 3(f)**, the BF method

**Figure 3.**

*(a) Input image, enhancement results for (b) AGCWD (c) DWT-SVD, (d) RHE-DCT, (e) ACSEA, (f) BF, (g) HIM, (h) SDF, and (i) BF-HIM methods.*

preserves color better than the ACSEA method and has good edge enhancement performance. **Figure 3(g)** shows that the HIM method has good color preservation capability, but its edge enhancement performance is not good enough compared to that of BF. The SDF method, given in **Figure 3(h)**, has very good edge enhancement performance, but its color preservation is lower than that of the ACSEA, HIM and BF methods. As demonstrated in **Figure 3(i)**, the hybrid BF-HIM method preserves colors nearly as well as the ACSEA method and enhances the edge information better than the former methods.

For a more objective visual evaluation, the intensity profiles along the horizontal lines marked in **Figure 1(a)**, **Figure 2(a)** and **Figure 3(a)** are constructed for the enhancement methods; the profiles drawn for the original images, together with those of the enhancement methods, are given in **Figure 4(a)**–**(c)**, respectively.
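Such a line profile is simply the vector of intensity values read along one image row. A minimal sketch of how it can be extracted (the function name `line_profile` and the choice of averaging color channels into luminance are assumptions for illustration, not part of the chapter):

```python
import numpy as np

def line_profile(image, row):
    """Return the intensity profile along a given horizontal line.

    For color (H x W x C) images, the mean over channels is used as a
    simple luminance proxy; grayscale (H x W) images are used directly.
    """
    img = np.asarray(image, dtype=float)
    if img.ndim == 3:
        img = img.mean(axis=2)  # collapse color channels to one intensity
    return img[row, :]          # all columns of the selected row
```

Plotting such profiles for the original and each enhanced image on common axes yields comparisons like those in **Figure 4**: a method that tracks the original curve preserves detail, and a wider vertical span indicates stronger contrast.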

According to **Figure 4(a)**, the DWT-SVD and ACSEA methods cannot follow the intensity changes, which means the details of the image are lost for these methods. The BF-HIM method follows the changes better. Moreover, BF-HIM has increased the intensity range more than the other methods, which indicates that its contrast improvement is better.

According to **Figure 4(b)**, all three methods seem to follow the pattern of the original image properly in general. The ACSEA method has lost the pattern in some parts, while the SDF and BF-HIM methods have followed it better. Moreover, BF-HIM seems to have a slightly wider intensity range than SDF as well.

According to **Figure 4(c)**, all three methods seem to follow the pattern of the original image properly. The AGCWD method has increased the intensity values in general, which results in a brighter region; as a result, its contrast improvement is not good enough. Similarly, the HIM method has decreased the intensity values, which results in a darker region. The BF-HIM method has improved the contrast better than the other methods.

Therefore, according to the visual comparisons, the higher the detail level within the image, the better the results for methods like AGCWD and RHE-DCT, as expected, since both methods use histogram modification.

It can also be concluded that histogram modification methods like AGCWD and RHE-DCT perform well if the resolution is low (**Figure 1**) and/or the input image contains higher-scale edge information (**Figure 2**), while transform domain methods are generally better for high-resolution images and/or images containing small-scale details. Transform domain methods also seem to have a solid performance for high-scale details.

In addition, considering all aspects of the resulting images (color preservation, contrast improvement, and edge enhancement), the BF and hybrid BF-HIM methods seem to give better results. Moreover, the hybrid BF-HIM method seems to be the best when all three aspects are considered.

#### **4.2 Quantitative comparison**

In order to perform an objective comparison, the aforementioned criteria are evaluated for the enhancement results of the methods whose visual results are given in **Figures 1**–**3**. The quantitative results are provided in **Tables 1**–**4**, where the best results are emphasized in bold. The first criterion used for comparison is the Contrast Gain (CG). **Table 1** shows the CG values obtained for the AGCWD, DWT-SVD, RHE-DCT, ACSEA, BF, HIM, SDF, and BF-HIM methods.

According to **Table 1**, for **Figure 1**, the best score is obtained by the SDF method, followed by BF-HIM. For **Figure 2**, the best score is obtained by RHE-DCT, followed by BF-HIM. For **Figure 3**, the best score is achieved by the hybrid BF-HIM method, followed by SDF. Therefore, it is possible to say that RHE-DCT has a better contrast gain for images containing high-scale details like **Figure 2**.
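The chapter does not restate the CG formula, but a commonly used definition takes the ratio of the average local contrast (local standard deviation over local mean) of the enhanced image to that of the original. A minimal sketch under that assumption (the window size and the small stabilizing constant are illustrative choices):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def contrast_gain(original, enhanced, size=3):
    """Contrast Gain as the ratio of mean local contrast, enhanced vs.
    original. Local contrast is local std / local mean over a sliding
    window (one common definition; exact variants differ in the
    literature). Values above 1 indicate increased contrast."""
    def mean_local_contrast(img):
        img = np.asarray(img, dtype=float)
        mu = uniform_filter(img, size)                       # local mean
        var = uniform_filter(img ** 2, size) - mu ** 2       # local variance
        sigma = np.sqrt(np.maximum(var, 0.0))                # local std
        return np.mean(sigma / (mu + 1e-6))                  # avoid /0
    return mean_local_contrast(enhanced) / mean_local_contrast(original)
```

With this definition, an unchanged image gives a CG of exactly 1, and contrast-stretching methods score above 1.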

**Figure 4.**

*Drawn profiles for input and enhanced images for (a) Figure 1, (b) Figure 2, (c) Figure 3.*

*Image Enhancement Methods for Remote Sensing: A Survey. DOI: http://dx.doi.org/10.5772/intechopen.98527*

The second criterion used for comparison is the Enhancement Measurement (EME). **Table 2** shows the EME values obtained for AGCWD, DWT-SVD, RHE-DCT, ACSEA, BF, HIM, SDF, and BF-HIM methods.

According to **Table 2**, for **Figure 1**, the best EME score is obtained by the BF-HIM method, followed by SDF. For **Figure 2**, the best score is obtained by DWT-SVD, followed by ACSEA. For **Figure 3**, the best EME score is obtained by BF-HIM, followed by SDF. Therefore, it is possible to say that DWT-SVD has a better enhancement performance for images containing high-scale details, as in **Figure 2**.
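EME, as introduced by Agaian et al. [22], averages a logarithmic max/min contrast ratio over non-overlapping blocks. A minimal sketch for a grayscale image (the block size and the small constant guarding against division by zero are illustrative; the chapter does not specify its exact settings):

```python
import numpy as np

def eme(image, block=8, eps=1e-4):
    """Measure of enhancement (EME): average over non-overlapping
    blocks of 20*log10(Imax/Imin), following Agaian et al.
    Assumes a 2-D grayscale image; eps prevents division by zero."""
    img = np.asarray(image, dtype=float)
    rows, cols = img.shape[0] // block, img.shape[1] // block
    vals = []
    for i in range(rows):
        for j in range(cols):
            b = img[i * block:(i + 1) * block, j * block:(j + 1) * block]
            vals.append(20.0 * np.log10((b.max() + eps) / (b.min() + eps)))
    return float(np.mean(vals))
```

A flat image scores 0, and the score grows with the within-block dynamic range, which is why strongly edge-enhancing methods such as SDF and BF-HIM score high on this metric.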

The third criterion used for comparison is the Discrete Entropy (DE). **Table 3** shows the DE values obtained for AGCWD, DWT-SVD, RHE-DCT, ACSEA, BF, HIM, SDF, and BF-HIM methods.

According to **Table 3**, for **Figure 1**, the best score is obtained by the RHE-DCT method, followed by BF-HIM. For **Figure 2**, the best score is obtained by RHE-DCT, followed by SDF. As seen in the DE values, the higher the scale of detail within the images, the higher the performance of RHE-DCT. The BF-HIM method has the best DE value for **Figure 3**, and scores close to RHE-DCT for **Figures 1** and **2**.
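Discrete entropy is the Shannon entropy of the gray-level histogram; a higher value means the intensity levels are used more evenly, which is the goal of histogram modification methods like RHE-DCT. A minimal sketch for an 8-bit image:

```python
import numpy as np

def discrete_entropy(image):
    """Discrete entropy of an 8-bit image: -sum p(k) * log2 p(k),
    where p(k) is the normalized gray-level histogram."""
    img = np.asarray(image, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                     # 0 * log2(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))
```

The maximum possible value for 8-bit data is 8 bits, reached by a perfectly uniform histogram; a constant image scores 0.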

The fourth criterion used for comparison is the Absolute Mean Brightness Error (AMBE). **Table 4** shows the AMBE values obtained for AGCWD, DWT-SVD, RHE-DCT, ACSEA, BF, HIM, SDF, and BF-HIM methods.

According to **Table 4**, for **Figure 1**, the best score is obtained by the ACSEA method, followed by BF-HIM. For **Figure 2**, the best score is obtained by BF-HIM, followed by BF. For **Figure 3**, the best value is obtained by the BF-HIM


#### **Table 1.**

*CG values obtained for the enhancement methods.*


#### **Table 2.**

*EME values (×10<sup>4</sup>) obtained for the enhancement methods.*


#### **Table 3.**

*DE values obtained for the enhancement methods.*


#### **Table 4.**

*AMBE values obtained for the enhancement methods.*

method, followed by DWT-SVD. Since AMBE is the error between the mean brightness of the original image and that of the enhanced image, a smaller AMBE value indicates better brightness and color preservation. This criterion does not give an idea about enhancement performance.
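AMBE is the simplest of the four criteria: the absolute difference between the mean intensities of the two images. A minimal sketch:

```python
import numpy as np

def ambe(original, enhanced):
    """Absolute Mean Brightness Error: |mean(original) - mean(enhanced)|.
    Lower values indicate better brightness preservation; it says
    nothing about contrast or edge enhancement."""
    return abs(float(np.mean(np.asarray(original, dtype=float)))
               - float(np.mean(np.asarray(enhanced, dtype=float))))
```

Because AMBE only compares global means, a method could leave the mean untouched while distorting the image badly, which is why it must be read alongside CG, EME and DE rather than on its own.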

As a result, even though the visual comparison may give the observer an idea about the enhancement performance, a quantitative comparison has to be made to obtain a more objective conclusion. Here, the choice of the quantitative criterion is also important, as each criterion indicates a different aspect of the resulting images. For instance, CG gives an idea about the contrast improvement, while AMBE is about color preservation. If the aim is to compare the overall performance of the aforementioned methods, all criteria should be considered together. Thus, the quantitative comparisons, as well as the visual comparisons, demonstrate that hybrid methods combining different approaches, like BF-HIM, result in better enhanced images.

#### **5. Conclusion**

The use of image enhancement methods which improve the contrast and edge information of the image is vital for remote sensing applications. In this work, different remote sensing image enhancement methods based on histogram modification techniques (HE, AGCWD) and transform domain methods (DWT-SVD, ACSEA, RHE-DCT, BF, HIM, and SDF) have been reviewed. The resulting images have been compared visually and quantitatively, using several image quality criteria for the quantitative comparison. The resolution and the detail scales of the image affect the performance of the enhancement methods; for instance, the detail scales of the input image strongly affect the performance of the RHE-DCT and AGCWD methods. Since both are histogram modification methods (even though RHE-DCT also uses a transformation), it can be concluded that histogram modification based methods are better if there are higher-scale details within the image or if the image has a lower resolution. The transform domain methods perform better for images with low-scale details, but their results are also very solid compared to the histogram based methods for high-scale details.

Another contribution of this work is the introduction of a hybrid method which combines bilateral filtering with the hazy image model. The visual and quantitative results demonstrate that hybrid methods have a superior performance to the methods applied separately. Therefore, future research on remote sensing image enhancement should focus on hybrid methods.


## **Author details**

Nur Huseyin Kaplan<sup>1</sup> \*, Isin Erer<sup>2</sup> and Deniz Kumlu<sup>3</sup>

1 Electrical and Electronic Engineering Department, Erzurum Technical University, Erzurum, Turkey

2 Electronics and Communication Department, Istanbul Technical University, Istanbul, Turkey

3 Naval Research Center, Turkish Naval Forces, Istanbul, Turkey

\*Address all correspondence to: huseyin.kaplan@erzurum.edu.tr

© 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## **References**

[1] Kaplan, N.H, Erer, I, Gulmus, N. Remote sensing image enhancement via bilateral filtering. In: Proceedings of the 8th International Conference on Recent Advances in Space Technologies (RAST17), 19-22 June 2017. Istanbul, Turkey: IEEE;2017. p.139–142.

[2] Kaplan, N.H. Remote sensing image enhancement using hazy image model. Optik - International Journal for Light and Electron Optics. 2018;155: 139–148.

[3] Arici, T, Dikbas, S, Altunbasak, Y. A histogram modification framework and its application for image contrast enhancement. IEEE Transactions on Image Processing. 2009;18(9):1921–1935.

[4] Huang, S.C, Cheng, F.C, Chiu, Y.S. Efficient contrast enhancement using adaptive gamma correction with weighting distribution. IEEE Transactions on Image Processing. 2013;22(3):1032–1041.

[5] Hanmandlu, M, Jha, D. An optimal fuzzy system for color image enhancement. IEEE Transactions on Image Processing. 2006;15(10):2956–2966.

[6] Dhawan, A.P, Buelloni, G, Gordon, R. Enhancement of mammographic features by optimal adaptive neighborhood image processing. IEEE Transactions on Medical Imaging. 1986;5:8–15.

[7] Beghdadi, A, Negrate, A.L. Contrast enhancement technique based on local detection of edges. Computer Vision, Graphics, and Image Processing. 1989;46(2):162–174.

[8] Dash, L, Chatterji, B.N. Adaptive contrast enhancement and de-enhancement. Pattern Recognition. 1991;24:289–302.

[9] Cheng, H.D, Xu, H.J. A novel fuzzy logic approach to contrast enhancement. Pattern Recognition. 2000;33(5):809–819.

[10] Sherrier, R, Johnson, G. Regionally adaptive histogram equalization of the chest. IEEE Transactions on Medical Imaging. 1987;MI-6(1):1–7.

[11] Sattar, F, Floreby, L, Salomonsson, G, Lovstrom, B. Image enhancement based on a nonlinear multiscale method. IEEE Transactions on Image Processing. 1997;6(6):888–895.

[12] Polesel, A, Ramponi, G, Mathews, V. Image enhancement via adaptive unsharp masking. IEEE Transactions on Image Processing. 2000;9(3):505–510.

[13] Salari, E, Zhang, S. Integrated recurrent neural network for image resolution enhancement from multiple image frames. IEE Proceedings - Vision, Image and Signal Processing. 2003;150 (5):299–305.

[14] Lee, E, Kang,W, Kim, S, Paik, J. Color shift model-based image enhancement for digital multifocusing based on a multiple color-filter aperture camera. IEEE Transactions on Consumer Electronics. 2010;56(2):317–323.

[15] Wong, T, Bouman, C. A, Pollak, I. Image Enhancement Using the Hypothesis Selection Filter: Theory and Application to JPEG Decoding. IEEE Transactions on Image Processing. 2013; 22(3):898–913.

[16] Gonzalez, R.C, Woods, R.E. Digital Image Processing, 2nd ed. Addison-Wesley Longman Publishing;2001.

[17] Kim, Y.-T. Contrast enhancement using brightness preserving bi-histogram equalization. IEEE Transactions on Consumer Electronics. 1997;43(1):1–8.

[18] Chen, S.-D, Ramli, A.R. Contrast enhancement using recursive mean-separate histogram equalization for scalable brightness preservation. IEEE Transactions on Consumer Electronics. 2003;49(4):1301–1309.

[19] Celik, T, Tjahjadi, T. Contextual and variational contrast enhancement. IEEE Transactions on Image Processing. 2011;20(12):3431–3441.

[20] Celik, T. Two-dimensional histogram equalization and contrast enhancement. Pattern Recognition. 2012;45(10):3810–3824.

[21] Demirel, H, Anbarjafari, G. Image Resolution Enhancement by Using Discrete and Stationary Wavelet Decomposition. IEEE Transactions on Image Processing. 2011;20(5):1458–1460.

[22] Agaian, S.S, Silver, B, Panetta, K.A. Transform coefficient histogram-based image enhancement algorithms using contrast entropy. IEEE Transactions on Image Processing. 2007;16(3):741–758.

[23] Demirel, H, Ozcinar, C, Anbarjafari, G. Satellite image contrast enhancement using discrete wavelet transform and singular value decomposition. IEEE Geoscience and Remote Sensing Letters. 2010;7(2):333–337.

[24] Kaplan, N.H, Erer, I. Remote sensing image enhancement via robust guided filtering. In: Proceedings of the 9th International Conference on Recent Advances in Space Technologies (RAST19), 11-14 June 2019. Istanbul, Turkey: IEEE;2019. p.447–450.

[25] Fu, X, Wang, J, Zeng, D, Huang, Y, Ding, X. Remote sensing image enhancement using regularized-histogram equalization and DCT. IEEE Geoscience and Remote Sensing Letters. 2015;12(11):2301–2305.

[26] Narasimhan, S.G, Nayar, S.K. Contrast restoration of weather degraded images. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2003;25(6):713–724.

[27] Ham, B, Cho, M, Ponce, J. Robust guided image filtering using nonconvex potentials. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2018;40(1):192–207.

[28] Soni, V, Bhandari, A.K, Kumar, A, Singh, G.K. Improved sub-band adaptive thresholding function for denoising of satellite image based on evolutionary algorithms. IET Signal Processing. 2013;7(8):720–730.

[29] Rasti, B., Scheunders, P., Ghamisi, P., Licciardi, G., Chanussot, J. Noise Reduction in Hyperspectral Imagery: Overview and Application. Remote Sens. 2018;10(3):482.

[30] Suresh, S, Lal, S, Reddy, C.S, Kiran, M.S. A Novel Adaptive Cuckoo Search Algorithm for Contrast Enhancement of Satellite Images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2017; 10(8):3665–3676.

[31] Kaplan, N.H, Dumlu, A, Ayten, K.K. Single image dehazing based on multiscale product prior and application to vision control. Signal, Image and Video Processing. 2017;11(8):1389–1396.

[32] Shin, J, Park, R.H. Histogram-based locality-preserving contrast enhancement. IEEE Signal Processing Letters. 2015;22(9):1293–1296.

