**1. Introduction**



Digital images are prone to a variety of types of noise. Noise is the result of errors in the image-acquisition process that produce pixel values which do not reflect the true intensities of the real scene (Gagnon & Smaili, 1996). There are several ways noise can be introduced into an image, depending on how the image is created. For example, if the image is scanned from a photograph made on film, the film grain is a source of noise; noise can also be the result of damage to the film, or be introduced by the scanner itself. If the image is acquired directly in digital format, the mechanism for gathering the data (such as a CCD detector) can introduce noise, and electronic transmission of image data can introduce further noise. Noise is considered to be any measurement that is not part of the phenomenon of interest. Noise can be categorized as image-data-independent noise and image-data-dependent noise.
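As a minimal illustration of the two categories (an illustrative sketch, not code from the chapter), the snippet below corrupts a synthetic image with additive Gaussian noise, which is independent of the image data, and with multiplicative speckle-like noise, which scales with pixel intensity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "clean" image: a bright square on a dark background.
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 100.0

# Data-independent noise: additive Gaussian, unrelated to pixel values.
gaussian = clean + rng.normal(0.0, 5.0, clean.shape)

# Data-dependent noise: multiplicative speckle-like noise, whose
# strength scales with the local intensity.
speckle = clean * rng.normal(1.0, 0.2, clean.shape)

# Additive noise has roughly the same spread everywhere; multiplicative
# noise is strong in the bright square and vanishes where clean == 0.
print(gaussian[0:16, 0:16].std())   # roughly 5
print(speckle[16:48, 16:48].std())  # roughly 100 * 0.2 = 20
print(speckle[0:16, 0:16].std())    # exactly 0
```

This is why a single additive-noise model is a poor fit for SAR speckle: the corruption in bright regions is an order of magnitude stronger than in dark ones.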

Wavelets are mathematical functions that cut up data into different frequency components and then study each component with a resolution matched to its scale (Durand & Froment, 1992). They have advantages over traditional Fourier methods in analyzing physical situations where the signal contains discontinuities and sharp spikes. Wavelets were developed independently in the fields of mathematics, quantum physics, electrical engineering and seismic geology. Interchanges between these fields during the last ten years have led to many new wavelet applications such as image compression, turbulence, human vision, radar and earthquake prediction.
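The multiresolution idea can be sketched with the simplest wavelet, the Haar wavelet (illustrative code, not taken from the chapter): one analysis step splits a signal into a half-rate coarse approximation and a half-rate detail band, a discontinuity concentrates into a single large detail coefficient, and the step inverts exactly:

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar wavelet transform.

    Scaled pairwise sums give the coarse approximation; scaled
    pairwise differences give the detail (high-frequency) band.
    """
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_inverse(approx, detail):
    """Exactly invert one Haar analysis step."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

signal = np.array([4.0, 4.0, 5.0, 5.0, 9.0, 1.0, 2.0, 2.0])
a, d = haar_step(signal)
# The sharp jump (9 -> 1) appears as one large detail coefficient;
# the smooth pairs produce zero detail. This localization of
# discontinuities is the advantage over Fourier methods noted above.
print(d)
print(np.allclose(haar_inverse(a, d), signal))  # perfect reconstruction
```

Applying `haar_step` recursively to the approximation yields the multi-level decomposition used throughout this chapter.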

Synthetic aperture radar (SAR) is a radar technology used from satellites or aircraft (Lee & Jukervish, 1994). It produces high-resolution images of the earth's surface by means of special signal-processing techniques. SAR plays an important role in gathering information about the earth's surface because it can operate under all weather conditions (whether cloudy, hazy or dark). However, the acquisition of SAR images faces certain problems. SAR images contain speckle noise, which is multiplicative (Rayleigh) noise. Speckle noise is the result of two phenomena: the first is the coherent summation of the backscattered signals, and the other is the random interference of electromagnetic signals. Speckle noise degrades the appearance and quality of SAR images (Brunique, 1997). Ultimately it reduces the performance of important image-processing techniques such as detection, segmentation, enhancement and classification. That is why speckle noise should be removed before applying any further image-processing technique.

There are three main objectives of any speckle filtering: first, to remove noise in uniform regions; second, to preserve and enhance edges and image features; and third, to provide a good visual appearance. Unfortunately, 100% speckle reduction is not possible, so a trade-off has to be made among these requirements. Speckle reduction usually consists of three stages: the first stage transforms the noisy image to a new space (the frequency domain); the second stage manipulates the coefficients; the third transforms the resulting coefficients back to the original space (the spatial domain). Currently many statistical filters are available for speckle reduction, such as the Mean, Kuan, Frost and Lee filters. Results show that statistical filters are good at speckle reduction, but they also lose important feature details, and prior knowledge of the noise statistics is a prerequisite for them. In recent years there has been active research on wavelet-based speckle reduction, because wavelets provide a multiresolution decomposition and analysis of the image (Durand & Froment, 1992). In the wavelet subbands, noise is present in the small coefficients while important feature details are present in the large coefficients; if the small coefficients are removed, a largely noise-free image is obtained.

Previously, most researchers used the discrete wavelet transform for speckle reduction. The drawback of the discrete wavelet transform is that it is not translation invariant, which means it loses many important coefficients in the translation from the original signal to the subbands (Nabil, 2009). To solve this problem and preserve the coefficients, a derived form of the discrete wavelet transform, called the undecimated wavelet transform, is used. The basic idea is that it does not discard any coefficients: all coefficients remain intact, which is why it is also called the redundant wavelet transform. It requires more storage space and more computation time. Whether the discrete or the undecimated wavelet transform is used, the biggest problem is the selection of an optimal threshold. Some researchers use wavelet-based hard or soft thresholding; other thresholding techniques, such as VisuShrink, SureShrink and NeighShrink, have also been used.

<sup>\*</sup> Copyright notice: This proposal and its intellectual property rights belong to the author and have been submitted for publication as a chapter in the book entitled *"Wavelet Transform"*, to be published by INTECH, Open Access Publisher, Croatia. As a courtesy to the publisher, this proposal may not be reproduced or distributed in any form.

**1.1 Aim and objectives of the chapter**

Image denoising has a significant role in image pre-processing. As the application areas of image denoising grow, there is great demand for efficient denoising algorithms. In this chapter, a method that combines wavelets with shrinkage concepts is proposed and applied to denoise images corrupted with speckle noise. The intention behind this method is to reduce the convergence time of the conventional filter and thereby increase its performance. The proposed method produces excellent results; a comparison is made with two wavelet shrinkage methods, and a case study is carried out for ice classification based on SAR imagery.

To this end, the objectives are:

1. to analyse various wavelet concepts in image processing;
2. to generalize from current research, thus reaching a coherent view of the role of wavelets in image denoising;
3. to propose the wavelet concept for describing the denoising of images using shrinkage methods;
4. to produce a case study to support the denoising approach within the goal-driven concept.

Wavelet-based methods are always a good choice for image denoising and have been discussed widely in the literature over the past two decades. Wavelet shrinkage permits more efficient noise removal while preserving high frequencies, based on the imbalance of the energy of such a representation. The technique denoises the image in the orthogonal wavelet domain, where each coefficient is compared against a threshold: if the coefficient is smaller than the threshold, it is set to zero; otherwise it is kept or modified.

This chapter does not attempt to investigate in depth the theoretical properties of the proposed model in general settings. The primary goal is to demonstrate how the performance of wavelet-shrinkage-based denoising methods can be improved by using the best wavelet family. In the following, Section 2 reviews existing work on wavelet concepts in image processing, Section 3 describes the state of the art of image denoising, Section 4 explains the various wavelet shrinkage methods, Section 5 explains the solution for denoising SAR imagery using wavelets, Section 6 ends with a case study and, finally, Section 7 discusses future work in the area of wavelets.

**2. Review based on wavelet concepts in image processing**

The main issue discussed in this section is to identify the nature of the dependence between the pixels in the original image and to estimate the noise in the signal. For astronomical images of sparsely distributed stars an independence assumption may be reasonable, while for many other kinds of images (including astronomical images of galaxies) such an assumption is inappropriate (Lopez & Cumplido, 2004). If independence is a reasonable assumption, then the CLEAN, maximum-entropy and maximally sparse methods are appropriate, and the choice largely depends on the desired balance between accuracy and speed. For example, the CLEAN method is fast but can make mistakes for images containing clustered stars. For images that are expected to be relatively smooth, the Wiener filter and iterative methods are appropriate. If the images are known to satisfy some additional constraints (for example, the intensities are often known to be non-negative for physical reasons), or if the blurring function is space-varying, then iterative methods such as Richardson-Lucy or constrained least squares are appropriate. Otherwise it is better to use the Wiener filter, because it is fast and approximately includes the iterative methods as special cases. Wavelet methods tend to give a good compromise for images containing a mixture of discontinuities and texture. Below are research works in which wavelet concepts and methods are applied to image-processing applications.

**2.1 Motion estimation**

Magarey developed a motion estimation algorithm based on a complex discrete wavelet transform. The transform used short 4-tap complex filters but did not possess the PR property. The filter shapes were very close to those used in the DT-CWT, suggesting that the conclusions would also be valid for the DT-CWT. The task is to try to estimate the displacement field between successive frames of an image sequence. The fundamental
