**2. Review based on wavelet concepts in image processing**

The main issue discussed in this section is identifying the nature of the dependence between the pixels in the original image and estimating the noise in the signal. For astronomical images of sparsely distributed stars an independence assumption may be reasonable, while for many other kinds of images (including astronomical images of galaxies) such an assumption is inappropriate (Lopez & Cumplido 2004). If independence is a reasonable assumption, then the CLEAN, maximum entropy, and maximally sparse methods are appropriate, and the choice largely depends on the desired balance between accuracy and speed. For example, the CLEAN method is fast but can make mistakes for images containing clustered stars. For images that are expected to be relatively smooth, the Wiener filter and iterative methods are appropriate. If the images are known to satisfy additional constraints (for example, the intensities are often known to be non-negative for physical reasons), or if the blurring function is space-varying, then iterative methods such as Richardson-Lucy or constrained least squares are appropriate. Otherwise it is better to use the Wiener filter, because it is fast and approximately includes the iterative methods as special cases. The wavelet methods tend to give a good compromise for images containing a mixture of discontinuities and texture. Below is a review of research in which wavelet concepts and methods are applied to image processing.
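Since part of this choice hinges on the iterative deconvolution methods, a minimal sketch may help fix ideas. The following is an illustrative 1-D Richardson-Lucy iteration in plain NumPy; the kernel, test signal, and iteration count are arbitrary choices for the example, not values from the works cited:

```python
import numpy as np

def richardson_lucy(y, psf, n_iter=50):
    """Richardson-Lucy deconvolution, 1-D illustrative sketch.

    y:    blurred, noisy observation
    psf:  known blur kernel, assumed to sum to 1
    Each update is multiplicative, so a non-negative start
    stays non-negative -- the physical constraint noted above.
    """
    x = np.full_like(y, y.mean())          # flat non-negative initial estimate
    psf_flipped = psf[::-1]                # adjoint (correlation) kernel
    for _ in range(n_iter):
        blurred = np.convolve(x, psf, mode="same")
        ratio = y / np.maximum(blurred, 1e-12)   # guard against divide-by-zero
        x = x * np.convolve(ratio, psf_flipped, mode="same")
    return x

# Example: a single point source blurred by a 5-tap box kernel.
x_true = np.zeros(64)
x_true[32] = 1.0
psf = np.ones(5) / 5.0
y = np.convolve(x_true, psf, mode="same")
x_hat = richardson_lucy(y, psf)
```

Note that the iteration also conserves total flux (when the kernel sums to one and the source is away from the borders), which together with non-negativity is why such methods suit physically constrained imagery.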

## **2.1 Motion estimation**

Magarey developed a motion estimation algorithm based on a complex discrete wavelet transform (CDWT). The transform used short 4-tap complex filters but did not possess the perfect reconstruction (PR) property. The filter shapes were very close to those used in the DT-CWT, suggesting that the conclusions would also be valid for the DT-CWT. The task is to estimate the displacement field between successive frames of an image sequence. The fundamental property of wavelets that makes this possible is that translations of an image result in phase changes of the wavelet coefficients; by measuring the phase changes it is possible to infer the motion of the image. A major obstacle in motion estimation is that the reliability of motion estimates depends on image content: it is easy to detect the motion of a single dot in an image, but much harder to detect the motion of a white piece of paper on a white background. Magarey therefore developed a method for incorporating the varying degrees of confidence in the different estimates. In tests on synthetic sequences, the optimised CDWT-based algorithm showed superior accuracy under simple perturbations such as additive noise and intensity scaling between frames. In addition, the efficiency of the CDWT structure minimises the usual disadvantage of phase-based schemes, their computational complexity: detailed analysis showed that the number of floating-point operations required is comparable to, or even less than, that of standard intensity-based hierarchical algorithms.

## **2.3 Denoising**

The theoretically optimum denoising method is a Wiener filter whose frequency response depends on the local power spectrum of the signal: where the signal power is high the signal is mostly preserved, and where it is low the signal is attenuated. The size of each wavelet coefficient can be interpreted as an estimate of the power in some time-frequency bin, so setting the small coefficients to zero approximates adaptive Wiener filtering. The first wavelet transform proposed for denoising was the standard orthogonal transform (DWT). However, orthogonal wavelet transforms produce results that vary substantially even for small translations of the input, and so a second transform was proposed, the nondecimated wavelet transform (NDWT), which produces shift-invariant results by effectively averaging the results of a DWT-based method over all possible positions of the origin. Experiments on test signals show that the NDWT is superior to the DWT. The main disadvantage of the NDWT is that even an efficient implementation takes longer to compute than the DWT, by a factor of three times the number of levels used in the decomposition. Kingsbury has proposed the use of the DT-CWT for denoising because this transform not only reduces the amount of shift variance but may also achieve better compaction of signal energy owing to its increased directionality. In other words, at a given scale an object edge in an image may produce significant energy in 1 of the 3 standard wavelet subbands, but in only 1 of the 6 complex wavelet subbands.

## **2.4 Compression and matching**

Compression algorithms with wavelet-based transformations were selected in competition with compression using fractal transformations. The FBI's fingerprint compression standard has similarities with the JPEG2000 standard, and especially with an extension to the JPEG2000 standard. Further decomposition of the LH, HL and HH bands in this way may improve compression somewhat, since the effect of the filter-bank application may be thought of as an "approximative orthonormalization process". The extension to the JPEG2000 standard also opens up this type of more general subband decomposition. In the FBI's standard different wavelets can be used, with the coefficients of the corresponding filter banks signalled in the code-stream. The only constraint on the filters is that there should be no more than 32 nonzero coefficients. This is much longer than the filters used for lossy compression in JPEG2000 (9 nonzero coefficients).

## **2.5 Segmentation**

Texture is an important characteristic for analyzing many types of images, including natural scenes and medical images. With their unique property of spatial-frequency localization, wavelet functions provide an ideal representation for texture analysis. Experimental evidence on human and mammalian vision supports the notion of spatial-frequency analysis that maximizes a simultaneous localization of energy in both the spatial and frequency domains. These psychophysical and physiological findings have led to several research works on texture-based segmentation methods built on multi-scale analysis. One important feature of the wavelet transform is its ability to represent the image data in a multiresolution fashion. Such hierarchical decomposition of the image information makes it possible to analyze the coarse resolution first and then sequentially refine the segmentation result at more detailed scales. In general, this practice provides additional robustness to noise and local maxima (Mallat, 1989).
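Several of the methods surveyed in this section share one primitive: a multiresolution wavelet decomposition in which small coefficients are suppressed. As a self-contained illustration, here is a single-level 2-D Haar transform in plain NumPy with soft thresholding of the detail subbands; the threshold value and test image are arbitrary choices for the example, not taken from the works cited:

```python
import numpy as np

def haar2d(img):
    """Single-level 2-D Haar analysis (assumes even image dimensions).
    Returns the LL (coarse) and LH, HL, HH (detail) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row low-pass (averages)
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row high-pass (differences)
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d (perfect reconstruction)."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

def denoise(img, t):
    """Suppress small detail coefficients (soft threshold), keep the coarse band."""
    LL, LH, HL, HH = haar2d(img)
    soft = lambda c: np.sign(c) * np.maximum(np.abs(c) - t, 0.0)
    return ihaar2d(LL, soft(LH), soft(HL), soft(HH))
```

This is the crude, shift-variant DWT case discussed above; the NDWT and DT-CWT variants reduce the shift dependence of exactly this thresholding step.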
