The Discrete Quincunx Wavelet Packet Transform

*Abdesselam Bassou*

#### **Abstract**

This chapter aims to present an efficient compression algorithm based on the quincunx wavelet packet transform that can be applied to any image of size 128 × 128 or bigger. Therefore, a division process into sub-images of size 128 × 128 was applied to three gray-scale image databases; each sub-image was then passed through the wavelet transform and a bit-level encoder, to finally be compressed with respect to a fixed bit rate. The quality of the reconstructed image is evaluated using several parameters at a given bit rate. In order to improve this quality, an exhaustive search has led to the best packet decomposition base. Two versions of the proposed compression scheme were implemented; the optimal version is able to decrease the effect of block boundary artifacts (caused by the image division process) by 27.70% for a natural image. This optimal version of the compression scheme was compared with the JPEG standard using the quality evaluation parameters and visual observation. As a result, the proposed compression scheme presents a competitive performance to the JPEG standard; the proposed scheme achieves a peak signal-to-noise ratio gain of 0.88 dB over JPEG at a bit rate of 0.50 bpp for a satellite image.

**Keywords:** quincunx wavelet transform, wavelet packet, quality evaluation parameters, reduction factor, JPEG standard

#### **1. Introduction**

A wavelet is a small wave that can serve as a building block for physical phenomena; a time and/or space variation of a phenomenon can be expressed as a sum of multiple wavelets. As examples, the wavelet transform was applied to an electrocardiogram (ECG) signal in order to extract the QRS complex [1] (time variation), to a video sequence in order to embed a hidden watermark [2] (time and space variation) and to a 2D image in order to reduce its size (compression) [3, 4] (space variation). In this chapter, one considers the application of the wavelet to 2D image compression.

An image is one of the most important sources of information; it provides a visual comprehension of a phenomenon. The image can be of several natures, such as a medical, natural, textural or satellite image, and each nature is characterized by its own amount of detail. For a digital image, the larger the amount of detail, the larger the file size in bytes; this motivates the use of an image compression process.

In other words, consider a gray-scale image of size 512 × 512 with a bit rate of 8 bits per pixel (*Rc* = 8 *bpp*); its file size is 512 × 512 × 8 bits (256 Kbytes). Compressing this image reduces its file size (without changing the image size); for example, to reduce the file size by a factor of 10 (25.6 Kbytes), one has to consider a bit rate of *Rc* = (25.6 × 1024 × 8)/(512 × 512) = 0.8 *bpp*.
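This arithmetic can be sketched in a few lines (the function name is illustrative, not from the chapter):

```python
# Bit rate needed to hit a target compressed file size, for the
# 512 x 512 gray-scale example above.

def bit_rate_bpp(file_size_bytes: float, width: int, height: int) -> float:
    """Bit rate in bits per pixel for a given compressed file size."""
    return file_size_bytes * 8 / (width * height)

# Original image: 512 x 512 pixels at 8 bpp -> 262144 bytes = 256 Kbytes.
# Compressing by a factor of 10 gives 25.6 Kbytes.
compressed_bytes = 25.6 * 1024
print(bit_rate_bpp(compressed_bytes, 512, 512))  # -> 0.8
```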

Because they conserve all details of the image after decompression, lossless<sup>1</sup> compression algorithms, such as Run-Length Encoding (RLE), Lempel-Ziv-Welch (LZW) and Huffman coding [5, 6], are by far the ideal methods. However, such compression algorithms do not provide a significant reduction of the image's file size, and therefore lossy<sup>2</sup> compression algorithms may be more appropriate.

The best-known lossy compression algorithm is the JPEG standard (Joint Photographic Experts Group) [7]; like most lossy algorithms, it is based on a discrete transform (the Discrete Cosine Transform, DCT, in this case). The Discrete Wavelet Transform (DWT) and the Quincunx Wavelet Transform (QWT) are two other discrete transforms found in the literature [8, 9]; they apply a progressive transformation to the image, followed by an encoding process (such as Embedded Zerotree Wavelet, EZW, or Set Partitioning In Hierarchical Trees, SPIHT [10]) that gives the image a bit-level representation.

This chapter aims to propose a QWT-based compression algorithm that can be applied to any image of size 128 × 128 or bigger. The following structure is adopted: in Section 2, the discrete wavelet transform is introduced and the progressive representation of an image is exposed. Section 3 is dedicated to the quincunx wavelet transform, the QWT extension to wavelet packets (PQWT) and the encoding process employing the SPIHT algorithm. The PQWT-based compression algorithm is presented in Section 4, and the results and discussions in Section 5.

#### **2. Discrete wavelet transform**

#### **2.1 Definition**

Like the discrete sine and cosine transforms, the DWT is used to represent a digital signal (such as an image) as a sum of projections onto orthogonal functions; these functions are called "wavelets". Several wavelets are described in the literature; among them, one can find the dyadic Daubechies family (represented with scaling and wavelet functions in **Figure 1** for four examples [8]).

In order to improve on JPEG compression performance (in the sense of the evaluation parameters presented in Section 4), researchers have proposed the JPEG 2000 compression algorithm, based on a wavelet called CDF 9/7 (Cohen-Daubechies-Feauveau 9-tap/7-tap) [11, 12]. The scaling and wavelet functions, and the decomposition and reconstruction low-pass and high-pass filters, are shown in **Figure 2**.

#### **2.2 Wavelet decomposition**

As mentioned above, a wavelet applies a progressive transformation to the image. This process (called filter bank analysis) is realized by passing an image with coefficients *a*<sub>0</sub>[*k*], at time *k*, through a decomposition low-pass filter *h*<sub>0</sub>, a decomposition high-pass filter *h*<sub>1</sub> and a decimation function (↓2). As a result of the level 1 decomposition, one obtains an approximation image of coefficients *a*<sub>1</sub>[*k*] and a detail image of coefficients *d*<sub>1</sub>[*k*]. The same process is applied, at level *j*, to the approximation *a*<sub>*j*−1</sub>[*k*] to get an approximation *a*<sub>*j*</sub>[*k*] and a detail *d*<sub>*j*</sub>[*k*]. **Figure 3** shows a wavelet 3-level decomposition.
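One analysis/synthesis level can be sketched as follows, using the orthogonal Haar pair as a stand-in for the chapter's filters (an illustrative assumption; the structure — filtering, decimation, oversampling — is the same for any filter pair):

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def analyze(a0: np.ndarray):
    """Split a0[k] into approximation a1[k] and detail d1[k] (filter + decimate by 2)."""
    even, odd = a0[0::2], a0[1::2]
    a1 = (even + odd) / SQRT2      # low-pass h0 then decimation
    d1 = (even - odd) / SQRT2      # high-pass h1 then decimation
    return a1, d1

def synthesize(a1: np.ndarray, d1: np.ndarray) -> np.ndarray:
    """Oversample (up by 2) and merge approximation and detail back into a0[k]."""
    a0 = np.empty(a1.size * 2)
    a0[0::2] = (a1 + d1) / SQRT2
    a0[1::2] = (a1 - d1) / SQRT2
    return a0

x = np.array([4.0, 6.0, 10.0, 12.0, 14.0, 14.0, 16.0, 18.0])
approx, detail = analyze(x)
assert np.allclose(synthesize(approx, detail), x)  # perfect reconstruction
```

Applying `analyze` again to `approx` gives level 2, and so on, which is exactly the progressive decomposition of Figure 3.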

**Figure 1.**
*Four examples of dyadic Daubechies wavelets: scaling function, wavelet function.*

**Figure 2.**
*Wavelet of Cohen-Daubechies-Feauveau 9-tap/7-tap. (a) Scaling and wavelet functions, (b) decomposition and reconstruction filters.*

*The Discrete Quincunx Wavelet Packet Transform DOI: http://dx.doi.org/10.5772/intechopen.94970*



<sup>1</sup> The term "lossless" refers to the conservation of all details in the image after reconstruction, which means that the original and reconstructed images are identical.

<sup>2</sup> The term "lossy" refers to the loss of details in the image after reconstruction by quantization or truncation, which means that the original image differs from the reconstructed one.

#### **2.3 Wavelet reconstruction**



The reconstruction process (called filter bank synthesis) follows the inverse order of the decomposition process: at level *j* and time *k*, an approximation *a*<sub>*j*</sub>[*k*] and a detail *d*<sub>*j*</sub>[*k*] are oversampled (↑2) and passed, respectively, through the reconstruction low-pass filter *h̄*<sub>0</sub> and the reconstruction high-pass filter *h̄*<sub>1</sub> to

**Figure 5** illustrates the result of applying CDF 9/7 3-level decomposition over

The decomposition and reconstruction processes using QWT remain the same as

• The diamond McClellan transform [13] is applied to map a 1-D design onto the

<sup>p</sup> for each direction.

The 2D quincunx refinement and wavelet filters are given respectively by:

!� � <sup>¼</sup> *<sup>z</sup>*1*H*<sup>λ</sup> �*<sup>z</sup>*

and λ is filter order. All simulations in this chapter were performed considering λ ¼ 5. The QWT 6-level decomposition of image 'Lena' is given in **Figure 6**.

<sup>p</sup> ð Þ <sup>2</sup> <sup>þ</sup> cos*ω*<sup>1</sup> <sup>þ</sup> cos*ω*<sup>2</sup>

ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi

<sup>q</sup> (3)

λ 2

<sup>λ</sup> <sup>þ</sup> ð Þ <sup>2</sup> � cos*ω*<sup>1</sup> <sup>þ</sup> cos*ω*<sup>2</sup>

!�<sup>1</sup> � � (4)

is the discrete Fourier transform parameter

λ

ffiffi 2

! <sup>¼</sup> *<sup>e</sup> <sup>j</sup><sup>ω</sup>* !

ð Þ 2 þ cos*ω*<sup>1</sup> þ cos*ω*<sup>2</sup>

*G*<sup>λ</sup> *z*

2

gray-scale image 'Lena'.

**3.1 Definition**

**3. Quincunx wavelet transform**

quincunx structure.

*H*<sup>λ</sup> *e <sup>j</sup><sup>ω</sup>* ! � � <sup>¼</sup>

where, *ω*

**Figure 6.**

**209**

*6-level decomposition of image 'Lena' employing QWT.*

• The decimation factor is ffiffi

DWT; however, there are some differences:

*The Discrete Quincunx Wavelet Packet Transform DOI: http://dx.doi.org/10.5772/intechopen.94970*

! ¼ ð Þ *ω*1, *ω*<sup>2</sup> is 2D pulse, *z*

**Figure 3.** *Wavelet 3-level decomposition.*

**Figure 4.**
*Wavelet 3-level reconstruction.*

generate an approximation image of coefficients *a*<sub>*j*−1</sub>[*k*]. **Figure 4** shows a wavelet 3-level reconstruction.

A perfect reconstruction satisfies the following criteria:

$$
\overline{H}\_0(f) \bullet H\_0(f) + \overline{H}\_1(f) \bullet H\_1(f) = 2 \tag{1}
$$

$$
\overline{H}\_0(f) \bullet H\_0(f+1/2) + \overline{H}\_1(f) \bullet H\_1(f+1/2) = 0 \tag{2}
$$

where *f* is a normalized frequency, and *H̄<sub>i</sub>*(*f*) and *H<sub>i</sub>*(*f*) (*i* = 0, 1) are, respectively, the Fourier transforms of the impulse responses *h̄<sub>i</sub>*(*k*) and *h<sub>i</sub>*(*k*).
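These criteria can be checked numerically. The sketch below uses the Haar pair as an illustrative example; its reconstruction filters are the time-reversed analysis filters, so their Fourier transforms are the complex conjugates of *H*<sub>0</sub> and *H*<sub>1</sub>:

```python
import numpy as np

# Numerical check of the perfect-reconstruction criteria (1)-(2) for the
# Haar pair (illustrative; the chapter's CDF 9/7 filters satisfy them too).

h0 = np.array([1.0, 1.0]) / np.sqrt(2.0)   # analysis low-pass
h1 = np.array([1.0, -1.0]) / np.sqrt(2.0)  # analysis high-pass

def dtft(h, f):
    """Discrete-time Fourier transform of filter h at normalized frequency f."""
    k = np.arange(h.size)
    return np.sum(h * np.exp(-2j * np.pi * f * k))

for f in np.linspace(0.0, 0.5, 11):
    H0, H1 = dtft(h0, f), dtft(h1, f)
    H0s, H1s = dtft(h0, f + 0.5), dtft(h1, f + 0.5)
    # (1) no-distortion condition
    assert np.isclose(np.conj(H0) * H0 + np.conj(H1) * H1, 2.0)
    # (2) alias-cancellation condition
    assert np.isclose(np.conj(H0) * H0s + np.conj(H1) * H1s, 0.0)
print("perfect reconstruction criteria satisfied")
```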

**Figure 5.** *3-level decomposition employing CDF 9/7. (a) Original 'Lena' image, (b) decomposed 'Lena' image.*


**Figure 5** illustrates the result of applying CDF 9/7 3-level decomposition over gray-scale image 'Lena'.

#### **3. Quincunx wavelet transform**

#### **3.1 Definition**

The decomposition and reconstruction processes using the QWT remain the same as for the DWT; however, there are some differences:

• The diamond McClellan transform [13] is applied to map a 1-D design onto the quincunx structure.

• The decimation factor is √2 for each direction.
The 2D quincunx refinement and wavelet filters are given respectively by:

$$H\_{\lambda} \left( e^{j\overrightarrow{\omega}} \right) = \frac{\sqrt{2} \left( 2 + \cos \omega\_1 + \cos \omega\_2 \right)^{\frac{\lambda}{2}}}{\sqrt{\left( 2 + \cos \omega\_1 + \cos \omega\_2 \right)^{\lambda} + \left( 2 - \cos \omega\_1 - \cos \omega\_2 \right)^{\lambda}}} \tag{3}$$

$$\mathbf{G}\_{\lambda} \left( \overrightarrow{\mathbf{z}} \right) = \mathbf{z}\_1 \mathbf{H}\_{\lambda} \left( -\overrightarrow{\mathbf{z}}^{-1} \right) \tag{4}$$

where $\overrightarrow{\omega} = (\omega_1, \omega_2)$ is the 2D pulsation, $\overrightarrow{z} = e^{j\overrightarrow{\omega}}$ is the discrete Fourier transform variable and λ is the filter order. All simulations in this chapter were performed considering λ = 5. The QWT 6-level decomposition of the image 'Lena' is given in **Figure 6**.
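Eq. (3) can be sanity-checked numerically: the squared magnitudes at $\overrightarrow{\omega}$ and at $\overrightarrow{\omega} + (\pi, \pi)$ sum to 2, the quincunx analogue of criterion (1). A sketch, assuming the form of Eq. (3) above:

```python
import numpy as np

# Check |H_lambda(w)|^2 + |H_lambda(w + (pi, pi))|^2 = 2 on a frequency grid.

def H(w1, w2, lam=5):  # lambda = 5, as used in the chapter's simulations
    num = np.sqrt(2.0) * (2 + np.cos(w1) + np.cos(w2)) ** (lam / 2)
    den = np.sqrt((2 + np.cos(w1) + np.cos(w2)) ** lam
                  + (2 - np.cos(w1) - np.cos(w2)) ** lam)
    return num / den

w = np.linspace(-np.pi, np.pi, 64)
W1, W2 = np.meshgrid(w, w)
assert np.allclose(H(W1, W2) ** 2 + H(W1 + np.pi, W2 + np.pi) ** 2, 2.0)
print("quincunx refinement filter check passed")
```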

**Figure 6.** *6-level decomposition of image 'Lena' employing QWT.*



**Figure 7.** *6-level decomposition of image 'Lena' employing PQWT.*

#### **3.2 Quincunx wavelet packet transform**

The Wavelet Packet Transform (WP) [14] consists of generalizing the decomposition over all parts of the decomposed images (approximations and details), under the following condition: a detail image is decomposed only if its entropy decreases after decomposition. The literature has shown that this technique is most efficient on textural images.
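The entropy rule above can be sketched as follows; the helper names and the use of the mean of the children's Shannon entropies are illustrative assumptions, not the chapter's exact convention:

```python
import numpy as np

def shannon_entropy(band: np.ndarray) -> float:
    """Shannon entropy (bits) of a sub-band, from a 256-bin histogram."""
    counts, _ = np.histogram(band, bins=256)
    p = counts[counts > 0] / band.size
    return float(-np.sum(p * np.log2(p)))

def should_split(detail: np.ndarray, children: list) -> bool:
    """Split a detail sub-band only if splitting lowers its entropy."""
    child_entropy = np.mean([shannon_entropy(c) for c in children])
    return child_entropy < shannon_entropy(detail)
```

In a packet decomposition, `children` would be the four sub-bands produced by one further analysis step of `detail`; the test is applied at every node of the decomposition tree.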

**Figure 8.** *SPIHT algorithm (L denotes low filtering and H denotes high filtering).*


Employing the packet transform on QWT (PQWT) implies that only dyadic parts of QWT decomposition are concerned, which means that the analysis time decreases.

**Figure 7** shows the entropy-based PQWT 6-level decomposition of image 'Lena'.

#### **3.3 Set partitioning in hierarchical trees encoder**

In order to compress an image employing a wavelet-based transform, an encoding step is used to give a bit-level representation to the image. This chapter employs the SPIHT encoder for this purpose. **Figure 8** summarizes the relationship between decomposition levels. The authors of [15] proposed a modified version of SPIHT for the wavelet packet transform; this version is adopted for the PQW transform.

#### **4. Proposed QWT/PQWT-based compression algorithm**

#### **4.1 Compression scheme**


The JPEG standard is based on dividing an image into sub-images of size 8 × 8, then applying the DCT on each sub-image. In the proposed approach, one adopts the 8-level PQWT as the transform algorithm and a size of *m*<sup>2</sup> = 128 × 128 for the dividing process. An example of the dividing process is given in **Figure 9**.
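The dividing process can be sketched as follows (function and variable names are illustrative):

```python
import numpy as np

# Cut an image into non-overlapping m x m sub-images (m = 128 in the
# proposed scheme), in row-major order.

def divide(image: np.ndarray, m: int = 128) -> list:
    rows, cols = image.shape
    assert rows % m == 0 and cols % m == 0, "image dimensions must be multiples of m"
    return [image[i:i + m, j:j + m]
            for i in range(0, rows, m)
            for j in range(0, cols, m)]

image = np.zeros((512, 512))       # placeholder for a 512 x 512 database image
subs = divide(image)
print(len(subs), subs[0].shape)    # -> 16 (128, 128)
```

A 256 × 256 fingerprint image yields 4 sub-images, matching the counts given for the databases below.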

The proposed compression scheme is summarized in **Figure 10**. It consists of applying, to each sub-image (*I<sub>l</sub>*, *l* = 1, 2, …) constituting the original image, the QW or PQW transform, then the SPIHT algorithm with respect to a compression bit rate. The resulting bit streams are gathered to construct the compressed image.

**Figure 9.**
*Example of dividing image 'Lena' into four images of size m<sup>2</sup>.*


**Figure 10.** *The proposed compression scheme*


In order to test the proposed compression algorithm, three gray-scale image databases were employed. The first database consists of 60 images (20 satellite images, 20 natural images and 20 medical images) of size 512 × 512 [16], the second database consists of 114 textural images of size 512 × 512 [17] and the third database consists of 168 Shivang Patel fingerprint images of size 256 × 256. Each image from the databases is divided into sub-images of size 128 × 128 (16 sub-images in the case of a 512 × 512 image and 4 sub-images in the case of a 256 × 256 image).

Considering the packet quincunx wavelet transform, one has tested the 260 possible 8-level decompositions (called decomposition bases) on the sub-images, in order to select the optimal packet decomposition base (in the sense of the evaluation parameters). The performance of the PQWT is compared with the QWT and with the entropy-based PQWT decomposition base (called the proper base).

#### **4.2 Reconstruction scheme**

The proposed reconstruction scheme is shown in **Figure 11**. The compressed image is divided into bit streams according to the number of sub-images. Each bit

**Figure 11.** *The proposed reconstruction scheme*

stream is decoded and transformed using the inverse QW or inverse PQW transform, to finally obtain the reconstructed sub-image *I<sub>l</sub>* (*l* = 1, 2, …). All sub-images are then gathered to construct the reconstructed image.

#### **4.3 Evaluation parameters of compressed image quality**

Choosing a compression bit rate *Rc* < 8 *bpp* for a gray-level image leads to a degradation of the original image. This degradation can be measured using evaluation parameters of compressed image quality. In this chapter, three evaluation parameters are adopted [10]:

• Peak Signal to Noise Ratio: the PSNR parameter is given by

$$\text{PSNR} = 10 \times \log\_{10} \left( \frac{\left(2^{R} - 1\right)^{2}}{\text{MSE}} \right), \text{where } \text{MSE} = \frac{1}{N \times M} \sum\_{i=1}^{N} \sum\_{j=1}^{M} \left( I(i,j) - \hat{I}(i,j) \right)^{2} \text{ is}$$

the Mean Square Error, *I* is the original image, *Î* is the reconstructed image and *R* designates the resolution of a gray-scale image.

• Mean Structural SIMilarity index: the MSSIM index is the average over all local windows of the product of three functions, as follows: $MSSIM(I,\hat{I}) = \frac{1}{M} \sum\_{i=1}^{M} l(I\_i,\hat{I}\_i) \cdot c(I\_i,\hat{I}\_i) \cdot s(I\_i,\hat{I}\_i)$, where *l*, *c* and *s* are the luminance, contrast and structure comparison functions.

• Visual Information Fidelity: the VIF parameter is a ratio of conditional mutual information measured over all decomposition parts of the image.
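A sketch of the PSNR computation as defined above, for 8-bit gray-scale images:

```python
import numpy as np

def psnr(I: np.ndarray, I_hat: np.ndarray, R: int = 8) -> float:
    """Peak signal-to-noise ratio in dB between original and reconstruction."""
    mse = np.mean((I.astype(float) - I_hat.astype(float)) ** 2)
    return 10.0 * np.log10((2 ** R - 1) ** 2 / mse)

a = np.zeros((4, 4))
b = np.full((4, 4), 5.0)          # uniform error of 5 gray levels -> MSE = 25
print(round(psnr(a, b), 2))       # -> 34.15
```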


#### **5. Results and discussion**


The main purpose of this study is to establish a compression strategy using the packet quincunx wavelet transform, whatever the type or the size of the image. Therefore, one has begun by applying to the 20 satellite images an exhaustive search among the 260 PQWT decomposition bases. To evaluate the compression quality employing the PQWT (for a given bit rate), the relative errors (*εPSNR*, *εVIF* and *εMSSIM*) are used to distinguish between the different performance curves. These parameters are expressed as follows:

$$
\varepsilon\_X = \frac{m\_X^{QWT} - m\_X^{PQWT}}{m\_X^{QWT}} \times 100 \text{ [\%]} \tag{5}
$$

where, *X* designates an evaluation parameter (PSNR, VIF or MSSIM) and *mX* is the average of *X* over all database images. For a negative value of *εX*, the PQWT outperforms the QWT.
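Eq. (5) can be sketched as follows (the input values are illustrative toy numbers, not results from the chapter):

```python
import numpy as np

# Relative error of Eq. (5), comparing the database-averaged value of an
# evaluation parameter X (PSNR, VIF or MSSIM) under QWT and PQWT.

def relative_error(x_qwt, x_pqwt) -> float:
    """epsilon_X in percent; negative means PQWT outperforms QWT."""
    m_qwt, m_pqwt = np.mean(x_qwt), np.mean(x_pqwt)
    return float((m_qwt - m_pqwt) / m_qwt * 100.0)

# Toy PSNR values over a 3-image database: PQWT is 1 dB better on each image.
print(relative_error([30.0, 32.0, 34.0], [31.0, 33.0, 35.0]))  # -> -3.125
```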

**Figure 12** illustrates the best 17 decomposition bases that achieve minimum values of *εX*. Base 0 refers to QWT decomposition.

The second step consists of applying these 17 decomposition bases to the other databases, and then evaluating the compressed image quality using the relative errors given in Eq. (5). **Table 1** shows, for each database, the top 10 decomposition bases, with the five decomposition bases common to all databases marked in green.

The evaluation curves in the sense of relative error are shown in **Figure 13**. Each figure compares the evaluation curves of the top 10 decomposition bases, in addition to the PQWT proper decomposition. It can be observed

**Figure 12.** *Best 17 decomposition bases*


#### **Table 1.**

*Top 10 decomposition bases of each database*

that the proper decomposition achieves lower performance in the case of satellite, textural and fingerprint images, where an average visibility (VIF) degradation of 8% is measured for fingerprint images. On the other hand, the proper decomposition curves are competitive with the other decomposition curves in the case of natural and medical images, especially for *εPSNR* at low bit rate values.


**Figure 13.** *The evaluation curves vs. bit rate in sense of relative error. (a) Satellite images, (b) natural images, (c) medical images, (d) textural images, (e) fingerprint images*

Considering the performance curves of the fingerprint images (**Figure 13**e), negative values of the relative errors are observed in the low bit rate region, which means that the chosen decomposition bases achieve better performance than base 0 (QWT).

Regarding the five common decomposition bases (marked in green in **Table 1**), the curves of **Figure 13** show that the decomposition base 3 achieves slightly better performance; therefore, in the rest of the chapter, one adopts this decomposition base.

In order to illustrate the compression effect on the database images, one has chosen from each database the image that satisfies the minimum *εPSNR*. The chosen images are given in **Figure 14**.


**Figure 14.** *Original images from databases satisfying the minimum εPSNR. (a) Satellite image, (b) natural image, (c) medical image, (d) textural image, (e) fingerprint image.*

**Table 2.**
*Performance in sense of peak SNR of the five adopted images.*

| Image | Bit rate | Base 0 | Base 3 | Proper decomposition |
|---|---|---|---|---|
| Satellite | 0.25 bpp | 28.48 dB | 28.49 dB | 28.26 dB |
| Satellite | 2.00 bpp | 38.53 dB | 38.58 dB | 38.21 dB |
| Natural | 0.25 bpp | 27.36 dB | 27.38 dB | 27.16 dB |
| Natural | 2.00 bpp | 38.91 dB | 39.20 dB | 38.36 dB |
| Medical | 0.25 bpp | 39.92 dB | 40.47 dB | 39.50 dB |
| Medical | 2.00 bpp | 47.24 dB | **56.59 dB** | 46.98 dB |
| Textural | 0.25 bpp | 14.62 dB | 15.48 dB | 15.23 dB |
| Textural | 2.00 bpp | 23.96 dB | 24.55 dB | 24.06 dB |
| Fingerprint | 0.25 bpp | 17.78 dB | 17.87 dB | 17.04 dB |
| Fingerprint | 2.00 bpp | 30.46 dB | 31.98 dB | 30.06 dB |

**Table 2** gives the performance in sense of peak SNR of the five adopted images, for two values of bit rate (0.25 and 2.00 bpp). These results show a slight superiority of decomposition base 3 in comparison with base 0 and the proper decomposition, except for the medical image, where a difference of 9.35 dB is observed between base 3 and base 0 at a bit rate of 2.00 bpp.

As observed in the JPEG compression scheme, the image division into sub-images causes block boundary artifacts; these artifacts are visible at low bit rate values. This phenomenon is clearer for the natural, medical and fingerprint images at a bit rate of 0.25 bpp.

To remedy the problem of block boundary artifacts, one proposes to add two processes to the compression scheme (as shown in **Figure 15**):

• The two sub-images *I*<sub>1</sub> and *I*<sub>2</sub> overlap and have *d* common pixels (an example of image division with overlapping is given in **Figure 16**).

• Each sub-image is weighted by a 2D Gaussian window defined by the sub-image size *m* and the minimum amplitude *a* (as shown in **Figure 17**).

To avoid the pixel redundancy caused by the overlapping, a pixel is given weight ½ in the case of two overlapped sub-images, and weight ¼ in the case of four overlapped sub-images. Therefore, as summarized in **Figure 18**, the total size *M* of a sub-image may be expressed as follows:

• In the case of two overlaps, the size of the sub-image equals
$$M = (m-d)^2 \cdot 1 + 2d(m-d) \cdot \tfrac{1}{2} + d^2 \cdot \tfrac{1}{4} = \left(m - \tfrac{d}{2}\right)^2,$$

• In the case of three overlaps, the size of the sub-image equals
$$M = (m-2d)(m-d) \cdot 1 + 2d(m-d) \cdot \tfrac{1}{2} + d(m-2d) \cdot \tfrac{1}{2} + 2d^2 \cdot \tfrac{1}{4} = (m-d)\left(m - \tfrac{d}{2}\right),$$

• In the case of four overlaps, the size of the sub-image equals
$$M = (m-2d)^2 \cdot 1 + 4d(m-2d) \cdot \tfrac{1}{2} + 4d^2 \cdot \tfrac{1}{4} = (m-d)^2.$$
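The three closed-form sizes can be checked numerically. The sketch below (with illustrative helper names, not from the chapter) sums the weighted pixel counts of the interior, strip and corner regions for each overlap configuration and verifies the closed forms:

```python
# Effective sub-image size M for an m-by-m sub-image with d overlapping
# pixels per shared border. Weights: 1 (no sharing), 1/2 (pixel shared by
# two sub-images), 1/4 (pixel shared by four sub-images).

def size_two_overlaps(m, d):
    # Corner sub-image: overlaps on two sides.
    return (m - d)**2 * 1 + 2 * d * (m - d) * 0.5 + d**2 * 0.25

def size_three_overlaps(m, d):
    # Edge sub-image: overlaps on three sides.
    return ((m - 2 * d) * (m - d) * 1 + 2 * d * (m - d) * 0.5
            + d * (m - 2 * d) * 0.5 + 2 * d**2 * 0.25)

def size_four_overlaps(m, d):
    # Interior sub-image: overlaps on all four sides.
    return (m - 2 * d)**2 * 1 + 4 * d * (m - 2 * d) * 0.5 + 4 * d**2 * 0.25

m, d = 128, 4
assert size_two_overlaps(m, d) == (m - d / 2)**2        # (m - d/2)^2
assert size_three_overlaps(m, d) == (m - d) * (m - d / 2)
assert size_four_overlaps(m, d) == (m - d)**2
```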

These new sub-image sizes permit defining a bit rate for each sub-image according to the number of overlaps. In other words, the bit rate of an overlapped sub-image equals:

$$R_c' = R_c \cdot \frac{M}{m^2} \le R_c \tag{6}$$


where *R*<sub>*c*</sub> denotes the bit rate without overlapping and *F*<sub>*r*</sub> = *M*/*m*<sup>2</sup> is called the reduction factor of bit rate. The bit rate of the overall image is the average over the sub-images' bit rates.

In order to demonstrate the effect of the overlapping pixels (*d*) on bit rates, one has plotted in **Figure 19** the reduction factor curves for *m* = 128. It is clear from these curves that a stronger reduction of the bit rate (a smaller *F*<sub>*r*</sub>) leads to lower compression performance; therefore, a reduction factor threshold of 0.9 has to be respected.
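As a rough check on these numbers, the per-case reduction factor can be tabulated against *d* for *m* = 128 (a sketch only; the chapter's overall *F*<sub>*r*</sub> averages the two-, three- and four-overlap cases over all sub-images of an image):

```python
# Reduction factor F_r = M / m^2 for a single sub-image, per overlap case.
# The four-overlap (interior) case gives the strongest reduction.

def reduction_factor(m, d, overlaps=4):
    if overlaps == 2:                      # corner sub-image
        return (m - d / 2)**2 / m**2
    if overlaps == 3:                      # edge sub-image
        return (m - d) * (m - d / 2) / m**2
    if overlaps == 4:                      # interior sub-image
        return (m - d)**2 / m**2
    return 1.0                             # no overlap

m = 128
for d in (1, 2, 3, 4, 8):
    fr = reduction_factor(m, d)
    status = "ok" if fr >= 0.9 else "below the 0.9 threshold"
    print(f"d = {d}: worst-case Fr = {fr:.3f} ({status})")
```

For *m* = 128 the worst-case factor stays above 0.9 for the small *d* values used in the chapter, but drops below it around *d* = 8, which illustrates why the overlap must stay narrow.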


**Figure 15.** *Proposed compression scheme employing overlapped sub-images.*

**Figure 16.** *Example of dividing image 'Lena' with overlapping.*

The proposed reconstruction scheme, given in **Figure 20**, divides the output of the inverse QWT or the inverse PQWT by the same 2D Gaussian windows defined in the compression process. The reconstructed sub-images are overlapped in the same manner as in the compression process, to construct the reconstructed image.
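In one dimension, this un-weight-then-overlap step can be sketched as follows (an illustrative reassembly that averages the *d* shared samples; the chapter does not spell out the exact blending, so this is an assumption):

```python
# Reassemble a signal from windowed, overlapping segments: divide each
# segment by its window, then overlap-add, averaging the d shared samples.

def reassemble(segments, windows, d):
    # Un-weight each reconstructed segment by its (nonzero) window values.
    unweighted = [[x / w for x, w in zip(seg, win)]
                  for seg, win in zip(segments, windows)]
    out = list(unweighted[0])
    for seg in unweighted[1:]:
        for k in range(d):                 # blend the d common samples
            out[-d + k] = 0.5 * out[-d + k] + 0.5 * seg[k]
        out.extend(seg[d:])
    return out

# With unit windows and consistent overlaps, the original samples return.
flat = reassemble([[1, 2, 3], [3, 4, 5]], [[1, 1, 1], [1, 1, 1]], d=1)
print(flat)  # → [1, 2, 3.0, 4, 5]
```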

The evaluation curves in sense of PSNR and VIF parameters are shown in **Figure 21**. Each figure compares the evaluation curves of the PQWT with decomposition base 3 and with the proper decomposition, in addition to the JPEG compression standard. It can be observed that:

• in the case of the satellite and fingerprint images (**Figure 21**.a and **Figure 21**.e), the proposed PQWT compression scheme presents better performance than JPEG,

• in the case of the natural and textural images (**Figure 21**.b and **Figure 21**.d), the JPEG standard outperforms the proposed compression scheme,

• in the case of the medical image (**Figure 21**.c), both schemes present nearly the same performance.




**Figure 17.** *2D Gaussian window. (a) Gaussian window with parameters m and a. (b) 2D Gaussian window with m = 128 and a = 0.834.*

**Figure 18.** *Computation of pixel sizes for all possible overlapping cases.*
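A window of the shape shown in **Figure 17** can be generated as below. This is one plausible separable construction (peak 1 at the centre, amplitude *a* at the ends of each axis); whether the minimum amplitude *a* applies per dimension or to the 2D corners is an assumption of this sketch, not a statement of the chapter's exact window:

```python
import math

def gaussian_window_1d(m, a):
    # Length-m Gaussian with peak 1 at the centre and value `a` at both
    # ends. Solving exp(-c^2 / (2 s^2)) = a for the spread s, with
    # half-width c = (m - 1) / 2, gives s = c / sqrt(-2 ln a).
    c = (m - 1) / 2
    s = c / math.sqrt(-2.0 * math.log(a))
    return [math.exp(-((i - c) ** 2) / (2 * s ** 2)) for i in range(m)]

def gaussian_window_2d(m, a):
    # Separable 2D window: outer product of two 1D windows.
    w = gaussian_window_1d(m, a)
    return [[wi * wj for wj in w] for wi in w]

# Parameters of Figure 17.b.
win = gaussian_window_2d(128, 0.834)
```

Because the window never reaches zero (its minimum stays near *a*), the reconstruction scheme can safely divide by it.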




In **Table 3**, the performance in sense of PSNR and VIF parameters of the five adopted images is given, at a bit rate of 0.54 bpp for the textural image and 0.50 bpp for the others.

**Figure 19.** *Reduction factor of bit rate vs. overlapping pixels.*

**Figure 20.** *Proposed reconstructed scheme.*

To compare the performance of the proposed scheme (PQWT with *a* and *d* parameters), two other schemes are involved: the PQWT with *a* = 1 and *d* = 0 (referring to the first proposed scheme) and the JPEG standard.

To fix the *a* and *d* parameters of PQWT – base 3 and PQWT – proper decomposition, an exhaustive search (with respect to the reduction factor threshold) has been performed to get the maximal value of the PSNR parameter. It can be observed in **Table 3** that the *a* and *d* parameters differ from one image to another; therefore, these parameters have to be included in the compressed file, as well as the size of the overall image.

The obtained results show a slight superiority of PQWT – base 3 in comparison with JPEG and PQWT – proper decomposition, except for the natural and medical images, where the JPEG standard is slightly better.

**Figure 22** compares the visual quality of the five adopted compressed images, where details of size 128 × 128 from the original, PQWT – base 3 and JPEG images are magnified. From these figures, it can be observed that the PQWT compressed images present a lower block boundary artifact effect in comparison with the JPEG images (especially for the satellite and textural images), and preserve the continuity of their detail shapes.


**Figure 21.** *The evaluation curves vs. bit rate in sense of PSNR and VIF. (a) Satellite image, (b) natural image, (c) medical image, (d) textural image, (e) fingerprint image.*

**Table 3.**
*Performance in sense of PSNR and VIF parameters of the five adopted images. Each cell gives PSNR / VIF.*

| Image | Bit rate | PQWT – Base 3 (*a* = 1, *d* = 0) | PQWT – Base 3 | PQWT – Proper decomposition | JPEG |
|---|---|---|---|---|---|
| Satellite | 0.50 bpp | 30.63 dB / 0.38 | 30.71 dB / 0.39 (*a* = 0.969, *d* = 4, *F*<sub>*r*</sub> = 0.950) | 30.54 dB / 0.38 (*a* = 0.962, *d* = 4, *F*<sub>*r*</sub> = 0.950) | 29.83 dB / 0.36 |
| Natural | 0.50 bpp | 30.24 dB / 0.44 | 30.43 dB / 0.54 (*a* = 0.834, *d* = 3, *F*<sub>*r*</sub> = 0.962) | 30.00 dB / 0.44 (*a* = 1, *d* = 4, *F*<sub>*r*</sub> = 0.950) | 32.82 dB / 0.51 |
| Medical | 0.50 bpp | 44.64 dB / 0.69 | 45.09 dB / 0.70 (*a* = 0.938, *d* = 2, *F*<sub>*r*</sub> = 0.975) | 44.76 dB / 0.68 (*a* = 0.955, *d* = 2, *F*<sub>*r*</sub> = 0.975) | 46.00 dB / 0.74 |
| Textural | 0.54 bpp | 21.02 dB / 0.31 | 21.12 dB / 0.31 (*a* = 1, *d* = 1, *F*<sub>*r*</sub> = 0.992) | 20.48 dB / 0.29 (*a* = 0.834, *d* = 1, *F*<sub>*r*</sub> = 0.992) | 20.48 dB / 0.29 |
| Fingerprint | 0.50 bpp | 18.06 dB / 0.22 | 18.14 dB / 0.22 (*a* = 1, *d* = 4, *F*<sub>*r*</sub> = 0.950) | 17.83 dB / 0.21 (*a* = 1, *d* = 2, *F*<sub>*r*</sub> = 0.975) | 17.72 dB / 0.19 |

**Figure 22.** *Magnified detail from the five adopted images. (a) Satellite image – 0.50 bpp, (b) natural image – 0.50 bpp, (c) medical image – 0.50 bpp, (d) textural image – 0.54 bpp, (e) fingerprint image – 0.50 bpp.*

For a deeper study of the block boundary artifact effect between the two proposed compression schemes employing PQWT – base 3, one has focused only on the overlapping regions in the five adopted images, where bands of 8 pixels per line and per column around these regions are extracted in order to measure the effect of block boundary artifacts in sense of PSNR. By denoting *PSNR*<sub>1</sub> and *PSNR*<sub>2</sub> the measured PSNRs of the extracted regions employing, respectively, the first and second proposed schemes, the average block boundary artifact effect is measured by


$$E_{bba} = \sum_{j=1}^{R} \frac{PSNR_2(j) - PSNR_1(j)}{PSNR_1(j)} \times 100 \;[\%] \tag{7}$$


where *R* values of the bit rate (*R*<sub>*c*</sub>), from 0.1 bpp up to 8.1 bpp, are employed to evaluate the PSNRs.

**Table 4.**
*Average block boundary artifact effect in sense of PSNR for the five adopted images.*

| | Satellite image | Natural image | Medical image | Textural image | Fingerprint image |
|---|---|---|---|---|---|
| *E*<sub>*bba*</sub> [%] | 2.19 | 27.70 | 22.87 | 9.41 | 0.63 |

In **Table 4**, the values of *E*<sub>*bba*</sub> are given for the five adopted images. From these results, it can be concluded that, in comparison with the first compression scheme, the second proposed compression scheme presents a significant reduction of 27.70% and 22.87% of the effect of block boundary artifacts for, respectively, the natural and medical images. However, a tiny reduction of *E*<sub>*bba*</sub> is observed for the fingerprint image, which means that further processing on sub-image boundaries, using local 2D filters [18], is necessary for such an image.
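Equation (7) can be computed directly from the two PSNR series. The sketch below uses made-up PSNR values purely for illustration; in the chapter, *PSNR*<sub>1</sub> and *PSNR*<sub>2</sub> are measured on the extracted 8-pixel bands over *R* bit rates from 0.1 to 8.1 bpp:

```python
# Average block boundary artifact effect E_bba (Eq. 7): sum over all R
# evaluated bit rates of the relative PSNR improvement of the second
# scheme over the first, in percent.

def e_bba(psnr1, psnr2):
    return sum((p2 - p1) / p1 * 100 for p1, p2 in zip(psnr1, psnr2))

psnr_first = [20.0, 25.0, 30.0]    # hypothetical band PSNRs, scheme 1
psnr_second = [21.0, 25.5, 30.3]   # hypothetical band PSNRs, scheme 2
print(round(e_bba(psnr_first, psnr_second), 2))  # → 8.0
```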

#### **6. Conclusion**

This chapter introduces an image compression scheme that employs quincunx wavelet decomposition improved by wavelet packets. This process permits focusing on both the approximation and detail parts of the image decomposition.

Using the concept of image division into sub-images (employed in the JPEG standard compression algorithm), the effect of block boundary artifacts occurs especially at the low range of compression bit rates. To overcome this problem, the sub-images are weighted by a 2D Gaussian window and overlapped with respect to the reduction factor of compression bit rate. This means that, in addition to the overall image size, two parameters have to be included in the compressed file: the minimum amplitude of the 2D window and the number of overlapped pixels.

To present the proposed compression algorithm as a standard, its performance was compared, in sense of the evaluation parameters, to that of the JPEG standard. The main improvement was seen in the capacity of the proposed scheme to provide better image visual quality (detail shape continuity). This means that, on the one hand, it is possible to reduce image file sizes without reducing the image visual quality, and, on the other hand, to increase the storage capacity of photographic devices.

As a result, this compression technique permits creating benchmarks and databases with low storage requirements, whatever the image nature (satellite, medical, natural or textural).

In this work, one has focused on gray-scale images in order to present the proposed compression scheme. It is necessary, in future works, to investigate its efficiency on video and color image compression.

#### **Conflict of interest**

The author declares that there is no conflict of interest.


#### **Nomenclature**

ECG Electro-CardioGram
RLE Run Length Coding
LZW Lempel-Ziv-Welch
JPEG Joint Photographic Experts Group
DCT Discrete Cosine Transform
DWT Discrete Wavelet Transform
QWT Quincunx Wavelet Transform
WP Wavelet Packet
PQWT Packet-based Quincunx Wavelet Transform
IQWT Inverse Quincunx Wavelet Transform
IPQWT Inverse Packet-based Quincunx Wavelet Transform
CDF Cohen-Daubechies-Feauveau
EZW Embedded Zerotree Wavelet
SPIHT Set Partitioning In Hierarchical Trees
PSNR Peak Signal to Noise Ratio
MSSIM Mean Structural SIMilarity index
VIF Visual Information Fidelity



## **Author details**

Abdesselam Bassou

Information processing and telecommunication laboratory (LTIT), University of Tahri Mohammed Bechar, Algeria

\*Address all correspondence to: bassou.abdesselam@univ-bechar.dz

© 2020 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## **References**

[1] Barmase S, Das S, Mukhopadhyay S. Wavelet transform-based analysis of QRS complex in ECG signals. 2013; arXiv preprint arXiv:1311.6460.

[2] Al-Enizi F, Al-Asmari A. DWT-based data hiding technique for videos ownership protection [Chapter]. In: Baleanu D. Wavelet Transform and Complexity. IntechOpen; 2019. DOI: 10.5772/intechopen.84963

[3] Kanagaraj H, Muneeswaran V. Image compression using Haar discrete wavelet transform. In: Proceedings of 2020 5th International Conference on Devices, Circuits and Systems (ICDCS'20); 5–6 March 2020; Coimbatore, India. p. 271–274. DOI: 10.1109/ICDCS48716.2020.243596.

[4] Patlayenko M, Osharovska O, Pyliavskyi V, Solodka V. Wavelet feature family for image compression. In: Proceedings of 27th National Conference with International Participation (TELECOM'19); 30–31 October. 2019; Sofia, Bulgaria. p. 16–18. DOI: 10.1109/ TELECOM48729.2019.8994877.

[5] Ziv J, Lempel A. A universal algorithm for sequential data compression. IEEE trans. inform. theory. 1977;23.3:337–343. DOI: 10.1109/ TIT.1977.1055714

[6] Huffman D. A method for the construction of minimum redundancy codes. In: Proceedings of the IRE; September 1952. p. 1098–1101. DOI: 10.1109/JRPROC.1952.273898

[7] JPEG website. Available from http:// jpeg.org/ [Accessed: 2020-04-10]

[8] Daubechies I. Ten lectures on wavelets. Society for Industrial and Applied Mathematics; 1992. 357 p. DOI: 10.1137/1.9781611970104

[9] Feilner M, Van De Ville D, Unser M. An orthogonal family of quincunx wavelets with continuously adjustable order. IEEE trans. image proc. 2005;14.4:499–510. DOI: 10.1109/TIP.2005.843754

**229**

[10] Beladgham M. Construction d'une technique d'aide au diagnostic en imagerie médicale: Application à la compression d'images [thesis]. Tlemcen Algeria: Abou Bakr Belkaid University; 2012

[11] Cohen A, Daubechies I, Feauveau JC. Biorthogonal bases of compactly supported wavelets. Communications on pure and applied mathematics. 1992;45:485–560. DOI: 10.1002/cpa.3160450502

[12] Taubman D, Michael M. Image compression fundamentals, standards and practice. Springer; 2002. 777 p. DOI: 10.1007/978-1-4615-0799-4

[13] Shapiro J. Adaptive McClellan transformations for quincunx filter banks. IEEE trans. signal process. 1994; 42.3:642–648. DOI: 10.1109/78.277856

[14] Linderhed A. Adaptive image compression with wavelet packets and empirical mode decomposition [thesis]. Linköping Sweden: Linköping studies in science and technology; 2004

[15] Sprljan N, Grgic S, Mrak M, Grgic M. Modified SPIHT algorithm for wavelet packet image coding. In: Proceedings of the IEEE International Symposium on Video/Image Processing and Multimedia Communications (VIPromCom'02); 16–19 June 2002; Zadar, Croatia. p. 189–194. DOI: 10.1109/VIPROM.2002.1026653

[16] Allaoui CE, Bassou A, Benyahia I, Khelifi M. Selection of compression test images using variance-based statistical method. Indonesian journal of electrical engineering and computer science. 2019;16.1:243–258. DOI: 10.11591/ijeecs. v16.i1.pp243-258


[17] Brodatz P. Textures: a photographic album for artist and designers. Dover Publications, New York; 1966

[18] Lim JS. Two-dimensional signal and image processing. Prentice-Hall International; 1990. 694 p.

**Chapter 11**

Uncertainty and the Oracle of Market Returns: Evidence from Wavelet Coherence Analysis

*Joan Nix and Bruce D. McNevin*

#### **Abstract**

Wavelet methodology is employed to investigate the statistical relationship between three well-accepted measures of uncertainty and both market and sector returns. Our primary goal is to determine whether uncertainty is sector specific. Although there are periods when the market works effectively as an oracle capturing uncertainty, we also find sector-specific uncertainty. The wavelet equivalent of correlation, coherence, is used to determine the presence of sector-specific uncertainty. We find that allowing localized information in the time-frequency domain is critical for separating out sector-specific uncertainty from market uncertainty.

**Keywords:** finance, sectors, wavelets, uncertainty, coherence

#### **1. Introduction**

Uncertainty shocks call the market's knowledge-gathering role into question. The equity market works well as an oracle when it provides rapid price discovery that reflects the underlying fundamentals of an economy. But when facing uncertainty shocks, the equity market's function as a consensus mechanism that reveals economic reality appears, at first glance, poorly suited to the environment it faces. An oracle needs a reliable channel for obtaining information. In the face of uncertainty, the equity market turns into a network of pipes where funds flow in ways that leave many skilled observers of market moves caught off guard. The shock filters through to the inter-temporal trade-offs of investors and makes forecasting more of a bet on imagined scenarios than the result of astute modeling that is carefully tested with historical data.

The relevance of wavelet methodology for examining whether the uncertainty measures are correlated at different scales and frequencies with market and sector returns may be more easily imagined with a metaphor.<sup>1</sup> The uncertainty shock operates as a push from behind that a person strolling down the street experiences. The push may be hard and throw the person completely off his path. He may end up face down and in a panic, imagining the worst outcome. The push may instead be soft, so that the person experiences a momentary feeling of panic but quickly recovers and

<sup>1</sup> The use of imagery as a guide to economic understanding has a rich history. See, for example, Samuelson's use of bicycle imagery to explain how a real economic system is capable of resolving indeterminacy even when the path between present and future is far from smooth [1].

