### **2. Blind and non-blind methods**

As described before, watermarking methods can be classified according to whether or not the original data is used in the extraction/detection procedure. In 1997, Cox et al. [5] proposed a watermarking method in which the watermark is embedded into the lower-frequency coefficients in the DCT domain. Their method needs the original image and the embedding strength coefficient to detect the presence of the watermark. However, the original source might not be available in many applications. Barni et al. [6] presented a method to overcome this limitation of non-blind watermarking: they correlate the watermark sequence directly with all coefficients of the received image and then compare the correlation coefficient with a detection threshold. Only the watermark sequence and the scaling factor are needed for watermark detection. This approach is widely utilized in the watermarking community. However, it turns out that blind methods are less secure than non-blind methods.
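
Both detection styles can be sketched numerically. The additive embedding rule, the strength *k*, and the signal sizes below are illustrative assumptions, not the exact parameters of [5] or [6]: the non-blind detector subtracts the known original before correlating, while the blind detector correlates the watermark with the received coefficients directly.

```python
import random

random.seed(1)

N = 4096
host = [random.gauss(0.0, 10.0) for _ in range(N)]   # host coefficients V_i
wm = [random.choice((-1.0, 1.0)) for _ in range(N)]  # watermark sequence w_i
k = 0.5                                              # embedding strength

# Additive embedding: V'_i = V_i + k * w_i
marked = [v + k * w for v, w in zip(host, wm)]

# Non-blind detection: subtract the known original, then correlate.
def detect_nonblind(received, original, w):
    diff = [r - o for r, o in zip(received, original)]
    return sum(d * wi for d, wi in zip(diff, w)) / len(w)

# Blind detection: correlate the watermark with the received
# coefficients directly (the host acts as interference).
def detect_blind(received, w):
    return sum(r * wi for r, wi in zip(received, w)) / len(w)

print(detect_nonblind(marked, host, wm))  # ≈ k, host fully removed
print(detect_blind(marked, wm))           # ≈ k, but with host noise
```

With the original removed, the non-blind response equals *k* almost exactly; the blind response only fluctuates around *k*, because the host itself acts as interference in the correlation.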

### **3. Watermarking in transform domains**

Watermarking methods can be classified according to whether they use embedding based on additive algorithms or quantization algorithms.

#### **3.1 Additive algorithms**

Additive embedding strategies are characterized by a linear modification of the host image and correlative processing in the detection stage. A considerable number of image watermarking methods share this architecture. In most algorithms, the signature data is a sequence of numbers *w<sub>i</sub>* of length *N* that is embedded in a suitably selected subset of the host signal coefficients. The basic and commonly used embedding formulas are given by Eqs. (1) and (2):

$$V\_{i}' = V\_{i}(1 + k \cdot w\_{i}) \tag{1}$$


*Blind Wavelet-Based Image Watermarking DOI: http://dx.doi.org/10.5772/intechopen.88131*


$$V\_{i}' = V\_{i} + k \cdot w\_{i} \tag{2}$$

where *k* is a weighting factor that influences both the robustness and the visibility, and *V<sub>i</sub>*′ is the resulting modified host data coefficient carrying the watermark information. The majority of watermarking systems presented in the literature fall into this class, differing chiefly in the signal design and in the embedding and retrieval of the watermark content. The extraction process is accomplished by applying the inverse embedding formulas.
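
As a concrete sketch, the two embedding rules and their inverse (non-blind) extraction can be written as follows; the coefficient values and the strength *k* = 0.1 are illustrative:

```python
# Sketch of the two additive embedding rules and their inverse
# (non-blind) extraction; sample coefficients and k are illustrative.
def embed_multiplicative(V, w, k):     # Eq. (1): V'_i = V_i(1 + k*w_i)
    return [v * (1.0 + k * wi) for v, wi in zip(V, w)]

def embed_additive(V, w, k):           # Eq. (2): V'_i = V_i + k*w_i
    return [v + k * wi for v, wi in zip(V, w)]

def extract_multiplicative(Vp, V, k):  # inverse of Eq. (1)
    return [(vp / v - 1.0) / k for vp, v in zip(Vp, V)]

def extract_additive(Vp, V, k):        # inverse of Eq. (2)
    return [(vp - v) / k for vp, v in zip(Vp, V)]

V = [40.0, -25.0, 60.0, 12.5]          # selected host coefficients
w = [1.0, -1.0, -1.0, 1.0]             # watermark sequence
k = 0.1

rec_mul = extract_multiplicative(embed_multiplicative(V, w, k), V, k)
rec_add = extract_additive(embed_additive(V, w, k), V, k)
print(all(abs(r - wi) < 1e-9 for r, wi in zip(rec_mul, w)))  # True
print(all(abs(r - wi) < 1e-9 for r, wi in zip(rec_add, w)))  # True
```

Both inverses recover the watermark exactly (up to floating-point error), but only because the original coefficients *V* are available; this is precisely what a blind detector must do without.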

The algorithm developed by Dugad et al. [3] makes use of a sequence of pseudorandom Gaussian real numbers matching the size of the detailed sub-bands of the wavelet domain. The authors performed a three-level decomposition with Daubechies-8 filters and selected all coefficients in the detailed sub-bands whose magnitude is above a given threshold; the watermark is then added to these selected coefficients.

For a blind retrieval of the watermark, a statistical detector was proposed based on the following formula:

$$\delta = \frac{1}{N}\sum\_{i=1}^{N} V\_{i}^{\*} \cdot w\_{i} \tag{3}$$

where *δ* is estimated by correlating the watermark sequence *w* directly with all *N* coefficients of the received image *V\**. A large number of random sequences can be tested, but only the sequence that was originally embedded yields the highest correlation coefficient; we can therefore conclude that the image has been watermarked with *w*. A detection threshold *τ* can be established to make the detection decision when *δ* exceeds *τ*. The detection threshold can be derived either experimentally or analytically.

The threshold *τ* is estimated using Eq. (4):

$$\tau = \frac{\alpha}{2N} \sum\_{i=1}^{N} |V\_{i}^{\*}| \tag{4}$$

where only the coefficients above the detection threshold are considered.
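
Putting Eqs. (3) and (4) together, the blind detector can be sketched as follows. The embedding rule *V<sub>i</sub>*′ = *V<sub>i</sub>* + *α*·|*V<sub>i</sub>*|·*w<sub>i</sub>* is an assumed Dugad-style rule, plain Gaussian samples stand in for the DWT sub-band coefficients, and *α* and the threshold are illustrative:

```python
import random

random.seed(7)

alpha = 0.3   # embedding strength (illustrative)
T = 10.0      # magnitude threshold for selecting coefficients

# Illustrative stand-ins for detailed sub-band coefficients of a
# three-level DWT, plus a pseudorandom Gaussian watermark sequence.
coeffs = [random.gauss(0.0, 20.0) for _ in range(2000)]
wm = [random.gauss(0.0, 1.0) for _ in range(2000)]

# Assumed Dugad-style embedding: coefficients above T receive
# V'_i = V_i + alpha * |V_i| * w_i; the rest are untouched.
marked = [v + alpha * abs(v) * w if abs(v) > T else v
          for v, w in zip(coeffs, wm)]

# Blind detection, Eqs. (3) and (4): only received coefficients
# whose magnitude exceeds the threshold enter the correlation.
selected = [(v, w) for v, w in zip(marked, wm) if abs(v) > T]
N = len(selected)
delta = sum(v * w for v, w in selected) / N                  # Eq. (3)
tau = alpha / (2.0 * N) * sum(abs(v) for v, _ in selected)   # Eq. (4)
print(delta > tau)  # detection decision
```

Neither the original image nor the unwatermarked coefficients appear in the detection step; only the watermark sequence, *α*, and the received coefficients are needed.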

#### **3.2 Algorithms based on quantization**

Quantization schemes perform nonlinear modifications during embedding and detect the embedded message by quantizing the received samples, mapping each one to its nearest reconstruction point. Quantization is the process of mapping a large, possibly infinite, set of values to a much smaller set. A quantizer consists of an encoder mapping and a decoder mapping. The range of source values is divided into a number of intervals, and the encoder represents each interval with a code word assigned to that interval. The decoder reconstructs a value for every code word produced by the encoder. Scalar quantizers take scalar values as input and output code words, while vector quantizers work with vectors of input sequences or blocks of the source input.
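
A minimal uniform scalar quantizer makes the encoder/decoder pair concrete; the step size and sample values are illustrative:

```python
# Minimal uniform scalar quantizer: the encoder maps each value to the
# index (code word) of its interval; the decoder reconstructs that
# interval's midpoint.
def encode(x, step):
    return round(x / step)      # code word = nearest interval index

def decode(code, step):
    return code * step          # reconstruction point (midpoint)

step = 4.0
samples = [-9.3, -1.2, 0.4, 5.9, 13.0]
codes = [encode(x, step) for x in samples]
recon = [decode(c, step) for c in codes]
print(codes)  # [-2, 0, 0, 1, 3]
print(recon)  # [-8.0, 0.0, 0.0, 4.0, 12.0]
```

Each reconstruction differs from its input by at most half the step size, so the step controls the trade-off between distortion and the size of the code-word set.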

Quantization-based watermarking is a more recent technique in which a logo is embedded and detected in a blind way. The authors of [6] introduced a scalar quantization watermarking technique in which the watermark is embedded in the middle- and low-frequency bands. The robustness of the algorithm is tested only against JPEG compression.
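
A generic odd/even quantization-index sketch shows how logo bits can be embedded and blindly extracted; this illustrates the general idea rather than the exact scheme of [6], and the step, coefficients, and logo bits are assumed values:

```python
delta = 8.0   # quantization step (illustrative)

def embed_bit(v, bit, delta):
    # Quantize onto the coset of multiples of delta whose index parity
    # encodes the bit (even index -> 0, odd index -> 1).
    return 2 * delta * round((v - bit * delta) / (2 * delta)) + bit * delta

def extract_bit(v, delta):
    # Blind extraction: the parity of the nearest quantizer index.
    return round(v / delta) % 2

logo = [1, 0, 1, 1, 0, 0, 1, 0]   # binary logo to hide
coeffs = [13.7, -42.1, 7.9, 88.8, -3.3, 27.5, -60.2, 0.8]
marked = [embed_bit(v, b, delta) for v, b in zip(coeffs, logo)]

# A perturbation smaller than delta/2 does not flip any bit.
noisy = [v + 2.5 for v in marked]
print([extract_bit(v, delta) for v in noisy])  # [1, 0, 1, 1, 0, 0, 1, 0]
```

As long as an attack perturbs a coefficient by less than half the step (here 4.0), the nearest quantizer index keeps its parity and the bit survives; a larger step therefore buys robustness at the cost of visible distortion.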

The authors of [7, 8] present another quantization-based watermarking algorithm, which improves on the Tsai algorithm by incorporating variable quantization and resisting a wide range of attacks, such as blurring, noise addition, sharpening, scaling, cropping, and compression.

The main issue with these quantization-based algorithms is that each tackles only a subset of attacks. For example, Tsai's algorithm is robust only against JPEG compression, while Chen's algorithm does not handle geometric attacks such as rotation. Hence, we propose a new algorithm which is robust against cropping, JPEG compression, resizing, rotation, and salt-and-pepper noise.

### **4. Wavelet-based methods**

The wavelet transform enjoys great popularity in the field of watermarking, as it decomposes an image into sub-bands in which watermarks can be embedded [3, 9]. Taking the cue from the spread spectrum method, we embed the data in transform coefficients chosen in a random order. For extraction of the hidden data, the random sequence must be made available to the extractor. Cox et al. [5] were the first to apply the spread spectrum method to data hiding. Transforms such as the DCT and DWT have been used; the DWT has the advantages of speed and robustness against wavelet-based compression. Previously, Dugad's algorithm introduced an additive watermarking technique in the wavelet domain [3]. The proposed technique in this paper uses three-level wavelet
