**3.1 Embedding process of DWT-based audio watermarking**

Input: original audio, watermark; output: watermarked audio.

The steps of the embedding process are as follows:

Step 1: Sixteen samples of the original audio signal are taken from both channels for the watermark embedding process.

Step 2: The DWT is applied to both channels to obtain their approximation and detail coefficients. The approximation and detail coefficients are the low-pass and high-pass filter components of the original input signal, respectively.

Step 3: The binary watermark bits are then embedded in the detail component of the input audio signal. If the watermark bit is 1, the detail component is modified according to

$$P_1' = P_1 + (I \times P_1) \text{ in the first channel} \tag{1}$$

where $P_1'$ is the detail component after watermarking, $P_1$ is the detail component before watermarking, and $I$ is the intensity factor. If the watermark bit is 0, the detail component of the second channel is replaced with the detail component of the first channel.

**Figure 2.** *Flowchart of the embedding process.*


*Hardware Implementation of Audio Watermarking Based on DWT Transform*


*DOI: http://dx.doi.org/10.5772/intechopen.86087*


**Figure 3.** *Flowchart of the extraction process.*

The flowchart of the embedding process of the audio watermarking is shown in **Figure 2**.

Step 4: After the embedding process is completed, the inverse DWT is applied to both channels to obtain the watermarked audio signal.
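The four embedding steps can be sketched in a few lines. This is a minimal illustration that substitutes a one-level Haar DWT for the chapter's Daubechies 9/7 filter; the names `embed_bit` and `intensity` are illustrative, not from the text.

```python
# Sketch of embedding Steps 1-4 with a one-level Haar DWT (an assumption;
# the chapter's design uses the Daubechies 9/7 lifting DWT).

def haar_dwt(x):
    # approximation (low-pass) and detail (high-pass) coefficients
    approx = [(x[2*i] + x[2*i + 1]) / 2.0 for i in range(len(x) // 2)]
    detail = [(x[2*i] - x[2*i + 1]) / 2.0 for i in range(len(x) // 2)]
    return approx, detail

def haar_idwt(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]          # perfect reconstruction of each pair
    return out

def embed_bit(ch1, ch2, bit, intensity=0.1):
    a1, d1 = haar_dwt(ch1)                       # Step 2: DWT of both channels
    a2, d2 = haar_dwt(ch2)
    if bit == 1:                                 # Step 3: Eq. (1), first channel
        d1 = [p + intensity * p for p in d1]
    else:                                        # bit 0: copy ch1 detail to ch2
        d2 = list(d1)
    return haar_idwt(a1, d1), haar_idwt(a2, d2)  # Step 4: inverse DWT
```

Note that for bit 0 the first channel passes through unchanged and only the second channel is altered, which is what makes the extraction comparison in Section 3.2 possible.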

### **3.2 Extraction process of DWT-based audio watermarking**

Input: watermarked audio signal; output: watermark.

Step 1: Sixteen samples of both channels of the watermarked audio signal are collected as input, following the same steps as in the embedding process.

Step 2: The DWT is applied to obtain the approximation and detail components of both channels.

Step 3: The detail components of the two channels are compared: if they are the same, the watermark bit is 0; otherwise, it is 1. The flowchart of the extraction process of the audio watermarking is shown in **Figure 3**.

Step 4: All the extracted bits are combined into a single output to obtain the watermark that was embedded in the audio signal.
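The extraction side mirrors the embedding: recompute the DWT of both channels and compare their detail coefficients. The sketch below again assumes a one-level Haar DWT for brevity; `tol` is an illustrative tolerance, not a parameter from the text.

```python
# Sketch of extraction Steps 2-3 with a one-level Haar DWT (an assumption;
# the chapter's design uses the Daubechies 9/7 lifting DWT).

def haar_detail(x):
    # detail (high-pass) coefficients of a one-level Haar DWT
    return [(x[2*i] - x[2*i + 1]) / 2.0 for i in range(len(x) // 2)]

def extract_bit(ch1, ch2, tol=1e-9):
    d1, d2 = haar_detail(ch1), haar_detail(ch2)
    same = all(abs(p - q) <= tol for p, q in zip(d1, d2))
    return 0 if same else 1    # Step 3: equal details -> 0, otherwise 1
```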

### **4. Hardware implementation of the proposed audio watermarking**

The architecture of the watermark embedding process is shown in **Figure 4**. The process comprises three main modules: the DWT module, the watermark embedding module, and the inverse DWT module. Initially, the audio samples are stored in Block RAM1 for processing, and the original watermark is applied to the watermark embedding unit. The DWT module reads the values from the RAM and converts them into frequency-domain coefficients.

**Figure 4.** *VLSI architecture of watermark embedding process.*

The DWT module has a coefficient calculation unit that computes the various coefficients of the Daubechies filter. These coefficients are then processed by the watermarking module, which inserts the watermark. The watermarking module consists of a comparator and an adder/subtractor that embed the watermark into the coefficients. The comparator takes one watermark bit from Block RAM2 and, according to that bit, the module decides how to embed the watermark into the detail coefficients. After the watermark is inserted, the values are fed to the inverse DWT module, where the watermarked audio samples are generated.
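The comparator-plus-adder datapath described above can be modeled behaviorally in a few lines. This is only a software sketch of the decision logic (fixed-point details are omitted, and all names here are illustrative):

```python
# Behavioral model of the watermarking module: the comparator reads one
# watermark bit (as if from Block RAM2) and selects the datapath; the
# adder path scales the detail coefficients of channel 1.

def watermark_module(detail_ch1, detail_ch2, wm_bit, intensity=0.1):
    if wm_bit == 1:                        # comparator: the bit selects the path
        detail_ch1 = [c + intensity * c for c in detail_ch1]   # adder path
    else:                                  # bit 0: channel-2 details replaced
        detail_ch2 = list(detail_ch1)
    return detail_ch1, detail_ch2
```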

### **4.1 Hardware implementation of DWT**

*Security and Privacy From a Legal, Ethical, and Technical Perspective*

The models for implementing the DWT are mainly grouped into two classes: (1) convolution-filter based [15] and (2) lifting based [15]. The discrete wavelet transform (DWT) is frequently realized by a convolution-based filter implementation that uses FIR filters to perform the transform [16]. FIR filters are suitable for improving the performance of a DWT hardware design [17]. Since lifting structures have advantages over convolution-based ones in terms of computation, memory usage, and complexity, more attention is paid to the lifting-based approach. Daubechies and Sweldens [15] proposed a new wavelet scheme based on the second-generation wavelet. The lifting scheme performs better than the convolution-filter-based DWT. The lifting scheme, which operates entirely in the spatial domain, has numerous advantages over the filter-bank structure, such as lower complexity and power consumption with relatively reduced area. The lifting-based DWT factors the high-pass and low-pass filters into a sequence of upper and lower triangular matrices [18]. The lifting scheme contains three main stages, known as split, predict (P), and update (U); each of these steps is shown in **Figure 5**. The first step separates the input values into odd and even samples. The odd samples are then passed through the prediction step to obtain the detail coefficients $g_{j+1}$, while the even samples represent a coarser version of the input. The update step then uses the detail coefficients to revise the even part, creating the approximation coefficients $f_{j+1}$. To obtain the inverse transform, the signs at the predict and update stages are exchanged and all operations are applied in reverse order, as shown in **Figure 6**.

**Figure 5.** *Forward DWT process.*
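The split/predict/update idea can be shown with the simplest possible lifting pair. The predictor below (odd predicted from even, average-preserving update) is an illustrative choice, not the chapter's 9/7 filter; `g` and `f` play the roles of $g_{j+1}$ and $f_{j+1}$:

```python
# Minimal lifting scheme: split, predict (details), update (approximations),
# plus the inverse obtained by flipping signs and reversing the order.

def lifting_forward(x):
    even, odd = x[0::2], x[1::2]                  # split
    g = [o - e for o, e in zip(odd, even)]        # predict -> detail coefficients
    f = [e + d / 2.0 for e, d in zip(even, g)]    # update  -> approximations
    return f, g

def lifting_inverse(f, g):
    even = [a - d / 2.0 for a, d in zip(f, g)]    # undo update
    odd = [d + e for d, e in zip(g, even)]        # undo predict
    out = []
    for e, o in zip(even, odd):
        out += [e, o]                             # merge even/odd samples
    return out
```

Because every lifting step is trivially invertible, the inverse reconstructs the input exactly regardless of the predictor chosen.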

The main objective is to obtain lower and upper triangular matrices and a normalized diagonal matrix by factoring the polyphase matrix of the wavelet filters [19]. According to this principle, the polyphase matrix of the 9/7 lifting filter is defined in Eq. (2).

$$P(z) = \begin{bmatrix} h_e(z) & g_e(z) \\ h_o(z) & g_o(z) \end{bmatrix} \tag{2}$$

$$\begin{pmatrix} \lambda(z) & \gamma(z) \end{pmatrix} = \begin{pmatrix} x_e(z) & z^{-1}x_o(z) \end{pmatrix} P(z)$$

where $g(z)$ and $h(z)$ are the high-pass and low-pass filters, and the subscripts $e$ and $o$ denote their even and odd parts, respectively. The factorization is given in Eq. (3).

$$P(z) = \begin{bmatrix} 1 & \alpha(1+z^{-1}) \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ \beta(1+z) & 1 \end{bmatrix} \begin{bmatrix} 1 & \gamma(1+z^{-1}) \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ \delta(1+z) & 1 \end{bmatrix} \begin{bmatrix} K & 0 \\ 0 & 1/K \end{bmatrix} \tag{3}$$

where $\alpha(1+z^{-1})$ and $\gamma(1+z^{-1})$ are the predict polynomials, $\beta(1+z)$ and $\delta(1+z)$ are the update polynomials, and $K$ is the scale normalization factor. The lifting coefficients are $\alpha = -1.586134342$, $\beta = -0.052980118$, $\gamma = 0.8829110762$, and $\delta = 0.4435068522$, with $K = 1.149604398$. For an input sequence $x(n)$, $n = 0, 1, \ldots, N-1$, the steps of the lifting scheme are given in Eq. (4):

$$\begin{aligned} s_i^0 &= x_{2n} \\ d_i^0 &= x_{2n+1} \\ d_i^1 &= d_i^0 + \alpha\left(s_i^0 + s_{i+1}^0\right) \\ s_i^1 &= s_i^0 + \beta\left(d_{i-1}^1 + d_i^1\right) \\ d_i^2 &= d_i^1 + \gamma\left(s_i^1 + s_{i+1}^1\right) \\ s_i^2 &= s_i^1 + \delta\left(d_{i-1}^2 + d_i^2\right) \\ d_i &= \frac{d_i^2}{K} \\ s_i &= s_i^2 \times K \end{aligned} \tag{4}$$



The outputs $d_i$ and $s_i$ are the high-pass and low-pass wavelet coefficients, respectively.
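The lifting steps of Eq. (4) translate almost line by line into code. The sketch below uses the CDF 9/7 coefficients quoted above; clamping at the borders is an assumption, since the text does not specify the boundary extension.

```python
# Direct transcription of the lifting steps in Eq. (4) for an even-length
# input; border samples are clamped (assumed, not specified in the text).

ALPHA, BETA = -1.586134342, -0.052980118
GAMMA, DELTA, K = 0.8829110762, 0.4435068522, 1.149604398

def dwt97(x):
    s, d = x[0::2], x[1::2]                            # split even/odd samples
    n = len(d)
    ext = lambda a, i: a[min(max(i, 0), len(a) - 1)]   # clamp at the borders
    d = [d[i] + ALPHA * (s[i] + ext(s, i + 1)) for i in range(n)]  # predict 1
    s = [s[i] + BETA * (ext(d, i - 1) + d[i]) for i in range(n)]   # update 1
    d = [d[i] + GAMMA * (s[i] + ext(s, i + 1)) for i in range(n)]  # predict 2
    s = [s[i] + DELTA * (ext(d, i - 1) + d[i]) for i in range(n)]  # update 2
    return [v * K for v in s], [v / K for v in d]      # (approximation, detail)
```

For a constant input the detail coefficients vanish (up to the precision of the truncated coefficients), which is a quick sanity check of the transcription.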

### *4.1.1 Lifting scheme*

**Figure 6.** *Inverse DWT process.*

The Daubechies 9/7-based lifting scheme is shown in **Figure 7**. Each lifting stage comprises one predict and one update step; for the 2D implementation these are applied a second time, as P1, P2 and U1, U2, respectively.

Pipelined shift-and-add logic implements the multipliers used in the proposed DWT algorithm. This approach significantly shortens the critical path with only a small increase in latency and area. A shifter, a signed adder, and a signed subtractor are used for the multiplication process. The alpha, beta, gamma, delta, multiply-by-K, and divide-by-K modules are shown in **Figure 8**. The constant values are approximated as in Eq. (5):

$$\text{where } |\alpha| = 1 + \frac{1}{2} + \frac{1}{16} + \frac{1}{32} = 1.59375$$

$$|\beta| = \frac{1}{16} = 0.0625$$

$$|\gamma| = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{128} = 0.8828125$$

$$|\delta| = \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \frac{1}{128} = 0.4453125$$

$$|K| = 1 + \frac{1}{8} + \frac{1}{16} = 1.1875$$

$$\left|\frac{1}{K}\right| = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} = 0.875 \tag{5}$$

where $S \gg n$ denotes a right shift of $S$ by $n$ bits; for example, $|\alpha|\,S \approx S + (S \gg 1) + (S \gg 4) + (S \gg 5)$.

**Figure 7.** *Lifting plan for Daubechies 9/7 filter.*

**Figure 8.** *DWT coefficients calculation.*

The predict step of the first lifting stage generates the odd and even contributions after a one-clock-cycle delay. The even value is added to the previous even input sample ($s_i^0$, $s_{i+1}^0$). The multiplication by the primary filter coefficients is then performed during the second and third clock cycles using only shift and add operations. After the fourth clock cycle, the multiplier result is applied to the odd input sample ($d_i^0$) to update the coefficient ($d_i^1$). At the end of the fifth clock cycle, the present predict value ($d_i^1$) and the past predict value ($d_{i-1}^1$), together with the past even input ($s_i^0$), provide the first update value ($d_i^2$). An adder is the only operation required in each clock cycle, so the critical path is defined by a single adder delay. Both phases, predict and update, of both stages are implemented in a fully pipelined manner to increase speed. The overall lifting implementation comprises four shifters and seven adders/subtractors, while the second lifting stage has eight shifters and ten adders in total. The detailed process is defined in Eq. (6).
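The shift-and-add decomposition of Eq. (5) is easy to verify for one constant. The example below checks the $|\alpha|$ multiplier on an assumed integer sample value (the 1024 is illustrative, not from the text):

```python
# Shift-and-add multiplier for |alpha| from Eq. (5): S >> n models the
# hardware right shift, and the sum approximates 1.586134342 * S.

def mul_alpha(S):
    # |alpha| * S  ~  S + (S >> 1) + (S >> 4) + (S >> 5)  =  1.59375 * S
    return S + (S >> 1) + (S >> 4) + (S >> 5)

product = mul_alpha(1024)      # 1024 + 512 + 64 + 32 = 1632
error = abs(product - 1.586134342 * 1024) / (1.586134342 * 1024)
```

The dyadic approximation 1.59375 differs from the exact coefficient 1.586134342 by under 0.5%, which is the cost of replacing a full multiplier with shifts and adds.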

For the inverse DWT we use the same alpha, beta, gamma, and delta modules as discussed earlier, but the multiply-by-K module is applied to the detail coefficients and the divide-by-K module to the approximation coefficients; all the equations are then evaluated in reverse order, and the original audio samples are obtained. A total of eight samples is considered for the DWT, so eight samples are obtained after the inverse transform. The inverse DWT transform steps are discussed below.

$$\begin{aligned} s_i^0 &= x_{2n} \\ d_i^0 &= x_{2n+1} \\ d_i^1 &= d_i^0 + \alpha\left(s_i^0 + s_{i+1}^0\right) \\ s_i^1 &= s_i^0 + \beta\left(d_{i-1}^1 + d_i^1\right) \\ d_i^2 &= d_i^1 + \gamma\left(s_i^1 + s_{i+1}^1\right) \\ s_i^2 &= s_i^1 + \delta\left(d_{i-1}^2 + d_i^2\right) \\ d_i &= \frac{d_i^2}{K} \\ s_i &= s_i^2 \times K \end{aligned} \tag{6}$$
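Because every lifting step is invertible, the inverse simply undoes the scaling and then applies the update and predict steps with opposite signs in reverse order. The self-contained sketch below pairs the forward steps of Eq. (6) with that inverse (clamped borders are an assumption, as before):

```python
# Forward lifting steps of Eq. (6) and their exact inverse.

ALPHA, BETA = -1.586134342, -0.052980118
GAMMA, DELTA, K = 0.8829110762, 0.4435068522, 1.149604398
ext = lambda a, i: a[min(max(i, 0), len(a) - 1)]     # clamp at the borders

def dwt97(x):
    s, d = x[0::2], x[1::2]
    n = len(d)
    d = [d[i] + ALPHA * (s[i] + ext(s, i + 1)) for i in range(n)]  # predict 1
    s = [s[i] + BETA * (ext(d, i - 1) + d[i]) for i in range(n)]   # update 1
    d = [d[i] + GAMMA * (s[i] + ext(s, i + 1)) for i in range(n)]  # predict 2
    s = [s[i] + DELTA * (ext(d, i - 1) + d[i]) for i in range(n)]  # update 2
    return [v * K for v in s], [v / K for v in d]

def idwt97(approx, detail):
    n = len(detail)
    s = [v / K for v in approx]                      # undo  s_i = s_i^2 * K
    d = [v * K for v in detail]                      # undo  d_i = d_i^2 / K
    s = [s[i] - DELTA * (ext(d, i - 1) + d[i]) for i in range(n)]  # undo U2
    d = [d[i] - GAMMA * (s[i] + ext(s, i + 1)) for i in range(n)]  # undo P2
    s = [s[i] - BETA * (ext(d, i - 1) + d[i]) for i in range(n)]   # undo U1
    d = [d[i] - ALPHA * (s[i] + ext(s, i + 1)) for i in range(n)]  # undo P1
    out = []
    for e, o in zip(s, d):
        out += [e, o]                                # merge even/odd samples
    return out
```

Since each step is algebraically inverted, reconstruction is exact up to floating-point rounding.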

### **5. Simulation and results**

**Figure 9.** *Predict and update implementation of lifting scheme.*

**5.1 MATLAB**

The results are initially developed in MATLAB, and a hardware implementation is then carried out to verify the real-time operation.

Experiments are performed in MATLAB 2010a. The proposed algorithm uses classical, pop music, and speech audio clips to evaluate the performance [20]. These three types of audio clips are considered because they have different characteristics, perceptual properties, and energy distributions. These audio signals have various distinct characteristics and also contain selective features such as low energy, pulse clarity, pitch (Hz), inharmonicity, sampling rate (Hz), zero-crossing rate (per second), spectral irregularity, temporal length (seconds/sample), tempo (bpm), RMS energy, etc. Each audio sample is a 16-bit mono file at a sampling rate of 44.1 kHz in WAVE format. The watermark is a binary image of 30 × 30 bits, as shown in **Figure 10**. The synchronization code is a 16-bit Barker code of value "1111100110101110". The wavelet is applied with two decomposition levels. The array size m is 50, and the quantization step size Δ ranges from 0.15 for speech audio up to 0.6 for pop audio. The performance of audio watermarking algorithms is quantified by robustness, payload, and inaudibility parameters [21].

**Figure 10.** *Watermark.*
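What makes the quoted synchronization code useful is its sharp autocorrelation peak, which allows the extractor to locate the watermark in the sample stream. A quick check (bits mapped to ±1; the helper name `autocorr` is illustrative):

```python
# Aperiodic autocorrelation of the 16-bit synchronization code from the
# text: the zero-lag peak equals the code length, and all sidelobes are
# strictly smaller, which is what enables synchronization.

SYNC = "1111100110101110"
seq = [1 if b == "1" else -1 for b in SYNC]

def autocorr(s, lag):
    return sum(a * b for a, b in zip(s, s[lag:]))

peak = autocorr(seq, 0)                               # equals len(seq) = 16
sidelobes = [abs(autocorr(seq, k)) for k in range(1, len(seq))]
```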

The proposed DWT and inverse DWT designs help to realize an efficient audio watermarking algorithm (**Figure 9**).
