We are IntechOpen, the world's leading publisher of Open Access books. Built by scientists, for scientists.


## **Meet the editor**

Lino García graduated in Automatic Control Engineering from the Polytechnic Institute "José A. Echeverría". He received a master's degree in Systems and Communications Networks from the Technical University of Madrid (UPM), a PhD in Communications Technologies and Systems from UPM, and a PhD in Contemporary Artistic Practices and Art Theory from the European University of Madrid (UEM). He has been a professor at the Superior Institute of Art, "Comillas" Pontifical University, and Menéndez Pelayo International University, and Director of the Master in Architectonic and Environmental Acoustics at UEM. He is a professor at UPM, Academic Director and professor of the Master in Contemporary Art Conservation and Restoration, and professor of the MSc in Acoustical Engineering for Building and Environment and the MSc in Systems and Services Engineering for the Information Society.

Contents

**Preface VII**

Chapter 1 **Hirschman Optimal Transform Block LMS Adaptive Filter 1**
Osama Alkhouli, Victor DeBrunner and Joseph Havlicek

Chapter 2 **On Using ADALINE Algorithm for Harmonic Estimation and Phase-Synchronization for the Grid-Connected Converters in Smart Grid Applications 23**
Yang Han

Chapter 3 **Applications of a Combination of Two Adaptive Filters 61**
Tõnu Trump

Chapter 4 **Adaptive Analysis of Diastolic Murmurs for Coronary Artery Disease Based on Empirical Mode Decomposition 91**
Zhidong Zhao, Yi Luo, Fangqin Ren, Li Zhang and Changchun Shi

Chapter 5 **Performance of Adaptive Hybrid System in Two Scenarios: Echo Phone and Acoustic Noise Reduction 121**
Edgar Omar Lopez-Caudana and Hector Manuel Perez-Meana


Preface

Adaptive signal filtering is an expanding discipline. In general, an adaptive scheme can be used to characterise unknown systems in time-variant environments. The main objective of this approach is to meet a difficult compromise: maximum convergence speed with maximum accuracy. Adaptive systems can have multiple input and output channels, extremely long responses (e.g. acoustic systems), noisy working environments, more or less memory, and so on. Each application requires a certain approach, which determines the filter structure, the cost function used to minimize the estimation error, the adaptive algorithm, and other parameters; and each selection involves a certain computational cost, which in any case should consume less time than is available to the application working in real time. Theory and application are not, therefore, isolated entities but an imbricated whole that requires a holistic vision. This book collects theoretical approaches and practical applications in different areas that support the expansion of adaptive systems.

**Dr. Lino Garcia Morales**

Technical University of Madrid, Spain


**Chapter 1**

#### **Hirschman Optimal Transform Block LMS Adaptive Filter**

Osama Alkhouli, Victor DeBrunner and Joseph Havlicek

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/51394

#### **1. Introduction**

The HOT is a recently developed discrete unitary transform that uses the orthonormal minimizers of the entropy-based Hirschman uncertainty measure [2]. This measure is different from the energy-based Heisenberg uncertainty measure, which is only suited to continuous-time signals. The Hirschman uncertainty measure uses entropy to quantify the spread of discrete-time signals in time and frequency [3]. Since the HOT bases are among the minimizers of the uncertainty measure, they have the novel property of being the most compact in discrete time and frequency. The fact that the HOT basis sequences have many zero-valued samples, and their resemblance to the DFT basis sequences, makes the HOT computationally attractive. Furthermore, it has been shown recently that a thresholding algorithm using the HOT yields superior frequency resolution of a pure tone in additive white noise compared to a similar algorithm based on the DFT [4]. The main theorem in [2] describes a method to generate an *N* = *K*²-point orthonormal HOT basis, where *K* is an integer. A HOT basis sequence of length *K*² is the most compact basis in the time-frequency plane. The 3²-point HOT matrix is

$$
\begin{bmatrix}
1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 \\
0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 \\
1 & 0 & 0 & e^{-j2\pi/3} & 0 & 0 & e^{-j4\pi/3} & 0 & 0 \\
0 & 1 & 0 & 0 & e^{-j2\pi/3} & 0 & 0 & e^{-j4\pi/3} & 0 \\
0 & 0 & 1 & 0 & 0 & e^{-j2\pi/3} & 0 & 0 & e^{-j4\pi/3} \\
1 & 0 & 0 & e^{-j4\pi/3} & 0 & 0 & e^{-j8\pi/3} & 0 & 0 \\
0 & 1 & 0 & 0 & e^{-j4\pi/3} & 0 & 0 & e^{-j8\pi/3} & 0 \\
0 & 0 & 1 & 0 & 0 & e^{-j4\pi/3} & 0 & 0 & e^{-j8\pi/3}
\end{bmatrix}
\tag{1}
$$
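As a quick numerical check (an illustration, not part of the chapter), the *K* = 3 HOT matrix of equation (1) can be generated from the pattern it displays, namely that row 3*k* + *m* carries the 3-point DFT twiddle factors at columns 3*l* + *m*, and its properties verified:

```python
import numpy as np

K = 3
N = K * K
# Build the 9-point HOT matrix of equation (1):
# row K*k + m has entry exp(-2j*pi*k*l/K) at column K*l + m, zeros elsewhere.
H = np.zeros((N, N), dtype=complex)
for k in range(K):
    for m in range(K):
        for l in range(K):
            H[K * k + m, K * l + m] = np.exp(-2j * np.pi * k * l / K)

assert np.allclose(H @ H.conj().T, K * np.eye(N))  # unitary up to a factor K
assert np.allclose(H, H.T)                         # H is symmetric
```

The two assertions anticipate the properties of **H** stated later in the chapter.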

Equation (1) indicates that the HOT of any sequence is related to the DFT of some polyphase components of the signal. In fact, we called this property the "1 and 1/2 dimensionality" of the HOT in [3]. Consequently, for this chapter, we will use the terms HOT and DFT of the polyphase components interchangeably. The *K*²-point HOT requires fewer computations than the *K*²-point DFT. We used this computational efficiency of the HOT to implement fast convolution algorithms [5]. When *K* is a power of 2, *K*² log₂ *K* (complex) multiplications are needed to compute the HOT, which is half of what is required to compute the DFT. In this chapter, we use the computational efficiency of the HOT to implement a fast block LMS adaptive filter. The fast block LMS adaptive filter was first proposed in [6] to reduce computational complexity. Our proposed HOT block LMS adaptive filter requires less than half of the computations required by the corresponding DFT block LMS adaptive filter. This significant complexity reduction could be important in many real-time applications.

> © 2013 Alkhouli et al.; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The following notation is used throughout this chapter. Nonbold lowercase letters are used for scalar quantities, bold lowercase for vectors, and bold uppercase for matrices. Nonbold uppercase letters are used for integer quantities such as lengths or dimensions. The lowercase letter *k* is reserved for the block index and the lowercase letter *n* for the time index. The time and block indexes are put in brackets, whereas subscripts are used to refer to elements of vectors and matrices. The uppercase letter *N* is reserved for the filter length and the uppercase letter *L* for the block length. The superscripts *T* and *H* denote vector or matrix transposition and Hermitian transposition, respectively. The subscripts *F* and *H* are used to highlight DFT and HOT domain quantities, respectively. The *N* × *N* identity matrix is denoted by **I***N*×*N* or **I**. The *N* × *N* zero matrix is denoted by **0***N*×*N*. Linear and circular convolution are denoted by ∗ and ⋆, respectively. Diag[**v**] denotes the diagonal matrix whose diagonal elements are the elements of the vector **v**.

#### **2. The relation between the HOT and DFT in matrix form**

The algorithm that we propose is best analyzed if the relation between the HOT and the DFT is presented in matrix form. This matrix form is shown in Figure 1, where **I**0, **I**1, ..., **I***K*−1 are *K* × *K*² matrices such that multiplication of a vector with **I***i* produces the *i*th polyphase component of the vector. The matrix **I***K* is formed from **I**0, **I**1, ..., **I***K*−1, i.e.,

$$\mathbf{I}\_K = \begin{bmatrix} \mathbf{I}\_0 \\ \mathbf{I}\_1 \\ \vdots \\ \mathbf{I}\_{K-2} \\ \mathbf{I}\_{K-1} \end{bmatrix}. \tag{2}$$

**Figure 1.** The Relation between HOT and DFTs of the polyphase components.


Since the rows of **I***i* are taken from the rows of the *K*² × *K*² identity matrix, multiplication with such matrices does not impose any computational burden. For the special case *K* = 3, we have

$$\mathbf{I}\_0 = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \end{bmatrix},\tag{3}$$


$$\mathbf{I}\_1 = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{bmatrix},\tag{4}$$

$$\mathbf{I}\_2 = \begin{bmatrix} 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}.\tag{5}$$
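The selection matrices can be checked numerically. The following sketch (an illustration, not from the chapter) builds **I**0, **I**1, **I**2 for *K* = 3 by taking every *K*-th row of the identity matrix, and confirms that each one extracts a polyphase component:

```python
import numpy as np

K = 3
N = K * K
# I_i keeps rows i, i+K, i+2K, ... of the N x N identity matrix, so that
# I_i @ u is the i-th polyphase component u_i(l) = u(l*K + i).
I = [np.eye(N)[i::K, :] for i in range(K)]

u = np.arange(N, dtype=float)       # any test vector
for i in range(K):
    assert np.allclose(I[i] @ u, u[i::K])
```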

The *K*²-point HOT matrix is denoted by **H**. It satisfies the following:

$$\mathbf{H} \mathbf{H}^H = K \mathbf{I}\_{K^2 \times K^2}, \tag{6}$$

$$\mathbf{H} = \mathbf{H}^{T}.\tag{7}$$

#### **3. Convolution using the HOT**

In this section, the "HOT convolution," a relation between the HOT of two signals and their circular convolution, is derived. Let *u* and *w* be two signals of length *K*². The circular convolution of the signals is *y* = *w* ⋆ *u*. In the DFT domain, the convolution is given by the pointwise multiplication of the respective DFTs of the signals, i.e., *yF*(*k*) = *wF*(*k*)*uF*(*k*). A similar relation in the HOT domain can be readily found through the relation between the DFT and HOT. The DFT of *u* can be written as

$$
u\_F(k) = \sum\_{n=0}^{K^2 - 1} u(n) \, e^{-j\frac{2\pi}{K^2}kn} = \sum\_{i=0}^{K-1} e^{-j\frac{2\pi}{K^2}ki} \sum\_{l=0}^{K-1} u(lK+i) \, e^{-j\frac{2\pi}{K}kl} . \tag{8}
$$


The signal *u*(*lK* + *i*), denoted by *ui*(*l*), is the *i*th polyphase component of *u*(*n*), with DFT given by

$$
u\_{iF}(k) = \sum\_{l=0}^{K-1} u\_i(l) \, e^{-j\frac{2\pi}{K}kl}.\tag{9}
$$

Therefore, the DFT of the signal *u* can be written in terms of the DFTs of the polyphase components, or the HOT of *u*. The relation between the HOT and the DFTs of the polyphase components is depicted in Figure 1. Equation (8) may be written as

$$
u\_F(k) = \sum\_{i=0}^{K-1} e^{-j\frac{2\pi}{K^2}ki} u\_{iF}(k). \tag{10}
$$

Define the diagonal matrix

$$\mathbf{D}\_{i,j}(k) = \begin{bmatrix} e^{-j\frac{2\pi}{K^2}ki} & 0 & \cdots & 0\\ 0 & e^{-j\frac{2\pi}{K^2}k(i+1)} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & e^{-j\frac{2\pi}{K^2}kj} \end{bmatrix} \tag{11}$$

Then the DFT of the signal can be written in matrix form:

$$\mathbf{u}\_F = \sum\_{i=0}^{K-1} \mathbf{D}\_{0, K^2 - 1}(i) \begin{bmatrix} \mathbf{F}\_K \\ \mathbf{F}\_K \\ \vdots \\ \mathbf{F}\_K \end{bmatrix} \mathbf{u}\_i. \tag{12}$$
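Equation (12) can be verified numerically. The sketch below (an illustration only, with `numpy.fft` standing in for **F***K*) rebuilds the full *K*²-point DFT from the *K*-point DFTs of the polyphase components and the diagonal twiddle matrices **D**0,*K*²−1(*i*):

```python
import numpy as np

K = 3
N = K * K
rng = np.random.default_rng(0)
u = rng.standard_normal(N)

uF = np.fft.fft(u)                      # full N-point DFT of u

# Equation (12): sum over polyphase components; each K-point DFT is
# repeated K times (the stacked F_K blocks) and twiddled by D_{0,N-1}(i),
# whose k-th diagonal entry is exp(-2j*pi*k*i/N).
k = np.arange(N)
uF_rebuilt = np.zeros(N, dtype=complex)
for i in range(K):
    uiF = np.fft.fft(u[i::K])           # HOT-domain block: DFT of u_i
    uF_rebuilt += np.exp(-2j * np.pi * k * i / N) * uiF[k % K]

assert np.allclose(uF, uF_rebuilt)
```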

The above is the desired relation between the DFT and HOT. It should be noted that equation (12) represents a radix-*K* FFT algorithm, which is less efficient than the radix-2 FFT algorithm. Therefore, HOT convolution is expected to be less efficient than DFT convolution. Now, we can use equation (12) to transform **y***F* = **w***F* ⊗ **u***F* into the HOT domain. The symbol ⊗ indicates pointwise matrix multiplication and, throughout this discussion, pointwise matrix multiplication takes a higher precedence than conventional matrix multiplication. We have that

$$\sum\_{i=0}^{K-1} \mathbf{D}\_{0,K^2-1}(i) \begin{bmatrix} \mathbf{F}\_K \\ \mathbf{F}\_K \\ \vdots \\ \mathbf{F}\_K \end{bmatrix} \mathbf{y}\_i = \sum\_{i=0}^{K-1} \sum\_{j=0}^{K-1} \mathbf{D}\_{0,K^2-1}(i+j) \begin{bmatrix} \mathbf{F}\_K \mathbf{w}\_i \\ \mathbf{F}\_K \mathbf{w}\_i \\ \vdots \\ \mathbf{F}\_K \mathbf{w}\_i \end{bmatrix} \otimes \begin{bmatrix} \mathbf{F}\_K \mathbf{u}\_j \\ \mathbf{F}\_K \mathbf{u}\_j \\ \vdots \\ \mathbf{F}\_K \mathbf{u}\_j \end{bmatrix}.\tag{13}$$

The above matrix equation can be separated into a system of *K* equations

$$\sum\_{i=0}^{K-1} \mathbf{D}\_{rK,(r+1)K-1}(i) \mathbf{F}\_K \mathbf{y}\_i = \sum\_{i=0}^{K-1} \sum\_{j=0}^{K-1} \mathbf{D}\_{rK,(r+1)K-1}(i+j) \left(\mathbf{F}\_K \mathbf{w}\_i\right) \otimes \left(\mathbf{F}\_K \mathbf{u}\_j\right),\tag{14}$$

where *r* = 0, 1, . . . , *K* − 1. Since

$$\mathbf{D}\_{rK,(r+1)K-1}(i) = e^{-j\frac{2\pi}{K}ri} \mathbf{D}\_{0,K-1}(i) \,\tag{15}$$

the HOT of the output can be obtained by solving the following set of *K* matrix equations:

$$\sum\_{i=0}^{K-1} e^{-j\frac{2\pi}{K}ri} \mathbf{D}\_{0,K-1}(i) \mathbf{F}\_{\mathbf{K}} \mathbf{y}\_i = \sum\_{i=0}^{K-1} \sum\_{j=0}^{K-1} e^{-j\frac{2\pi}{K}r(i+j)} \mathbf{D}\_{0,K-1}(i+j) \left(\mathbf{F}\_{\mathbf{K}} \mathbf{w}\_i\right) \otimes \left(\mathbf{F}\_{\mathbf{K}} \mathbf{u}\_j\right). \tag{16}$$

Since the DFT matrix is unitary, the solution of equation (16) can be expressed as

$$\mathbf{D}\_{0,K-1}(\mathbf{s})\mathbf{F}\_{\mathbf{K}}\mathbf{y}\_{\mathbf{s}} = \frac{1}{K} \sum\_{r=0}^{K-1} \sum\_{i=0}^{K-1} \sum\_{j=0}^{K-1} e^{j\frac{2\pi}{K}r(s-(i+j))} \mathbf{D}\_{0,K-1}(i+j) \left(\mathbf{F}\_{\mathbf{K}}\mathbf{w}\_{i}\right) \otimes \left(\mathbf{F}\_{\mathbf{K}}\mathbf{u}\_{j}\right), \tag{17}$$

Multiplying both sides of equation (17) by the inverse of **D**0,*K*−1(*s*) and relabeling the sum over *r* gives

$$\mathbf{F}\_{K}\mathbf{y}\_{s} = \frac{1}{K} \sum\_{r=0}^{K-1} \sum\_{i=0}^{K-1} \sum\_{j=0}^{K-1} e^{j\frac{2\pi}{K}r(i+j-s)} \mathbf{D}\_{0,K-1}(i+j-s) \left(\mathbf{F}\_{K}\mathbf{w}\_{i}\right) \otimes \left(\mathbf{F}\_{K}\mathbf{u}\_{j}\right). \tag{18}$$

Moreover, as

$$\sum\_{r=0}^{K-1} e^{j\frac{2\pi}{K}r(i+j-s)} = K\delta(i+j-s),\tag{19}$$

where *δ*(*n*) denotes the periodic Kronecker delta of periodicity *K*, equation (18) can be simplified to

$$\mathbf{F\_{K}y\_{s}} = \sum\_{i=0}^{K-1} \sum\_{j=0}^{K-1} \delta(i+j-s) \mathbf{D}\_{0,K-1}(i+j-s) \left(\mathbf{F\_{K}w\_{i}}\right) \otimes \left(\mathbf{F\_{K}u\_{j}}\right),\tag{20}$$

where *s* = 0, 1, 2, . . . , *K* − 1. The pointwise matrix multiplication in equation (20) can be converted into conventional matrix multiplication if we define **W***i* as the diagonal matrix for **F***K***w***i*. We then have that


$$\mathbf{F}\_{K}\mathbf{y}\_{s} = \sum\_{i=0}^{K-1} \sum\_{j=0}^{K-1} \delta(i+j-s)\mathbf{D}\_{0,K-1}(i+j-s)\mathbf{W}\_{i}\mathbf{F}\_{K}\mathbf{u}\_{j}.\tag{21}$$
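Equation (21) can be exercised directly. In this sketch (an illustration only; `numpy.fft` plays the role of **F***K*), a 9-point circular convolution is recovered polyphase-by-polyphase from the DFTs of the polyphase components of *w* and *u*:

```python
import numpy as np

K = 3
N = K * K
rng = np.random.default_rng(1)
w = rng.standard_normal(N)
u = rng.standard_normal(N)

# Reference result: N-point circular convolution via the DFT.
y = np.fft.ifft(np.fft.fft(w) * np.fft.fft(u)).real

# Equation (21): F_K y_s = sum_{i,j} delta(i+j-s) D_{0,K-1}(i+j-s) W_i F_K u_j,
# where delta is the K-periodic Kronecker delta and
# D_{0,K-1}(n) = diag(exp(-2j*pi*k*n/K^2)), k = 0..K-1.
k = np.arange(K)
for s in range(K):
    Fy_s = np.zeros(K, dtype=complex)
    for i in range(K):
        for j in range(K):
            n = i + j - s
            if n % K == 0:                       # n is either 0 or K here
                D = np.exp(-2j * np.pi * k * n / N)
                Fy_s += D * np.fft.fft(w[i::K]) * np.fft.fft(u[j::K])
    assert np.allclose(np.fft.ifft(Fy_s).real, y[s::K])
```

The wrap-around term (*i* + *j* − *s* = *K*) picks up exactly the one-sample circular delay twiddle, which is what **D**0,*K*−1(*K*) encodes.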

**<sup>y</sup>**0(*k*) = **<sup>F</sup>**−<sup>1</sup>

could be three times more efficient than the DFT.

**4. Development of the basic algorithm**

equation

*<sup>K</sup>* **<sup>I</sup>**0**w***H*(*k*) <sup>⊗</sup> **<sup>I</sup>**0**u***H*(*k*) + **<sup>F</sup>**−<sup>1</sup>

**<sup>w</sup>**(*<sup>k</sup>* <sup>+</sup> <sup>1</sup>) = **<sup>w</sup>**(*k*) + *<sup>µ</sup>*

such that *L* = *K*2/2. With this modification, equation (26) becomes

*K*

*K*/2−1 ∑ *i*=0

Hirschman Optimal Transform Block LMS Adaptive Filter

http://dx.doi.org/10.5772/51394

$$\mathbf{F}\_K\mathbf{y}\_s = \sum\_{i=0}^{K-1}\sum\_{j=0}^{K-1}\delta(i+j-s)\,\mathbf{D}\_{0,K-1}(i+j-s)\,\mathbf{W}\_i\,\mathbf{F}\_K\mathbf{u}\_j. \tag{21}$$

Combining the above *K* equations into one matrix equation, the HOT convolution can be written as

$$
\begin{bmatrix}
\mathbf{F}\_K\mathbf{y}\_{0} \\
\mathbf{F}\_K\mathbf{y}\_{1} \\
\mathbf{F}\_K\mathbf{y}\_{2} \\
\vdots \\
\mathbf{F}\_K\mathbf{y}\_{K-2} \\
\mathbf{F}\_K\mathbf{y}\_{K-1}
\end{bmatrix} = \begin{bmatrix}
\mathbf{W}\_{0} & \mathbf{D}\mathbf{W}\_{K-1} & \mathbf{D}\mathbf{W}\_{K-2} & \cdots & \mathbf{D}\mathbf{W}\_{2} & \mathbf{D}\mathbf{W}\_{1} \\
\mathbf{W}\_{1} & \mathbf{W}\_{0} & \mathbf{D}\mathbf{W}\_{K-1} & \cdots & \mathbf{D}\mathbf{W}\_{3} & \mathbf{D}\mathbf{W}\_{2} \\
\mathbf{W}\_{2} & \mathbf{W}\_{1} & \mathbf{W}\_{0} & \cdots & \mathbf{D}\mathbf{W}\_{4} & \mathbf{D}\mathbf{W}\_{3} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
\mathbf{W}\_{K-2} & \mathbf{W}\_{K-3} & \mathbf{W}\_{K-4} & \cdots & \mathbf{W}\_{0} & \mathbf{D}\mathbf{W}\_{K-1} \\
\mathbf{W}\_{K-1} & \mathbf{W}\_{K-2} & \mathbf{W}\_{K-3} & \cdots & \mathbf{W}\_{1} & \mathbf{W}\_{0}
\end{bmatrix} \begin{bmatrix}
\mathbf{F}\_{K}\mathbf{u}\_{0} \\
\mathbf{F}\_{K}\mathbf{u}\_{1} \\
\mathbf{F}\_{K}\mathbf{u}\_{2} \\
\vdots \\
\mathbf{F}\_{K}\mathbf{u}\_{K-2} \\
\mathbf{F}\_{K}\mathbf{u}\_{K-1}
\end{bmatrix} \tag{22}$$

where

$$\mathbf{D} = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & e^{-j\frac{2\pi}{K^2}} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{-j\frac{2\pi}{K^2}(K-1)} \end{bmatrix}. \tag{23}$$

Notice that the square matrix in equation (22) is arranged in a block Toeplitz structure.
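For concreteness, the diagonal phase matrix **D** of equation (23) is straightforward to construct numerically; a minimal numpy sketch (the size *K* = 4 is an arbitrary choice of mine):

```python
import numpy as np

K = 4
# D = diag(1, e^{-j2π/K²}, ..., e^{-j2π(K-1)/K²}) as in equation (23)
D = np.diag(np.exp(-2j * np.pi * np.arange(K) / K**2))

# every diagonal entry is a pure phase factor (unit modulus)
assert np.allclose(np.abs(np.diag(D)), 1.0)
```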

A better understanding of this result may be obtained by comparing equation (22) with the *K*-point circular convolution

$$
\begin{bmatrix} y\_0 \\ y\_1 \\ y\_2 \\ \vdots \\ y\_{K-2} \\ y\_{K-1} \end{bmatrix} = \begin{bmatrix} w\_0 & w\_{K-1} & w\_{K-2} & \cdots & w\_2 & w\_1 \\ w\_1 & w\_0 & w\_{K-1} & \cdots & w\_3 & w\_2 \\ w\_2 & w\_1 & w\_0 & \cdots & w\_4 & w\_3 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ w\_{K-2} & w\_{K-3} & w\_{K-4} & \cdots & w\_0 & w\_{K-1} \\ w\_{K-1} & w\_{K-2} & w\_{K-3} & \cdots & w\_1 & w\_0 \end{bmatrix} \begin{bmatrix} u\_0 \\ u\_1 \\ u\_2 \\ \vdots \\ u\_{K-2} \\ u\_{K-1} \end{bmatrix} . \tag{24}
$$

The square matrix in equation (24) is also Toeplitz. However, equation (24) is a pure time-domain result, whereas equation (22) is a pure HOT-domain relation, which may be interpreted in terms of both time-domain and DFT-domain features. This can be explained by the fact that the HOT basis is optimal in the sense of the entropic joint time-frequency uncertainty measure *H<sub>p</sub>*(*u*) = *pH*(*u*) + (1 − *p*)*H*(*u<sub>F</sub>*) for all 0 ≤ *p* ≤ 1. Before moving on to the computational complexity analysis of HOT convolution, we make some observations about the term **DF***K***w***i* appearing in equation (22). This term is the complex conjugate of the DFT of the upside-down flipped *i*th polyphase component of *w*.
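Since equation (24) is an ordinary circulant system, it can be checked with standard FFT machinery; the following sketch uses the plain DFT, not the HOT, and the variable names are mine:

```python
import numpy as np

rng = np.random.default_rng(1)
K = 8
w = rng.standard_normal(K)
u = rng.standard_normal(K)

# direct evaluation of the circulant (Toeplitz) product in equation (24)
y_direct = np.array([sum(w[m] * u[(n - m) % K] for m in range(K))
                     for n in range(K)])

# DFT-domain evaluation: pointwise product of the two K-point DFTs
y_dft = np.fft.ifft(np.fft.fft(w) * np.fft.fft(u)).real

assert np.allclose(y_direct, y_dft)
```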

It should be noted that equation (22) does not explicitly show the HOT of *u*(*n*) and *w*(*n*). However, the DFTs of the polyphase components that appear explicitly in equation (22) are related to the HOT of the corresponding signal as shown in Figure 1. For example, the 0th polyphase component of the output is given by

$$\mathbf{y}\_0(k) = \mathbf{F}\_K^{-1} \mathbf{I}\_0 \mathbf{w}\_H(k) \otimes \mathbf{I}\_0 \mathbf{u}\_H(k) + \mathbf{F}\_K^{-1} \mathbf{D} \sum\_{i=1}^{K-1} \mathbf{I}\_{K-i} \mathbf{w}\_H(k) \otimes \mathbf{I}\_i \mathbf{u}\_H(k). \tag{25}$$

Next, we examine the computational complexity of HOT convolution. To find the HOT of the two signals *w* and *u*, 2*K*<sup>2</sup> log2 *K* multiplications are required. Multiplication with the diagonal matrix **D** requires *K*(*K* − 1) multiplications. Finally, the matrix multiplication requires *K*<sup>3</sup> scalar multiplications. Therefore, the total number of multiplications required is *K*<sup>3</sup> + 2*K*<sup>2</sup> log2 *K* + *K*<sup>2</sup> − *K*. Thus, computation of the output *y* using the HOT requires *K*<sup>3</sup> + 3*K*<sup>2</sup> log2 *K* + *K*<sup>2</sup> − *K* multiplications, which is more than the 6*K*<sup>2</sup> log2 *K* + *K*<sup>2</sup> required by the DFT. When it is required to calculate only one polyphase component of the output, only *K*<sup>2</sup> + 2*K*<sup>2</sup> log2 *K* + *K* log2 *K* multiplications are necessary. Asymptotically in *K*, the HOT could be three times more efficient than the DFT.
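The multiplication counts above can be tabulated directly; a small sketch using the chapter's formulas (the helper names are mine):

```python
import math

def hot_one_polyphase_cost(K):
    # one polyphase component of the output via the HOT:
    # K^2 + 2 K^2 log2 K + K log2 K multiplications
    return K**2 + 2 * K**2 * math.log2(K) + K * math.log2(K)

def dft_conv_cost(K):
    # full output via the DFT: 6 K^2 log2 K + K^2 multiplications
    return 6 * K**2 * math.log2(K) + K**2

for K in (16, 64, 256, 1024):
    # ratio approaches 3 as K grows
    print(K, round(dft_conv_cost(K) / hot_one_polyphase_cost(K), 2))
```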

#### **4. Development of the basic algorithm**



In the block adaptive filter, the adaptation proceeds block-by-block with the weight update equation

$$\mathbf{w}(k+1) = \mathbf{w}(k) + \frac{\mu}{L} \sum\_{i=0}^{L-1} \mathbf{u}(kL+i)e(kL+i),\tag{26}$$

where *d*(*n*) and *y*(*n*) are the desired and output signals, respectively, **u**(*n*) is the tap-input vector, *L* is the block length or the filter length, and *e*(*n*) = *d*(*n*) − *y*(*n*) is the filter error. The DFT is commonly used to efficiently calculate the output of the filter and the sum in the update equation. Since the HOT is more efficient than the DFT when only one polyphase component of the output is required, the block LMS update in equation (26) is modified such that only one polyphase component of the error in the *k*th block is used to update the filter weights. For reasons that will become clear later, the filter length *L* is chosen such that *L* = *K*<sup>2</sup>/2. With this modification, equation (26) becomes

$$\mathbf{w}(k+1) = \mathbf{w}(k) + \frac{2\mu}{K} \sum\_{i=0}^{K/2-1} \mathbf{u}(kL + iK + j)e(kL + iK + j). \tag{27}$$

Since the DFT is most efficient when the length of the filter is equal to the block length [7], this will be assumed in equation (27). The parameter *j* determines which polyphase component of the error signal is being used in the adaptation. This parameter can be changed from block to block. If *j* = 0, the output can be computed using the HOT as in equation (25). A second convolution is needed to compute the sum in equation (27). This sum contains only one polyphase component of the error. If this vector is up-sampled by *K*, the sum is just a convolution between the input vector and the up-sampled error vector. Although all the polyphase components are needed in the sum, the convolution can be computed by the HOT with the same computational complexity as the first convolution since only one polyphase component of the error vector is non-zero.
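The effect of adapting with a single polyphase component, as in equation (27), can be illustrated without any transform at all. In this toy system-identification sketch (all names, sizes, and the step size are my choices), only the *j* = 0 polyphase samples of each block drive the update, yet the weights still converge:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4
L = K**2 // 2                      # filter/block length, L = K^2/2
w_true = rng.standard_normal(L)    # unknown system
w = np.zeros(L)                    # adaptive weights
mu = 0.05
j = 0                              # polyphase component used for adaptation

u_full = rng.standard_normal(L * 4001)
for k in range(1, 4000):
    grad = np.zeros(L)
    for i in range(K // 2):        # only K/2 error samples per block
        n = k * L + i * K + j
        u_vec = u_full[n::-1][:L]  # u(n), u(n-1), ..., u(n-L+1)
        e = w_true @ u_vec - w @ u_vec
        grad += u_vec * e
    w += (2 * mu / K) * grad       # update of equation (27)

assert np.allclose(w, w_true, atol=1e-2)
```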

The block adaptive filter that implements the above algorithm is called the HOT block LMS adaptive filter and is shown in Figure 2. The complete steps of this new, efficient, adaptive algorithm are summarized below:


(a) Append the weight vector with *K*<sup>2</sup>/2 zeros (the resulting vector is now *K*<sup>2</sup> points long as required in the HOT definition) and find its HOT.

(b) Compute the HOT of the input vector

$$\mathbf{u}(k) = \left[u\left((k-1)\tfrac{K^2}{2}\right)\cdots u\left(k\tfrac{K^2}{2}\right)u\left(k\tfrac{K^2}{2}+1\right)\cdots u\left((k+1)\tfrac{K^2}{2}-1\right)\right]^T.\tag{28}$$

Note that this vector contains the input samples for the current and previous blocks.

(c) Use the inverse HOT and equation (22) to calculate the *j*th polyphase component of the circular convolution. The *j*th polyphase component of the output can be found by discarding the first half of the *j*th polyphase component of the circular convolution.

(d) Calculate the *j*th polyphase component of the error, insert a block of *K*/2 zeros, up-sample by *K*, then calculate its HOT.

(e) Circularly flip the vector in (b) and then compute its HOT.

(f) Compute the sum in the update equation using equation (22). This sum is the first half of the elements of the circular convolution between the vectors in parts (e) and (d).


#### **5. Computational complexity analysis**

In this section, we analyze the computational cost of the algorithm and compare it to that of the DFT block adaptive algorithm. Parts (a), (b), and (e) require 3*K*<sup>2</sup> log2 *<sup>K</sup>* multiplications. Part (c) requires *<sup>K</sup>* log2 *<sup>K</sup>* <sup>+</sup> *<sup>K</sup>*2. Part (d) requires *<sup>K</sup>* log2 *<sup>K</sup>* multiplications, and part (f) requires *<sup>K</sup>*<sup>2</sup> <sup>+</sup> *<sup>K</sup>*<sup>2</sup> log2 *<sup>K</sup>* multiplications. The total number of multiplications is thus 4*K*<sup>2</sup> log2 *<sup>K</sup>* <sup>+</sup> <sup>2</sup>*<sup>K</sup>* log2 *<sup>K</sup>* <sup>+</sup> <sup>2</sup>*K*2. The corresponding DFT block adaptive algorithm requires <sup>10</sup>*K*<sup>2</sup> log2 *<sup>K</sup>* <sup>+</sup> <sup>2</sup>*K*<sup>2</sup> multiplications — asymptotically more than twice as many. Therefore, by using only one polyphase component for the adaptation in a block, the computational cost can be reduced by a factor of 2.5. While this complexity reduction comes at the cost of not using all available information, the proposed algorithm provides better estimates than the LMS filter. The reduction of the computational complexity in this algorithm comes from using the polyphase components of the input signal to calculate one polyphase component of the output via the HOT.

**Figure 2.** HOT block LMS adaptive filter.

It is worth mentioning that the fast exact LMS (FELMS) adaptive algorithm [8] also reduces the computational complexity by computing the output from the polyphase components of the input. However, the computational complexity reduction of the FELMS algorithm is less than that found in the DFT and HOT block adaptive algorithms because the FELMS algorithm is designed to have exact mathematical equivalence to, and hence the same convergence properties as, the conventional LMS algorithm. Comparing the HOT block LMS algorithm with the block LMS algorithms described in Chapter 3, the HOT filter performs computationally better.

The multiplication counts for both the DFT block and HOT block LMS algorithms are plotted in Figure 3. The HOT block LMS adaptive filter is always more efficient than the DFT block LMS adaptive filter, and the asymptotic ratio between their computational costs is almost reached at small filter lengths. The computational complexity of the HOT filter can be further improved by relating the HOT of the circularly flipped vector in step (e) to the HOT of the vector in step (b). Another possibility to reduce the computational cost of the HOT block algorithm is to remove the gradient constraint in the filter weight update equation, as has been done in the unconstrained DFT block LMS algorithm [9].

**Figure 3.** Multiplication counts for both the DFT block and HOT block LMS algorithms.

#### **6. Convergence analysis in the time domain**

In this section, we analyze the convergence of the HOT block LMS algorithm in the time domain. We assume throughout that the step size is small. The HOT block LMS filter minimizes the cost

$$\hat{\xi} = \frac{2}{K} \sum\_{i=0}^{\frac{K}{2}-1} \left| e(kL + iK + j) \right|^2,\tag{29}$$


which is the average of the squared errors in the *j*th polyphase error component. From statistical LMS theory, the block LMS algorithm can be analyzed using the stochastic difference equation [10]

$$
\epsilon\_T(k+1) = \left(\mathbf{I} - \mu\mathbf{\Lambda}\right)\epsilon\_T(k) + \phi(k), \tag{30}
$$

where

$$\phi(k) = -\frac{\mu}{L}\mathbf{V}^H \sum\_{i=0}^{L-1} \mathbf{u}(kL+i) \, e^o(kL+i) \tag{31}$$

is the driving force for the block LMS algorithm [10]. We find that the HOT block LMS algorithm has the following driving force:

$$\phi\_{\rm HOT}(k) = -\frac{2\mu}{K} \mathbf{V}^{H} \sum\_{i=0}^{\frac{K}{2}-1} \mathbf{u}(kL + iK + j) \, e^{o}(kL + iK + j). \tag{32}$$

It is easily shown that


$$E\phi\_{\rm HOT}(k) = 0,\tag{33}$$

$$E\phi\_{\rm HOT}(k)\phi\_{\rm HOT}^{H}(k) = \frac{2\mu^2 J\_{\rm min}\Lambda}{K}.\tag{34}$$

The mean square of the *l*th component of equation (34) is given by

$$E\left|\varepsilon\_{l}(k)\right|^{2} = \frac{2\mu\frac{J\_{\text{min}}}{K}}{2 - \mu\lambda\_{l}} + (1 - \mu\lambda\_{l})^{2k} \left( \left|\varepsilon\_{l}(0)\right|^{2} - \frac{2\mu\frac{J\_{\text{min}}}{K}}{2 - \mu\lambda\_{l}} \right),\tag{35}$$

where *λ<sub>l</sub>* is the *l*th eigenvalue of the input autocorrelation matrix. Therefore, the average time constant of the HOT block LMS algorithm is given by

$$\tau = \frac{L^2}{2\mu \sum\_{l=1}^{L} \lambda\_l}.\tag{36}$$

The misadjustment can be calculated directly and is given by

$$M = \frac{\sum\_{l=1}^{L} \lambda\_l E \, |\varepsilon\_l(\infty)|^2}{J\_{\text{min}}}.\tag{37}$$

Using equation (30), one may find *E*|*ε<sub>l</sub>*(∞)|<sup>2</sup> and substitute the result into equation (37). The misadjustment of the HOT block LMS filter is then given by

$$M = \frac{\mu}{K} \sum\_{l=1}^{L} \lambda\_l. \tag{38}$$

Thus, the average time constant of the HOT block LMS filter is the same as that of the DFT block LMS filter<sup>1</sup>. However, the HOT block LMS filter has *K* times higher misadjustment than the DFT block LMS algorithm<sup>2</sup>.
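These conclusions follow directly from equations (36)–(38) and the footnoted DFT-block formulas; a trivial numeric check (white input, so all eigenvalues are equal; the specific numbers are illustrative):

```python
mu, K = 0.01, 8
L = K**2 // 2
lambdas = [1.0] * L                     # white input: equal eigenvalues

tau = L**2 / (2 * mu * sum(lambdas))    # equation (36), same for both filters
M_hot = mu / K * sum(lambdas)           # equation (38)
M_dft = mu / K**2 * sum(lambdas)        # DFT block LMS misadjustment

assert abs(M_hot / M_dft - K) < 1e-12   # K times higher misadjustment
```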

The HOT and DFT block LMS algorithms were simulated using white noise inputs. The desired signal was generated using the linear model *d*(*n*) = *w<sub>o</sub>*(*n*) ∗ *u*(*n*) + *e<sub>o</sub>*(*n*), where *e<sub>o</sub>*(*n*) is the measurement white Gaussian noise with variance 10<sup>−4</sup> and *W<sub>o</sub>*(*z*) = 1 + 0.5*z*<sup>−1</sup> − 0.25*z*<sup>−2</sup> + 0.03*z*<sup>−3</sup> + 0.1*z*<sup>−4</sup> + 0.002*z*<sup>−5</sup> − 0.01*z*<sup>−6</sup> + 0.007*z*<sup>−7</sup>. The learning curves are shown in Figure 4 together with the learning curve of the conventional LMS algorithm. The step sizes of all algorithms were chosen to be the same. The higher mean square error of the HOT algorithm, compared to the DFT algorithm, is the trade-off for reducing the complexity by more than half. As expected, the HOT and DFT block LMS algorithms converge at the same rate.

<sup>1</sup> The average time constant of the DFT block LMS filter is [10] *τ* = *L*<sup>2</sup>/2*µ* ∑<sup>*L*</sup><sub>*l*=1</sub> *λ<sub>l</sub>*.

<sup>2</sup> The misadjustment of the DFT block LMS algorithm is [10] *M* = (*µ*/*K*<sup>2</sup>) ∑<sup>*L*</sup><sub>*l*=1</sub> *λ<sub>l</sub>*.
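The simulation setup can be reproduced in a few lines of numpy. This sketch runs only the conventional-LMS baseline (the common reference curve), since the HOT machinery itself is beyond a snippet; the step size and run length are my choices:

```python
import numpy as np

rng = np.random.default_rng(2)
# W_o(z) coefficients from the model above
w_o = np.array([1, 0.5, -0.25, 0.03, 0.1, 0.002, -0.01, 0.007])
N = 20000
u = rng.standard_normal(N)
e_o = 1e-2 * rng.standard_normal(N)        # measurement noise, variance 1e-4
d = np.convolve(u, w_o)[:N] + e_o          # d(n) = w_o(n) * u(n) + e_o(n)

L, mu = len(w_o), 0.01
w = np.zeros(L)
sq_err = []
for n in range(L, N):
    u_vec = u[n:n - L:-1]                  # u(n), ..., u(n-L+1)
    e = d[n] - w @ u_vec
    w += mu * u_vec * e
    sq_err.append(e**2)

# the steady-state MSE settles near the 1e-4 measurement-noise floor
assert 3e-5 < np.mean(sq_err[-2000:]) < 5e-4
```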


**Figure 4.** Learning curves of the DFT and HOT block LMS algorithms with the conventional LMS filter.

#### **7. Convergence analysis in the HOT domain**

Let *u*(*n*) be the input to the adaptive filter and

$$\mathbf{w}(k) = \begin{bmatrix} w\_0(k) \ w\_1(k) \ \cdots \ w\_{\frac{K^2}{2}-1}(k) \end{bmatrix}^T \tag{39}$$

be the tap-weight vector of the adaptive filter, where *k* is the block index. Define the extended tap-weight vector

$$\hat{\mathbf{w}}(k) = \begin{bmatrix} \mathbf{w}^T(k) \ \mathbf{0} \ \mathbf{0} \ \cdots \ \mathbf{0} \end{bmatrix}^T \tag{40}$$

and the tap-input vector


$$\mathbf{u}(k) = \left[ u\left( (k-1)\tfrac{K^2}{2} \right) \cdots u\left( k\tfrac{K^2}{2} \right) u\left( k\tfrac{K^2}{2} + 1 \right) \cdots u\left( (k+1)\tfrac{K^2}{2} - 1 \right) \right]^T. \tag{41}$$

Denote the HOT transforms of **<sup>u</sup>**(*k*) and **<sup>w</sup>**(*k*) by **<sup>u</sup>***H*(*k*) = **Hu**(*k*) and **<sup>w</sup>***H*(*k*) = **Hw**(*k*), respectively, where **H** is the HOT matrix. The 0th polyphase component of the circular convolution of **u**(*k*) and **w**(*k*) is given by

$$\mathbf{F}\_{\mathbf{K}}\mathbf{y}\_{0}(k) = \mathbf{F}\_{\mathbf{K}}\mathbf{w}\_{0}(k) \otimes \mathbf{F}\_{\mathbf{K}}\mathbf{u}\_{0}(k) + \mathbf{D} \sum\_{i=1}^{K-1} \mathbf{F}\_{\mathbf{K}}\mathbf{w}\_{K-i}(k) \otimes \mathbf{F}\_{\mathbf{K}}\mathbf{u}\_{i}(k). \tag{42}$$

Using **<sup>F</sup>***K***u***i*(*k*) = **<sup>I</sup>***i***Hu**(*k*) = **<sup>I</sup>***i***u***H*(*k*), equation (42) can be written in terms of the HOT of **u**(k) and **w**(k). The result is given by

$$\mathbf{F}\_{\mathbf{K}}\mathbf{y}\_{0}(k) = \mathbf{I}\_{0}\mathbf{w}\_{H}(k) \otimes \mathbf{I}\_{0}\mathbf{u}\_{H}(k) + \mathbf{D} \sum\_{i=1}^{K-1} \mathbf{I}\_{K-i}\mathbf{w}\_{H}(k) \otimes \mathbf{I}\_{i}\mathbf{u}\_{H}(k). \tag{43}$$

The 0th polyphase component of the linear convolution of **w**ˆ (*k*) and **u**(n), the output of the adaptive filter in the *<sup>k</sup>*th block, is given by the last *<sup>K</sup>*/2 elements of **<sup>y</sup>**0(*k*). Let the desired signal be *d*(*n*) and define the extended 0th polyphase component of the desired signal in the *k*th block as

$$\mathbf{d}\_0(k) = \begin{bmatrix} \mathbf{0}\_{\frac{K}{2}} \\ \hat{\mathbf{d}}\_0(k) \end{bmatrix}. \tag{44}$$
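The overlap-save reasoning used above — the trailing half of a circular convolution is free of wrap-around and therefore equals the linear convolution at the same time indices — can be checked numerically. A small sketch, with an arbitrary length N and test filter:

```python
import numpy as np

N = 8                                   # circular convolution length
w = np.array([1.0, -0.5, 0.25, 0.0])    # filter of length N/2
u = np.random.default_rng(0).standard_normal(N)

# N-point circular convolution computed in the DFT domain
wz = np.concatenate([w, np.zeros(N - len(w))])
y_circ = np.fft.ifft(np.fft.fft(wz) * np.fft.fft(u)).real

# full linear convolution for reference
y_lin = np.convolve(wz, u)

# the last N/2 circular samples have no wrap-around, so they
# match the linear convolution at the same indices
assert np.allclose(y_circ[N // 2 :], y_lin[N // 2 : N])
print("last N/2 circular samples equal the linear convolution")
```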

The extended 0th polyphase component of the error signal in the *k*th block is given by

$$\begin{aligned} \mathbf{e}\_0(k) = \begin{bmatrix} \mathbf{0}\_{\frac{K}{2}} \\ \hat{\mathbf{e}}\_0(k) \end{bmatrix} &= \begin{bmatrix} \mathbf{0}\_{\frac{K}{2}} \\ \hat{\mathbf{d}}\_0(k) \end{bmatrix} - \begin{bmatrix} \mathbf{0}\_{\frac{K}{2}\times\frac{K}{2}} & \mathbf{0}\_{\frac{K}{2}\times\frac{K}{2}} \\ \mathbf{0}\_{\frac{K}{2}\times\frac{K}{2}} & \mathbf{I}\_{\frac{K}{2}\times\frac{K}{2}} \end{bmatrix} \mathbf{F}\_K^{-1} \\ &\quad\times \left[ \mathbf{I}\_0\mathbf{w}\_H(k) \otimes \mathbf{I}\_0\mathbf{u}\_H(k) + \mathbf{D}\sum\_{i=1}^{K-1} \mathbf{I}\_{K-i}\mathbf{w}\_H(k) \otimes \mathbf{I}\_i\mathbf{u}\_H(k) \right]. \end{aligned} \tag{45}$$

Multiplying equation (45) by the DFT matrix yields

$$\begin{aligned} \mathbf{F}\_{K}\mathbf{e}\_{0}(k) &= \mathbf{F}\_{K}\begin{bmatrix} \mathbf{0}\_{\frac{K}{2}} \\ \hat{\mathbf{d}}\_{0}(k) \end{bmatrix} - \mathbf{F}\_{K}\begin{bmatrix} \mathbf{0}\_{\frac{K}{2}\times\frac{K}{2}} & \mathbf{0}\_{\frac{K}{2}\times\frac{K}{2}} \\ \mathbf{0}\_{\frac{K}{2}\times\frac{K}{2}} & \mathbf{I}\_{\frac{K}{2}\times\frac{K}{2}} \end{bmatrix} \mathbf{F}\_{K}^{-1} \\\\ &\times \left[ \mathbf{I}\_{0}\mathbf{w}\_{H}(k)\otimes\mathbf{I}\_{0}\mathbf{u}\_{H}(k) + \mathbf{D}\sum\_{i=1}^{K-1} \mathbf{I}\_{K-i}\mathbf{w}\_{H}(k)\otimes\mathbf{I}\_{i}\mathbf{u}\_{H}(k) \right]. \end{aligned} \tag{46}$$

Define $\mathbf{u}\_H^c(k) = \mathbf{H}\mathbf{u}^c(k)$, where $\mathbf{u}^c(k)$ is the circularly shifted version of $\mathbf{u}(k)$. The adaptive filter update equation in the *k*th block is given by

$$\mathbf{w}\_H(k+1) = \mathbf{w}\_H(k) + \mu\, \mathbf{H} \begin{bmatrix} \mathbf{I}\_{\frac{K^2}{2}\times\frac{K^2}{2}} & \mathbf{0}\_{\frac{K^2}{2}\times\frac{K^2}{2}} \\ \mathbf{0}\_{\frac{K^2}{2}\times\frac{K^2}{2}} & \mathbf{0}\_{\frac{K^2}{2}\times\frac{K^2}{2}} \end{bmatrix} \mathbf{H}^{-1} \boldsymbol{\phi}\_H(k), \tag{47}$$


http://dx.doi.org/10.5772/51394


where $\boldsymbol{\phi}\_H(k)$ is found from

$$
\begin{bmatrix}
\mathbf{I}\_{0}\phi\_{H}(k) \\
\mathbf{I}\_{1}\phi\_{H}(k) \\
\mathbf{I}\_{2}\phi\_{H}(k) \\
\vdots \\
\mathbf{I}\_{K-2}\phi\_{H}(k) \\
\mathbf{I}\_{K-1}\phi\_{H}(k)
\end{bmatrix} = \begin{bmatrix}
\mathbf{F}\_{\mathbf{K}}\mathbf{e}\_{0}(k) \\
\mathbf{F}\_{\mathbf{K}}\mathbf{e}\_{0}(k) \\
\mathbf{F}\_{\mathbf{K}}\mathbf{e}\_{0}(k) \\
\vdots \\
\mathbf{F}\_{\mathbf{K}}\mathbf{e}\_{0}(k) \\
\mathbf{F}\_{\mathbf{K}}\mathbf{e}\_{0}(k)
\end{bmatrix} \otimes \begin{bmatrix}
\mathbf{I}\_{0}\mathbf{u}\_{H}^{c}(k) \\
\mathbf{I}\_{1}\mathbf{u}\_{H}^{c}(k) \\
\mathbf{I}\_{2}\mathbf{u}\_{H}^{c}(k) \\
\vdots \\
\mathbf{I}\_{K-2}\mathbf{u}\_{H}^{c}(k) \\
\mathbf{I}\_{K-1}\mathbf{u}\_{H}^{c}(k)
\end{bmatrix} \tag{48}
$$

as

$$\boldsymbol{\phi}\_{H}(k) = \mathbf{I}\_{K}^{-1} \begin{bmatrix} \mathbf{F}\_{K}\mathbf{e}\_{0}(k) \\ \mathbf{F}\_{K}\mathbf{e}\_{0}(k) \\ \mathbf{F}\_{K}\mathbf{e}\_{0}(k) \\ \vdots \\ \mathbf{F}\_{K}\mathbf{e}\_{0}(k) \\ \mathbf{F}\_{K}\mathbf{e}\_{0}(k) \end{bmatrix} \otimes \begin{bmatrix} \mathbf{I}\_{0}\mathbf{u}\_{H}^{c}(k) \\ \mathbf{I}\_{1}\mathbf{u}\_{H}^{c}(k) \\ \mathbf{I}\_{2}\mathbf{u}\_{H}^{c}(k) \\ \vdots \\ \mathbf{I}\_{K-2}\mathbf{u}\_{H}^{c}(k) \\ \mathbf{I}\_{K-1}\mathbf{u}\_{H}^{c}(k) \end{bmatrix}. \tag{49}$$

Finally, the HOT block LMS filter in the HOT domain can be written as

$$\begin{aligned} \mathbf{w}\_H(k+1) &= \mathbf{w}\_H(k) \\ &+ \mu\, \mathbf{H} \begin{bmatrix} \mathbf{I}\_{\frac{K^2}{2}\times\frac{K^2}{2}} & \mathbf{0}\_{\frac{K^2}{2}\times\frac{K^2}{2}} \\ \mathbf{0}\_{\frac{K^2}{2}\times\frac{K^2}{2}} & \mathbf{0}\_{\frac{K^2}{2}\times\frac{K^2}{2}} \end{bmatrix} \mathbf{H}^{-1}\mathbf{I}\_K^{-1} \begin{bmatrix} \mathbf{F}\_K\mathbf{e}\_0(k) \\ \mathbf{F}\_K\mathbf{e}\_0(k) \\ \vdots \\ \mathbf{F}\_K\mathbf{e}\_0(k) \\ \mathbf{F}\_K\mathbf{e}\_0(k) \end{bmatrix} \otimes \begin{bmatrix} \mathbf{I}\_0\mathbf{u}\_H^c(k) \\ \mathbf{I}\_1\mathbf{u}\_H^c(k) \\ \vdots \\ \mathbf{I}\_{K-2}\mathbf{u}\_H^c(k) \\ \mathbf{I}\_{K-1}\mathbf{u}\_H^c(k) \end{bmatrix}. \end{aligned} \tag{50}$$

Next, we investigate the convergence properties of equation (50). We assume the following linear statistical model for the desired signal:

$$d(n) = w^o(n) \ast u(n) + e^o(n), \tag{51}$$

where *w<sup>o</sup>* is the impulse response of the Wiener optimal filter and *eo*(*n*) is the irreducible estimation error, which is white noise and statistically independent of the adaptive filter input. The above equation can be written in the HOT domain form

$$\begin{aligned} \begin{bmatrix} \mathbf{0}\_{\frac{K}{2}} \\ \hat{\mathbf{d}}\_0(k) \end{bmatrix} &= \begin{bmatrix} \mathbf{0}\_{\frac{K}{2}\times\frac{K}{2}} & \mathbf{0}\_{\frac{K}{2}\times\frac{K}{2}} \\ \mathbf{0}\_{\frac{K}{2}\times\frac{K}{2}} & \mathbf{I}\_{\frac{K}{2}\times\frac{K}{2}} \end{bmatrix} \mathbf{F}\_K^{-1} \\ &\quad\times \left[ \mathbf{I}\_0\mathbf{w}\_H^o \otimes \mathbf{I}\_0\mathbf{u}\_H(k) + \mathbf{D}\sum\_{i=1}^{K-1} \mathbf{I}\_{K-i}\mathbf{w}\_H^o \otimes \mathbf{I}\_i\mathbf{u}\_H(k) + \mathbf{F}\_K\mathbf{e}\_0^o(k) \right]. \end{aligned} \tag{52}$$
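To make the model of equation (51) concrete, the sketch below simulates d(n) = w^o(n) ∗ u(n) + e^o(n) and identifies w^o with a plain time-domain block LMS update (not the HOT-domain recursion); the system w^o, noise level, block length, and step size are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
wo = np.array([0.9, -0.4, 0.2])         # assumed Wiener optimal filter w^o
M, B, mu = len(wo), 8, 0.02             # taps, block length, step size

w = np.zeros(M)
u = rng.standard_normal(4000)           # white input
# desired signal per equation (51): filtered input plus white noise e^o(n)
d = np.convolve(u, wo)[: len(u)] + 0.01 * rng.standard_normal(len(u))

for k in range(M, len(u) - B, B):
    grad = np.zeros(M)
    for n in range(k, k + B):
        x = u[n - M + 1 : n + 1][::-1]  # current tap-input vector
        e = d[n] - w @ x                # a priori error e(n)
        grad += e * x
    w += mu * grad / B                  # gradient averaged over the block

print(np.round(w, 2))                   # close to wo
```

Because e^o(n) is white and independent of the input, the weight vector converges to w^o up to a small misadjustment governed by the step size.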

This form will be useful to obtain the stochastic difference equation that describes the convergence of the adaptive algorithm. Using the above equation to replace the desired signal in equation (46), we have

$$\begin{aligned} \mathbf{F}\_K\mathbf{e}\_0(k) &= \mathbf{F}\_K \begin{bmatrix} \mathbf{0}\_{\frac{K}{2}\times\frac{K}{2}} & \mathbf{0}\_{\frac{K}{2}\times\frac{K}{2}} \\ \mathbf{0}\_{\frac{K}{2}\times\frac{K}{2}} & \mathbf{I}\_{\frac{K}{2}\times\frac{K}{2}} \end{bmatrix} \mathbf{F}\_K^{-1} \\ &\quad\times \left[ \mathbf{I}\_0\boldsymbol{\epsilon}\_H(k) \otimes \mathbf{I}\_0\mathbf{u}\_H(k) + \mathbf{D}\sum\_{i=1}^{K-1} \mathbf{I}\_{K-i}\boldsymbol{\epsilon}\_H(k) \otimes \mathbf{I}\_i\mathbf{u}\_H(k) + \mathbf{F}\_K\mathbf{e}\_0^o(k) \right], \end{aligned} \tag{53}$$

where $\boldsymbol{\epsilon}\_H(k)$ is the error in the estimation of the adaptive filter weight vector, i.e., $\boldsymbol{\epsilon}\_H(k) = \mathbf{w}\_H^o - \mathbf{w}\_H(k)$. The *i*th block in equation (50) is given by

$$\mathbf{F}\_{K}\mathbf{e}\_{0}(k)\otimes\mathbf{I}\_{i}\mathbf{u}\_{H}^{c}(k)=\text{Diag}\left[\mathbf{I}\_{i}\mathbf{u}\_{H}^{c}(k)\right]\mathbf{F}\_{K}\mathbf{e}\_{0}(k).\tag{54}$$

Substituting equation (53) into equation (54) yields

$$\begin{aligned} \mathbf{F}\_K\mathbf{e}\_0(k) \otimes \mathbf{I}\_i\mathbf{u}\_H^c(k) &= \text{Diag}\left[\mathbf{I}\_i\mathbf{u}\_H^c(k)\right] \mathbf{F}\_K \begin{bmatrix} \mathbf{0}\_{\frac{K}{2}\times\frac{K}{2}} & \mathbf{0}\_{\frac{K}{2}\times\frac{K}{2}} \\ \mathbf{0}\_{\frac{K}{2}\times\frac{K}{2}} & \mathbf{I}\_{\frac{K}{2}\times\frac{K}{2}} \end{bmatrix} \mathbf{F}\_K^{-1} \\ &\quad\times \left[ \text{Diag}\left[\mathbf{I}\_0\mathbf{u}\_H(k)\right]\mathbf{I}\_0\boldsymbol{\epsilon}\_H(k) + \mathbf{D}\sum\_{i=1}^{K-1} \text{Diag}\left[\mathbf{I}\_{K-i}\mathbf{u}\_H(k)\right]\mathbf{I}\_i\boldsymbol{\epsilon}\_H(k) + \mathbf{F}\_K\mathbf{e}\_0^o(k) \right]. \end{aligned} \tag{55}$$

Upon defining

$$\mathbf{T}\_{i,j} = \text{Diag}\left[\mathbf{I}\_i \mathbf{u}\_H^c(k)\right] \mathbf{L}\_K \,\text{Diag}\left[\mathbf{I}\_j \mathbf{u}\_H(k)\right], \tag{56}$$

where


$$\mathbf{L}\_K = \mathbf{F}\_K \begin{bmatrix} \mathbf{0}\_{\frac{K}{2} \times \frac{K}{2}} & \mathbf{0}\_{\frac{K}{2} \times \frac{K}{2}}\\ \mathbf{0}\_{\frac{K}{2} \times \frac{K}{2}} & \mathbf{I}\_{\frac{K}{2} \times \frac{K}{2}} \end{bmatrix} \mathbf{F}\_K^{-1} \tag{57}$$
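The matrix L_K of equation (57) is straightforward to form numerically; every diagonal entry equals 1/2, which makes the elementwise dominance of its diagonal easy to inspect. A sketch with an arbitrary K:

```python
import numpy as np

K = 16
F = np.fft.fft(np.eye(K))               # DFT matrix F_K
P = np.zeros((K, K))
P[K // 2 :, K // 2 :] = np.eye(K // 2)  # diag(0_{K/2}, I_{K/2})

# L_K = F_K diag(0, I) F_K^{-1}, equation (57)
L = F @ P @ np.linalg.inv(F)

off = L - np.diag(np.diag(L))           # off-diagonal part
print(np.abs(np.diag(L)).max(), np.abs(off).max())
```

Every diagonal entry of L_K comes out as 1/2 while the off-diagonal magnitudes are strictly smaller, which is the dominance visible in Figures 5 and 6.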


the *i*th block of equation (50) can be written as

$$\begin{aligned} \mathbf{F}\_K\mathbf{e}\_0(k) \otimes \mathbf{I}\_i\mathbf{u}\_H^c(k) &= \begin{bmatrix} \mathbf{T}\_{i,0} \ \mathbf{T}\_{i,K-1} \ \mathbf{T}\_{i,K-2} \ \cdots \ \mathbf{T}\_{i,1} \end{bmatrix} \begin{bmatrix} \mathbf{I}\_0\boldsymbol{\epsilon}\_H(k) \\ \mathbf{D}\mathbf{I}\_1\boldsymbol{\epsilon}\_H(k) \\ \mathbf{D}\mathbf{I}\_2\boldsymbol{\epsilon}\_H(k) \\ \vdots \\ \mathbf{D}\mathbf{I}\_{K-1}\boldsymbol{\epsilon}\_H(k) \end{bmatrix} \\ &\quad + \text{Diag}\left[\mathbf{I}\_i\mathbf{u}\_H^c(k)\right]\mathbf{L}\_K\mathbf{e}^o(k). \end{aligned} \tag{58}$$

Using the fact that

$$\text{Diag}\left[\mathbf{v}\right]\mathbf{R}\,\text{Diag}\left[\mathbf{u}\right] = \left(\mathbf{v}\mathbf{u}^T\right)\otimes\mathbf{R},\tag{59}$$

equation (56) can be written as

$$\mathbf{T}\_{i,j} = \left(\mathbf{I}\_i \mathbf{u}\_H^c(k) \left(\mathbf{I}\_j \mathbf{u}\_H(k)\right)^T\right) \otimes \mathbf{L}\_K. \tag{60}$$
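Identity (59), reading ⊗ as the elementwise (pointwise) product used throughout this chapter, can be verified directly on random test data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
v, u = rng.standard_normal(n), rng.standard_normal(n)
R = rng.standard_normal((n, n))

lhs = np.diag(v) @ R @ np.diag(u)   # Diag[v] R Diag[u]
rhs = np.outer(v, u) * R            # (v u^T) ⊗ R, elementwise product

assert np.allclose(lhs, rhs)
print("identity (59) holds")
```

Entrywise, both sides equal v_i R_{ij} u_j, which is why the sandwiching by diagonal matrices collapses to a rank-one mask on R.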

Define

$$\mathbf{U}\_{K^2} = \mathbf{H} \begin{bmatrix} \mathbf{I}\_{\frac{K^2}{2} \times \frac{K^2}{2}} & \mathbf{0}\_{\frac{K^2}{2} \times \frac{K^2}{2}}\\ \mathbf{0}\_{\frac{K^2}{2} \times \frac{K^2}{2}} & \mathbf{0}\_{\frac{K^2}{2} \times \frac{K^2}{2}} \end{bmatrix} \mathbf{H}^{-1}. \tag{61}$$

Then

$$\begin{aligned} \mathbf{w}\_H(k+1) = \mathbf{w}\_H(k) &+ \mu\, \mathbf{U}\_{K^2}\mathbf{I}\_K^{-1}\mathbf{T} \begin{bmatrix} \mathbf{I}\_0\boldsymbol{\epsilon}\_H(k) \\ \mathbf{D}\mathbf{I}\_1\boldsymbol{\epsilon}\_H(k) \\ \mathbf{D}\mathbf{I}\_2\boldsymbol{\epsilon}\_H(k) \\ \vdots \\ \mathbf{D}\mathbf{I}\_{K-1}\boldsymbol{\epsilon}\_H(k) \end{bmatrix} + \mu\, \mathbf{U}\_{K^2}\mathbf{I}\_K^{-1} \begin{bmatrix} \text{Diag}\left[\mathbf{I}\_0\mathbf{u}\_H^c(k)\right] \\ \text{Diag}\left[\mathbf{I}\_1\mathbf{u}\_H^c(k)\right] \\ \vdots \\ \text{Diag}\left[\mathbf{I}\_{K-1}\mathbf{u}\_H^c(k)\right] \end{bmatrix} \mathbf{L}\_K\mathbf{e}^o(k). \end{aligned} \tag{62}$$

The matrix **T** can be written as

$$\begin{split} \mathbf{T} = \left(\mathbf{1}\_{K \times K} \times \mathbf{L}\_{K}\right) \otimes \\ & \quad \begin{bmatrix} \mathbf{I}\_{0} \mathbf{u}\_{H}^{c}(k) \left[\mathbf{I}\_{0} \mathbf{u}\_{H}(k)\right]^{T} & \mathbf{I}\_{0} \mathbf{u}\_{H}^{c}(k) \left[\mathbf{I}\_{K-1} \mathbf{u}\_{H}(k)\right]^{T} & \cdots & \mathbf{I}\_{0} \mathbf{u}\_{H}^{c}(k) \left[\mathbf{I}\_{1} \mathbf{u}\_{H}(k)\right]^{T} \\ \mathbf{I}\_{1} \mathbf{u}\_{H}^{c}(k) \left[\mathbf{I}\_{0} \mathbf{u}\_{H}(k)\right]^{T} & \mathbf{I}\_{1} \mathbf{u}\_{H}^{c}(k) \left[\mathbf{I}\_{K-1} \mathbf{u}\_{H}(k)\right]^{T} & \cdots & \mathbf{I}\_{1} \mathbf{u}\_{H}^{c}(k) \left[\mathbf{I}\_{1} \mathbf{u}\_{H}(k)\right]^{T} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{I}\_{K-1} \mathbf{u}\_{H}^{c}(k) \left[\mathbf{I}\_{0} \mathbf{u}\_{H}(k)\right]^{T} & \mathbf{I}\_{K-1} \mathbf{u}\_{H}^{c}(k) \left[\mathbf{I}\_{K-1} \mathbf{u}\_{H}(k)\right]^{T} & \cdots & \mathbf{I}\_{K-1} \mathbf{u}\_{H}^{c}(k) \left[\mathbf{I}\_{1} \mathbf{u}\_{H}(k)\right]^{T} \end{bmatrix}. \\ \end{split}$$

where × denotes the Kronecker product and $\mathbf{1}\_{K\times K}$ is the $K \times K$ matrix with all elements equal to one. Equivalently, the matrix **T** can be written compactly as

$$\mathbf{T} = \begin{bmatrix} \mathbf{I}\_{0}\mathbf{u}\_{H}^{c}(k) \\ \mathbf{I}\_{1}\mathbf{u}\_{H}^{c}(k) \\ \vdots \\ \mathbf{I}\_{K-2}\mathbf{u}\_{H}^{c}(k) \\ \mathbf{I}\_{K-1}\mathbf{u}\_{H}^{c}(k) \end{bmatrix} \begin{bmatrix} \mathbf{I}\_{0}\mathbf{u}\_{H}(k) \\ \mathbf{I}\_{K-1}\mathbf{u}\_{H}(k) \\ \vdots \\ \mathbf{I}\_{2}\mathbf{u}\_{H}(k) \\ \mathbf{I}\_{1}\mathbf{u}\_{H}(k) \end{bmatrix}^{T} \otimes \left(\mathbf{1}\_{K\times K} \times \mathbf{L}\_{K}\right) = \left(\mathbf{I}\_{K}\mathbf{u}\_{H}^{c}(k)\mathbf{u}\_{H}^{T}(k)\mathbf{I}\_{K}^{c}\right) \otimes \left(\mathbf{1}\_{K\times K} \times \mathbf{L}\_{K}\right).$$

where


$$\mathbf{I}\_{\mathbf{K}}^{c} = \begin{bmatrix} \mathbf{I}\_{0} \\ \mathbf{I}\_{\mathbf{K}-1} \\ \vdots \\ \mathbf{I}\_{1} \end{bmatrix}. \tag{63}$$

Finally, the error in the estimation of the adaptive filter is given by

$$\begin{aligned} \boldsymbol{\epsilon}\_H(k+1) &= \left(\mathbf{I} - \mu\, \mathbf{U}\_{K^2}\mathbf{I}\_K^{-1}\left(\mathbf{I}\_K\mathbf{u}\_H^c(k)\mathbf{u}\_H^T(k)\mathbf{I}\_K^{cT}\right) \otimes \left(\mathbf{1}\_{K\times K} \times \mathbf{L}\_K\right)\mathbf{I}\_K^D\right)\boldsymbol{\epsilon}\_H(k) \\ &\quad - \mu\, \mathbf{U}\_{K^2}\mathbf{I}\_K^{-1} \begin{bmatrix} \text{Diag}[\mathbf{I}\_0\mathbf{u}\_H^c(k)] \\ \text{Diag}[\mathbf{I}\_1\mathbf{u}\_H^c(k)] \\ \vdots \\ \text{Diag}[\mathbf{I}\_{K-2}\mathbf{u}\_H^c(k)] \\ \text{Diag}[\mathbf{I}\_{K-1}\mathbf{u}\_H^c(k)] \end{bmatrix} \mathbf{L}\_K\mathbf{e}^o(k), \end{aligned} \tag{64}$$

where

$$\mathbf{I}\_K^D = \begin{bmatrix} \mathbf{I}\_0 \\ \mathbf{D} \mathbf{I}\_1 \\ \mathbf{D} \mathbf{I}\_2 \\ \vdots \\ \mathbf{D} \mathbf{I}\_{K-2} \\ \mathbf{D} \mathbf{I}\_{K-1} \end{bmatrix}. \tag{65}$$

Therefore, the adaptive block HOT filter convergence is governed by the matrix

$$\boldsymbol{\Psi} = \mathbf{H} \begin{bmatrix} \mathbf{I}\_{\frac{K^2}{2}\times\frac{K^2}{2}} & \mathbf{0}\_{\frac{K^2}{2}\times\frac{K^2}{2}} \\ \mathbf{0}\_{\frac{K^2}{2}\times\frac{K^2}{2}} & \mathbf{0}\_{\frac{K^2}{2}\times\frac{K^2}{2}} \end{bmatrix} \mathbf{H}^{-1}\mathbf{I}\_K^{-1}\left(\mathbf{I}\_K E\,\mathbf{u}\_H^c(k)\mathbf{u}\_H^T(k)\mathbf{I}\_K^{cT}\right) \otimes \left(\mathbf{1}\_{K\times K} \times \mathbf{L}\_K\right)\mathbf{I}\_K^D. \tag{66}$$

The structure of **Ψ** is now analyzed. Using the relation between the HOT and the DFT transforms, we can write

$$\mathbf{I}\_K \mathbf{u}\_H^c = \begin{bmatrix} \mathbf{F}\_K \mathbf{u}\_0^c \\ \mathbf{F}\_K \mathbf{u}\_1^c \\ \vdots \\ \mathbf{F}\_K \mathbf{u}\_{K-2}^c \\ \mathbf{F}\_K \mathbf{u}\_{K-1}^c \end{bmatrix}. \tag{67}$$

**Figure 5.** Three-dimensional representation of **L**16.

**Figure 6.** Three-dimensional representation of **L**32.

It can be easily shown that

$$\mathbf{F}\_K \mathbf{u}\_i^c = \begin{cases} \mathbf{F}\_K^H \mathbf{u}\_i & \text{if } i = 0, \\ \mathbf{D}^\* \mathbf{F}\_K^H \mathbf{u}\_{K-i} & \text{if } i \neq 0. \end{cases} \tag{68}$$

Then we have

$$\mathbf{I}\_K \mathbf{u}\_H^c = \begin{bmatrix} \mathbf{F}\_K^H \mathbf{u}\_0 \\ \mathbf{D}^\* \mathbf{F}\_K^H \mathbf{u}\_{K-1} \\ \vdots \\ \mathbf{D}^\* \mathbf{F}\_K^H \mathbf{u}\_2 \\ \mathbf{D}^\* \mathbf{F}\_K^H \mathbf{u}\_1 \end{bmatrix} \tag{69}$$

and

$$\mathbf{I}\_{K}\mathbf{u}\_{H}^{c}(k)\mathbf{u}\_{H}^{T}(k)\mathbf{I}\_{K}^{\mathbf{c}T} = \begin{bmatrix} \mathbf{F}\_{K}^{H}\mathbf{u}\_{0} \\ \mathbf{D}^{\ast}\mathbf{F}\_{K}^{H}\mathbf{u}\_{K-1} \\ \vdots \\ \mathbf{D}^{\ast}\mathbf{F}\_{K}^{H}\mathbf{u}\_{2} \\ \mathbf{D}^{\ast}\mathbf{F}\_{K}^{H}\mathbf{u}\_{1} \end{bmatrix} \begin{bmatrix} \mathbf{F}\_{K}\mathbf{u}\_{0} \\ \mathbf{F}\_{K}\mathbf{u}\_{K-1} \\ \vdots \\ \mathbf{F}\_{K}\mathbf{u}\_{2} \\ \mathbf{F}\_{K}\mathbf{u}\_{1} \end{bmatrix}^{T} . \tag{70}$$

Taking the expectation of equation (70) yields

$$\mathbf{I}\_K E\,\mathbf{u}\_H^c(k)\mathbf{u}\_H^T(k)\mathbf{I}\_K^{cT} = \begin{bmatrix} \mathbf{F}\_K^H E\mathbf{u}\_0\mathbf{u}\_0^T\mathbf{F}\_K & \mathbf{F}\_K^H E\mathbf{u}\_0\mathbf{u}\_{K-1}^T\mathbf{F}\_K & \cdots & \mathbf{F}\_K^H E\mathbf{u}\_0\mathbf{u}\_1^T\mathbf{F}\_K \\ \mathbf{D}^\*\mathbf{F}\_K^H E\mathbf{u}\_{K-1}\mathbf{u}\_0^T\mathbf{F}\_K & \mathbf{D}^\*\mathbf{F}\_K^H E\mathbf{u}\_{K-1}\mathbf{u}\_{K-1}^T\mathbf{F}\_K & \cdots & \mathbf{D}^\*\mathbf{F}\_K^H E\mathbf{u}\_{K-1}\mathbf{u}\_1^T\mathbf{F}\_K \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{D}^\*\mathbf{F}\_K^H E\mathbf{u}\_1\mathbf{u}\_0^T\mathbf{F}\_K & \mathbf{D}^\*\mathbf{F}\_K^H E\mathbf{u}\_1\mathbf{u}\_{K-1}^T\mathbf{F}\_K & \cdots & \mathbf{D}^\*\mathbf{F}\_K^H E\mathbf{u}\_1\mathbf{u}\_1^T\mathbf{F}\_K \end{bmatrix}.$$

Each block in the above equation is an autocorrelation matrix that is asymptotically diagonalized by the DFT matrix. Each block will also be pointwise multiplied by $\mathbf{L}\_K$. Three-dimensional representations of $\mathbf{L}\_K$ for $K = 16$ and $K = 32$ are shown in Figures 5 and 6, respectively. The diagonal elements of $\mathbf{L}\_K$ are much larger than the off-diagonal elements. Therefore, pointwise multiplying each block in the previous equation with $\mathbf{L}\_K$ makes it more diagonal. If each block were perfectly diagonal, then $\mathbf{I}\_K^{-1}\left(\mathbf{I}\_K E\,\mathbf{u}\_H^c(k)\mathbf{u}\_H^T(k)\mathbf{I}\_K^{cT}\right) \otimes \left(\mathbf{1}\_{K\times K} \times \mathbf{L}\_K\right)\mathbf{I}\_K^D$ would be block diagonal. Asymptotically, the HOT block LMS adaptive filter transforms the $K^2$ modes into $K$ decoupled sets of modes.


**<sup>D</sup>**∗**F***<sup>H</sup>*

*<sup>K</sup> <sup>E</sup>***u***K*−1**u***<sup>T</sup>*

*<sup>K</sup> <sup>E</sup>***u**1**u***<sup>T</sup>*

. .

**<sup>D</sup>**∗**F***<sup>H</sup>*

*<sup>H</sup>*(*k*)**<sup>I</sup>** *cT <sup>K</sup>* <sup>=</sup>

**Figure 6.** Three-dimensional representation of **L**32.

#### **8. Conclusions**

The "HOT convolution," a relation between the HOT of two signals and their circular convolution was derived. The result was used to develop a fast block LMS adaptive filter called the HOT block LMS adaptive filter. The HOT block LMS adaptive filter assumes that the filter and block lengths are the same. This filter requires slightly less than half of the multiplications that are required for the DFT block LMS adaptive filter. The reduction in the computational complexity of the HOT block LMS comes from using only one polyphase component of the filter error used to update the filter weights. Convergence analysis of the HOT block LMS algorithm showed that the average time constant is the same as that of the DFT block LMS algorithm and that the misadjustment is *K* times greater than that of the DFT block LMS algorithm. The HOT block LMS adaptive filter transforms the *K*<sup>2</sup> modes into *K* decoupled sets of modes.
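For reference, the DFT block LMS filter against which the HOT filter is compared can be sketched in a few lines. This is a minimal overlap-save, constrained-gradient implementation assuming the filter length equals the block length *K*; the white-noise system-identification setup, μ = 0.01, and *K* = 8 are illustrative assumptions, not values from the chapter:

```python
import numpy as np

rng = np.random.default_rng(1)
K = 8                                   # filter length = block length
h = rng.normal(size=K)                  # unknown FIR system to identify
x = rng.normal(size=4096)               # white input
d = np.convolve(x, h)[:len(x)]          # desired signal

W = np.zeros(2 * K, dtype=complex)      # weights in the 2K-point DFT domain
x_pad = np.concatenate([np.zeros(K), x])
err = np.zeros(len(x))
mu = 0.01
for b in range(len(x) // K):
    n = b * K
    X = np.fft.fft(x_pad[n:n + 2 * K])                   # old K | new K input samples
    y = np.real(np.fft.ifft(X * W))[K:]                  # overlap-save: last K outputs valid
    e = d[n:n + K] - y
    err[n:n + K] = e
    E = np.fft.fft(np.concatenate([np.zeros(K), e]))
    g = np.real(np.fft.ifft(np.conj(X) * E))[:K]         # gradient constrained to first K taps
    W += mu * np.fft.fft(np.concatenate([g, np.zeros(K)]))

w_hat = np.real(np.fft.ifft(W))[:K]     # time-domain weights, approach h
```

In this noise-free setup the block error decays toward zero and the first *K* time-domain taps of W converge to h; the five 2*K*-point transforms per block dominate the multiplication count that the HOT filter roughly halves.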


Hirschman Optimal Transform Block LMS Adaptive Filter

http://dx.doi.org/10.5772/51394

#### **Author details**

Osama Alkhouli<sup>1,⋆</sup>, Victor DeBrunner<sup>2</sup> and Joseph Havlicek<sup>3</sup>

<sup>⋆</sup> Address all correspondence to: osama\_k@ou.edu

1 Caterpillar Inc., Illinois, USA

2 University of Oklahoma, School of Electrical and Computer Engineering, Oklahoma, USA

3 Florida State University, Electrical and Computer Engineering Department, Florida, USA

#### **References**

[1] I. I. Hirschman, "A note on entropy," *Amer. J. Math.*, vol. 79, pp. 152-156, 1957.

[2] T. Przebinda, V. DeBrunner, and M. Özaydin, "The optimal transform for the discrete Hirschman uncertainty principle," *IEEE Trans. Infor. Theory*, pp. 2086-2090, Jul 2001.

[3] V. DeBrunner, M. Özaydin, and T. Przebinda, "Resolution in time-frequency," *IEEE Trans. ASSP*, pp. 783-788, Mar 1999.

[4] V. DeBrunner, J. Havlicek, T. Przebinda, and M. Özaydin, "Entropy-based uncertainty measures for *L*2(*R*)*n*, ℓ2(*Z*), and ℓ2(*Z*/*NZ*) with a Hirschman optimal transform for ℓ2(*Z*/*NZ*)," *IEEE Trans. ASSP*, pp. 2690-2696, Aug 2005.

[5] V. DeBrunner and E. Matusiak, "An algorithm to reduce the complexity required to convolve finite length sequences using the Hirschman optimal transform (HOT)," *ICASSP 2003*, Hong Kong, China, pp. II-577-580, Apr 2003.

[6] G. Clark, S. Mitra, and S. Parker, "Block implementation of adaptive digital filters," *IEEE Trans. ASSP*, pp. 744-752, Jun 1981.

[7] E. R. Ferrara, "Fast implementation of LMS adaptive filters," *IEEE Trans. ASSP*, vol. ASSP-28, no. 4, Aug 1980.

[8] J. Benesty and P. Duhamel, "A fast exact least mean square adaptive algorithm," *IEEE Trans. ASSP*, pp. 2904-2920, Dec 1992.

[9] D. Mansour and A. H. Gray, "Unconstrained frequency-domain adaptive filter," *IEEE Trans. ASSP*, pp. 726-734, Oct 1982.

[10] Simon Haykin, *Adaptive Filter Theory*, Prentice Hall information and system sciences series, Fourth edition, 2002.


**Chapter 2**

## **On Using ADALINE Algorithm for Harmonic Estimation and Phase-Synchronization for the Grid-Connected Converters in Smart Grid Applications**

Yang Han

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/52547

© 2013 Han; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### **1. Introduction**

The electric power transmission grid has been progressively developed for over a century, from the initial design of local dc networks at low voltage levels, to three-phase high-voltage ac networks, and finally to modern bulk interconnected networks with various voltage levels and multiple complex electrical components. The development of human society and its economic needs is the major driving force behind the revolution of transmission grids, stage by stage, with the aid of innovative technologies. The current power industry is being modernized and tends to deal with challenges more proactively by using state-of-the-art technologies in the areas of sensing, communications, control, computing, and information technology. The shift in the development of transmission grids toward greater intelligence has been summarized as the "smart grid" [see Fig.1].

In a smart transmission network, flexible and reliable transmission capabilities can be facilitated by advanced Flexible AC Transmission Systems (FACTS), high-voltage dc (HVDC) devices, and other power electronics-based devices. The FACTS devices are optimally placed in the transmission network to provide flexible control of the transmission network and increase power transfer levels without new transmission lines. These devices also improve the dynamic performance and stability of the transmission network. Through the utilization of FACTS technologies, advanced power flow control, etc., future smart transmission grids should be able to maximally relieve transmission congestion, fully support deregulation, and enable competitive power markets. In addition, with the increasing penetration of large-scale renewable/alternative energy resources, future smart transmission grids would be able to enable full integration of these renewable energy resources (Wira et al., 2010, Sauter & Lobashov 2011, Varaiya et al., 2011).

**Figure 1.** The vision of the future smart grid (SG) infrastructure

Smart substations would provide advanced power electronics and control interfaces for renewable energy and demand response resources so that they can be integrated into the power grid on a large scale at the distribution level. By incorporating micro-grids, the substation can deliver quality power to customers in a manner such that the power supply degrades gracefully after a major commercial outage, as opposed to a catastrophic loss of power, allowing more of the installations to continue operations. Smart substations should have the capability to operate in the islanding mode, taking into account the transmission capability, load demand, and stability limit, and should provide mechanisms for seamlessly transitioning to islanding operation. Coordination and self-healing are the two key characteristics of the next-generation control functions. These applications require precise tracking of the utility's phase-angle information for high-performance local or remote control, sensing, and fault diagnosis purposes (Froehlich et al., 2011, Han et al., 2009).

On the other hand, the proliferation of nonlinear loads causes significant power quality contamination in electric distribution systems. For instance, high-voltage direct current (HVDC) transmission, electric arc furnaces (EAFs), and variable-speed ac drives that adopt six-pulse power converters as the first power conversion stage produce a large amount of characteristic harmonics and a low power factor, which deteriorate the power quality of the electrical distribution systems. Increasingly restrictive regulations on power quality problems have stimulated the fast development of power quality mitigation devices, which are connected to the grid to improve the energy transmission efficiency of the transmission lines and the quality of the voltage waveforms at the points of common coupling (PCCs) for the customers. These devices are known as flexible AC transmission systems (FACTS) (Fig.2), which are based on grid-connected converters and real-time digital signal processing techniques. Much work has been conducted in the past decades on FACTS technologies, and many FACTS devices have been practically implemented in the high-voltage transmission grid, such as static synchronous compensators (STATCOMs), thyristor controlled series compensators (TCSCs), and unified power flow controllers (UPFCs) (Fig.3) (Cirrincione et al., 2008, Jarventausta et al., 2010).

**Figure 2.** The circuit diagram of the FACTS and HVDC link

**Figure 3.** The circuit diagram of the unified power flow controller (UPFC)

The stable and smooth operation of FACTS equipment is highly dependent on how these power converters are synchronized with the grid. The need for improvements in the existing grid synchronization approaches also stems from the rapid proliferation of distributed generation (DG) units in electric networks. A converter-interfaced DG unit, e.g., a photovoltaic (PV) unit (Fig.4), a wind generator unit (Fig.5), or a micro-turbine-generator unit, under both grid-connected and micro-grid (islanding) scenarios requires accurate converter synchronization in a polluted and/or variable-frequency environment to guarantee stable operation of these grid-connected converters (Jarventausta et al., 2010).

**Figure 4.** The configuration of PV arrays with the electric network

**Figure 5.** The configurations of the wind generators with the network

Besides, an active power filter (APF) (Fig.6) or dynamic voltage restorer (DVR) (Fig.7) rectifier also requires a reference signal that is properly synchronized to the grid. Interfacing power electronic converters to the utility grid, particularly at medium and high voltages, necessitates proper synchronization for the purpose of operation and control of the grid-connected converters. However, the controller signals used for synchronization are often corrupted by harmonics, voltage sags or swells, commutation notches, noise, phase-angle jumps, and frequency deviations (Abdeslam et al., 2007, Cirrincione et al., 2008).

**Figure 6.** The circuit diagram of the shunt active power filter

**Figure 7.** The circuit diagram of the dynamic voltage restorer (DVR)

Therefore, a desired synchronization method must detect the phase angle of the fundamental component of the utility voltages as fast as possible while adequately eliminating the impacts of corrupting sources on the signal. Besides, the synchronization process should be updated not only at the signal zero-crossings, but continuously over the fundamental period of the signal (Chang et al., 2009, Chang et al., 2010). This chapter aims to present the harmonic estimation and grid-synchronization methods using the adaptive linear neural network (ADALINE) (Figs.8 and 9). The mathematical derivation of these algorithms, the parameter design guidelines, and digital simulation results are provided. Their practical application to grid-connected converters in the smart grid is also presented in this chapter.

**Figure 8.** The diagram of the adaptive linear neural network (ADALINE)

**Figure 9.** The grid-synchronization algorithm using the ADALINE-identifier

#### **2. Mathematical model of the adaptive linear neural network (ADALINE)**

The adaptive linear neural network (ADALINE) is used to estimate the time-varying magnitudes and phases of the fundamental and harmonic components of a distorted waveform. The mathematical formulation of ADALINE is briefly reviewed. Consider an arbitrary signal *Y*(*t*) with the Fourier series expansion (Simon, 2002):

$$\begin{aligned} Y(t) &= \sum\_{n=0,1,2,3,\cdots}^{N} A\_n \sin(n\omega t + \varphi\_n) + \varepsilon(t) \\ &= \sum\_{n=0,1,2,3,\cdots}^{N} (a\_n \sin 2\pi n ft + b\_n \cos 2\pi n ft) + \varepsilon(t) \end{aligned} \tag{1}$$


where *An* and *φn* are the amplitude and phase angle of the *n*th order harmonic component, respectively, and *ε*(*t*) represents higher order components and random noise. In order to formulate the harmonic estimation problem using ADALINE, we first define the pattern vector *Xk* and weight vector *Wk* as:

$$X\_k = \left[1, \sin\omega t\_k, \cos\omega t\_k, \dots, \sin N\omega t\_k, \cos N\omega t\_k\right]^T \tag{2}$$

$$W\_k = \left[ b\_0^k, a\_1^k, b\_1^k, a\_2^k, b\_2^k, \dots, a\_N^k, b\_N^k \right]^T \tag{3}$$

The square error on the pattern *Xk* is expressed as:

$$\varepsilon\_k = \frac{1}{2} (d\_k - X\_k^T W\_k)^2 = \frac{1}{2} e\_k^2 = \frac{1}{2} (d\_k^2 - 2d\_k X\_k^T W\_k + W\_k^T X\_k X\_k^T W\_k) \tag{4}$$

where *dk* is the desired scalar output. The mean-square error (MSE) *ε* can be obtained by calculating the expectation of both sides of Eq. (4), as:

$$\varepsilon = E[\varepsilon\_k] = \frac{1}{2}E[d\_k^2] - E[d\_k X\_k^T]W\_k + \frac{1}{2}W\_k^T E[X\_k X\_k^T]W\_k \tag{5}$$

where the weights are assumed to be fixed at *Wk* while computing the expectation. The objective of the adaptive linear neural network (ADALINE) is to find the optimal weight vector *Ŵ<sub>k</sub>* that minimizes the MSE of Eq. (4). For convenience of expression, Eq. (5) is rewritten as (Abdeslam et al., 2007, Simon 2002):

$$\varepsilon = E[\varepsilon\_k] = \frac{1}{2} E[d\_k^2] - P^T W\_k + \frac{1}{2} W\_k^T \mathbf{R} W\_k \tag{6}$$

where *P*<sup>T</sup> and **R** are defined as:

$$P^T = E[d\_k X\_k^T] = E[(d\_k, d\_k\sin\omega t\_k, d\_k\cos\omega t\_k, \cdots, d\_k\sin N\omega t\_k, d\_k\cos N\omega t\_k)] \tag{7}$$


$$\mathbf{R} = E[X\_k X\_k^T] = E \begin{bmatrix} 1 & \sin \omega t\_k & \dots & \cos N \omega t\_k \\ \sin \omega t\_k & \sin \omega t\_k \sin \omega t\_k & \dots & \sin \omega t\_k \cos N \omega t\_k \\ \dots & \dots & \dots & \dots \\ \cos N \omega t\_k & \cos N \omega t\_k \sin \omega t\_k & \dots & \cos N \omega t\_k \cos N \omega t\_k \end{bmatrix} \tag{8}$$

Notably, matrix **R** is real and symmetric, and *ε* is a quadratic function of the weights. The gradient function ∇*ε* corresponding to the MSE function of Eq. (4) is obtained by straightforward differentiation:

$$\nabla \varepsilon = (\frac{\partial \varepsilon}{\partial b\_0^k}, \frac{\partial \varepsilon}{\partial a\_1^k}, \frac{\partial \varepsilon}{\partial b\_1^k}, \dots, \frac{\partial \varepsilon}{\partial a\_N^k}, \frac{\partial \varepsilon}{\partial b\_N^k})^T = -P + \mathbf{R} W\_k \tag{9}$$

which is a linear function of the weights. The optimal set of weights, *Ŵ<sub>k</sub>*, can be obtained by setting ∇*ε* = 0, which yields:

$$-P + \mathbf{R}\hat{W}\_k = 0\tag{10}$$
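Equation (10) can be exercised numerically: with sample averages standing in for the expectations, solving −*P* + **R***W* = 0 recovers the harmonic coefficients of a test waveform exactly. A minimal sketch, in which the 50 Hz fundamental, the sampling rate, and the single-harmonic pattern vector are assumptions for illustration:

```python
import numpy as np

w0 = 2 * np.pi * 50.0                      # assumed fundamental frequency (rad/s)
t = np.arange(0, 0.2, 1 / 6400)            # 0.2 s of samples at 6.4 kHz
d = 2.0 * np.sin(w0 * t) + 0.5 * np.cos(w0 * t)   # desired signal d_k

# pattern vectors X_k = [1, sin(w0 t_k), cos(w0 t_k)]^T stacked as columns
X = np.stack([np.ones_like(t), np.sin(w0 * t), np.cos(w0 * t)])
R = X @ X.T / len(t)                       # sample estimate of R = E[X_k X_k^T]
P = X @ d / len(t)                         # sample estimate of P = E[d_k X_k]
W = np.linalg.solve(R, P)                  # solves -P + R W = 0
# W is approximately [0, 2, 0.5]: the dc, sine, and cosine coefficients
```

Because the record covers an integer number of fundamental periods, the sample **R** is essentially diagonal here and the solve is well conditioned.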

The solution of Eq. (10) is called the Wiener solution or the Wiener filter:


$$
\hat{\mathbf{W}}\_k = \mathbf{R}^{-1} \mathbf{P} \tag{11}
$$

The Wiener solution corresponds to the point in weight space that represents the minimum mean-square error *ε<sub>min</sub>*. To compute the optimal filter, one must first compute **R**<sup>-1</sup> and *P*. However, it is difficult to compute **R**<sup>-1</sup> and *P* accurately when the input data comprise a random stream of patterns (drawn from a stationary distribution). Thus, the gradients of the square error are instead computed directly at the *k*th iteration:

$$\tilde{\nabla}\varepsilon_k = (\frac{\partial \varepsilon_k}{\partial b_0^k}, \frac{\partial \varepsilon_k}{\partial a_1^k}, \frac{\partial \varepsilon_k}{\partial b_1^k}, \dots, \frac{\partial \varepsilon_k}{\partial a_N^k}, \frac{\partial \varepsilon_k}{\partial b_N^k})^T = e_k (\frac{\partial e_k}{\partial b_0^k}, \frac{\partial e_k}{\partial a_1^k}, \frac{\partial e_k}{\partial b_1^k}, \dots, \frac{\partial e_k}{\partial a_N^k}, \frac{\partial e_k}{\partial b_N^k})^T = -e_k X_k \tag{12}$$

where *ek* = (*dk* − *sk*), and *sk* = *Xk<sup>T</sup>Wk* since we are dealing with linear neurons. Therefore, the recursive weights updating equation can be expressed as:

$$W_{k+1} = W_k + \mu(-\tilde{\nabla}\varepsilon_k) = W_k + \mu e_k X_k = W_k + \mu (d_k - s_k) X_k \tag{13}$$

where the learning rate *μ* is used to adjust the convergence speed and the stability of the weights updating process. Taking the expectation of Eq. (12), the following equation is derived:

$$E[\tilde{\nabla}\varepsilon_k] = -E[e_k X_k] = -E[d_k X_k - X_k X_k^T W_k] = \mathbf{R} W_k - P = \nabla\varepsilon \tag{14}$$

From Eq. (14), it can be found that the long-term average of ∇̃*εk* approaches ∇*ε*; hence ∇̃*εk* can be used as an unbiased estimate of ∇*ε*. If the input data set is finite (deterministic), then the gradient ∇*ε* can be computed accurately by collecting the different ∇̃*εk* gradients over all training patterns *Xk* for the same set of weights. The steepest descent search is guaranteed to converge to the Wiener solution provided the learning rate condition of Eq. (15) is satisfied (Simon 2002):

$$0 < \mu < \frac{2}{\lambda\_{\text{max}}} \tag{15}$$

On Using ADALINE Algorithm for Harmonic Estimation and Phase-Synchronization for the Grid-Connected... http://dx.doi.org/10.5772/52547

where *λmax* represents the largest eigenvalue of **R**. As for the learning rate *μ*, increasing it results in faster convergence at the cost of reduced accuracy and larger overshoots in the transient response. Theoretically, a dynamic learning rate has better convergence characteristics; however, its implementation is more demanding and requires a more expensive hardware setup. By a trial-and-error approach, a constant learning rate *μ* between 0.025 and 0.04 is found sufficient for adequately stable convergence, which is consistent with the Widrow-Hoff delta rule (Chang 2009, Chang 2010, Wira et al., 2010).
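As a quick numerical check of the bound in Eq. (15) (this sketch is not from the chapter; the sampling rate, fundamental frequency and harmonic order are hypothetical), one can estimate **R** = *E*[*Xk Xk<sup>T</sup>*] from samples of the pattern vector *Xk* = [1, sin *ωtk*, cos *ωtk*, ⋯, sin *Nωtk*, cos *Nωtk*]<sup>T</sup> and take its largest eigenvalue:

```python
import numpy as np

fs, f0, N = 5000.0, 50.0, 3          # sampling rate, fundamental, harmonic order (hypothetical)
w0 = 2 * np.pi * f0
t = np.arange(2000) / fs             # 20 full fundamental cycles

# Pattern vectors X_k = [1, sin(w0 t_k), cos(w0 t_k), ..., sin(N w0 t_k), cos(N w0 t_k)]^T
X = np.column_stack([np.ones_like(t)] +
                    [f(n * w0 * t) for n in range(1, N + 1) for f in (np.sin, np.cos)])

R = (X.T @ X) / len(t)               # sample estimate of R = E[X_k X_k^T]
lam_max = np.linalg.eigvalsh(R).max()
mu_bound = 2.0 / lam_max             # Eq. (15): 0 < mu < 2 / lambda_max
print(f"lambda_max = {lam_max:.3f}, so 0 < mu < {mu_bound:.3f}")
```

For unit-amplitude sinusoidal regressors, the diagonal of **R** is roughly 1 for the bias term and 0.5 for each sin/cos term, so the stability bound is far looser than the 0.025-0.04 range quoted above; that empirical range also accounts for accuracy and tracking behavior, not stability alone.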

When the mean-square error *ε* is minimized, the weight vector *Ŵ* after convergence would be:

$$
\hat{\mathcal{W}} = \left[ b\_0, a\_1, b\_1, a\_2, b\_2, \dots, a\_N, b\_N \right]^T. \tag{16}
$$

Thus the fundamental component of the measured signal *Y*1(*tk*) is:

$$Y_1(t_k) = a_1 \sin \omega t_k + b_1 \cos \omega t_k \tag{17}$$

Obviously, the dimension of the weight vector *Wk* to be updated depends on the order *N* of the harmonics to be estimated. In the case of a highly distorted load, a lower order neural network structure is not accurate enough when high convergence speed is required, so using a higher order ANN structure is inevitable.
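The harmonic-estimation recursion above can be sketched in a few lines of Python (this example is illustrative and not from the chapter; the sampling rate, harmonic amplitudes, noise level and *μ* = 0.03 are invented, with *μ* chosen inside the 0.025-0.04 range quoted earlier, and the fundamental frequency is assumed known):

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f0, N, mu = 5000.0, 50.0, 5, 0.03
w0 = 2 * np.pi * f0
t = np.arange(5000) / fs                      # one second of data

# Synthetic distorted signal: fundamental + fifth harmonic + noise (amplitudes arbitrary)
d = 1.0 * np.sin(w0 * t + 0.3) + 0.2 * np.sin(5 * w0 * t + 0.8) \
    + 0.02 * rng.standard_normal(t.size)

W = np.zeros(2 * N + 1)                       # weights [b0, a1, b1, ..., aN, bN]
for tk, dk in zip(t, d):
    X = np.concatenate(([1.0],                # pattern vector [1, sin, cos, ..., sin N, cos N]
                        np.ravel([(np.sin(n * w0 * tk), np.cos(n * w0 * tk))
                                  for n in range(1, N + 1)])))
    e = dk - X @ W                            # estimation error e_k = d_k - X_k^T W_k
    W = W + mu * e * X                        # LMS update W_{k+1} = W_k + mu e_k X_k

amp1 = np.hypot(W[1], W[2])                   # fundamental amplitude sqrt(a1^2 + b1^2)
amp5 = np.hypot(W[9], W[10])                  # fifth-harmonic amplitude
print(f"fundamental ~ {amp1:.3f}, fifth harmonic ~ {amp5:.3f}")
```

After one second of data the amplitude estimates settle near the true values 1.0 and 0.2. Raising *N* adds two weights per extra harmonic order, which is exactly the dimensionality growth discussed above.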

#### **3. Synchronization for grid-connected converters using ADALINE technique**

This section formulates the generalized methodology for phase-locked loop (PLL) design and synthesis by using the adaptive linear neural network (ADALINE) technique. The mathematical derivation, the stability analysis and the detailed description of the proposed ADALINE-PLL are outlined consecutively herein. In subsection 3.1, the optimal control parameter selection of the proposed ADALINE-PLL is discussed in terms of continuous-domain and discrete-domain analysis. Furthermore, time-domain simulation results of the proposed ADALINE-PLL under different control parameters are also presented for verification.

#### **3.1. Mathematical formulation of the ADALINE-PLL**


This section presents the grid synchronization technique using the ADALINE algorithm. Firstly, the formulation of the ADALINE problem by using single-phase representation is outlined as follows. An arbitrary grid voltage can be represented as:

$$v_{sa}(t) = V_1 \sin(\omega_0 t + \varphi_1) + \sum_{n=2}^{N} V_n \sin(n\omega_0 t + \varphi_n) \tag{18}$$

where *φ*1 and *φn* are the initial phase angles of the fundamental and the *n*th order harmonic component, respectively. Here the *dc* offset is neglected for the sake of brevity. The phase angle of the fundamental component voltage can be expressed as:

$$
\varphi\_1 = \Delta \theta\_1 + \theta\_1 \tag{19}
$$

where *θ*1 and *Δθ*1 represent the estimated phase angle of the fundamental grid voltage and the estimation error, respectively, obtained from the ADALINE-PLL. Therefore, the phase angle of the *n*th order harmonic component can be expressed as:

$$n\omega_0 t + \varphi_n = n(\omega_0 t + \theta_1) + \varphi_n - n\theta_1 = n(\omega_0 t + \theta_1) + n\Delta\theta_1 + (\varphi_n - n\varphi_1) \tag{20}$$

where *φn* is the initial phase angle of the *n*th order harmonic component. Substituting Eqs. (19) and (20) back into Eq. (18) and rearranging terms, we get:

$$\begin{aligned} v_{sa}(t) &= V_1 \cos(\Delta\theta_1) \sin(\omega_0 t + \theta_1) + V_1 \sin(\Delta\theta_1) \cos(\omega_0 t + \theta_1) \\ &+ \sum_{n=2}^{N} \{ V_n \cos(n\Delta\theta_1 + (\varphi_n - n\varphi_1)) \sin[n(\omega_0 t + \theta_1)] \\ &+ V_n \sin(n\Delta\theta_1 + (\varphi_n - n\varphi_1)) \cos[n(\omega_0 t + \theta_1)] \} \end{aligned} \tag{21}$$

From Eq. (21), it can be deduced that the original signal denoted by Eq. (18) can be regenerated by adjusting the coefficients *Vn*cos(*nΔθ*<sup>1</sup> + (*φ<sup>n</sup>* −*nφ*1)), *Vn*sin(*nΔθ*<sup>1</sup> + (*φ<sup>n</sup>* −*nφ*1)) (*n* = 1, …, *N*), even though the phase angle of the original signal is unknown. The objective of the proposed ADALINE-PLL is to reconstruct the phase information of the fundamental grid voltage *φ*<sup>1</sup> using the least-mean-square (LMS) algorithm. Therefore, the grid voltage denoted by Eq. (18) can be expressed as the inner product of two vectors, namely, the vector of trigonometric functions and the vector of weights in the LMS-based weights updating algorithm. The weight vector *W* is formed by the coefficients of the corresponding trigonometric functions. Following this idea, Eq. (21) can be expressed as:

$$\hat{Y} = \mathbf{W}^T \mathbf{X} \tag{22}$$


where *Ŷ* is the estimated output of the grid voltage *vsa*(*t*) by using the LMS-based linear optimal filter methodology. The vectors *W* and *X*, corresponding to the weight vector and the input vector, respectively, are represented as:

$$W = [V_1 \cos(\Delta\theta_1), V_1 \sin(\Delta\theta_1), \dots, V_n \cos(n\Delta\theta_1 + (\varphi_n - n\varphi_1)), V_n \sin(n\Delta\theta_1 + (\varphi_n - n\varphi_1))]^T \tag{23}$$

$$X = [\sin(\omega_0 t + \theta_1), \cos(\omega_0 t + \theta_1), \dots, \sin[n(\omega_0 t + \theta_1)], \cos[n(\omega_0 t + \theta_1)]]^T \tag{24}$$

Equation (23) can be rewritten as:

$$W = [\omega_{a1}, \omega_{b1}, \dots, \omega_{aN}, \omega_{bN}]^T \tag{25}$$

Notably, the salient difference between the ADALINE algorithm and the ADALINE-PLL algorithm is that the frequency and phase angle signals utilized in the ADALINE weights updating process were assumed to be constant. In the case of the ADALINE-PLL, however, the frequency and phase angle of the fundamental component of the grid voltage are recursively updated by the loop filter (LF) and voltage controlled oscillator (VCO) of the PLL. In other words, the weights updating procedure of the ADALINE is utilized as the phase detector (PD) for the PLL, which generates the error signal to drive the loop filter (LF) and voltage controlled oscillator (VCO), according to the basic definition of a PLL. The graphical interpretation of the proposed ADALINE-PLL is illustrated in Fig. 9. In order to better illustrate the working principle of the proposed ADALINE-PLL, the weights updating law and stability conditions are discussed in detail as follows.

In the discrete domain, the weight vector of the ADALINE should be changed in a minimum manner, subject to the constraint imposed on the updated filter output. Let *Ŵk* denote the old weight vector of the ADALINE filter at the *k*th iteration and *Ŵk*+1 denote its updated weight vector at the (*k*+1)th iteration. Therefore, given the input vector *Xk* and the desired output *Yk*, the change in the weight vector, *δŴk*+1, can be written as:


$$
\delta \hat{\mathcal{W}}\_{k+1} = \hat{\mathcal{W}}\_{k+1} - \hat{\mathcal{W}}\_{k} \tag{26}
$$

For each (*Xk*, *Yk*) pair, there exists at least one *Ŵk*+1 such that the following equation is satisfied:


$$
\hat{W}_{k+1}^H X_k = Y_k \tag{27}
$$

Hence the weights adaptation process is achieved by solving the optimization problem indicated by Eqs. (26)-(27). The cost function at the *k*th iteration can be formulated by using the method of Lagrange multipliers (Wira et al., 2010, Yin et al., 2010), as:

$$J_k = \| \delta\hat{W}_{k+1} \|^2 + \lambda (Y_k - \hat{W}_{k+1}^H X_k) \tag{28}$$

where *λ* denotes the real-valued Lagrange multiplier. The term ||*δŴk*+1||<sup>2</sup> denotes the squared Euclidean norm of the weight change *δŴk*+1. The cost function is a quadratic function of the weight vector *Ŵk*+1, as shown by expanding Eq. (28) into:

$$J_k = (\hat{W}_{k+1} - \hat{W}_k)^H (\hat{W}_{k+1} - \hat{W}_k) + \lambda (Y_k - \hat{W}_{k+1}^H X_k) \tag{29}$$

The optimum weight vector can be found by minimizing the cost function *Jk*. Differentiating the cost function *Jk* with respect to *Ŵk*+1, we get:

$$\frac{\partial J_k}{\partial \hat{W}_{k+1}} = 2 (\hat{W}_{k+1} - \hat{W}_k) - \lambda X_k \tag{30}$$

By setting Eq. (30) equal to zero, the optimum value of *Ŵk*+1, corresponding to the stationary point of the cost function *Jk*, can be derived as:

$$
\hat{W}_{k+1} = \hat{W}_k + \frac{1}{2}\lambda X_k \tag{31}
$$

Hence, the output of the ADALINE as denoted by Eq. (22) can be rewritten as:

$$\mathbf{Y}\_k = \hat{\mathbf{W}}\_{k+1}^H \mathbf{X}\_k = (\hat{\mathbf{W}}\_k + \frac{1}{2}\lambda \mathbf{X}\_k)^H \mathbf{X}\_k = \hat{\mathbf{W}}\_k^H \mathbf{X}\_k + \frac{1}{2}\lambda \|\mathbf{X}\_k\|^2 \tag{32}$$

Then, the Lagrange multiplier *λ* can be obtained as:

$$\lambda = \frac{2 e_k}{\| X_k \|^2} \tag{33}$$


where *ek* = *Yk* − *Ŵk<sup>H</sup>Xk* represents the estimation error of the ADALINE. From Eq. (31) and Eq. (32), the following equation can be derived:

$$
\delta\hat{W}_{k+1} = \hat{W}_{k+1} - \hat{W}_k = \frac{1}{\| X_k \|^2} X_k e_k \tag{34}
$$

In order to ensure stable operation of the weight vector updating process, a positive real scaling factor *μ* (learning rate) is introduced to the step size. Hence Eq. (34) can be redefined as:

$$
\delta\hat{W}_{k+1} = \hat{W}_{k+1} - \hat{W}_k = \frac{\mu}{\| X_k \|^2} X_k e_k \tag{35}
$$

Equivalently,

$$
\hat{W}_{k+1} = \hat{W}_k + \frac{\mu}{\| X_k \|^2} X_k e_k \tag{36}
$$

The aforementioned weights updating scheme, in essence, belongs to the well-known least mean square (LMS) algorithm, which may introduce convergence problems in the case of a small input vector *Xk*, since the squared norm ||*Xk*||<sup>2</sup> appears in the denominator, as indicated by Eq. (36). To solve this problem, Eq. (36) can be modified as (Chang 2009):

$$
\hat{W}_{k+1} = \hat{W}_k + \frac{\mu}{\delta + \| X_k \|^2} X_k e_k \tag{37}
$$

where *δ* is a sufficiently small positive real number (*δ* > 0). The weight adaptation law represented in Eq. (37) is adopted and practically implemented herein.
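As a compact illustration (a sketch under my own variable names, not code from the chapter), the regularized update of Eq. (37) — the familiar ε-NLMS recursion — can be written and exercised on a toy real-valued identification problem:

```python
import numpy as np

def adaline_step(W, X, Y, mu=0.5, delta=1e-6):
    """One update of Eq. (37), real-valued case: W <- W + mu/(delta + ||X||^2) * X * e."""
    e = Y - W @ X                              # estimation error e_k (W^H X reduces to W @ X here)
    return W + (mu / (delta + X @ X)) * X * e, e

# Toy usage: identify a fixed 3-tap parameter vector from random regressors
rng = np.random.default_rng(1)
W_true = np.array([0.5, -1.0, 0.25])
W = np.zeros(3)
for _ in range(500):
    X = rng.standard_normal(3)
    W, _ = adaline_step(W, X, W_true @ X)      # noise-free desired output
print(np.round(W, 3))
```

The regularizer `delta` keeps the step bounded when ‖*Xk*‖ is small, which is exactly the failure mode of the plain LMS form of Eq. (36); in the noise-free run above the weights converge to `W_true`.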

#### **3.2. Stability analysis of the ADALINE**

The selection of the step-size parameter *μ* is a compromise between the estimation accuracy and the convergence speed of the weights updating process. Generally speaking, a higher step-size results in a faster dynamic response and a wider bandwidth of the ADALINE-PLL. On the other hand, if the step-size is selected too small, the corresponding ADALINE is slow in transient response and has a narrow bandwidth in the frequency domain. Assume that the physical mechanism responsible for generating the desired response *Yk* is governed by the multiple regression model:


$$\mathbf{Y}\_k = \hat{\mathbf{W}}\_{k+1}^H \mathbf{X}\_k = \mathbf{W}^H \mathbf{X}\_k + d\_k \tag{38}$$

where *W* represents the model's unknown parameter vector and *dk* represents unknown disturbances that account for various system impairments, such as random noise, modeling errors or other unknown sources. The weight vector *Ŵk* computed by the ADALINE algorithm is an estimate of the actual weight vector *W*; hence the estimation error can be presented by:

$$
\boldsymbol{\varepsilon}\_{k} = \mathbf{W} - \hat{\mathbf{W}}\_{k} \tag{39}
$$

From Eqs. (37)-(39), the increment in the estimation error can be derived as:

$$
\varepsilon_{k+1} = \varepsilon_k - \frac{\mu}{\delta + \| X_k \|^2} X_k e_k \tag{40}
$$

As stated above, the underlying idea of the ADALINE design is to minimize the incremental change in the weight vector *Ŵk*+1 from the *k*th to the (*k*+1)th iteration, subject to a constraint imposed on the updated weight vector *Ŵk*+1. Based on this idea, the stability of the ADALINE algorithm can be investigated by defining the mean-square deviation of the weight vector estimation error, hence we get:

$$\rho_n = E[\| \varepsilon_k \|^2] \tag{41}$$

Taking the squared Euclidean norm of both sides of Eq. (40), rearranging terms, and then taking expectations on both sides, we get:

$$\rho_{n+1} = \rho_n + \mu^2 E\Big[\frac{\| X_k \|^2 \, | e_k |^2}{(\delta + \| X_k \|^2)^2}\Big] - 2\mu E\Big[\frac{\xi_k e_k}{\delta + \| X_k \|^2}\Big] \tag{42}$$

where *ξk* denotes the undisturbed error signal defined by

$$\xi_k = (W - \hat{W}_k)^H X_k = \varepsilon_k^H X_k \tag{43}$$

**Figure 10.** The Matlab/Simulink diagram for the single-phase ADALINE-PLL: (a) top layer; (b)-(d) fundamental, fifth and seventh order harmonic blocks.

From Eq. (42), it can be seen that the mean-square deviation *ρn* decreases exponentially with the number of iterations, and the ADALINE is therefore stable in the mean-square error sense (i.e., the convergence process is monotonic), provided *ρn*+1 < *ρn* is satisfied, which corresponds to the following condition:

$$0 < \mu < \frac{2 E[\xi_k e_k / (\delta + \| X_k \|^2)]}{E[\| X_k \|^2 | e_k |^2 / (\delta + \| X_k \|^2)^2]} \tag{44}$$

Considering the limited rate of variation of parameters in practical grid-connected converter applications, if faster adaptation of the weight vector *Ŵk*+1 than the parameter variation of the input signal is ensured, this inequality can always be satisfied. It should be noted that the selection of the step-size parameter *μ* has a significant effect on the frequency characteristics of the ADALINE-PLL, which is discussed in the forthcoming subsection. Here we first describe the proposed ADALINE-PLL and its implementation in Matlab/Simulink<sup>1</sup>.
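To make the forthcoming block-diagram description concrete, here is a schematic Python sketch of the fundamental-frequency loop of a single-phase ADALINE-PLL. It is a toy reconstruction of the structure described above — ADALINE weights updated by Eq. (37) acting as the phase detector, the per-unit quadrature weight feeding a PI loop filter, and an integrating VCO — but every numeric value (grid frequency, gains, step size, the start-up floor) is my own assumption, not a parameter from the chapter:

```python
import numpy as np

fs = 10000.0; Ts = 1.0 / fs
f_grid, phi_grid = 50.5, 0.7           # true grid frequency (Hz) and phase, unknown to the PLL
mu, delta = 0.03, 1e-6                 # ADALINE step size and regularizer of Eq. (37)
kp, ki = 50.0, 500.0                   # PI loop-filter gains (hypothetical, well damped)

w_nom = 2 * np.pi * 50.0               # nominal frequency used to start the VCO
theta, integ = 0.0, 0.0
W = np.zeros(2)                        # fundamental weights [w_a1, w_b1]
w_vco = w_nom
for k in range(40000):                 # 4 s of samples
    v = np.sin(2 * np.pi * f_grid * k * Ts + phi_grid)  # measured grid voltage (unit amplitude)
    X = np.array([np.sin(theta), np.cos(theta)])        # regressors at the estimated phase theta
    e = v - W @ X
    W += mu / (delta + X @ X) * X * e                   # ADALINE update acting as phase detector
    pd = W[1] / max(np.hypot(W[0], W[1]), 0.2)          # per-unit quadrature weight ~ sin(dtheta);
                                                        # the 0.2 floor only guards start-up
    integ += ki * pd * Ts                               # integral path of the loop filter (LF)
    w_vco = w_nom + kp * pd + integ                     # loop filter output drives the VCO
    theta = (theta + w_vco * Ts) % (2 * np.pi)          # VCO: integrate frequency into phase

print(f"locked frequency ~ {w_vco / (2 * np.pi):.2f} Hz")
```

When the loop locks, *Δθ*1 → 0, so the quadrature weight *ωb*1 → 0 and the integral path holds the frequency offset, mirroring the structure of Figs. 9-10; the estimated frequency settles close to `f_grid`.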

#### **3.3. Description of the proposed ADALINE-PLL**

Figs. 10-11 show the single-phase and three-phase versions of the proposed ADALINE-PLL. The following discussion is mainly focused on the single-phase version of the ADALINE-PLL, but a similar analysis can easily be extended to the three-phase version. For the sake of brevity, only the fundamental component and the fifth and seventh order harmonics are considered in the grid voltages; hence the estimation blocks corresponding to these three components are included in the single-phase ADALINE-PLL. One may extend the order of the ADALINE-PLL by incorporating higher order harmonic blocks in the algorithm according to the particular application. Fig. 10(a) shows the top layer representation of the single-phase ADALINE-PLL; it can be observed that the estimation error, the phase angle of the fundamental component in the grid voltage, and the learning rate are utilized as the input signals to the subsystems, namely, the fundamental frequency block, the fifth order harmonic block and the seventh order harmonic block.

Figs. 10(b)-(d) show the three subsystems for individual harmonic component estimation, namely, the fundamental component and the fifth and seventh order harmonic components. Once again, the weights of the fundamental frequency component are denoted as *ωa*<sup>1</sup> and *ωb*<sup>1</sup>; hence the phase estimation error denoted by *Δθ*1 can be regulated to zero by using a properly designed closed-loop control system, which resembles that of existing grid synchronization schemes. As shown in Fig. 10(b), the per unit representation of the weight *ωb*<sup>1</sup> is

<sup>1</sup> www.mathworks.com

utilized as the input signal for the loop filter (LF) of the PLL, which can be simply derived as:


+

*Ee X EX e X*

 d

*k k k kk* =- = *WW X X* (43)

^

(44)

*<sup>k</sup>* +1than the parameter varia‐

x

sponding to the following condition:

38 Adaptive Filtering - Theories and Applications

plementation in Matlab/Simulink1

the seventh order harmonic block.

1 www.mathworks.com

m

verter applications, if faster adaptation for the weight vector *W*

.

< <

**3.3. Description of the proposed ADALINE-PLL**

**Figure 10.** The Matlab/Simulink diagram for the single-phase ADALINE-PLL


$$\omega_{b1}^{\text{pu}} = \frac{\omega_{b1}}{\sqrt{\omega_{a1}^2 + \omega_{b1}^2}} = \frac{V_1 \sin(\Delta \theta_1)}{\sqrt{V_1^2 \sin^2(\Delta \theta_1) + V_1^2 \cos^2(\Delta \theta_1)}} = \sin(\Delta \theta_1) \tag{45}$$
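The normalization in Eq. (45) is easy to check numerically. The sketch below (plain Python, with an illustrative amplitude *V*1 and phase error *Δθ*1, not values from the chapter) confirms that the per-unit weight reduces to sin(*Δθ*1) regardless of the voltage amplitude:

```python
import math

def wb1_pu(V1, dtheta1):
    """Per-unit fundamental weight, Eq. (45): wb1 / sqrt(wa1^2 + wb1^2)."""
    wa1 = V1 * math.cos(dtheta1)  # in-phase weight after convergence
    wb1 = V1 * math.sin(dtheta1)  # quadrature weight after convergence
    return wb1 / math.sqrt(wa1**2 + wb1**2)

# The amplitude V1 cancels out; only the phase error survives.
for V1 in (0.5, 1.0, 230.0):
    assert abs(wb1_pu(V1, 0.1) - math.sin(0.1)) < 1e-12
```

This is why the PLL loop filter can be driven by the per-unit weight directly: its steady-state value depends only on the phase error, not on the grid voltage amplitude.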

On Using ADALINE Algorithm for Harmonic Estimation and Phase-Synchronization for the Grid-Connected... http://dx.doi.org/10.5772/52547 41

The derived signal *ωb*1 *pu* is then used as the input for the phase tracking algorithm. By incorporating the adaptive linear optimal filter methodology, the proposed ADALINE-PLL exhibits noticeable advantages over the existing grid synchronization algorithms in terms of response speed, accuracy and robustness.

**Figure 11.** The Matlab/Simulink diagram for the three-phase ADALINE-PLL

Fig. 11 shows the corresponding three-phase version of the proposed ADALINE-PLL, which has a similar architecture to the single-phase version. One of the salient features of the three-phase ADALINE-PLL algorithm is that the Clarke transformation and the Park transformation are applied consecutively to derive the *q*-axis component of the grid voltages, similar to the procedure adopted in the conventional three-phase PLL (CPLL) and the virtual PLL (VPLL). However, the adaptive linear optimal filter (ADALINE) is used as the phase detector (PD) section, which generates the *dc* component for the voltage controlled oscillator (PI regulator). It should be noted that there is one fundamental frequency shift when the electric quantities are transformed from the stationary *α*-*β* reference frame to the synchronously rotating reference frame (*d*-*q* frame). Besides, it is well known that typical balanced nonlinear loads produce characteristic harmonics of orders −5, +7, −11, +13, …, i.e., 6*n*±1 (*n* an integer), corresponding to the 6*n*th order harmonic components in the synchronously rotating reference frame. Therefore, the 2nd order harmonic in Fig.11 corresponds to the fundamental frequency negative sequence component, while the 6th order harmonic corresponds to the 5th order harmonic (negative sequence) and the 7th order harmonic (positive sequence) in the stationary phase a-b-c frame. Generally speaking, the harmonic components considered in the proposed ADALINE-PLL are selected according to the particular application and the available computational resources.
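The Clarke/Park chain described above can be sketched in a few lines. A minimal Python illustration (amplitude-invariant Clarke transform assumed, balanced unit-amplitude voltages; sign conventions for *vq* vary between texts) shows that the *q*-axis component reduces to sin(*θ* − *θ̂*) and therefore vanishes when the PLL is locked:

```python
import math

def q_axis(theta, theta_hat):
    """Clarke then Park transform of a balanced three-phase set; returns vq."""
    va = math.cos(theta)
    vb = math.cos(theta - 2 * math.pi / 3)
    vc = math.cos(theta + 2 * math.pi / 3)
    # Amplitude-invariant Clarke transform (abc -> alpha-beta)
    valpha = (2 / 3) * (va - 0.5 * vb - 0.5 * vc)   # equals cos(theta)
    vbeta = (1 / math.sqrt(3)) * (vb - vc)          # equals sin(theta)
    # Park transform (alpha-beta -> d-q) using the estimated angle
    vq = -valpha * math.sin(theta_hat) + vbeta * math.cos(theta_hat)
    return vq

# Locked: vq ~ 0; small angle error: vq ~ sin(error)
assert abs(q_axis(0.7, 0.7)) < 1e-12
assert abs(q_axis(0.7, 0.6) - math.sin(0.1)) < 1e-12
```

In the three-phase ADALINE-PLL this *vq* signal is what the ADALINE phase detector processes before the PI regulator.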

#### **3.4. Parameter selection of the ADALINE-PLL**


In this section, the parameter design of the single-phase ADALINE-PLL is discussed using continuous-domain (*s*-domain) analysis, discrete-domain (*z*-domain) analysis and time-domain simulation. It is found that the proposed ADALINE-PLL behaves as a band-pass filter around the fundamental frequency and as a notch filter at the harmonic frequencies.

#### *3.4.1. Continuous-domain (s-domain) analysis*

Assume the phase angle of the fundamental grid voltage detected by the closed-loop ADALINE-PLL is denoted by *θ̂*, the integral of the estimated angular frequency *ω̂*0. In the steady state, the estimated angular frequency *ω̂*0 can be considered constant, hence the phase angle can be approximated as *θ̂* = *ω̂*0*t*. Therefore, the block diagram of the ADALINE-PLL indicated by Fig.10 can be simplified as Fig.12, provided that the estimated angular frequency *ω̂*0 is within its neighborhood, i.e., *ω̂*0′ ≤ *ω̂*0 ≤ *ω̂*0″ (*ω̂*0′ and *ω̂*0″ represent the lower and upper boundaries which define the lock range of the PLL). Referring to the fundamental frequency block in Fig.12, the estimated fundamental component in the time domain can be represented as:

$$v_{sa1}(t) = \left( [e(t) \cdot \cos(\hat{\omega}_0 t)] * h_1(t) \right) \cdot \cos(\hat{\omega}_0 t) + \left( [e(t) \cdot \sin(\hat{\omega}_0 t)] * h_1(t) \right) \cdot \sin(\hat{\omega}_0 t) \tag{46}$$

**Figure 12.** Frequency domain diagram for quasi-steady state analysis of the ADALINE-PLL

where *e*(*t*) represents the estimation error of the ADALINE, *vsa*1(*t*) represents the estimated fundamental component of the grid voltage, *h*1(*t*) represents the integration operator, and the asterisk denotes convolution. Applying the Laplace transform to Eq. (46) and rearranging terms, we get:

$$V_{sa1}(s) = \frac{1}{2} [H_1(s + j\hat{\omega}_0) + H_1(s - j\hat{\omega}_0)] \cdot E(s) \tag{47}$$

where *Vsa*1(*s*), *H*1(*s*) and *E*(*s*) correspond to the Laplace transforms of *vsa*1(*t*), *h*1(*t*) and *e*(*t*), respectively. In Eq. (47), *H*1(*s*) is represented as:

$$H\_1(\mathbf{s}) = \frac{k\_1}{\mathbf{s}}\tag{48}$$


where *k*1 is the integration gain, corresponding to the learning rate (*μ*) of the weight updating process (*μ*=*k*1*T*). Combining Eq.(47) and Eq.(48), we get:

$$G_1(s) = \frac{V_{sa1}(s)}{E(s)} = \frac{k_1 s}{s^2 + \hat{\omega}_0^2} \tag{49}$$

Similarly, for the *n*th order harmonic block in Fig.12, the generalized transfer function from estimation error *E*(*s*) to the individual harmonic component output *Vsan*(*s*), can be derived as:

$$G_n(s) = \frac{V_{san}(s)}{E(s)} = \frac{k_n s}{s^2 + (n \hat{\omega}_0)^2} \tag{50}$$

For the present case, the fundamental component, fifth and seventh order harmonics are considered, hence the error transfer function from the input *Vsa*(*s*) to *E*(*s*), can be represent‐ ed as:

$$G\_{error}(s) = \frac{E(s)}{V\_{sa}(s)} = \frac{1}{1 + G\_1(s) + G\_5(s) + G\_7(s)}\tag{51}$$

Similarly, the transfer function from the input *Vsa*(*s*) to the estimated fundamental compo‐ nent *Vsa*1(*s*), is:


$$G_{fund}(s) = \frac{V_{sa1}(s)}{V_{sa}(s)} = \frac{G_1(s)}{1 + G_1(s) + G_5(s) + G_7(s)} \tag{52}$$

Fig. 13 shows the bode-plot of the ADALINE when only the fundamental frequency block is considered. The frequency response of the ADALINE under variations of the center frequency *ω̂*0 and of the integration gain is shown in Fig.13(a) and Fig.13(b), respectively. Fig.13(a) shows the open-loop frequency response of the ADALINE with the variation of the center frequency; it is interesting to notice that this characteristic provides flexible frequency tracking capability, in contrast to the conventional adaptive linear neural network (ADALINE) algorithm, whose frequency response cannot adapt to frequency variations in the input signal. It can be observed from Fig.13(b) that the integration gain, i.e., the learning rate (*μ*), has a significant effect on the frequency characteristics of the ADALINE. A small learning rate results in a sharp amplitude-frequency curve and a steep phase-frequency curve; it also implies a narrow bandwidth and a slow transient response of the weight updating process. A higher learning rate, on the other hand, implies a flat amplitude-frequency curve, which improves the dynamic response and increases the bandwidth of the ADALINE.


**Figure 13.** Bode plot of the ADALINE when only the fundamental frequency block is considered. (a) Open-loop fre‐ quency response of ADALINE with the variation of the center frequency, (b) Closed-loop frequency response of ADA‐ LINE with the variation of gain.

Fig.14 shows the frequency response of the ADALINE when the fundamental component, fifth and seventh harmonic components are considered. Fig.14(a) shows the bode-plot from the input signal *Vsa*(*s*) to the estimation error *E*(*s*). It can be observed that it exhibits as a typi‐ cal notch filter, and significant attenuation is observed in the amplitude-frequency curve at the harmonic components under consideration. The attenuation at particular harmonic fre‐ quency is controlled by the selection of the learning rate of ADALINE, higher learning rate implies higher attenuation. Fig.14(b) shows the bode-plot from the input signal *Vsa*(*s*) to the estimated fundamental component *Vsa*1(*s*). It can be observed that it exhibits a band-pass fil‐ ter around the fundamental frequency, and a notch filter at the considered harmonic fre‐ quencies. In case of large frequency variation in grid voltages, the learning rates of the ADALINE should be sufficiently high to ensure a wide bandwidth. Besides, it should be noted that the number of harmonics considered in the ADALINE-PLL can be easily extend‐ ed to higher order harmonic components according to the particular applications.
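The notch and band-pass behavior follows directly from Eqs. (49)-(52) and can be probed numerically. The sketch below assumes the learning rate *μ* = 0.035 and *T* = 100 μs quoted later in the chapter, so that *kn* = *μ*/*T* = 350 for every block, and evaluates the transfer functions slightly off the exact resonances, where the responses are finite:

```python
import math

W0 = 2 * math.pi * 50          # estimated fundamental frequency (rad/s)
K = 0.035 / 100e-6             # k_n = mu / T = 350 for n = 1, 5, 7

def G(n, s):
    """Resonator branch, Eq. (50): G_n(s) = k_n*s / (s^2 + (n*w0)^2)."""
    return K * s / (s * s + (n * W0) ** 2)

def G_error(s):   # Eq. (51): input voltage to estimation error
    return 1 / (1 + G(1, s) + G(5, s) + G(7, s))

def G_fund(s):    # Eq. (52): input voltage to estimated fundamental
    return G(1, s) / (1 + G(1, s) + G(5, s) + G(7, s))

# Deep notch just beside the 5th harmonic, near-unity gain beside w0
assert abs(G_error(1j * 5 * W0 * 1.001)) < 0.05
assert abs(abs(G_fund(1j * W0 * 1.001)) - 1.0) < 0.01
# Harmonics are rejected in the fundamental estimate
assert abs(G_fund(1j * 5 * W0 * 1.001)) < 0.05
```

Exactly at a resonance the branch gain is unbounded (the denominator of Eq. (50) is zero on the imaginary axis), which is why the evaluation points are offset by 0.1%.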

**Figure 14.** Bode plot of the ADALINE when the fundamental frequency block, the fifth and seventh harmonic blocks are considered.

It should be noted that the frequency domain analysis is based on the quasi-steady state model of the ADALINE, which serves the purpose of phase detection (PD) for the PLL. The estimated phase error signal is then utilized as the input for the loop filter (LF), which is se‐ lected as the standard proportional-integral (PI) regulator for the present case. Here the line‐ arized model for the phase estimation can be described as Fig.15(a). It is interesting to observe that the derived linearized model for the phase estimation resembles that of the ex‐ isting PLL algorithms. The closed-loop transfer function of the linearized model indicated by Fig.15(a) can be represented as:

$$H\_c(\mathbf{s}) = \frac{\hat{\theta}(\mathbf{s})}{\theta(\mathbf{s})} = \frac{K\_f(\mathbf{s})}{\mathbf{s} + K\_f(\mathbf{s})} \tag{53}$$

where *θ̂*(*s*) and *θ*(*s*) denote the Laplace transforms of the estimated phase angle *θ̂* and the actual phase angle *θ*, respectively. To achieve a good trade-off between the filter performance and system stability, a proportional-integral (PI) type filter is utilized for the loop filter (LF), which can be given as:


$$K_f(s) = k_p \left( 1 + \frac{1}{\tau s} \right) \tag{54}$$

where *kp* and *τ* denote the proportional gain and time constant of the PI regulator, and the integrator gain *ki* = *kp*/*τ*. Substituting Eq. (54) into Eq. (53), the closed-loop transfer function can be rewritten in the generalized second-order form:

$$H_c(s) = \frac{2 \xi \omega_n s + \omega_n^2}{s^2 + 2 \xi \omega_n s + \omega_n^2} \tag{55}$$

where


$$\omega_n = \sqrt{k_p / \tau}, \quad \xi = \frac{k_p}{2 \omega_n} = \frac{\sqrt{\tau k_p}}{2} \tag{56}$$

The open loop transfer function of Fig.15 (a) can be derived as:

$$G_{open}(s) = \frac{K_f(s)}{s} = \frac{k_p \left( 1 + \frac{1}{\tau s} \right)}{s} = \frac{k_p \left( s + \frac{1}{\tau} \right)}{s^2} \tag{57}$$

The root locus for the PLL modeled in the *s*-domain is shown in Fig.15(b). There are two open loop poles at the origin of the *s*-plane and one open loop zero at *s*=-1/*τ*. However, it is interesting to notice from Fig.15(b) that the *s*-domain model never predicts an unstable mode for any combination of PI parameters. Therefore, the discrete domain (*z*-domain) would be necessary to study the stability characteristic of the proposed ADALINE-PLL, as discussed in the subsequent section.
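The relations in Eqs. (55)-(57) can also be used to check a given gain set. A small sketch (using *kp* = 300, *ki* = 10000, the values used later in the time-domain simulations) recovers *ωn* and *ξ* from Eq. (56) and confirms that the closed-loop poles of *s*² + 2*ξωn s* + *ωn*² = 0 stay in the left half-plane, in line with the *s*-domain root locus:

```python
import math

kp, ki = 300.0, 10000.0        # loop-filter gains (used in Section 3.5)
tau = kp / ki                  # PI time constant, since ki = kp / tau

wn = math.sqrt(kp / tau)       # natural frequency, Eq. (56)
xi = kp / (2 * wn)             # damping ratio, Eq. (56)
assert abs(wn - 100.0) < 1e-6 and abs(xi - 1.5) < 1e-6

# Closed-loop characteristic polynomial: s^2 + 2*xi*wn*s + wn^2 = s^2 + kp*s + ki
disc = kp * kp - 4 * ki
roots = [(-kp + math.sqrt(disc)) / 2, (-kp - math.sqrt(disc)) / 2]
# Both poles are real and negative for these gains: the loop is stable
assert all(r < 0 for r in roots)
```

Note that 2*ξωn* = *kp* and *ωn*² = *ki*, so the continuous-time loop is stable for any positive gain pair, which is exactly why the *z*-domain analysis of the next subsection is needed to find real stability limits.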

#### *3.4.2. Discrete-domain (z-domain) analysis*

In the discrete domain, Eq. (50) can be rewritten as:

$$G_n(z) = \frac{V_{san}(z)}{E(z)} = \frac{k_n z (z - \cos \Omega_n)}{z^2 - 2 z \cos \Omega_n + 1} \tag{58}$$

where *Ωn* = *nω̂*0*T*, and *T* is the sampling period specified according to the particular application; for the present case, *T*=100μs is selected, which is a typical sampling period for low-voltage power converters. Hence, the discrete-domain transfer function from *Vsa*(*z*) to *E*(*z*) can be represented as:

$$G\_{error}(\mathbf{z}) = \frac{E(\mathbf{z})}{V\_{sa}(\mathbf{z})} = \frac{1}{1 + G\_1(\mathbf{z}) + G\_5(\mathbf{z}) + G\_7(\mathbf{z})} = \frac{1}{1 + \sum\_{n=1,5,7} \frac{k\_n z (z - \cos \Omega\_n)}{z^2 - 2z \cos \Omega\_n + 1}}\tag{59}$$


**Figure 15.** Small signal analysis of the proposed ADALINE-PLL in *s*-domain: (a) The approximated second order linear‐ ized model for phase estimation and (b) Root locus in s-domain of the linearized model.

Assuming that *G*(*z*) = *G*1(*z*) + *G*5(*z*) + *G*7(*z*), *ω̂*0 = 2×*π*×50, *T*=100μs, and that the integration gains *kn* of the individual harmonic components are identical for the sake of simplicity (*kn*=*K*), the following representation can be derived:

$$\mathrm{G(z)} = \mathrm{K} \frac{0.0002988z^5 - 0.001479z^4 + 0.002944z^3 - 0.002944z^2 + 0.001479z - 0.0002988}{z^6 - 5.926z^5 + 14.71z^4 - 19.56z^3 + 14.71z^2 - 5.926z + 1} \tag{60}$$
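The denominator of Eq. (60) is simply the product of the three resonator denominators *z*² − 2*z* cos *Ωn* + 1 for *n* = 1, 5, 7; multiplying them out with *ω̂*0 = 2*π*×50 and *T* = 100 μs reproduces the quoted coefficients:

```python
import math

W0, T = 2 * math.pi * 50, 100e-6

def polymul(a, b):
    """Multiply two polynomials given as descending coefficient lists."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

den = [1.0]
for n in (1, 5, 7):
    c = math.cos(n * W0 * T)
    den = polymul(den, [1.0, -2.0 * c, 1.0])  # z^2 - 2 cos(Omega_n) z + 1

# Matches z^6 - 5.926 z^5 + 14.71 z^4 - 19.56 z^3 + 14.71 z^2 - 5.926 z + 1
expected = [1.0, -5.926, 14.71, -19.56, 14.71, -5.926, 1.0]
for coeff, ref in zip(den, expected):
    assert abs(coeff - ref) < 0.005
```

The palindromic coefficient pattern reflects the fact that each resonator has its poles on the unit circle at *e*^±*jΩn*.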

The root locus for the ADALINE modeled in the *z*-domain is shown in Fig.16. There are two open loop zeros at *z*=1, a pair of conjugate zeros and three pairs of conjugate poles distributed in the *z*-plane. It can be observed from Fig.16 that the stability margin increases with the integration gain K when 80<K<554 (0.008<μ<0.055) and decreases with K when 554<K<6833 (0.055<μ<0.68). Moreover, it can be observed from the root locus diagram that when 80<K<6833 (0.008<μ<0.68), the ADALINE system is stable; otherwise it is unstable.


**Figure 16.** Small signal analysis of the proposed ADALINE-PLL in *z*-domain: (a) The approximated second order linear‐ ized model for phase estimation and (b) Root locus in *z*-domain of the linearized model.

The ADALINE subsystem is assumed to be stable in the following discrete-domain analysis, which implies that the phase detection is achieved. The *z*-domain analysis will be performed on a discrete-time PLL system with a second-order loop filter, as shown in Fig.17(a), where the block *Kd*(*z*) is the *z*-transform of the loop filter and voltage-controlled oscillator (VCO); hence the closed-loop transfer function can be represented as:

$$H\_c(z) = \frac{\hat{\theta}(z)}{\theta(z)} = \frac{K\_d(z)}{1 + K\_d(z)}\tag{61}$$

For the second order loop using the PI type filter, *Kd*(*z*) can be obtained as

$$K\_d(z) = k\_p \frac{z(z-\alpha)}{\left(z-1\right)^2} \tag{62}$$

**3.5. Time-domain simulation results of the ADALINE-PLL**

rate (*μ*) when the loop regulator gains are selected as:*kp*=300, *ki*

learning rate *μ*=0.035.

*ki*

tion to a reference clock source.

Figs.18-19 show the time-domain simulation results of the single phase version of the pro‐ posed ADALINE-PLL under different control parameters. The grid voltage is assumed to contain 0.1 p.u. 5th order harmonic and 0.1 p.u. 7th order harmonic components and a transi‐ ent voltage sag occurs at t=0.05s to test the dynamic response of the ADALINE-PLL. Fig.18 shows the performance of the single-phase ADALINE-PLL with the variation of learning

On Using ADALINE Algorithm for Harmonic Estimation and Phase-Synchronization for the Grid-Connected...
http://dx.doi.org/10.5772/52547

where *α* = 1 − *T* / *τ*, *τ* is the time constant of the loop filter, and *T* denotes the sampling period of the discrete system. The transfer function of the closed-loop system in the discrete-time domain can be derived by substituting Eq. (62) into Eq. (61) as

$$H\_c(z) = H\_{cm} \frac{z(z-\alpha)}{z^2 - az + b} \tag{63}$$

where

$$H\_{cm} = \frac{k\_p}{1 + k\_p}, a = \frac{2 + k\_p \alpha}{1 + k\_p}, b = \frac{1}{1 + k\_p} \tag{64}$$

**Figure 17.** Root locus of discrete-time ADALINE system

The root locus for the PLL modeled in the *z*-domain is shown in Fig.17(b). It can be observed that there are two open-loop poles at *z*=1 and two open-loop zeros at *z*=0 and *z*=*α*. It is interesting to note that, since the open-loop zero location (*α*) is a function of the time constant *τ*, the *z*-domain model can predict unstable loop performance for the condition *T* > 2*τ*, in which case the open-loop zero *α* is located on the negative real axis outside the unit circle. For *T* ≪ *τ*, the quantity *α* is close to unity; in this case, the *z*-domain and *s*-domain models predict similar characteristics for jitter<sup>2</sup> frequencies within the loop's bandwidth. Moreover, the selection of the parameter *kp* is a tradeoff between the loop's bandwidth and dynamic response.
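The pole locations implied by Eqs. (62)-(64) can be checked numerically. The following Python sketch computes the open-loop zero *α* and the closed-loop poles of Eq. (63); the gain *kp*=300 follows the text, while the sampling period and time-constant values are illustrative assumptions rather than the chapter's exact simulation settings.

```python
import cmath

def pll_poles(kp, T, tau):
    """Zero location alpha and closed-loop poles of
    H_c(z) = H_cm * z(z - alpha) / (z^2 - a*z + b), with a, b as in Eq. (64)."""
    alpha = 1.0 - T / tau                       # open-loop zero location
    a = (2.0 + kp * alpha) / (1.0 + kp)         # Eq. (64)
    b = 1.0 / (1.0 + kp)                        # Eq. (64)
    disc = cmath.sqrt(a * a - 4.0 * b)          # roots of z^2 - a*z + b
    return alpha, ((a + disc) / 2.0, (a - disc) / 2.0)

# T << tau: alpha is close to unity and both poles stay inside the unit circle
alpha_s, poles_s = pll_poles(kp=300.0, T=1e-4, tau=0.03)
print(alpha_s, [abs(p) for p in poles_s])

# T > 2*tau: alpha < -1 (zero outside the unit circle) and one pole escapes
alpha_u, poles_u = pll_poles(kp=300.0, T=1e-4, tau=4e-5)
print(alpha_u, [abs(p) for p in poles_u])
```

Running the two cases confirms the discussion above: for *T* ≪ *τ* the poles lie inside the unit circle, while for *T* > 2*τ* a pole moves outside it.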

#### **3.5. Time-domain simulation results of the ADALINE-PLL**


48 Adaptive Filtering - Theories and Applications


Figs.18-19 show the time-domain simulation results of the single-phase version of the proposed ADALINE-PLL under different control parameters. The grid voltage is assumed to contain 0.1 p.u. 5th-order harmonic and 0.1 p.u. 7th-order harmonic components, and a transient voltage sag occurs at *t*=0.05s to test the dynamic response of the ADALINE-PLL. Fig.18 shows the performance of the single-phase ADALINE-PLL with the variation of the learning rate (*μ*) when the loop regulator gains are selected as *kp*=300, *ki*=10000. It can be observed that if the learning rate is selected too small, the estimation error of the ADALINE-PLL would be remarkable and there would be significant oscillation in the estimated frequency and the phase estimation error (see the dash line and the dash dot line in Fig.18). The solid line in Fig.18 shows the performance of the ADALINE-PLL corresponding to the optimal learning rate *μ*=0.035.

**Figure 18.** The performance of single-phase ADALINE-PLL with the variation of learning rate (μ) when *kp*=300, *ki*=10000. (Solid line: μ=0.035; dash line: μ=0.015; dash dot line: μ=0.025.)

<sup>2</sup> Jitter—The time variation of a characteristic of a periodic signal in electronics and telecommunications, often in relation to a reference clock source.
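The learning-rate trade-off described above can be reproduced with a generic Widrow-Hoff (LMS) ADALINE fed by sine/cosine regressors at the fundamental, 5th and 7th orders. The sketch below is illustrative only: the 50 Hz grid frequency, 0.1 ms sampling period and harmonic amplitudes are assumptions loosely matching the text, not the chapter's exact simulation model.

```python
import math

def adaline_estimate(mu, f=50.0, T=1e-4, n_samples=2000):
    """Estimate the fundamental amplitude of a distorted signal with an ADALINE.
    Regressors: sin/cos pairs at harmonic orders 1, 5, 7 (Widrow-Hoff update)."""
    orders = (1, 5, 7)
    amp = {1: 1.0, 5: 0.1, 7: 0.1}          # assumed harmonic content
    w = [0.0] * (2 * len(orders))           # adaptive weight vector
    for n in range(n_samples):
        t = n * T
        v = sum(amp[k] * math.sin(2 * math.pi * k * f * t) for k in orders)
        x = []                              # regressor vector at time t
        for k in orders:
            x += [math.sin(2 * math.pi * k * f * t),
                  math.cos(2 * math.pi * k * f * t)]
        e = v - sum(wi * xi for wi, xi in zip(w, x))   # estimation error
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, x)]  # LMS update
    return math.hypot(w[0], w[1])           # estimated fundamental amplitude

# Larger mu converges faster; a very small mu is still far from the true
# amplitude (1.0) after the same number of samples.
fast = adaline_estimate(mu=0.035)
slow = adaline_estimate(mu=0.002, n_samples=200)
```

With *μ*=0.035 the estimated fundamental amplitude settles close to its true value within a few cycles, while a much smaller step size leaves a large residual error over the same interval, mirroring the behavior reported for Fig.18.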


**Figure 19.** The performance of single-phase ADALINE-PLL with variation of *kp*, *ki* when μ=0.035. (Solid line: *kp*=300, *ki*=10000; dash line: *kp*=250, *ki*=30000; dash dot line: *kp*=500, *ki*=6000.)

Fig.19 shows the performance of the ADALINE-PLL with the variation of regulator gains when the learning rate is predefined. It can be observed that the dynamic response of the ADALINE-PLL is mainly determined by the proportional gain *kp*: if *kp* is selected too small, the ADALINE-PLL becomes sluggish and the estimated frequency and phase error decay slowly (dash line in Fig.19). On the other hand, if the gain is selected too high, there would be a large overshoot in the estimated frequency and the phase estimation error (dash dot line in Fig.19). It should be noted that the performance of the ADALINE-PLL is less sensitive to the integration gain *ki*. The solid line in Fig.19 shows the performance of the ADALINE-PLL corresponding to the optimal regulator parameters.

#### **4. Performance comparison with the existing PLL algorithms**

This section presents the performance comparison among the existing PLL algorithms and the proposed ADALINE-PLL. Firstly, a brief introduction of the enhanced PLL (EPLL) and the *park*-PLL is presented. Then, the simulation results of these algorithms are compared with those of the ADALINE-PLL under grid voltage disturbances, such as grid voltage sag, harmonics and random noise contamination scenarios.

#### **4.1. The enhanced phase-locked loop (EPLL)**


In the recent literature, the enhanced PLL (EPLL) system was proposed (Karimi-Ghartemani et al., 2004). The major improvement introduced by the EPLL is in the PD mechanism, which is replaced by a new strategy that allows more flexibility and provides more information, such as amplitude and phase angle. The mechanism of the EPLL is based on estimating the in-phase and quadrature-phase amplitudes of the desired signal and hence has potential application in communication systems that employ quadrature modulation techniques.

The Matlab/Simulink diagram of the EPLL is shown in Fig.20. It can be observed that there are three gains, denoted as *kg*, *kp* and *ki*, which are selected to control the convergence speed for the amplitude, phase and frequency of the fundamental component of the input signal. The guideline for the selection of these gains, however, is not trivial. Control loop interaction exists because the amplitude, phase and frequency estimations compete with each other; if any of these gains is varied, the performance and stability of the closed-loop algorithm are affected. Generally, the gain for the frequency estimation (*ki*) should be very small to ensure stability. However, this results in slow dynamic performance under frequency deviation in the grid voltage. If the frequency estimation is disabled by setting *ki* to zero, steady-state error may appear or the algorithm may even diverge under large deviations in the input. Therefore, this EPLL scheme is difficult to implement in practice, especially for grid-connected converters, which have demanding requirements for tracking accuracy, stability and reliability of the synchronization algorithm (Karimi-Ghartemani et al., 2004).

**Figure 20.** The Matlab/Simulink diagram for the enhanced PLL (EPLL).
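For reference, the three-gain EPLL structure described above can be sketched as a forward-Euler discretization of the standard EPLL equations (Karimi-Ghartemani et al., 2004). The gain values below, and the choice *ki*=0 (frequency loop frozen at the nominal value, a case discussed in the text), are illustrative assumptions, not the chapter's tuning.

```python
import math

def epll_step(u, A, phi, omega, T, kg, kp, ki):
    """One forward-Euler step of the EPLL estimator: amplitude, phase and
    frequency loops driven by the error between the input u and the
    reconstructed signal A*sin(phi)."""
    e = u - A * math.sin(phi)                           # estimation error
    A_n = A + T * kg * e * math.sin(phi)                # amplitude loop (kg)
    omega_n = omega + T * ki * e * math.cos(phi)        # frequency loop (ki)
    phi_n = phi + T * (omega + kp * e * math.cos(phi))  # phase loop (kp)
    return A_n, phi_n, omega_n

# Illustrative run: 50 Hz unit-amplitude sine, frequency loop disabled (ki=0)
T, w0 = 1e-4, 2 * math.pi * 50.0
A, phi, omega = 0.5, 0.0, w0
for n in range(5000):                                   # 0.5 s of simulated time
    u = math.sin(w0 * n * T)
    A, phi, omega = epll_step(u, A, phi, omega, T, kg=200.0, kp=200.0, ki=0.0)
```

With the frequency matched, the amplitude and phase loops lock and *A* settles at the true value; re-enabling a small *ki* lets the scheme also track frequency deviations, at the cost of the loop interactions noted above.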

#### **4.2. The** *Park* **phase-locked loop (***Park***-PLL)**

The *park*-PLL is another single-phase version of the three-phase synchronous reference frame (SRF) PLL (Filho, R. M. S., et al., 2008). As shown in Fig.21, the circuit diagram of the *park*-PLL consists of two matrix transformations, namely, the Park transformation and the inverse Park transformation. The component *vβ* of the stationary frame is obtained by inverse Park transformation of the filtered synchronous components *vd*' and *vq*' in order to emulate a three-phase balanced electric system. The time constants *τd* and *τ<sup>q</sup>* of the two first-order low-pass filters (FOLPFs) determine the dynamic characteristics of the phase detection (PD) section.


**Figure 21.** The Matlab/Simulink diagram of the *park*-PLL

It was reported that the PD is always asymptotically stable around the equilibrium condition *ω̂* ≅ *ω*. As for the selection of time constants, if *τd* (or *τq*) is made too small, a pair of real poles takes place, resulting in a slow dynamic response. On the other hand, if *τd* (or *τq*) is made too high, a pair of complex conjugate poles with a small real part takes place, which makes the *park*-PLL slow and oscillatory. It was suggested that the filter cutoff frequency be set to about two times the line frequency to ensure a fast dynamic response (Filho, R. M. S., et al., 2008).

After the cutoff frequency of the low-pass filters is selected, the compensator gains, namely *kp* and *ki*, can be set in order to meet dynamic response and line disturbance rejection specifications. However, it should be noted that each harmonic component of order *h* and amplitude *Vh* in the input grid voltage will produce two components of orders *h*±1 and amplitude *Vh*/2 in the PD output. Besides, a *dc* component in the input voltage will also lead to a fundamental-frequency oscillation in the *dq* components. Therefore, a tradeoff between the speed of dynamic response and harmonic rejection capability should be achieved to optimize the performance of the *park*-PLL.
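The two matrix transformations at the heart of the *park*-PLL PD section can be written out explicitly. The sketch below uses one common convention for the Park pair (a rotation by the estimated phase θ and its inverse); it is illustrative only and omits the FOLPFs and the PI loop filter shown in Fig.21.

```python
import math

def park(v_alpha, v_beta, theta):
    """Park transformation: rotate the stationary (alpha, beta) pair into the
    synchronous (d, q) frame aligned with the estimated phase theta."""
    vd = math.cos(theta) * v_alpha + math.sin(theta) * v_beta
    vq = -math.sin(theta) * v_alpha + math.cos(theta) * v_beta
    return vd, vq

def inverse_park(vd, vq, theta):
    """Inverse Park transformation: rotate (d, q) back to (alpha, beta).
    In the park-PLL this regenerates the emulated v_beta from vd', vq'."""
    v_alpha = math.cos(theta) * vd - math.sin(theta) * vq
    v_beta = math.sin(theta) * vd + math.cos(theta) * vq
    return v_alpha, v_beta

# When theta equals the true phase, vq vanishes and vd carries the amplitude;
# driving vq to zero is exactly what the PI loop filter does.
theta_true = 0.7
vd, vq = park(math.cos(theta_true), math.sin(theta_true), theta_true)
```

Because the two transformations are a rotation and its transpose, applying one after the other with the same θ recovers the original (α, β) pair, which is what lets the filtered *vd*', *vq*' components emulate a balanced system.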


#### **4.3. The performance evaluation among the EPLL, the** *Park***-PLL, and the ADALINE-PLL**

Fig.22 shows the simulation results corresponding to the estimated frequency in the grid voltage and the phase estimation error when the grid is subjected to a 0.7 per unit (p.u.) voltage sag. Here the existing grid synchronization schemes, namely the enhanced PLL (EPLL) and the *park*-PLL, are also simulated for the sake of comparison. It can be observed that the *park*-PLL and the EPLL have a similar dynamic response in the estimated frequency, with an overshoot of 5 Hz when the voltage sag occurs. It is interesting to notice that the response time of the *park*-PLL and the EPLL is longer when the grid voltage recovers to the normal condition. The proposed ADALINE-PLL shows the lowest frequency overshoot compared with the other grid synchronization schemes. As far as the phase estimation error is concerned, that of the *park*-PLL and the EPLL has a high transient overshoot with noticeable oscillations, whereas the proposed ADALINE-PLL shows the best dynamic response, with the smallest phase estimation error and an overshoot of about 2 degrees. It can be concluded from the estimated frequency and the phase estimation error that the ADALINE-PLL provides a more robust performance when subject to a significant sag in the grid voltage.

Fig.23 shows the simulation results corresponding to the estimated frequency in the grid voltage and the phase estimation error when the grid is contaminated by harmonics. The 0.3 per unit (p.u.) 5th-order harmonic and 0.3 p.u. 7th-order harmonic components are added to the grid voltage at *t*=0.05s with a duration of 0.15s to test the immunity of the various grid synchronization schemes. The *park*-PLL and the EPLL show noticeable oscillations in the estimated frequency when the harmonics are added to the grid voltage. Besides, the *park*-PLL shows a longer settling time when the grid voltage recovers to the normal condition. The EPLL shows the highest estimation error in grid frequency, with an amplitude of about 20 Hz, and the *park*-PLL shows an estimation error of about 10 Hz when the harmonics are imposed. However, the proposed ADALINE-PLL shows the lowest frequency overshoot (0.5 Hz) and the highest estimation accuracy in the estimated frequency compared to the other grid synchronization schemes. Furthermore, the phase estimation error of the *park*-PLL and the EPLL is remarkable during transients, and the *park*-PLL is found to have a large settling time when the grid voltage recovers. Besides, the EPLL has significant ripples in the phase estimation error. However, the proposed ADALINE-PLL shows negligible estimation error compared to the other algorithms, which implies that the proposed ADALINE-PLL has better robustness under harmonic contamination in grid voltages.

**Figure 22.** Performance comparison among the EPLL, the *park*-PLL and the proposed ALOF-PLL algorithm under 0.7 p.u. voltage sag in grid voltages (note: the ADALINE-PLL is abbreviated as ALOF-PLL).


**Figure 23.** Performance comparison among the *park*-PLL, the EPLL and the proposed ADALINE-PLL algorithm under 0.3 p.u. 5th order harmonic (negative sequence) and 0.3 p.u. 7th order harmonic (positive sequence) components in grid voltages



**Figure 24.** Performance comparison among the *park*-PLL, the EPLL and the ADALINE-PLL algorithm when random noise (power=5e-6) is suddenly applied in grid voltages

Fig.24 shows the simulation results corresponding to the estimated frequency in the grid voltage and the phase estimation error when the grid voltage is contaminated by random noise. Random noise of power density 10e-5 per unit (p.u.) is added to the grid voltage at *t*=0.05s with a duration of 0.15s to test the immunity of the various grid synchronization schemes. Similar to the case of suddenly applied harmonics, the *park*-PLL and the EPLL show noticeable oscillations in the estimated frequency when the noise is added to the grid voltage. Besides, the *park*-PLL shows a longer settling time when the grid voltage recovers to the normal condition. The EPLL shows the highest estimation error in grid frequency, with an amplitude of about 5 Hz, and the *park*-PLL shows an estimation error of about 2 Hz when the noise is imposed. However, the proposed ADALINE-PLL shows the lowest frequency oscillation (0.2 Hz) and the highest estimation accuracy in the estimated frequency compared to the other grid synchronization schemes. Moreover, the phase estimation error of the *park*-PLL and the EPLL is remarkable during transients, and the *park*-PLL is found to have a large settling time when the grid voltage recovers. Besides, the *park*-PLL has a maximum phase estimation error of about 3 degrees, and the phase estimation error of the EPLL is less than 2 degrees. However, the proposed ADALINE-PLL shows negligible estimation error compared to the other algorithms, with an amplitude of less than 0.5 degree. The estimated frequency and the phase estimation error in Fig.24 indicate that the proposed ADALINE-PLL shows better robustness when the grid voltage is contaminated by random noise.

#### **5. Conclusions**


Electrical power systems are undergoing a transition to the smart grid owing to the advancement of modern control and communication technologies and the requirement of real-time marketing. In the smart grid, power converters are indispensable components which connect the renewable energy resources, the FACTS devices, and power quality conditioning devices to the grid. Hence, the accurate grid synchronization of these power converters is crucial to ensure their stable operation. This book chapter aims to provide a systematic approach to the adaptive linear neural network (ADALINE) algorithm for real-time harmonic estimation and phase synchronization for grid-connected converters, which are the fundamental building blocks of the smart grid infrastructure.

The mathematical derivation of the ADALINE algorithm and the ADALINE-PLL scheme is presented, followed by the stability analysis, the continuous-domain and discrete-domain models, and the guidelines for parameter selection of the ADALINE-PLL algorithm. The performance of the ADALINE-PLL is further validated by comparison with the existing *park*-PLL and EPLL algorithms. It can be expected that the presented ADALINE-based algorithms will find wide application in grid-connected converters for smart grid applications.

#### **Acknowledgment**

This work is financially supported by the Fundamental Research Funds for the Central Universities of China under grant No. ZYGX2011J093.

#### **Author details**

Yang Han\*

Address all correspondence to: hanyang\_facts@hotmail.com

Dept. of Power Electronics, School of Mechatronics Engineering, University of Electronic Science and Technology of China, Chengdu, China

#### **References**

[1] Abdeslam, D. O., Wira, P., & Chapuis, Y. A. (2007). A unified artificial neural network architecture for active power filters. *IEEE Transactions on Industrial Electronics*, 54(1), 61-76.

[2] Chang, G. W., Chen, C. I., & Liang, Q. W. (2009). A two-stage ADALINE for harmonics and inter-harmonics measurement. *IEEE Transactions on Industrial Electronics*, 56(6), 2220-2228.

[3] Chang, G. W., Liu, Y. J., & Su, H. J. (2010). On real-time simulation for harmonic and flicker assessment of an industrial system with bulk nonlinear loads. *IEEE Transactions on Industrial Electronics*, 57(9), 2998-3009.

[4] Cirrincione, M., Pucci, M., & Vitale, G. (2008). A single-phase DG generation unit with shunt active power filter capability by adaptive neural filtering. *IEEE Transactions on Industrial Electronics*, 55(5), 2093-2110.

[5] Filho, R. M. S., Seixas, P. F., Cortizo, P. C., Torres, L. A. B., & Souza, A. F. (2008). Comparison of three single-phase PLL algorithms for UPS applications. *IEEE Transactions on Industrial Electronics*, 55(8), 2923-2932.

[6] Froehlich, J., Larson, E., & Patel, S. N. (2011). Disaggregated end-use energy sensing for the smart grid. *IEEE Pervasive Computing*, 10(1), 28-39.

[7] Han, Y., Khan, M. M., & Chen, C. (2008). A novel harmonic-free power factor corrector based on T-type APF with adaptive linear neural network (ADALINE) control. *Simulation Modelling Practice and Theory*, 16(9), 1215-1238.

[8] Han, Y., Xu, L., & Chen, C. (2009). A novel synchronization scheme for grid-connected converters by using adaptive linear optimal filter based PLL (ALOF-PLL). *Simulation Modelling Practice and Theory*, 17(7), 1299-1345.

[9] Jarventausta, P., Repo, S., & Partanen, J. (2010). Smart grid power system control in distributed generation environment. *Annual Reviews in Control*, 34(2), 277-286.

[10] Karimi-Ghartemani, M., & Iravani, M. R. (2004). A method for synchronization of power electronic converters in polluted and variable-frequency environments. *IEEE Transactions on Power Systems*, 19(3), 1263-1270.

[11] Kefalas, T. D., & Kladas, A. G. (2010). Harmonic impact on distribution transformer no-load loss. *IEEE Transactions on Industrial Electronics*, 57(1), 193-200.

[12] Haykin, S. (2002). *Adaptive Filter Theory* (4th ed.). Prentice Hall.

[13] Sauter, T., & Lobashov, M. (2011). End-to-end communication architecture for smart grids. *IEEE Transactions on Industrial Electronics*, 58(4), 1218-1228.

[14] Varaiya, P. P., Wu, F. F., & Bialek, J. W. (2011). Smart operation of smart grid: Risk-limiting dispatch. *Proceedings of the IEEE*, 99(1), 40-57.

[15] Wira, P., Abdeslam, D., & Mercklé, J. (2010). Artificial neural networks to improve current harmonics identification and compensation. In G. Rigatos (Ed.), *Intelligent Industrial Systems: Modelling, Automation and Adaptive Behavior*. IGI Publications.

[16] Yin, J. J., Tang, W., & Man, K. F. (2010). A comparison of optimization algorithms for biological neural network identification. *IEEE Transactions on Industrial Electronics*, 57(3), 1127-1131.


**Chapter 3**


### **Applications of a Combination of Two Adaptive Filters**

#### Tõnu Trump

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/50451

#### **1. Introduction**

Designing a Least Mean Square (LMS) family adaptive algorithm involves solving the well-known trade-off between the initial convergence speed and the mean-square error in steady state, according to the requirements of the application at hand. The trade-off is controlled by the step-size parameter of the algorithm. A large step size leads to fast initial convergence, but the algorithm also exhibits a large mean-square error in the steady state; on the contrary, a small step size slows down the convergence but results in a small steady state error [9,17]. In several applications it is, however, desirable to have both, and hence one would like to design algorithms that overcome this trade-off.

Variable step size adaptive schemes offer a potential solution, allowing one to achieve both fast initial convergence and low steady state misadjustment [1, 8, 12, 15, 18]. How successful these schemes are depends on how well the algorithm is able to estimate the distance of the adaptive filter weights from the optimal solution. The variable step size algorithms use different criteria for calculating the proper step size at any given time instance. For example, the algorithm proposed in [15] changes the time-varying convergence parameters in such a way that the change is proportional to the negative gradient of the squared estimation error with respect to the convergence parameter. Squared instantaneous errors have been used in [12], and the squared autocorrelation of errors at adjacent time instances in [1], to modify the step size. In reference [18] the norm of the projected weight error vector is used as a criterion to determine how close the adaptive filter is to its optimum performance.

More recently there has been an interest in a combination scheme that is able to optimize the trade-off between convergence speed and steady state error [14]. The scheme consists of two adaptive filters that are simultaneously applied to the same inputs, as depicted in Figure 1. One of the filters has a large step size, allowing fast convergence, and the other one has a small step size for a small steady state error. The outputs of the filters are combined through a mixing parameter λ. The performance of this scheme has been studied for several parameter update schemes [2, 6, 19]. The reference [2] uses a convex combination, i.e. λ is constrained to lie between 0 and 1. The reference [19] presents a transient analysis of a slightly modified version of this scheme. In those papers the parameter λ is found using an LMS-type adaptive scheme and computing the sigmoidal function of the result. The reference [6] takes another approach, computing the mixing parameter using an affine combination. That paper uses the ratio of time averages of the instantaneous errors of the filters. The error function of the ratio is then computed to obtain λ.

© 2013 Trump; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
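As an illustrative sketch of the sigmoidal scheme of [2, 19] (not the output-signal-based scheme analyzed in this Chapter), one combination step for real-valued signals could look as follows; the function name and step size `mu_a` are hypothetical:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def combine_step(y1, y2, d, a, mu_a=0.1):
    """One combination step: mix the fast (y1) and slow (y2) filter
    outputs through lambda = sigmoid(a), then adapt the auxiliary
    variable 'a' with an LMS-type stochastic-gradient rule, as in the
    convex-combination schemes of [2, 19]."""
    lam = sigmoid(a)
    y = lam * y1 + (1.0 - lam) * y2      # combined output
    e = d - y                            # combined error
    a = a + mu_a * e * (y1 - y2) * lam * (1.0 - lam)
    return y, a

# one step starting from a = 0, i.e. lambda = 0.5
y, a = combine_step(y1=0.9, y2=0.4, d=1.0, a=0.0)
```

Because λ is the sigmoid of *a*, it stays within (0, 1), which is exactly the convex constraint of [2].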


In [13] a convex combination of two adaptive filters with different adaptation schemes has been investigated with the aim to improve the steady state characteristics. One of the adaptive filters in that paper uses the LMS algorithm and the other one the Generalized Normalized Gradient Descent algorithm. The combination parameter λ is computed using stochastic gradient adaptation. In [24] the convex combination of two adaptive filters is applied in a variable filter length scheme to gain improvements in low SNR conditions. In [11] the combination has been used to join two affine projection filters with different regularization parameters. The work [7] uses the combination on parallel binary structured LMS algorithms. These three works use the LMS-like scheme of [5] to compute λ.

It should be noted that schemes involving two filters have been proposed earlier [3, 16]. However, in those early schemes only one of the filters has been adaptive, while the other one has used fixed filter weights. Updating of the fixed filter has been accomplished by copying all the coefficients from the adaptive filter when the adaptive filter has been performing better than the fixed one.

In this Chapter we compute the mixing parameter λ from the output signals of the individual filters. The way of calculating the mixing parameter is optimal in the sense that it results from minimization of the mean-squared error of the combined filter. The scheme was independently proposed in [21] and [4]. In [23], the output signal based combination was used in an adaptive line enhancer, and in [22] it was used in the system identification application.

We will investigate three applications of the combination: system identification, adaptive beamforming and adaptive line enhancer. We describe each of the applications in detail and present a proper analysis.

We will assume throughout the Chapter that the signals are complex-valued and that the combination scheme uses two LMS adaptive filters. Italic, bold face lower case and bold face upper case letters will be used for scalars, column vectors and matrices, respectively. The superscript *T* denotes transposition and the superscript *H* Hermitian transposition of a matrix. The operator *E[·]* denotes mathematical expectation, *Re{·}* is the real part of a complex variable and *Tr[·]* stands for the trace of a matrix.

#### **2. Combination of Two Adaptive Filters**


Let us consider two adaptive filters, as shown in Figure 1, each of them updated using the LMS adaptation rule

$$e\_i(n) = d(n) - \mathbf{w}\_i^H(n-1)\mathbf{x}(n),\tag{1}$$

$$\mathbf{w}\_{i}(n) = \mathbf{w}\_{i}(n-1) + \mu\_{i} e\_{i}^{\*}(n)\mathbf{x}(n). \tag{2}$$

In the above $\mathbf{w}\_i(n)$ is the *N*-vector of coefficients of the *i*-th adaptive filter, with *i* = 1, 2, and $\mathbf{x}(n)$ is the known *N*-vector of input samples, common for both of the adaptive filters. The input process is assumed to be a zero mean wide sense stationary Gaussian process. $\mu\_i$ is the step size of the *i*-th adaptive filter. We assume without loss of generality that $\mu\_1 > \mu\_2$. The case $\mu\_1 = \mu\_2$ is not interesting, as in this case the two filters remain equal and the combination reduces to a single filter.
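The update (1)-(2) can be sketched in NumPy as follows; the function name is hypothetical, and the complex-valued signals assumed in the text are kept:

```python
import numpy as np

def lms_step(w, x, d, mu):
    """One complex LMS iteration per eqs. (1)-(2):
    e(n) = d(n) - w^H x(n);  w(n) = w(n-1) + mu * conj(e(n)) * x(n).
    w and x are length-N complex vectors."""
    e = d - np.vdot(w, x)          # np.vdot conjugates its first argument -> w^H x
    w = w + mu * np.conj(e) * x
    return w, e

# one iteration starting from zero weights
w1_, e1 = lms_step(np.zeros(2, dtype=complex), np.array([1.0, 1.0j]), d=1.0, mu=0.5)
```

Note that `np.vdot` conjugates its first argument, so it computes exactly the Hermitian inner product $\mathbf{w}^H\mathbf{x}$ used in (1).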

**Figure 1.** The combined adaptive filter.

The desired signal in (1) can be expressed as

$$d(n) = \mathbf{w}\_o^H \mathbf{x}(n) + \zeta(n),\tag{3}$$

where the vector $\mathbf{w}\_o$ is the optimal Wiener filter coefficient vector for the problem at hand and the process *ζ(n)* is the irreducible error that is statistically independent of all the other signals. The outputs of the two adaptive filters are combined according to

$$y(n) = \lambda(n)y\_1(n) + \left[1 - \lambda(n)\right]y\_2(n),\tag{4}$$


where $y\_i(n) = \mathbf{w}\_i^H(n-1)\mathbf{x}(n)$ and the mixing parameter *λ(n)* can be any real number.

We define the *a priori* system error signal as the difference between the output signal of the optimal Wiener filter at time *n*, given by $y\_o(n) = \mathbf{w}\_o^H\mathbf{x}(n) = d(n) - \zeta(n)$, and the output signal of our adaptive scheme *y(n)*

$$e\_a(n) = y\_o(n) - \lambda(n)y\_1(n) - (1 - \lambda(n))y\_2(n). \tag{5}$$

Let us now find *λ(n)* by minimizing the mean square of the *a priori* system error. The derivative of $E\left[|e\_a(n)|^2\right]$ with respect to *λ(n)* reads

$$\frac{\partial E\left[|e\_a(n)|^2\right]}{\partial \lambda(n)} = 2E\left[\mathrm{Re}\{(y\_o(n) - y\_2(n))(y\_2(n) - y\_1(n))^\*\} + \lambda(n)|y\_2(n) - y\_1(n)|^2\right].\tag{6}$$

Setting the derivative to zero results in

$$\lambda(n) = \frac{E\left[\mathrm{Re}\{(d(n) - y\_2(n))(y\_1(n) - y\_2(n))^\*\}\right]}{E\left[|y\_1(n) - y\_2(n)|^2\right]},\tag{7}$$

where we have replaced the Wiener filter output signal *yo(n)* by its observable noisy version *d(n)*. Note, however, that because the input signal *x(n)* and the irreducible error *ζ(n)* are independent random processes, this can be done without introducing any error into our calculations. The denominator of equation (7) comprises the expectation of the squared difference of the two filter output signals. This quantity can be very small or even zero, particularly in the beginning of adaptation if the two step sizes are close to each other. Correspondingly, λ computed directly from (7) may be large. To prevent this we add a small regularization constant to the denominator of (7). The constant should be selected small compared to $E\left[\mathbf{x}^T(n)\mathbf{x}(n)\right]$ but large enough to prevent division by zero in the given arithmetic.
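A minimal sketch of the regularized estimate of λ, with the expectations in (7) replaced by time averages over recent samples (the helper name and the default value of the regularization constant `eps` are hypothetical):

```python
import numpy as np

def mixing_parameter(d, y1, y2, eps=1e-6):
    """Estimate lambda per eq. (7), replacing expectations by time
    averages over the supplied sample vectors, with a small
    regularization constant eps added to the denominator as
    discussed in the text. d, y1, y2 hold recent samples."""
    num = np.mean(np.real((d - y2) * np.conj(y1 - y2)))
    den = np.mean(np.abs(y1 - y2) ** 2) + eps
    return num / den

# if the fast filter already matches d, lambda is close to 1
lam_hi = mixing_parameter(np.array([2.0, 3.0]), np.array([2.0, 3.0]), np.zeros(2))
# if the slow filter already matches d, lambda is 0
lam_lo = mixing_parameter(np.array([1.0, 2.0]), np.array([5.0, 7.0]), np.array([1.0, 2.0]))
```

The regularization only biases λ noticeably when the two filter outputs are nearly identical, which is exactly the situation where the combination is insensitive to λ anyway.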

#### **3. System Identification**

In several areas it is essential to build a mathematical model of some phenomenon or system. In this class of applications, the adaptive filter can be used to find the best fit of a linear model to an unknown plant. The plant and the adaptive filter are driven by the same known input signal, and the plant output provides the desired signal of the adaptive filter. The plant can be dynamic, in which case we have a time varying model. The system identification configuration is depicted in Figure 2. As before, *x(n)* is the input signal, *v(n)* is the measurement noise, *y(n)* is the adaptive filter output signal and *e(n)* is the error signal. The desired signal is $d(n) = \mathbf{w}\_o^H\mathbf{x}(n) + \zeta(n)$, where $\mathbf{w}\_o$ is the vector of Wiener filter coefficients and the irreducible error *ζ(n)* consists of the measurement noise *v(n)* together with the effects of the plant that cannot be explained with a length-*N* linear model. The result of the pure system identification problem is the vector of adaptive filter coefficients.

**Figure 2.** Block diagram of the system identification configuration.


The same basic configuration is also used to solve the echo and noise cancellation problems. In echo cancellation the unknown plant is the echo path, either electrical or acoustical, and the input signal *x(n)* is the speech signal of one of the parties of a telephone conversation. Speech of the other party is contained in the signal *v(n)*. The objective is to cancel the components of the desired signal that are due to the input *x(n)*.

In noise cancellation problems the signal *v(n)* is the primary microphone signal containing noise and the signal to be cleaned. The input signal *x(n)* is formed by the reference microphones. The reference signals are supposed to be correlated with the noise in the primary signal but not with the useful signal. The objective here is to suppress the noise and recover the signal of interest, i.e. *v(n)*.

Here we are going to use the combination of two adaptive filters described in the previous Section to solve the system identification problem.
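The whole system identification scheme can be sketched in a small simulation (illustrative, not the chapter's exact experiment: plant length, step sizes, noise level and the exponential averaging constants are hypothetical choices; real-valued signals are used for brevity, and the expectations in (7) are replaced by exponentially weighted time averages):

```python
import numpy as np

rng = np.random.default_rng(0)

N, T = 8, 4000
w_o = rng.standard_normal(N) / np.sqrt(N)   # unknown plant
x = rng.standard_normal(T)                  # white Gaussian input
v = 0.03 * rng.standard_normal(T)           # measurement noise

mu1, mu2 = 0.05, 0.005                      # fast and slow step sizes
w1 = np.zeros(N)
w2 = np.zeros(N)
num = den = 0.0                             # running averages for eq. (7)
beta, eps = 0.99, 1e-6                      # smoothing factor, regularization
mse = []

for n in range(N, T):
    xn = x[n - N:n][::-1]                   # regressor, most recent sample first
    d = w_o @ xn + v[n]                     # desired signal, eq. (3)
    y1, y2 = w1 @ xn, w2 @ xn
    # recursive time averages replacing the expectations in eq. (7)
    num = beta * num + (1 - beta) * (d - y2) * (y1 - y2)
    den = beta * den + (1 - beta) * (y1 - y2) ** 2
    lam = num / (den + eps)
    y = lam * y1 + (1 - lam) * y2           # combined output, eq. (4)
    mse.append((d - y) ** 2)
    # independent LMS updates of both filters, eqs. (1)-(2)
    w1 += mu1 * (d - y1) * xn
    w2 += mu2 * (d - y2) * xn

final_mse = np.mean(mse[-500:])             # should approach the noise floor
```

The combined filter follows the fast filter during initial convergence and the slow filter in steady state, which is the behaviour the analysis below quantifies.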

#### **3.2 Excess Mean Square Error**

In this section we are interested in finding expressions that characterize the transient performance of the combined algorithm, i.e. we intend to derive formulae that predict the entire course of adaptation of the algorithm. Before we can proceed, however, we need to introduce some notation.

First let us denote the weight error vector of *i*-th filter as

$$
\tilde{\mathbf{w}}\_i(\mathfrak{n}) = \mathbf{w}\_o - \mathbf{w}\_i(\mathfrak{n}).\tag{8}
$$

Then the equivalent weight error vector of the combined adaptive filter will be

$$
\tilde{\mathbf{w}}(n) = \lambda \,\tilde{\mathbf{w}}\_1(n) + (1 - \lambda)\tilde{\mathbf{w}}\_2(n). \tag{9}
$$

The mean square deviation of the combined filter, $MSD = E\left[\tilde{\mathbf{w}}^H(n)\tilde{\mathbf{w}}(n)\right]$, is given by

$$MSD = \lambda^2 E\left[\tilde{\mathbf{w}}\_1^H(n)\tilde{\mathbf{w}}\_1(n)\right] + 2\lambda(1-\lambda)\mathrm{Re}\left\{E\left[\tilde{\mathbf{w}}\_2^H(n)\tilde{\mathbf{w}}\_1(n)\right]\right\} + (1-\lambda)^2 E\left[\tilde{\mathbf{w}}\_2^H(n)\tilde{\mathbf{w}}\_2(n)\right].\tag{10}$$

The *a priori* estimation error of an individual filter is defined as

$$e\_{i,a}(n) = \tilde{\mathbf{w}}\_i^H (n-1)\mathbf{x}(n). \tag{11}$$

It follows from (5) that we can express the *a priori* error of the combination as

$$e\_a(n) = \lambda(n)e\_{1,a}(n) + (1 - \lambda(n))e\_{2,a}(n) \tag{12}$$

and because *λ(n)* is according to (7) a ratio of mathematical expectations and, hence, deterministic, we have for the excess mean square error of the combination, $EMSE(n) = E\left[|e\_a(n)|^2\right]$,

$$E\left[|e\_a(n)|^2\right] = \lambda^2 E\left[|e\_{1,a}(n)|^2\right] + 2\lambda(1-\lambda)E\left[\mathrm{Re}\{e\_{1,a}(n)e\_{2,a}^\*(n)\}\right] + (1-\lambda)^2 E\left[|e\_{2,a}(n)|^2\right].\tag{13}$$

As $e\_{i,a}(n) = \tilde{\mathbf{w}}\_i^H(n-1)\mathbf{x}(n)$, the expression of the excess mean square error becomes

$$E\left[|e\_a(n)|^2\right] = \lambda^2 E\left[\tilde{\mathbf{w}}\_1^H\mathbf{x}\mathbf{x}^H\tilde{\mathbf{w}}\_1\right] + 2\lambda(1-\lambda)E\left[\mathrm{Re}\{\tilde{\mathbf{w}}\_1^H\mathbf{x}\mathbf{x}^H\tilde{\mathbf{w}}\_2\}\right] + (1-\lambda)^2 E\left[\tilde{\mathbf{w}}\_2^H\mathbf{x}\mathbf{x}^H\tilde{\mathbf{w}}\_2\right].\tag{14}$$

In what follows we often drop the explicit time index *n*, as we have done in (14), when it is not necessary to avoid confusion.

Noting that $y\_i(n) = \mathbf{w}\_i^H(n-1)\mathbf{x}(n)$, we can rewrite the expression for *λ(n)* in (7) as

$$\lambda(n) = \frac{E\left[\tilde{\mathbf{w}}\_2^H\mathbf{x}\mathbf{x}^H\tilde{\mathbf{w}}\_2\right] - E\left[\mathrm{Re}\{\tilde{\mathbf{w}}\_2^H\mathbf{x}\mathbf{x}^H\tilde{\mathbf{w}}\_1\}\right]}{E\left[\tilde{\mathbf{w}}\_1^H\mathbf{x}\mathbf{x}^H\tilde{\mathbf{w}}\_1\right] - 2E\left[\mathrm{Re}\{\tilde{\mathbf{w}}\_1^H\mathbf{x}\mathbf{x}^H\tilde{\mathbf{w}}\_2\}\right] + E\left[\tilde{\mathbf{w}}\_2^H\mathbf{x}\mathbf{x}^H\tilde{\mathbf{w}}\_2\right]}.\tag{15}$$

We thus need to investigate the evolution of the individual terms of the type $EMSE\_{k,l} = E\left[\tilde{\mathbf{w}}\_k^H(n-1)\mathbf{x}(n)\mathbf{x}^H(n)\tilde{\mathbf{w}}\_l(n-1)\right]$ in order to reveal the time evolution of *EMSE(n)* and *λ(n)*. To do so, however, we first concentrate on the mean square deviation defined in (10).

Reformulation the relation (1) as


$$e\_i(n) = d(n) - \mathbf{w}\_i^H(n-1)\mathbf{x}(n) = e\_o(n) + \tilde{\mathbf{w}}\_i^H(n-1)\mathbf{x}(n) \tag{16}$$

and subtracting (2) from $\mathbf{w}_o$ we have

$$
\tilde{\mathbf{w}}\_i(n) = \left(\mathbf{I} - \mu\_i \mathbf{x} \mathbf{x}^H\right) \tilde{\mathbf{w}}\_i(n-1) - \mu\_i \mathbf{x} e\_o^\*(n). \tag{17}
$$

We next approximate the outer product of input signal vectors by its correlation matrix, $\mathbf{x}\mathbf{x}^H \approx \mathbf{R}_\mathbf{x}$. The approximation is justified by the fact that with a small step size the weight error update of the LMS algorithm (17) behaves like a low pass filter with a low cutoff frequency. With this approximation we have

$$
\tilde{\mathbf{w}}_i(n) \approx \left(\mathbf{I} - \mu_i \mathbf{R}_\mathbf{x}\right)\tilde{\mathbf{w}}_i(n-1) - \mu_i \mathbf{x} e_o^*(n). \tag{18}
$$

This means in fact that we apply the small step size theory [9] even if the assumption of small step size is not really true for the fast adapting filter. In our simulation study we will see, however, that the assumption works in practice rather well.
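A quick Monte Carlo check of this claim in the scalar case (unit-variance white Gaussian input, so $R_x = 1$, and $e_o = 0$): averaging the exact stochastic recursion (17) over many independent runs should track the deterministic model (18). The run counts and step size below are arbitrary choices for the experiment.

```python
import random

random.seed(7)
mu, n_steps, n_runs = 0.05, 200, 2000
acc = [0.0] * (n_steps + 1)
for _ in range(n_runs):
    wt = 1.0                               # scalar weight error, w~(0) = 1
    acc[0] += wt
    for n in range(1, n_steps + 1):
        xn = random.gauss(0.0, 1.0)
        wt = (1.0 - mu * xn * xn) * wt     # exact stochastic recursion (17), e_o = 0
        acc[n] += wt
avg = [a / n_runs for a in acc]            # Monte Carlo estimate of E[w~(n)]
pred = [(1.0 - mu) ** n for n in range(n_steps + 1)]   # deterministic model (18)
err = max(abs(a - p) for a, p in zip(avg, pred))
```

The ensemble average follows the low-pass, small-step-size prediction closely even though individual trajectories fluctuate.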

Let us now define the eigendecomposition of the correlation matrix as

$$\mathbf{Q}^H \mathbf{R}_\mathbf{x} \mathbf{Q} = \mathbf{\Omega},\tag{19}$$

where **Q** is a unitary matrix whose columns are the orthogonal eigenvectors of *Rx* and *Ω* is a diagonal matrix having eigenvalues associated with the corresponding eigenvectors on its main diagonal. We also define the transformed weight error vector as

$$\mathbf{v}\_i(n) = \mathbf{Q}^H \tilde{\mathbf{w}}\_i(n) \tag{20}$$

and the transformed last term of equation (18) as

$$\mathbf{p}\_i(n) = \mu\_i \mathbf{Q}^H \mathbf{x} e\_o^\*(n). \tag{21}$$

Then we can rewrite equation (18), after multiplying both sides by $\mathbf{Q}^H$ from the left, as

$$\mathbf{v}_{i}(n) = \left(\mathbf{I} - \mu_{i}\mathbf{\Omega}\right)\mathbf{v}_{i}(n-1) - \mathbf{p}_{i}(n). \tag{22}$$
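The decoupling provided by (19)-(22) can be checked on a tiny example. For a real-valued 2×2 correlation matrix with correlation coefficient ρ the eigenvectors are known in closed form, and $\mathbf{Q}^T\mathbf{R}_\mathbf{x}\mathbf{Q}$ comes out diagonal (for real data $\mathbf{Q}^H = \mathbf{Q}^T$). The value of ρ is arbitrary.

```python
import math

rho = 0.6
R = [[1.0, rho], [rho, 1.0]]           # 2x2 input correlation matrix
s = 1.0 / math.sqrt(2.0)
Q = [[s, s], [s, -s]]                  # orthonormal eigenvectors as columns

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

QT = [[Q[j][i] for j in range(2)] for i in range(2)]
# Omega = Q^T R Q should be diagonal with eigenvalues 1+rho and 1-rho
Omega = matmul(matmul(QT, R), Q)
```

In the transformed coordinates each component of the weight error then evolves independently, which is exactly what (22) exploits.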


We note that the mean of $\mathbf{p}_i$ is zero by the orthogonality theorem and the cross-correlation matrix of $\mathbf{p}_k$ and $\mathbf{p}_l$ equals

$$E\left[\mathbf{p}_k\mathbf{p}_l^H\right] = \mu_k\mu_l\mathbf{Q}^H E\left[\mathbf{x}e_o^*(n)e_o(n)\mathbf{x}^H\right]\mathbf{Q}.\tag{23}$$

We now invoke the Gaussian moment factoring theorem to write

$$E\left[\mathbf{x}e_o^*(n)e_o(n)\mathbf{x}^H\right] = E\left[\mathbf{x}e_o^*(n)\right]E\left[e_o(n)\mathbf{x}^H\right] + E\left[\mathbf{x}\mathbf{x}^H\right]E\left[|e_o|^2\right].\tag{24}$$

The first term in the above is zero due to the principle of orthogonality and the second term equals $\mathbf{R}_\mathbf{x}J_{min}$, where $J_{min} = E\left[|e_o|^2\right]$ is the minimum mean square error produced by the corresponding Wiener filter. Hence we are left with

$$E\left[\mathbf{p}_k\mathbf{p}_l^H\right] = \mu_k\mu_l J_{min}\mathbf{\Omega}.\tag{25}$$
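The orthogonality principle and $J_{min}$ used in this step can be illustrated numerically. The sketch below fits a one-tap Wiener filter from sample statistics, assuming a toy model $d(n) = 0.8\,x(n) + $ noise of variance 0.01 (all values invented for the example): the residual comes out orthogonal to the input and its power approaches the noise floor.

```python
import random

random.seed(3)
N = 200000
# toy model: d(n) = 0.8*x(n) + noise, so the one-tap Wiener filter is w_o = 0.8
x = [random.gauss(0.0, 1.0) for _ in range(N)]
noise = [random.gauss(0.0, 0.1) for _ in range(N)]
d = [0.8 * xi + ni for xi, ni in zip(x, noise)]

r = sum(xi * xi for xi in x) / N                      # sample E[x^2]
p = sum(xi * di for xi, di in zip(x, d)) / N          # sample E[x d]
w_o = p / r                                           # Wiener solution
e_o = [di - w_o * xi for xi, di in zip(x, d)]         # optimal residual

cross = sum(ei * xi for ei, xi in zip(e_o, x)) / N    # ~0: orthogonality principle
j_min = sum(ei * ei for ei in e_o) / N                # ~0.01: the noise floor J_min
```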

As the matrices $\mathbf{I}$ and $\mathbf{\Omega}$ in (22) are both diagonal, it follows that the *m*-th element of vector $\mathbf{v}_i(n)$ is given by

$$\begin{split} v_{i,m}(n) &= (1 - \mu_i \omega_m) v_{i,m}(n - 1) - p_{i,m}(n) \\ &= (1 - \mu_i \omega_m)^n v_m(0) + \sum_{j=0}^{n-1} (1 - \mu_i \omega_m)^{n-1-j} p_{i,m}(j), \end{split} \tag{26}$$

where $\omega_m$ is the *m*-th eigenvalue of $\mathbf{R}_\mathbf{x}$ and $v_{i,m}$ and $p_{i,m}$ are the *m*-th components of the vectors $\mathbf{v}_i$ and $\mathbf{p}_i$ respectively.

We immediately see that the mean value of *vi,m(n)* equals

$$E\left[v_{i,m}(n)\right] = (1 - \mu_i\omega_m)^n v_m(0) \tag{27}$$

as the vector $\mathbf{p}_i$ has zero mean.
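The closed form in (26) is just the unrolled first-order recursion, which is easy to verify numerically. Note that the book reuses *i* both as the filter index and as a summation index; the check below indexes the arbitrary driving terms from 1, which only shifts notation. With all driving terms zero the iterate reduces to the mean decay $(1-\mu_i\omega_m)^n v_m(0)$ of (27).

```python
mu_om = 0.1                          # the product mu_i * omega_m
v0 = 1.0                             # initial modal weight error v_m(0)
p = [0.3, -0.2, 0.05, 0.1, -0.4]     # arbitrary driving terms p(1..n)

# iterate the modal recursion v(n) = (1 - mu_om) v(n-1) - p(n)
v = v0
for pn in p:
    v = (1.0 - mu_om) * v - pn

# unrolled closed form: (1-mu_om)^n v(0) - sum_k (1-mu_om)^(n-k) p(k), k = 1..n
n = len(p)
closed = (1.0 - mu_om) ** n * v0 - sum(
    (1.0 - mu_om) ** (n - k) * p[k - 1] for k in range(1, n + 1))
```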

To proceed with our development for the combination of two LMS filters we note that we can express the MSD and its individual components in (10) through the transformed weight error vectors as

$$\begin{aligned} E\left[\tilde{\mathbf{w}}_k^H(n)\tilde{\mathbf{w}}_l(n)\right] &= E\left[\mathbf{v}_k^H(n)\mathbf{v}_l(n)\right] \\ &= \sum_{m=0}^{N-1} E\left[v_{k,m}(n)v_{l,m}^*(n)\right] \end{aligned} \tag{28}$$

so we also need to find the auto- and cross correlations of *v*.

Let us concentrate on the *m*-th component in the sum above corresponding to the cross term. The expressions for the component filters follow as special cases. Substituting (26) into the expression of the *m*-th component of MSD above, taking the mathematical expectation and noting that the vector $\mathbf{p}$ is independent of $\mathbf{v}(0)$ results in

$$\begin{split} E\left[v_{k,m}(n)v_{l,m}^*(n)\right] &= E\left[(1-\mu_k\omega_m)^n v_{k,m}(0)(1-\mu_l\omega_m)^n v_{l,m}^*(0)\right] \\ &\quad + E\left[\sum_{i=0}^{n-1}\sum_{j=0}^{n-1}(1-\mu_k\omega_m)^{n-1-i}(1-\mu_l\omega_m)^{n-1-j}\, p_{k,m}(i)p_{l,m}^*(j)\right]. \end{split} \tag{29}$$

We now note that most likely the two component filters are initialized to the same value

$$v_{k,m}(0) = v_{l,m}(0) = v_m(0)$$

and that


$$E\left[p_{k,m}(i)\,p_{l,m}^*(j)\right] = \begin{cases} \mu_k\mu_l\omega_m J_{min}, & i = j \\ 0, & \text{otherwise.} \end{cases} \tag{30}$$

We then have for the *m*-th component of MSD

$$\begin{split} E\left[v_{k,m}(n)v_{l,m}^*(n)\right] &= (1-\mu_k\omega_m)^n(1-\mu_l\omega_m)^n\,|v_m(0)|^2 \\ &\quad + \mu_k\mu_l\omega_m J_{min}(1-\mu_k\omega_m)^{n-1}(1-\mu_l\omega_m)^{n-1} \\ &\quad\cdot \sum_{i=0}^{n-1}(1-\mu_k\omega_m)^{-i}(1-\mu_l\omega_m)^{-i}. \end{split} \tag{31}$$

The sum over *i* in the above equation can be recognized as a geometric series with *n* terms. The first term is equal to 1 and the geometric ratio equals $(1-\mu_k\omega_m)^{-1}(1-\mu_l\omega_m)^{-1}$. Hence we have

$$\begin{split} \sum_{i=0}^{n-1}(1-\mu_k\omega_m)^{-i}(1-\mu_l\omega_m)^{-i} &= \frac{1-\left[(1-\mu_k\omega_m)^{-1}(1-\mu_l\omega_m)^{-1}\right]^n}{1-(1-\mu_k\omega_m)^{-1}(1-\mu_l\omega_m)^{-1}} \\ &= \frac{(1-\mu_k\omega_m)(1-\mu_l\omega_m)}{\mu_k\mu_l\omega_m^2-\mu_k\omega_m-\mu_l\omega_m} - \frac{(1-\mu_k\omega_m)^{-n+1}(1-\mu_l\omega_m)^{-n+1}}{\mu_k\mu_l\omega_m^2-\mu_k\omega_m-\mu_l\omega_m}. \end{split} \tag{32}$$

After substitution of the above into (31) and simplification we are left with

$$E\left[v_{k,m}(n)v_{l,m}^*(n)\right] = (1-\mu_k\omega_m)^n(1-\mu_l\omega_m)^n\left[|v_m(0)|^2 + \frac{J_{min}}{\omega_m^2-\frac{\omega_m}{\mu_l}-\frac{\omega_m}{\mu_k}}\right] - \frac{J_{min}}{\omega_m^2-\frac{\omega_m}{\mu_l}-\frac{\omega_m}{\mu_k}}, \tag{33}$$


which is our result for a single entry of the MSD cross-term vector. It is easy to see that for the terms involving a single filter we get expressions that coincide with those available in the literature [9].
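The geometric-series step (32) behind this result can be sanity checked numerically; the step sizes and eigenvalue below are arbitrary.

```python
mu_k, mu_l, om = 0.1, 0.01, 0.5
a = 1.0 - mu_k * om                # (1 - mu_k * omega_m)
b = 1.0 - mu_l * om                # (1 - mu_l * omega_m)
n = 25

# left-hand side of (32): the n-term geometric series summed directly
direct = sum((a * b) ** (-i) for i in range(n))

ratio = 1.0 / (a * b)              # geometric ratio
closed_1 = (1.0 - ratio ** n) / (1.0 - ratio)          # first form in (32)

D = mu_k * mu_l * om ** 2 - mu_k * om - mu_l * om      # common denominator
closed_2 = a * b / D - (a * b) ** (-n + 1) / D         # second form in (32)
```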

Let us now focus on the cross term

$$EMSE_{kl} = E\left[\tilde{\mathbf{w}}_k^H(n-1)\mathbf{x}(n)\mathbf{x}^H(n)\tilde{\mathbf{w}}_l(n-1)\right],$$

appearing in the EMSE equation (14). Due to the independence assumption we can rewrite this using the properties of trace operator as

$$\begin{split} \operatorname{EMSE}\_{kl} &= \operatorname{E} \Big[ \tilde{\mathbf{w}}\_{k}^{H} \,(n-1) \mathbf{R}\_{\mathbf{x}} \tilde{\mathbf{w}}\_{l} (n-1) \Big] \\ &= \operatorname{Tr} \Big[ \operatorname{E} \Big[ \mathbf{R}\_{\mathbf{x}} \tilde{\mathbf{w}}\_{l} (n-1) \tilde{\mathbf{w}}\_{k}^{H} (n-1) \Big] \Big] \\ &= \operatorname{Tr} \Big[ \mathbf{R}\_{\mathbf{x}} \operatorname{E} \Big[ \tilde{\mathbf{w}}\_{l} (n-1) \tilde{\mathbf{w}}\_{k}^{H} (n-1) \Big] \Big] . \end{split} \tag{34}$$

Let us now recall that according to (20) for any of the filters **w**˜ *<sup>i</sup>* (*n*)=**Qv***<sup>i</sup>* (*n*) so that we are justified to write

$$\begin{split} EMSE_{kl} &= \operatorname{Tr}\left[\mathbf{R}_\mathbf{x}E\left[\mathbf{Q}\mathbf{v}_l(n-1)\mathbf{v}_k^H(n-1)\mathbf{Q}^H\right]\right] \\ &= \operatorname{Tr}\left[E\left[\mathbf{v}_k^H(n-1)\mathbf{Q}^H\mathbf{R}_\mathbf{x}\mathbf{Q}\mathbf{v}_l(n-1)\right]\right] \\ &= \operatorname{Tr}\left[E\left[\mathbf{v}_k^H(n-1)\mathbf{\Omega}\mathbf{v}_l(n-1)\right]\right] \\ &= \sum_{i=0}^{N-1}\omega_i E\left[v_{k,i}^*(n-1)v_{l,i}(n-1)\right]. \end{split} \tag{35}$$

The EMSE of the combined filter can now be computed as

$$EMSE = \sum_{i=0}^{N-1}\omega_i E\left[\left|\lambda(n)v_{k,i}(n-1) + (1-\lambda(n))v_{l,i}(n-1)\right|^2\right], \tag{36}$$

where the components of the type $E\left[v_{k,i}(n-1)v_{l,i}^*(n-1)\right]$ are given by (33). To compute *λ(n)* we use (15), substituting (35) for its individual components.

#### **4. Adaptive Sensor Array**


In this Chapter we describe how to use the combination of two adaptive filters in an adaptive beamformer. The beamformer we employ here is often termed the Generalized Sidelobe Canceller [9].

Let *ϕ* denote the angle of incidence of a planar wave impinging on a linear sensor array, measured with respect to the normal to the array. The electrical angle *θ* is related to the incidence angle as

$$
\theta = \frac{2\pi\delta}{\lambda}\sin\phi, \tag{37}
$$

where *λ* is the wavelength of the incident wave and *δ* is the spacing between adjacent sensors of the linear array.

Suppose that the signal impinging the array of *M=N+1* sensors is given by

$$\mathbf{u}(n) = \mathbf{A}(\Theta)\mathbf{s}(n) + \mathbf{v}(n),\tag{38}$$

where *s(n)* is the vector of emitter signals, *Θ* is a collection of directions of arrival, *A(Θ)* is the array steering matrix with its columns *a(θ)* defined as responses toward the individual sources *s(n)*, and *v(n)* is a vector of additive circularly symmetric Gaussian noise. The *M* vectors

$$\mathbf{a}(\theta) = \left[1,\ e^{j\theta},\ \ldots,\ e^{j(M-1)\theta}\right]^T \tag{39}$$

are called the steering vectors of the respective sources. We assume that the source of interest is located at the electrical angle $\theta_0$.

The block diagram of the Generalized Sidelobe Canceller is shown in Figure 3. The structure consists of two branches. The upper branch is the steering branch, which directs its beam toward the desired source. The lower branch is the blocking branch, which blocks the signals impinging on the array from the direction of the desired source and includes an adaptive algorithm that minimizes the mean square error between the output signals of the branches.

The weights in the steering branch $\mathbf{w}_s$ are selected from the condition

$$\mathbf{w}_s^H\mathbf{a}(\theta_0) = g, \tag{40}$$

i.e., we require the response in the direction of the source of interest $\theta_0$ to equal a constant *g*. Common choices for *g* are *g=M* and *g=1*. Here we have used *g=M*.

The signal at the output of the upper branch is given by

$$d\left(n\right) = \mathbf{w}\_s^H \mathbf{u}(n). \tag{41}$$


In the lower branch we have a blocking matrix that will block any signal coming from the direction $\theta_0$. The columns of the *M × (M−1)* blocking matrix $\mathbf{C}_b$ are defined as being the orthogonal complement of the steering vector $\mathbf{a}(\theta_0)$ in the upper branch

$$\mathbf{a}^H(\theta_0)\mathbf{C}_b = \mathbf{0}.\tag{42}$$

The vector valued signal *x(n)* at the output of the blocking matrix is formed as

$$\mathbf{x}(n) = \mathbf{C}\_b^H \mathbf{u}(n). \tag{43}$$

**Figure 3.** Block diagram of the generalized sidelobe canceller.

The output of the algorithm is

$$e(n) = d\left(n\right) - \mathbf{w}\_b^H(n)\mathbf{x}(n). \tag{44}$$

The signals *x(n)* and *d(n)* can be used as the input and desired signals, respectively, in an adaptive algorithm to select the blocking weights $\mathbf{w}_b$. In this Chapter we use the combination of two adaptive filters, which gives us fast initial convergence and low steady state misadjustment at the same time.
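A small numeric sketch of the GSC front end: it builds the steering vector (39), checks the constraint (40) with the chapter's choice *g = M*, and verifies the blocking condition (42). The chapter defines $\mathbf{C}_b$ only through (42); the adjacent-sensor differencing construction below is one common choice and an assumption of this sketch, as are the numeric values.

```python
import cmath

M = 4                      # number of sensors
theta0 = 0.7               # electrical angle of the desired source (arbitrary)

def steering(theta, m_sensors):
    """Steering vector a(theta) = [1, e^{j theta}, ..., e^{j(M-1) theta}]^T, cf. (39)."""
    return [cmath.exp(1j * theta * m) for m in range(m_sensors)]

a0 = steering(theta0, M)
w_s = a0                   # steering weights: w_s^H a(theta0) = M, i.e. g = M, cf. (40)

# One simple M x (M-1) blocking matrix: column k combines adjacent sensors
# with a phase factor chosen so that a(theta0) lies in its null space, cf. (42).
C_b = [[0j] * (M - 1) for _ in range(M)]
for k in range(M - 1):
    C_b[k][k] = 1.0 + 0j
    C_b[k + 1][k] = -cmath.exp(1j * theta0)

g = sum(w.conjugate() * am for w, am in zip(w_s, a0))          # should equal M
resid = [sum(a0[m].conjugate() * C_b[m][k] for m in range(M))  # a^H(theta0) C_b
         for k in range(M - 1)]
```

Any signal arriving from $\theta_0$ is thus cancelled in the lower branch, so the adaptive filter only ever sees interference and noise.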

#### **4.1 Signal to Interference and Noise Ratio**


The EMSE of the adaptive algorithm can be analysed as in Section 3.1. In this application we are also interested in the signal to interference and noise ratio (SINR) at the array output. To evaluate this we first note that the power that the signal of interest generates at the array output is, according to (40),

$$P_s = \mathbf{w}_s^H\mathbf{a}(\theta_0)\sigma_{s_0}^2\mathbf{a}^H(\theta_0)\mathbf{w}_s = |g|^2\sigma_{s_0}^2,\tag{45}$$

where $\sigma_{s_0}^2$ is the variance of the useful signal arriving from the angle $\theta_0$.

To find the interference and noise power we first define the reduced signal vector $\check{\mathbf{s}}$ and a reduced DOA collection $\check{\mathbf{\Theta}}$, where we have left out the signal of interest and the steering vector corresponding to the useful signal but kept all the interferers and the interference steering vectors. The corresponding array steering matrix is $\check{\mathbf{A}}(\check{\mathbf{\Theta}})$.

The correlation matrix of interference and noise in the signal *x(n)*, which is the input signal to our adaptive scheme, is then given by

$$\check{\mathbf{R}}_\mathbf{x} = \mathbf{C}_b^H\check{\mathbf{A}}(\check{\mathbf{\Theta}})E\left[\check{\mathbf{s}}\check{\mathbf{s}}^H\right]\check{\mathbf{A}}^H(\check{\mathbf{\Theta}})\mathbf{C}_b + \mathbf{C}_b^H\mathbf{C}_b\sigma_v^2, \tag{46}$$

where $\sigma_v^2$ is the noise variance, the first component in the summation is due to the interfering sources, and the second component is due to the noise.

It follows from the standard Wiener filtering theory that the minimum interference and noise power at the array output is given by

$$\check{J}_{min} = \sigma_{int,v}^2 - \check{\mathbf{p}}^H\check{\mathbf{R}}_\mathbf{x}^{-1}\check{\mathbf{p}}, \tag{47}$$

where the desired signal variance excluding the signal from the source of interest is

$$
\sigma\_{int,v}^{2} = \mathbf{w}\_s^H \check{\mathbf{A}} \check{\mathbf{R}} \check{\mathbf{A}}^H \mathbf{w}\_s + \sigma\_v^2 \mathbf{w}\_s^H \mathbf{w}\_s \tag{48}
$$

and the cross-correlation vector between the adaptive filter input signal and the desired signal, excluding the signal from the source of interest, is

$$\check{\mathbf{p}} = \mathbf{C}_b^H\check{\mathbf{A}}(\check{\mathbf{\Theta}})E\left[\check{\mathbf{s}}\check{\mathbf{s}}^H\right]\check{\mathbf{A}}^H(\check{\mathbf{\Theta}})\mathbf{w}_s + \sigma_v^2\mathbf{C}_b^H\check{\mathbf{A}}(\check{\mathbf{\Theta}})\mathbf{w}_s. \tag{49}$$

We can now find the eigendecomposition of $\check{\mathbf{R}}_\mathbf{x}$ and use the resulting eigenvalues in (35) and (36) to find the excess mean square error due to interference and noise only, $EMSE_{int,v}$. The error power can be computed as the minimum interference and noise power at the array output plus the excess mean square error due to interference and noise only

$$P_{v,int} = \check{J}_{min} + EMSE_{int,v}(n)\tag{50}$$

Applications of a Combination of Two Adaptive Filters. http://dx.doi.org/10.5772/50451

and the signal to noise ratio is thus given by

$$SNR(n) = \frac{P_s}{P_{v,int}(n)}. \tag{51}$$

#### **5. Adaptive Line Enhancer**

The adaptive line enhancer is a device that separates its input into two components: one consists mostly of the narrow-band signals present at the input, and the other consists mostly of the broadband noise. In the context of this paper a signal is considered to be narrow band if its bandwidth is small compared to the sampling frequency of the system.

We assume that the broadband noise is zero mean, white and Gaussian and that the narrow band component is centred. One is usually interested in the narrow band components, and the device is often used to clean narrow band signals from noise before any further processing. The line enhancer is shown in Figure 4. Note that the input signal to the adaptive filter of the line enhancer is delayed by *Δ* sample times, so the input vector is *x(n−Δ)*. The desired signal is *d(n) = x(n)*. The line enhancer is in fact a *Δ*-step predictor. The device is able to predict the narrow band components, which have long correlation times, but it cannot predict the white noise; hence only a prediction of the narrow band components appears in the filter output signal *y(n)*. The signal *y(n)* is also the output of the system.
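The structure just described can be sketched with a plain LMS update; the chapter's convex combination of two filters is omitted here, and the filter length, step size and delay below are illustrative choices, not values from the text.

```python
import math
import random

def line_enhancer(x, M=16, mu=0.01, delta=1):
    """Delta-step LMS predictor: desired signal d(n) = x(n), regressor x(n - delta)."""
    w = [0.0] * M
    y = []
    for n in range(len(x)):
        # regressor vector x(n - delta) = [x(n-delta), x(n-delta-1), ...]
        u = [x[n - delta - i] if n - delta - i >= 0 else 0.0 for i in range(M)]
        y_n = sum(wi * ui for wi, ui in zip(w, u))  # predictable (narrow band) part
        e = x[n] - y_n                              # prediction error drives the update
        w = [wi + mu * e * ui for wi, ui in zip(w, u)]
        y.append(y_n)
    return y

random.seed(1)
N = 4000
# a sine (narrow band) buried in unit-variance white Gaussian noise
x = [math.sin(0.2 * math.pi * n) + random.gauss(0.0, 1.0) for n in range(N)]
y = line_enhancer(x)
```

Because the white noise is unpredictable, *y(n)* retains mostly the sinusoidal component after convergence.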

Let us now find the autocorrelation function of the enhancer output signal *y(n).* We make the standard assumption from independence theory which states that the filter weights and the input signal are independent [9].

The *l*-th autocorrelation lag of the filter output process, $r(l) = E\{y(n)y^*(n+l)\}$, equals

$$r(l) = E\{\mathbf{w}^H(n-1)\mathbf{x}(n-\Delta)\mathbf{x}^H(n-\Delta+l)\mathbf{w}(n-1+l)\}. \tag{52}$$

The input signal *x(n)* consists of two uncorrelated components *s(n)*, the sum of narrow band signals, and *v(n)*, the additive noise

$$\mathbf{x}(n) = \mathbf{s}(n) + \mathbf{v}(n). \tag{53}$$

We can decompose the impulse response of the adaptive filter into two components. One of them is the optimal Wiener filter for the problem

$$\mathbf{w}_o = E\{\mathbf{x}(n-\Delta)\mathbf{x}^H(n-\Delta)\}^{-1} E\{\mathbf{x}(n-\Delta)x^*(n)\} \tag{54}$$

and the other one, **w**˜ (*n*), represents the estimation errors.

$$\tilde{\mathbf{w}}(n) = \mathbf{w}_o - \mathbf{w}(n). \tag{55}$$

The output signal can hence be expressed as

Adaptive Filtering - Theories and Applications

**Figure 4.** The adaptive line enhancer.

$$y(n) = y\_o(n) - \tilde{y}(n). \tag{56}$$

Substituting (53) and (55) into (52), and noticing that the cross-correlation between the Wiener filter output and that of the filter defined by the weight errors is

$$E\{y_o(n)\tilde{y}^*(l)\} = E\{\mathbf{w}_o^H\mathbf{x}(n-\Delta)\mathbf{x}^H(l-\Delta)\tilde{\mathbf{w}}(n-1)\} = 0 \tag{57}$$

because of the adopted independence assumption and because $E\{\tilde{\mathbf{w}}(n-1)\} = E\{\mathbf{w}_o - \mathbf{w}(n-1)\} = \mathbf{0}$, we have

$$\begin{split} r(l) &= E\{\mathbf{w}_o^H \{\mathbf{s}(n-\Delta)+\mathbf{v}(n-\Delta)\}\{\mathbf{s}^H(n-\Delta+l)+\mathbf{v}^H(n-\Delta+l)\}\mathbf{w}_o\} \\ &\quad + E\{\tilde{\mathbf{w}}^H(n-1)\{\mathbf{s}(n-\Delta)+\mathbf{v}(n-\Delta)\} \\ &\qquad \cdot \{\mathbf{s}^H(n-\Delta+l)+\mathbf{v}^H(n-\Delta+l)\}\tilde{\mathbf{w}}(n-1+l)\}. \end{split} \tag{58}$$

Developing and grouping terms in the above equation results in

$$\begin{split} r(l) &= E\{\mathbf{w}_o^H \mathbf{s}(n-\Delta)\mathbf{s}^H(n-\Delta+l)\mathbf{w}_o\} \\ &\quad + E\{\mathbf{w}_o^H \mathbf{v}(n-\Delta)\mathbf{v}^H(n-\Delta+l)\mathbf{w}_o\} \\ &\quad + E\{\tilde{\mathbf{w}}^H(n-1)\mathbf{s}(n-\Delta)\mathbf{s}^H(n-\Delta+l)\tilde{\mathbf{w}}(n-1+l)\} \\ &\quad + E\{\tilde{\mathbf{w}}^H(n-1)\mathbf{v}(n-\Delta)\mathbf{v}^H(n-\Delta+l)\tilde{\mathbf{w}}(n-1+l)\}. \end{split} \tag{59}$$


Using the fact that $\mathbf{w}_o$ is deterministic and the properties of the trace operator, we further obtain

$$\begin{split} r(l) &= \mathbf{w}_o^H E\{\mathbf{s}(n-\Delta)\mathbf{s}^H(n-\Delta+l)\}\mathbf{w}_o \\ &\quad + \mathbf{w}_o^H E\{\mathbf{v}(n-\Delta)\mathbf{v}^H(n-\Delta+l)\}\mathbf{w}_o \\ &\quad + E\{\mathrm{Tr}\{\tilde{\mathbf{w}}(n-1+l)\tilde{\mathbf{w}}^H(n-1)\mathbf{s}(n-\Delta)\mathbf{s}^H(n-\Delta+l)\}\} \\ &\quad + E\{\mathrm{Tr}\{\tilde{\mathbf{w}}(n-1+l)\tilde{\mathbf{w}}^H(n-1)\mathbf{v}(n-\Delta)\mathbf{v}^H(n-\Delta+l)\}\}. \end{split} \tag{60}$$

We now invoke the independence assumption, saying that the weight error vector $\tilde{\mathbf{w}}(n-1)$ is independent of the signals $\mathbf{s}(n-\Delta)$ and $\mathbf{v}(n-\Delta)$. This leads us to

$$\begin{split} r(l) &= \mathbf{w}_o^H E\{\mathbf{s}(n-\Delta)\mathbf{s}^H(n-\Delta+l)\}\mathbf{w}_o \\ &\quad + \mathbf{w}_o^H E\{\mathbf{v}(n-\Delta)\mathbf{v}^H(n-\Delta+l)\}\mathbf{w}_o \\ &\quad + \mathrm{Tr}\{E\{\tilde{\mathbf{w}}(n-1+l)\tilde{\mathbf{w}}^H(n-1)\}E\{\mathbf{s}(n-\Delta)\mathbf{s}^H(n-\Delta+l)\}\} \\ &\quad + \mathrm{Tr}\{E\{\tilde{\mathbf{w}}(n-1+l)\tilde{\mathbf{w}}^H(n-1)\}E\{\mathbf{v}(n-\Delta)\mathbf{v}^H(n-\Delta+l)\}\}. \end{split} \tag{61}$$

To proceed we need to find the matrix $\mathbf{K}(l) = E\{\tilde{\mathbf{w}}(n-1+l)\tilde{\mathbf{w}}^H(n-1)\}$.

#### **5.1. Weight error correlation matrix**

In this Section we investigate the combination of two adaptive filters and derive expressions for the crosscorrelation matrix between the output signals of the individual filters *yi(n)* and *yk(n)*. The autocorrelation matrices of the individual filter output signals follow directly by using only one signal in the formulae.

For the problem at hand we can rewrite equation (18), noting that we have introduced a *Δ* samples delay in the signal path, as

$$\tilde{\mathbf{w}}_i(n) \approx (\mathbf{I} - \mu_i \mathbf{R}_{\mathbf{x}})\tilde{\mathbf{w}}_i(n-1) - \mu_i \mathbf{x}(n-\Delta)e_o^*(n). \tag{62}$$

For the weight error correlation matrix we then have


$$\begin{split} \mathbf{K}_{i,k,l}(n) = E\{\tilde{\mathbf{w}}_i(n+l)\tilde{\mathbf{w}}_k^H(n)\} &= E\{(\mathbf{I}-\mu_i\mathbf{R}_{\mathbf{x}})\tilde{\mathbf{w}}_i(n+l-1)\tilde{\mathbf{w}}_k^H(n-1)(\mathbf{I}-\mu_k\mathbf{R}_{\mathbf{x}})\} \\ &\quad - E\{(\mathbf{I}-\mu_i\mathbf{R}_{\mathbf{x}})\tilde{\mathbf{w}}_i(n+l-1)\mu_k\mathbf{x}^H(n-\Delta)e_o(n)\} \\ &\quad - E\{\mu_i\mathbf{x}(n-\Delta)e_o^*(n)\tilde{\mathbf{w}}_k^H(n-1)(\mathbf{I}-\mu_k\mathbf{R}_{\mathbf{x}})\} \\ &\quad + E\{\mu_i\mu_k\mathbf{x}(n-\Delta)e_o^*(n)e_o(n)\mathbf{x}^H(n-\Delta)\}. \end{split}$$

The second and third terms above equal zero because we have made the usual independence theory assumptions, which state that the weight errors $\tilde{\mathbf{w}}_i(n)$ are independent of the input signal $\mathbf{x}(n-\Delta)$. To evaluate the last term we assume that the adaptive filters are long enough to remove all the correlation between $e_o(n)$ and $\mathbf{x}(n-\Delta)$. In this case we can rewrite the above as

$$\mathbf{K}_{i,k,l}(n) = (\mathbf{I}-\mu_i\mathbf{R}_{\mathbf{x}})\mathbf{K}_{i,k,l}(n-1)(\mathbf{I}-\mu_k\mathbf{R}_{\mathbf{x}}) + \mu_i\mu_k J_{min}\mathbf{R}_{\mathbf{x}}, \tag{63}$$

where $J_{min} = E\{|e_o|^2\}$ is the minimum mean square error produced by the corresponding Wiener filter.

We now assume that the signal to noise ratio is low, so that the input signal is dominated by the white noise process *v(n)*. In this case we can approximate the correlation matrix of the input process by a scaled unit matrix as

$$\mathbf{R}\_x \approx \sigma\_v^2 \mathbf{I},\tag{64}$$

where $\sigma_v^2$ is the noise variance. Later, in the simulation study, we will see that the theory developed this way actually works well at quite moderate signal to noise ratios. Substituting (64) into (63) then yields

$$\mathbf{K}_{i,k,l}(n) = (\mathbf{I}-\mu_i\sigma_v^2\mathbf{I})\mathbf{K}_{i,k,l}(n-1)(\mathbf{I}-\mu_k\sigma_v^2\mathbf{I}) + \mu_i\mu_k J_{min}\sigma_v^2\mathbf{I}. \tag{65}$$

In steady state, when $n \to \infty$, we have


$$\mathbf{K}_{i,k,l}(\infty) = (1-\mu_i\sigma_v^2)\mathbf{K}_{i,k,l}(\infty)(1-\mu_k\sigma_v^2) + \mu_i\mu_k J_{min}\sigma_v^2\mathbf{I}. \tag{66}$$

Solving the above for $\mathbf{K}_{i,k,l}(\infty)$ we have

$$\mathbf{K}_{i,k,l}(\infty) = \frac{\mu_i\mu_k J_{min}}{\mu_i\sigma_x^2 + \mu_k\sigma_x^2 - \mu_i\mu_k\sigma_v^2}\mathbf{I}. \tag{67}$$
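As a quick numerical check of the steady state, the scalar recursion for a diagonal element of (65) can be iterated and compared against the closed form (67); the step sizes and minimum MSE below are illustrative values, and we take $\sigma_x^2 = \sigma_v^2$, consistent with the low-SNR approximation (64).

```python
# illustrative step sizes and minimum MSE (not the chapter's values)
mu_i, mu_k, j_min, sigma2 = 0.05, 0.005, 1e-3, 1.0

# iterate the diagonal-element recursion of (65)/(66)
k = 0.0
for _ in range(20000):
    k = (1.0 - mu_i * sigma2) * k * (1.0 - mu_k * sigma2) + mu_i * mu_k * j_min * sigma2

# closed form (67) with sigma_x^2 = sigma_v^2 = sigma2
k_inf = mu_i * mu_k * j_min / (mu_i * sigma2 + mu_k * sigma2 - mu_i * mu_k * sigma2)
```

The iterated value converges to the closed-form expression, confirming that (67) is the fixed point of (66) under this approximation.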

#### **5.2. Second order statistics of line enhancer output signal**

As we see from the previous discussion, the correlation matrix of the weight error vector is diagonal. We therefore have that the matrix $\mathbf{K}_{i,k}(l) = E\{\tilde{\mathbf{w}}_i(n-1+l)\tilde{\mathbf{w}}_k^H(n-1)\}$ has in steady state, when $n \to \infty$, elements different from zero only along the main diagonal, and the elements on this diagonal equal $\frac{\mu_i\mu_k J_{min}}{\mu_i\sigma_x^2 + \mu_k\sigma_x^2 - \mu_i\mu_k\sigma_v^2}$. Substituting $\mathbf{K}_{i,k}(l)$ into (61) we now have that the *l*-th correlation lag of the output signal is equal to

$$\begin{split} r_{i,k}(l) &= \mathbf{w}_o^H E\{\mathbf{s}(n-\Delta)\mathbf{s}^H(n-\Delta+l)\}\mathbf{w}_o \\ &\quad + \mathbf{w}_o^H E\{\mathbf{v}(n-\Delta)\mathbf{v}^H(n-\Delta+l)\}\mathbf{w}_o \\ &\quad + r_s(l)N\frac{\mu_i\mu_k J_{min}}{\mu_i\sigma_x^2 + \mu_k\sigma_x^2 - \mu_i\mu_k\sigma_v^2} \\ &\quad + \mathrm{Tr}\{\mathbf{K}_{i,k}(l)E\{\mathbf{v}(n-\Delta)\mathbf{v}^H(n-\Delta+l)\}\}, \end{split} \tag{68}$$


where *rs(l)* is the *l*-th autocorrelation lag of the input signal *s(n).*

As the noise *v* has been assumed to be white, the matrix $E\{\mathbf{v}(n-\Delta)\mathbf{v}^H(n-\Delta+l)\}$ has nonzero elements $\sigma_v^2$ only along the *l*-th diagonal, and the rest of the matrix is filled with zeros. Then

$$\begin{split} r_{i,k}(l) &= \mathbf{w}_o^H E\{\mathbf{s}(n-\Delta)\mathbf{s}^H(n-\Delta+l)\}\mathbf{w}_o \\ &\quad + \sigma_v^2 \sum_{i=0}^{N-l-1} w_o^*(i)w_o(i+l) \\ &\quad + r_s(l)N\frac{\mu_i\mu_k J_{min}}{\mu_i\sigma_x^2 + \mu_k\sigma_x^2 - \mu_i\mu_k\sigma_v^2} + r_0, \end{split} \tag{69}$$

where $r_0 = N\sigma_v^2\frac{\mu_i\mu_k J_{min}}{\mu_i\sigma_x^2 + \mu_k\sigma_x^2 - \mu_i\mu_k\sigma_v^2}$ if *l = 0* and zero otherwise.
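That the last term of (68) contributes only at *l = 0*, producing the *r₀* term of (69), can be checked numerically with small stand-in matrices; the sizes and values below are arbitrary.

```python
def trace_term(N, l, k, sigma2):
    """Tr{K E[v(n-d) v^H(n-d+l)]} for diagonal K = k*I and white noise:
    the noise correlation matrix has sigma2 on its l-th diagonal, zeros elsewhere."""
    K = [[k if i == j else 0.0 for j in range(N)] for i in range(N)]
    V = [[sigma2 if j - i == l else 0.0 for j in range(N)] for i in range(N)]
    # Tr{K V} = sum_i (K V)[i][i]
    return sum(K[i][m] * V[m][i] for i in range(N) for m in range(N))
```

For *l = 0* the function returns *N σ² k*, and for any *l ≠ 0* it returns zero, as stated above.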

From (4) we see that the autocorrelation lags of the combination output signal *y(n)* can be composed from its components *ri,k(l)* as follows

$$\begin{split} r(l) &= \lambda(n)^2 E\{y_1(n)y_1^*(n+l)\} + 2\lambda(n)(1-\lambda(n))E\{\mathrm{Re}\{y_1(n)y_2^*(n+l)\}\} \\ &\quad + (1-\lambda(n))^2 E\{y_2(n)y_2^*(n+l)\} \\ &= \lambda(n)^2 r_{1,1}(l) + 2\lambda(n)(1-\lambda(n))\mathrm{Re}\{r_{1,2}(l)\} + (1-\lambda(n))^2 r_{2,2}(l). \end{split}$$

The autocorrelation matrix of *y* is a Toeplitz matrix **R** having the autocorrelation lags *r(l)* along its first row.
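The lag combination above is straightforward to compute; a small helper (hypothetical naming) makes the structure explicit, accepting real or complex cross-correlation lags:

```python
def combo_autocorr(lam, r11, r12, r22):
    """Autocorrelation lags of the combined output from the component lags:
    r(l) = lam^2 r11(l) + 2 lam (1-lam) Re{r12(l)} + (1-lam)^2 r22(l)."""
    return [
        lam ** 2 * a + 2.0 * lam * (1.0 - lam) * complex(b).real + (1.0 - lam) ** 2 * c
        for a, b, c in zip(r11, r12, r22)
    ]
```

For λ = 1 or λ = 0 the combined lags reduce to those of the individual filters, as expected.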

Thus far we have evaluated the terms $E\{y_i(n)y_k^*(n+l)\}$; what remains is to find an expression for the steady state combination parameter $\lambda(\infty)$. For this purpose we can use (15), noting that $y_i(n) = \mathbf{w}_i^H(n-1)\mathbf{x}(n-\Delta)$. All the terms in the expression (15) are similar, and we need to evaluate

$$\gamma_{ik} = E\{\tilde{\mathbf{w}}_i^H(n-1)\mathbf{x}(n-\Delta)\mathbf{x}^H(n-\Delta)\tilde{\mathbf{w}}_k(n-1)\}. \tag{70}$$

Due to the independence assumption, we can rewrite (70) using the properties of the trace operator as

$$\begin{split} \gamma_{ik} &= \mathrm{Tr}\{E\{\mathbf{x}(n-\Delta)\mathbf{x}^H(n-\Delta)\tilde{\mathbf{w}}_k(n-1)\tilde{\mathbf{w}}_i^H(n-1)\}\} \\ &= \mathrm{Tr}\{\mathbf{R}_{\mathbf{x}} E\{\tilde{\mathbf{w}}_k(n-1)\tilde{\mathbf{w}}_i^H(n-1)\}\} = \mathrm{Tr}\{\mathbf{R}_{\mathbf{x}}\mathbf{K}_{i,k,0}(n-1)\}. \end{split} \tag{71}$$

We are now ready to find *λ*(∞) by substituting (71) and (67) into (15).

The power spectrum of the output process *y(n)* is given by

$$P(f) = \lim_{K\to\infty}\frac{1}{K}E\{|Y_K(f)|^2\} = \sum_{l=-\infty}^{\infty} r(l)e^{-j2\pi lf}, \tag{72}$$

where *YK(f)* is the length-*K* discrete Fourier transform of the signal *y(n)* and *f* is the frequency. There are a number of methods to compute an estimate of the power spectrum from the correlation matrix of a signal. In this paper we have used the Capon method [20].

$$\hat{P}(f) = \frac{K}{\mathbf{a}^H(f)\mathbf{R}^{-1}\mathbf{a}(f)}, \tag{73}$$

where $\mathbf{a}(f) = [1\; e^{-j2\pi f}\; \ldots\; e^{-j2\pi(M-1)f}]^T$ and **R** is the *K × K* Toeplitz correlation matrix of the signal of interest. The Capon method was chosen because the signals we are interested in are sine waves in noise, and in this situation the Capon method gives a more distinct spectrum estimate than Fourier transform based methods.
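Equation (73) can be sketched in a few lines; the hand-rolled Gaussian-elimination solver below is a stand-in for a proper linear-algebra routine, and the matrix sizes are illustrative.

```python
import cmath

def solve(A, b):
    """Gaussian elimination with partial pivoting; handles complex entries."""
    n = len(A)
    M = [list(map(complex, row)) + [complex(b[i])] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0j] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

def capon(R, f):
    """Capon spectral estimate P(f) = K / (a^H(f) R^{-1} a(f)), as in (73)."""
    K = len(R)
    a = [cmath.exp(-2j * cmath.pi * f * m) for m in range(K)]
    r_inv_a = solve(R, a)  # R^{-1} a
    denom = sum(am.conjugate() * xm for am, xm in zip(a, r_inv_a))
    return K / denom.real
```

For white noise, $\mathbf{R} = \sigma^2\mathbf{I}$, the estimate is flat at $\sigma^2$; a sinusoidal component in **R** produces a distinct peak at its frequency.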

#### **6. Simulation Results**


In this Section we present the results of our simulation study.

In order to obtain a practical algorithm, the expectation operators in both the numerator and denominator of (7) have been replaced by exponential averaging of the type

$$P_{av}(n) = (1-\gamma)P_{av}(n-1) + \gamma p(n), \tag{74}$$

where *p(n)* is the quantity to be averaged, *Pav(n)* is the averaged quantity and *γ* is the smoothing parameter. The averaged quantities were then used in (7) to obtain λ. The curves shown in the Figures to follow are averages over 100 independent trials. We often show the simulation results and the theoretical curves in the same Figures. In several cases the curves overlap and are therefore indistinguishable.
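The exponential averaging of (74) can be sketched directly; the initial value is an implementation choice not specified in the text.

```python
def exp_average(samples, gamma, p0=0.0):
    """Exponential averaging P_av(n) = (1 - gamma) P_av(n-1) + gamma p(n), eq. (74).
    The starting value p0 is an assumption, not taken from the chapter."""
    p_av = p0
    out = []
    for p in samples:
        p_av = (1.0 - gamma) * p_av + gamma * p
        out.append(p_av)
    return out
```

For a constant input the average approaches that constant with time constant roughly 1/γ samples, which is the usual trade-off between tracking speed and smoothness.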

**Figure 7.** Time-evolutions of λ with $\mu_1 = 0.005$, $\mu_2 = 0.0005$ and $\sigma_v^2 = 10^{-3}$.

**Figure 5.** The true impulse response.

#### **6.1. System Identification**

We have selected the sample echo path model number one shown in Figure 5 from [10], to be the unknown system to identify and combined two 64 tap long adaptive filters.

In the Figures below the noisy blue line represents the simulation result and the smooth red line is the theoretical result. The curves are averaged over 100 independent trials.

In the system identification example we use Gaussian white noise with unit variance as the input signal. The measurement noise is another white Gaussian noise with variance *σ<sub>v</sub>*<sup>2</sup> = 10<sup>−3</sup>. The step sizes are *μ*<sub>1</sub> = 0.005 for the fast adapting filter and *μ*<sub>2</sub> = 0.0005 for the slowly adapting filter. Figure 6 depicts the evolution of EMSE in time. One can see that the system converges fast in the beginning. The fast convergence is followed by a stabilization period between sample times 1000-7000, followed by another convergence to a lower EMSE level between sample times 8000-12000. The second convergence occurs when the mean squared error of the filter with the small step size drops below that of the filter with the large step size. There is good accordance between the theoretical and the simulated curves, so that they are difficult to distinguish from each other.
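To make the setting concrete, here is a minimal sketch of the two component filters: two LMS updates with different step sizes run in parallel on the same input and desired signals. The 8-tap impulse response, noise level and sample counts are illustrative assumptions (the chapter identifies a 64-tap G.168 echo path), and the chapter's combined output y(n) = λ(n)y1(n) + (1 − λ(n))y2(n) is not adapted here.

```python
import random

def run_two_lms(h, mu_fast, mu_slow, n_samples, noise_std, seed=1):
    """Run two LMS filters with different step sizes on the same data and
    return their squared weight-error norms (MSD) over time."""
    rng = random.Random(seed)
    n = len(h)
    w1 = [0.0] * n          # fast filter weights
    w2 = [0.0] * n          # slow filter weights
    x = [0.0] * n           # input delay line, most recent sample first
    msd1, msd2 = [], []
    for _ in range(n_samples):
        x = [rng.gauss(0.0, 1.0)] + x[:-1]
        d = sum(hi * xi for hi, xi in zip(h, x)) + rng.gauss(0.0, noise_std)
        for w, mu, msd in ((w1, mu_fast, msd1), (w2, mu_slow, msd2)):
            e = d - sum(wi * xi for wi, xi in zip(w, x))   # a priori error
            for i in range(n):
                w[i] += mu * e * x[i]                       # LMS update
            msd.append(sum((wi - hi) ** 2 for wi, hi in zip(w, h)))
    return msd1, msd2

h = [0.5, -0.4, 0.3, -0.2, 0.1, 0.05, -0.03, 0.01]   # hypothetical unknown system
msd_fast, msd_slow = run_two_lms(h, mu_fast=0.05, mu_slow=0.005,
                                 n_samples=5000, noise_std=0.1)
```

Early on the fast filter attains the smaller weight deviation, while in steady state the slow filter wins; the combination is designed to follow whichever component is currently better.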

The combination parameter λ is shown in Figure 7. At the beginning, when the fast converging filter gives smaller EMSE than the slowly converging one, λ is close to unity. When the slow filter catches up with the fast one, λ starts to decrease and attains a small negative value at the end of the simulation example. The theoretical and simulated curves fit well.

Figure 8 shows the time evolution of the mean square deviation (MSD) of the combination in the same test case. Again one can see that the theoretical and simulation curves fit well.

**Figure 6.** Time-evolutions of EMSE with *μ*<sub>1</sub> = 0.005, *μ*<sub>2</sub> = 0.0005 and *σ<sub>v</sub>*<sup>2</sup> = 10<sup>−3</sup>.


80 Adaptive Filtering - Theories and Applications


**Figure 7.** Time-evolutions of λ with *μ*<sub>1</sub> = 0.005, *μ*<sub>2</sub> = 0.0005 and *σ<sub>v</sub>*<sup>2</sup> = 10<sup>−3</sup>.

#### **6.2 Adaptive Beamforming**

In the beamforming example we have used an 8-element uniform linear array with half-wavelength spacing. The noise power is 10<sup>−4</sup> in this simulation example. The useful signal, which is 10 dB stronger than the noise, arrives from the broadside of the array. There are three strong interferers at −35°, 10° and 15° with *SNR*<sub>1</sub> = 33 dB and *SNR*<sub>2</sub> = *SNR*<sub>3</sub> = 30 dB respectively. The step sizes of the adaptive combination are *μ*<sub>1</sub> = 0.05 and *μ*<sub>2</sub> = 0.006.


**Figure 8.** Time-evolutions of MSD with *μ*<sub>1</sub> = 0.005, *μ*<sub>2</sub> = 0.0005 and *σ<sub>v</sub>*<sup>2</sup> = 10<sup>−3</sup>.

**Figure 9.** The antenna pattern.

The steady state antenna pattern is shown in Figure 9. One can see that the algorithm has formed deep nulls in the directions of the interferers while the response in the direction of the useful signal is equal to the number of antennas, i.e. 8.
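As a sketch of how such an antenna pattern is evaluated, the response of a weight vector over angle for a half-wavelength-spaced uniform linear array can be computed as below. The uniform unit weights are only a stand-in for the converged adaptive weights, chosen so the broadside gain equals the number of antennas.

```python
import cmath
import math

def steering_vector(theta_deg, n_elements):
    """Steering vector of a uniform linear array with half-wavelength spacing."""
    theta = math.radians(theta_deg)
    return [cmath.exp(1j * math.pi * k * math.sin(theta)) for k in range(n_elements)]

def array_response(weights, theta_deg):
    """Beamformer response w^H a(theta) in direction theta."""
    a = steering_vector(theta_deg, len(weights))
    return sum(w.conjugate() * ak for w, ak in zip(weights, a))

# With uniform unit weights the broadside (0 degree) gain equals the number of antennas.
w = [1.0] * 8
gain_broadside = abs(array_response(w, 0.0))
```

Sweeping `theta_deg` over −90°…90° and plotting `20*log10(abs(array_response(w, theta)))` reproduces a pattern like Figure 9; with adapted weights the nulls appear at the interferer directions.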

The evolution of EMSE in this simulation example is depicted in Figure 10. One can see a rapid convergence at the beginning of the simulation example. Then the EMSE value stabilizes at a certain level and after a while a second convergence occurs. The dashed red line is the theoretical result and the solid blue line is the simulation result. One can see that the two overlap and are indistinguishable in black and white print.

**Figure 10.** Time evolution of EMSE.


The time evolution of λ for this simulation example is shown in Figure 11. At the beginning λ is close to one, forcing the output signal of the fast adapting filter to the output of the combination. Eventually the slow filter catches up with the fast one and λ starts to decrease, attaining a small negative value at the end of the simulation example, so that the output signal is dominated by the output of the slowly converging filter. One can see that the simulation and theoretical curves for the λ evolution are close to each other.

The signal to interference and noise ratio (SINR) evolution is shown in Figure 12. One can see a fast improvement of SINR at the beginning of the simulation example followed by a stabilization region. After a while a new region of SINR improvement occurs and finally the SINR stabilizes at an improved level. Again the theoretical result matches the simulation curve well, making the curves indistinguishable in black and white print.

#### **6.3 Adaptive Line Enhancer**

In order to illustrate the adaptive line enhancer application, we have used length *K* = 32 correlation sequences to form *K* × *K* correlation matrices for the Capon method. The narrowband signals were just sine waves in our simulations.


**Figure 11.** Time evolution of λ.

**Figure 12.** Time evolution of SINR.

The input signal consists of three sine waves and additive noise with unit variance. The sine waves with normalized frequencies 0.1 and 0.4 have amplitudes equal to one, and the third sine wave with normalized frequency 0.25 has amplitude equal to 0.5. The spectra of the input signal *x*(*n*) and the output signal *y*(*n*) are shown in Figure 13. The step sizes used were *μ*<sub>1</sub> = 0.5 and *μ*<sub>2</sub> = 0.05, the filter is *N* = 16 taps long and the delay Δ = 10.
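A minimal adaptive line enhancer sketch: the filter predicts the current sample from a delayed segment of its own past, so the predictable sinusoids pass while the broadband noise, which decorrelates over the delay, is suppressed. The single sinusoid, μ = 0.01 and the sample count are illustrative assumptions; the chapter's example uses three sinusoids and a combination of two such filters.

```python
import math
import random

def line_enhancer(x, n_taps=16, delta=10, mu=0.01):
    """LMS adaptive line enhancer: predict x(n) from input samples delayed by at
    least delta, and return the enhanced (predicted) signal."""
    w = [0.0] * n_taps
    buf = [0.0] * (n_taps + delta)   # past input samples, most recent first
    y = []
    for d in x:
        u = buf[delta:delta + n_taps]                 # delayed regressor
        out = sum(wi * ui for wi, ui in zip(w, u))
        e = d - out                                   # prediction error
        w = [wi + mu * e * ui for wi, ui in zip(w, u)]
        y.append(out)
        buf = [d] + buf[:-1]
    return y

rng = random.Random(0)
s = [math.sin(2 * math.pi * 0.1 * n) for n in range(4000)]   # narrowband component
x = [sn + rng.gauss(0.0, 1.0) for sn in s]                    # noisy input
y = line_enhancer(x)
```

After convergence the output `y` tracks the sinusoidal component much more closely than the raw input does, which is the SNR gain visible in the output spectrum of Figure 13.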

**Figure 13.** Line enhancer output signal spectrum.


**Figure 14.** Line enhancer output signal autocorrelation.

In Figure 14 we show the correlation functions of the input and output signals in the second simulation example. We can see that the theoretical correlation matches the correlation computed from simulations well.


**Figure 15.** Evolution of EMSE of the two component filters and the combination in time.

**Figure 16.** Line enhancer output signal spectrum.

The evolution of the excess mean square error of the combination together with that of the individual filters is shown in Figure 15. We see the fast initial convergence, which is due to the fast adapting filter. After the initial convergence there is a period of stabilization, followed by a second convergence between sample times 500 and 1500, when the error power of the slowly adapting filter falls below that of the fast one.

In our final simulation example (Figure 16) we use three unit amplitude sinusoids with normalized frequencies 0.1, 0.2 and 0.4. We have increased the noise variance to 10, so that the noise power is 20 times the power of each of the individual sinusoids. The adaptive filter is *N* = 16 taps long and the delay Δ = 10. The step sizes of the individual filters in the combined structure are *μ*<sub>1</sub> = 0.5 and *μ*<sub>2</sub> = 0.005. One can see that even in such noisy conditions there is still a reasonably good match between the theoretical and simulation results.

#### **7. Conclusions**


In order to make an LMS type adaptive algorithm work properly, one has to select a suitable step size. To guarantee stability of the algorithm, the step size has to be smaller than 2/*ω*<sub>max</sub>, where *ω*<sub>max</sub> is the largest eigenvalue of the input signal autocorrelation matrix. Given that the stability condition is fulfilled, a large step size allows the algorithm to converge fast initially, but the mean square error in steady state remains large too. On the other hand, if one selects a small step size it is possible to achieve a small steady state error, but the initial convergence speed of the algorithm is reduced. In this chapter we have investigated the combination of two adaptive filters, which is a new and interesting way of achieving fast initial convergence and low steady state error of an adaptive filter at the same time, thus solving the trade-off one has in step size selection. We looked at three applications of the technique: system identification, adaptive beamforming and adaptive line enhancing. In all three applications we saw that the combination worked as expected, allowing the algorithm to converge fast to a certain level and then, after a while, providing a second convergence to a lower mean square error value.
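The stability bound above can be made concrete: estimate the largest eigenvalue of the input autocorrelation matrix and take 2 divided by it. The 4×4 Toeplitz matrix with correlation r(k) = 0.9^|k| below is a hypothetical coloured input, and plain power iteration stands in for a proper eigensolver.

```python
def power_iteration(matrix, n_iter=500):
    """Estimate the dominant eigenvalue of a symmetric positive definite matrix
    by repeated multiplication and max-norm normalization."""
    n = len(matrix)
    v = [1.0] * n
    lam = 0.0
    for _ in range(n_iter):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(c) for c in w)     # dominant eigenvalue estimate
        v = [c / lam for c in w]         # renormalize the iterate
    return lam

# Hypothetical input autocorrelation r(k) = 0.9**|k| (Toeplitz, positive definite).
R = [[0.9 ** abs(i - j) for j in range(4)] for i in range(4)]
lam_max = power_iteration(R)
mu_bound = 2.0 / lam_max   # the LMS step size must stay below this value
```

For white input with unit variance the matrix is the identity, the largest eigenvalue is 1 and the bound is simply 2; colouring the input concentrates the eigenvalues and tightens the bound.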

#### **Author details**

Tõnu Trump\*

Address all correspondence to: tonu.trump@gmail.com

Tallinn University of Technology, Estonia

#### **References**

[1] Aboulnasr, T., & Mayyas, K. (1997). A robust variable step-size LMS-type algorithm: Analysis and simulations. *IEEE Transactions on Signal Processing*, 45, 631-639.

[2] Arenas-Garcia, J., Figueiras-Vidal, A. R., & Sayed, A. H. (2006). Mean-square performance of convex combination of two adaptive filters. *IEEE Transactions on Signal Processing*, 54, 1078-1090.

[3] Armbruster, W. (1992). Wideband Acoustic Echo Canceller with Two Filter Structure. In Vanderwalle, J., Boite, R., Moonen, M., & Oosterlinck, A. (Eds.), *Signal Processing VI, Theories and Applications*. Elsevier Science Publishers B.V.

[4] Azpicueta-Ruiz, L. A., Figueiras-Vidal, A. R., & Arenas-Garcia, J. (2008a). A new least squares adaptation scheme for the affine combination of two adaptive filters. *Proc. IEEE International Workshop on Machine Learning for Signal Processing*, Cancun, Mexico, 327-332.

[5] Azpicueta-Ruiz, L. A., Figueiras-Vidal, A. R., & Arenas-Garcia, J. (2008b). A normalized adaptation scheme for the convex combination of two adaptive filters. *Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing*, Las Vegas, Nevada, 3301-3304.

[6] Bershad, N. J., Bermudez, J. C., & Tourneret, J. H. (2008). An affine combination of two LMS adaptive filters - transient mean-square analysis. *IEEE Transactions on Signal Processing*, 56, 1853-1864.

[7] Fathiyan, A., & Eshghi, M. (2009). Combining several PBS-LMS filters as a general form of convex combination of two filters. *Journal of Applied Sciences*, 9, 759-764.

[8] Harris, R. W., Chabries, D. M., & Bishop, F. A. (1986). Variable step (vs) adaptive filter algorithm. *IEEE Transactions on Acoustics, Speech and Signal Processing*, 34, 309-316.

[9] Haykin, S. (2002). Adaptive Filter Theory, Fourth Edition. Prentice Hall.

[10] ITU-T Recommendation G.168 Digital Network Echo Cancellers. (2009). *ITU-T*.

[11] Kim, K., Choi, Y., Kim, S., & Song, W. (2008). Convex combination of affine projection filters with individual regularization. *Proc. 23rd International Technical Conference on Circuits/Systems, Computers and Communications (ITC-CSCC)*, Shimonoseki, Japan, 901-904.

[12] Kwong, R. H., & Johnston, E. W. (1992). A variable step size LMS algorithm. *IEEE Transactions on Signal Processing*, 40, 1633-1642.

[13] Mandic, D., Vayanos, P., Boukis, C., Jelfs, B., Goh, S. I., Gautama, T., & Rutkowski, T. (2007). Collaborative adaptive learning using hybrid filters. *Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing*, Honolulu, Hawaii, 901-924.

[14] Martinez-Ramon, M., Arenas-Garcia, J., Navia-Vazquez, A., & Figueiras-Vidal, A. R. (2002). An adaptive combination of adaptive filters for plant identification. *Proc. 14th International Conference on Digital Signal Processing*, Santorini, Greece, 1195-1198.

[15] Mathews, V. J., & Xie, Z. (1993). A stochastic gradient adaptive filter with gradient adaptive step size. *IEEE Transactions on Signal Processing*, 41, 2075-2087.

[16] Ochiai, K. (1977). Echo canceller with two echo path models. *IEEE Transactions on Communications*, 25, 589-594.

[17] Sayed, A. H. (2008). Adaptive Filters. John Wiley and Sons.

[18] Shin, H. C., & Sayed, A. H. (2004). Variable step-size NLMS and affine projection algorithms. *IEEE Signal Processing Letters*, 11, 132-135.

[19] Silva, M. T. M., Nascimento, V. H., & Arenas-Garcia, J. (2010). A transient analysis for the convex combination of two adaptive filters with transfer of coefficients. *Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing*, Dallas, TX, USA, 3842-3845.

[20] Stoica, P., & Moses, R. (2005). Spectral Analysis of Signals. Prentice Hall.

[21] Trump, T. (2009). An output signal based combination of two NLMS adaptive algorithms. *Proc. 16th International Conference on Digital Signal Processing*, Santorini, Greece.

[22] Trump, T. (2011a). Output signal based combination of two NLMS adaptive filters - transient analysis. *Proceedings of the Estonian Academy of Sciences*, 60(4), 258-268.

[23] Trump, T. (2011b). Output statistics of a line enhancer based on a combination of two adaptive filters. *Central European Journal of Engineering*, 1, 244-252.

[24] Zhang, Y., & Chambers, J. A. (2006). Convex combination of adaptive filters for a variable tap-length LMS algorithm. *IEEE Signal Processing Letters*, 10, 628-631.


**Chapter 4**


### **Adaptive Analysis of Diastolic Murmurs for Coronary Artery Disease Based on Empirical Mode Decomposition**

Zhidong Zhao, Yi Luo, Fangqin Ren, Li Zhang and Changchun Shi

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/55690

#### **1. Introduction**

Coronary Artery Disease (CAD) is a leading type of heart disease in the world, caused by the gradual build-up of plaque on the walls of the arteries. Due to CAD's high incidence rate and mortality, it is very harmful to human health. CAD can develop slowly and silently over years without any symptoms. Early diagnosis of CAD is therefore one of the most important medical research areas. Diastolic murmurs that occur as additional components in the heart sound signal provide clinicians with valuable diagnostic and prognostic information about the function of heart valves. When coronary arteries become narrowed or blocked, turbulence is produced by blood moving across the stenotic arteries. During the relatively quiet diastolic period of the cardiac cycle, the murmurs are likely to be loudest when coronary blood flow is maximal. Initial studies show that diastolic murmurs produced by coronary arterial stenosis contain higher frequency components.

The heart sound signal represents the mechanical activity of the cardiohemic system, which is complicated and non-stationary. It contains physiological and pathological information between the heart and the various parts of the body, so it can be used in the diagnosis of heart disease. Heart sounds have been widely used in the diagnosis of heart disease and many methods have been adopted to aid the diagnosis [1, 2]. The heart sound signal can generally be separated into four parts: the 1st heart sound S1, the systolic period, the 2nd heart sound S2 and the diastolic period, as shown in Figure 1.

Diastolic murmurs occur between S2 and the next S1, when the heart muscle relaxes between beats. Heart murmurs are usually considered pathological. They can be caused by various heart conditions, such as coronary artery stenosis, aortic regurgitation, etc. Diastolic murmurs can provide clinicians with valuable diagnostic and prognostic information about the function of heart valves.

© 2013 Zhao et al.; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

**Figure 1.** Heart sound signal

Short Time Fourier Transform, Wigner-Ville Distribution and Wavelet Transform, etc., have some inherent limitations [3, 4, 5]. The Short Time Fourier Transform involves an intrinsic tradeoff between time resolution and frequency resolution. In the Wigner-Ville distribution, the inherent cross-term interferences often mask the true time-frequency information associated with the signal of interest. The wavelet transform has received considerable attention in recent years. It provides a multi-resolution representation of signals; however, it is not adaptive in nature: once the wavelet mother function is given, one will have to use it to analyse all the data. In addition, the wavelet transform also underlies an uncertainty principle. In 1998, Dr. Norden Huang proposed a novel signal processing algorithm: the Hilbert Huang Transform (HHT) [6, 7]. It has proved to be a powerful tool to analyse non-stationary and nonlinear signals. The key parts of HHT are the Empirical Mode Decomposition (EMD) and the Hilbert transform. EMD can adaptively decompose diastolic murmurs into a finite and usually small number of Intrinsic Mode Functions (IMFs) that admit a well-behaved Hilbert transform. The Hilbert transform of the IMFs can yield instantaneous frequency and instantaneous amplitude. The local energy and instantaneous frequency derived from the IMFs give the fine-resolution frequency-time distribution of the energy that is designated as the Hilbert spectrum. This three-dimensional distribution can reflect the inherent essential characteristic of the signal.

The chapter is organized as follows: section 2 introduces the generalized wavelet shrinkage denoising method. In section 3, the Hilbert spectrum based on EMD and marginal spectrum distributions of diastolic murmurs are studied; a new method to restrict the end effect of EMD is proposed in section 4. In section 5, the algorithm based on the Empirical Mode Decomposition (EMD) and the Teager Energy Operator (TEO) is proposed as an effective approach for estimating the instantaneous frequency of diastolic murmurs. Finally, some conclusions are given in section 6.

#### **2. Wavelet shrinkage method**

We consider the following model of a discrete noisy signal:

$$x = \theta + \sigma z \tag{1}$$

The vector *x* represents the noisy signal and *θ* is an unknown original clean signal. *z* is independent identically distributed Gaussian white noise with mean zero and unit variance. *σ* is the intensity of the noise. For simplicity, we assume the intensity of the noise is one.

The steps of wavelet shrinkage are defined as follows:

**1.** Apply the discrete wavelet transform to the observed noisy signal.

**2.** Estimate the noise and the threshold value, and threshold the wavelet coefficients of the observed signal.

**3.** Apply the inverse discrete wavelet transform to reconstruct the signal.


The wavelet shrinkage method relies on the basic idea that the energy of the signal is often concentrated in a few coefficients in the wavelet domain, while the energy of the noise is spread among all coefficients. Therefore, the nonlinear shrinkage function in the wavelet domain will tend to keep the few larger coefficients above the threshold value that represent the signal, while the coefficients below the threshold value, which mainly represent noise, are reduced towards zero.

In wavelet shrinkage, the selection of the threshold function and of the threshold value are most crucial. Donoho introduced two kinds of thresholding functions: the 'hard threshold function' and the 'soft threshold function'.

$$\delta_{\lambda}^{H}(x) = \begin{cases} 0 & \vert x \vert \le \lambda \\ x & \vert x \vert > \lambda \end{cases} \tag{2}$$

$$\delta_{\lambda}^{S}(x) = \begin{cases} 0 & \vert x \vert \le \lambda \\ x - \lambda & x > \lambda \\ x + \lambda & x < -\lambda \end{cases} \tag{3}$$

The hard threshold function (2) results in larger variance and can be unstable because of its discontinuity. The soft threshold function (3) results in unnecessary bias due to the shrinkage of large coefficients toward zero. We construct the following generalized threshold function:

$$\delta_{\lambda}^{m}(x) = x - \frac{\lambda^{m}}{x^{m-1}} \quad m = 1, 2, \ldots, \infty \tag{4}$$

where *λ* is the threshold value.

**Figure 1.** Heart sound signal (amplitude versus samples, showing the first heart sound S1, the systolic period, the second heart sound S2 and the diastolic period)

92 Adaptive Filtering - Theories and Applications

When m is an even number:

$$\delta_{\lambda}^{m}(x) = x - x\,I(\vert x \vert \le \lambda) - \frac{\lambda^{m}}{x^{m-1}}\,I(\vert x \vert > \lambda) \tag{5}$$

When m is an odd number:

$$\delta_{\lambda}^{m}(x) = x - x\,I(\vert x \vert \le \lambda) - \frac{\lambda^{m}}{\vert x \vert^{m-1}}\,I(\vert x \vert > \lambda)\,\mathrm{sign}(x) \tag{6}$$

When m = 1, it is the soft threshold function; when m = *∞*, it is the hard threshold function; when m = 2, it is the Non-Negative Garrote threshold function. Taking a slope signal as an example, Figure 2 illustrates the generalized threshold functions for different m.

**Figure 2.** Generalized threshold function

It can clearly be seen that when the coefficient is small, the smaller m is, the closer the generalized function is to the soft threshold function; when the coefficient is big, the bigger m is, the closer the generalized function is to the hard threshold function; when m lies between 1 and *∞*, the generalized threshold function achieves a compromise between the hard and soft threshold functions. With careful selection of m, we can achieve better denoising performance [8, 9].
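This interpolation behaviour is easy to verify numerically. Below is a small sketch of the generalized threshold function of equations (4)-(6), using the unified form x − sign(x)·λ^m/|x|^(m−1), evaluated for m = 1 (soft), m = 2 (Non-Negative Garrote) and a large m that approximates the hard threshold; the sample values are arbitrary:

```python
import numpy as np

def generalized_threshold(x, lam, m):
    """Generalized threshold delta_lambda^m of eqs. (4)-(6):
    zero for |x| <= lam, x - sign(x) * lam**m / |x|**(m-1) otherwise.
    (For even m this equals x - lam**m / x**(m-1), as in eq. (5).)"""
    x = np.asarray(x, dtype=float)
    shrink = np.sign(x) * lam ** m / np.maximum(np.abs(x), 1e-300) ** (m - 1)
    return np.where(np.abs(x) <= lam, 0.0, x - shrink)

x = np.array([-3.0, -1.2, 0.5, 1.2, 3.0])
lam = 1.0
print(generalized_threshold(x, lam, 1))    # soft: subtract lam above the threshold
print(generalized_threshold(x, lam, 2))    # Non-Negative Garrote: x - lam^2 / x
print(generalized_threshold(x, lam, 50))   # large m: approaches hard thresholding
```

For m = 50 the correction term λ^m/|x|^(m−1) is negligible for any |x| noticeably above λ, so coefficients pass through almost unchanged, which is exactly the hard-threshold limit described above.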

We derive the exact formulas of the mean, bias, variance and l2 risk for the generalized threshold function.

Let x ~ *N*(*θ*, 1) and

$$A_m(\theta) = \int_{\lambda}^{\infty} \frac{\phi(x - \theta) - \phi(x + \theta)}{x^{m}}\,dx, \qquad B_m(\theta) = \int_{\lambda}^{\infty} \frac{\phi(x - \theta) + \phi(x + \theta)}{x^{m}}\,dx$$

*ϕ* and *Φ* are the density function and distribution function of a standard Gaussian random variable, respectively. Then:

Mean:

$$M^m(\boldsymbol{\lambda}, \boldsymbol{\theta}) = M^H(\boldsymbol{\lambda}, \boldsymbol{\theta}) - \boldsymbol{\lambda}^m A\_{m-1}(\boldsymbol{\theta}) \tag{7}$$

Bias:

$$\text{SB}^{\mathfrak{m}}(\lambda, \theta) = \left(\text{M}^{\mathfrak{m}}(\lambda, \theta) - \theta\right)^{2} \tag{8}$$

Variance:

$$V^{\mathfrak{m}}(\boldsymbol{\lambda},\boldsymbol{\theta}) = V^{H}(\boldsymbol{\lambda},\boldsymbol{\theta}) - 2\boldsymbol{\lambda}^{\mathfrak{m}}\mathbf{B}\_{\mathfrak{m}-2}(\boldsymbol{\theta}) - \boldsymbol{\lambda}^{2\mathfrak{m}}\mathbf{A}\_{\mathfrak{m}-1}^{2}(\boldsymbol{\theta}) + \boldsymbol{\lambda}^{2\mathfrak{m}}\mathbf{B}\_{2\mathfrak{m}-2}(\boldsymbol{\theta}) + 2\boldsymbol{\lambda}^{\mathfrak{m}}\mathbf{M}^{H}(\boldsymbol{\lambda},\boldsymbol{\theta})\mathbf{A}\_{\mathfrak{m}-1}(\boldsymbol{\theta})\tag{9}$$

*l2* Risk:

$$\rho^{m}\_{\lambda}(\theta) = \mathbb{E}(\delta^{m}\_{\lambda}(\mathbf{x}) - \theta)^{2} = \rho^{H}\_{\lambda}(\theta) - 2\lambda^{m}\mathbb{B}\_{m-2}(\theta) + \lambda^{2m}\mathbb{B}\_{2m-2}(\theta) + 2\theta\lambda^{m}A\_{m-1}(\theta) \tag{10}$$

Where

$$\begin{aligned} M^{H}(\lambda, \theta) &= \theta + \theta\big(1 - \Phi(\lambda - \theta) - \Phi(\lambda + \theta)\big) + \phi(\lambda - \theta) - \phi(\lambda + \theta) \\ V^{H}(\lambda, \theta) &= (\theta^{2} + 1)\big(2 - \Phi(\lambda - \theta) - \Phi(\lambda + \theta)\big) + (\lambda + \theta)\phi(\lambda - \theta) + (\lambda - \theta)\phi(\lambda + \theta) - M^{H}(\lambda, \theta)^{2} \\ \rho_{\lambda}^{H}(\theta) &= 1 + (\theta^{2} - 1)\big(\Phi(\lambda - \theta) - \Phi(-\lambda - \theta)\big) + (\lambda + \theta)\phi(\lambda + \theta) + (\lambda - \theta)\phi(\lambda - \theta) \end{aligned}$$

$M^{m}(\lambda, \theta)$, $SB^{m}(\lambda, \theta)$, $V^{m}(\lambda, \theta)$ and $\rho_{\lambda}^{m}(\theta)$ are the mean, bias, variance and risk of the generalized threshold function. When m is 1, 2 and *∞*, they reduce to the mean, bias, variance and risk of the soft, Non-Negative Garrote and hard threshold functions, respectively.

The soft threshold function provides smoother results in comparison with the hard threshold function; however, the hard threshold function provides better edge preservation in comparison with the soft threshold function. The hard threshold function is discontinuous, and this leads to oscillation of the denoised signal. The soft threshold function tends to have bigger bias because of shrinkage, whereas the hard threshold function tends to have bigger variance because of its discontinuity. The Non-Negative Garrote threshold function is a trade-off between the hard and soft threshold functions: firstly, it is continuous; secondly, its shrinkage amplitude is smaller than that of the soft threshold function.

Stein Unbiased Risk Estimate (SURE) [10] is an adaptive threshold selection rule which is data driven. The threshold value minimizes an estimate of the risk.

If H is weakly differentiable, then for a single coefficient the estimator is $\hat{\theta}_k = x_k + H(x_k)$, $k = 1 \ldots N$, and an unbiased estimate of the true risk is:

$$\hat{\rho}(x_k, \lambda) = 1 + 2\Big(\frac{d}{d x_k} H(x_k)\Big) + H^{2}(x_k) \tag{11}$$

Proof:

$$\rho(x_k, \lambda) = E\big(\hat{\theta}_k - \theta_k\big)^{2} = E\big(x_k + H(x_k) - \theta_k\big)^{2} = E\big(z_k + H(x_k)\big)^{2} = 1 + 2E\big(z_k H(x_k)\big) + E\big(H^{2}(x_k)\big)$$

where $z_k = x_k - \theta_k$. By partial integration:

$$\begin{aligned} E\big(z_k H(\theta_k + z_k)\big) &= \frac{1}{\sqrt{2\pi}} \int z_k H(\theta_k + z_k)\, e^{-\frac{z_k^{2}}{2}}\, d z_k = \frac{1}{\sqrt{2\pi}} \int (\eta_k - \theta_k) H(\eta_k) \exp\Big(-\frac{(\eta_k - \theta_k)^{2}}{2}\Big)\, d\eta_k \\ &= \frac{1}{\sqrt{2\pi}} \int \exp\Big(-\frac{(\eta_k - \theta_k)^{2}}{2}\Big) \frac{d H(\eta_k)}{d \eta_k}\, d\eta_k = E\Big(\frac{d H(\eta_k)}{d \eta_k}\,\Big\vert\, \eta_k = x_k\Big) \end{aligned}$$

So

$\rho(x_k, \lambda) = E\big(\hat{\theta}_k - \theta_k\big)^{2} = 1 + 2E\Big(\frac{d H(x_k)}{d x_k}\Big) + E\big(H^{2}(x_k)\big)$, so $\hat{\rho}(x_k, \lambda) = 1 + 2\frac{d H(x_k)}{d x_k} + H^{2}(x_k)$ is the unbiased risk estimate of the true risk.
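This unbiasedness can be checked with a quick Monte Carlo experiment, here using the soft-threshold correction H(x) = δ_λ^S(x) − x; the values of θ, λ and the sample size are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, lam, n = 0.7, 1.0, 200_000
x = theta + rng.standard_normal(n)           # x ~ N(theta, 1)

# Soft-threshold correction: theta_hat = x + H(x)
H = np.where(np.abs(x) <= lam, -x, -lam * np.sign(x))
dH = np.where(np.abs(x) <= lam, -1.0, 0.0)   # weak derivative of H

true_risk = np.mean((x + H - theta) ** 2)     # Monte Carlo E(theta_hat - theta)^2
sure_mean = np.mean(1.0 + 2.0 * dH + H ** 2)  # Monte Carlo E(rho_hat)
print(abs(true_risk - sure_mean))             # small: rho_hat is unbiased
```

Both averages are taken over the same samples, so their gap shrinks at the usual 1/sqrt(n) Monte Carlo rate.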

For the generalized threshold function (5) and a single coefficient, when m is even, the SURE is

$$SURE(x_k, \lambda) = 1 + (x_k^{2} - 2)\, I(\vert x_k \vert \le \lambda) + \Big(\frac{2(m-1)\lambda^{m}}{x_k^{m}} + \frac{\lambda^{2m}}{x_k^{2m-2}}\Big) I(\vert x_k \vert > \lambda) \tag{12}$$

When m is odd,


$$SURE(x_k, \lambda) = 1 + (x_k^{2} - 2)\, I(\vert x_k \vert \le \lambda) + \frac{\lambda^{2m}}{x_k^{2m-2}} I(\vert x_k \vert > \lambda) + \frac{2(m-1)\lambda^{m}}{x_k^{m}} I(x_k > \lambda) - \frac{2(m-1)\lambda^{m}}{x_k^{m}} I(x_k < -\lambda) \tag{13}$$

Suppose the wavelet coefficients are $x_1, \ldots, x_N$; the threshold value λ is set to minimize the estimate of the risk for the given data:

$$\hat{\lambda} = \arg\min_{\lambda \ge 0} \sum_{k=1}^{N} SURE(x_k, \lambda) \tag{14}$$

For the hard threshold function (m = ∞), H(x) is discontinuous, so the SURE is not applicable.
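Equations (12) and (14) can be sketched directly for the m = 2 (Non-Negative Garrote) case; the sparse test signal below is a synthetic assumption, not the chapter's PCG data:

```python
import numpy as np

def sure_even_m(x, lam, m=2):
    """SURE of the generalized threshold for even m, following eq. (12)."""
    x = np.asarray(x, dtype=float)
    inside = x ** 2 - 2.0                                   # |x_k| <= lambda
    outside = (2.0 * (m - 1) * lam ** m / x ** m
               + lam ** (2 * m) / x ** (2 * m - 2))         # |x_k| > lambda
    return 1.0 + np.where(np.abs(x) <= lam, inside, outside)

rng = np.random.default_rng(2)
theta = np.concatenate([np.zeros(480), rng.uniform(3, 6, 20)])  # sparse "signal"
x = theta + rng.standard_normal(theta.size)                     # unit-variance noise

# Data-driven threshold: minimize the summed SURE over a grid, as in eq. (14).
grid = np.linspace(0.0, 4.0, 81)
risks = [sure_even_m(x, lam).sum() for lam in grid]
lam_hat = grid[int(np.argmin(risks))]
print(lam_hat)
```

At λ = 0 nothing is shrunk and the summed SURE equals N (the noise risk), while the minimizing λ sits strictly inside the grid, which is the behaviour the data-driven rule relies on.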

The noisy PCG signal is processed using the method described above. For the generalized threshold function, the parameter m is selected as 2, which is simple and provides a good compromise between the hard and soft threshold functions. The data-driven SURE threshold value is used. The filtered PCG signal is illustrated in figure 4(a); the phase space diagram of the filtered PCG signal is shown in figure 4(b). From visual inspection of figures 3 and 4, the PCG signal is much cleaner after being denoised; the first heart sound, the systolic period, the second heart sound and the diastolic period can be clearly identified. The results indicate that the proposed method significantly reduces noise and preserves the characteristics of the PCG signal well.

#### **3. Analysis of diastolic murmurs for coronary artery disease based on empirical mode decomposition**

Since a novel signal processing algorithm, the Hilbert-Huang Transform (HHT), was proposed by N. E. Huang in 1998 [6], it has been seen as a data-driven tool for nonlinear and non-stationary signal processing. HHT consists of two parts: the EMD and the Hilbert transform. EMD, the key part of the HHT, can adaptively decompose a signal into a finite and often small number of Intrinsic Mode Functions (IMFs) subject to the following two conditions:

**1.** In the whole dataset, the number of extrema and the number of zero-crossings must either be equal or differ at most by one.

**2.** At any time, the mean value of the envelope of the local maxima and the envelope of the local minima must be zero.

These two conditions guarantee a well-behaved Hilbert transform. The IMFs represent the oscillatory modes embedded in the signal. Applying the Hilbert transform to each IMF, the instantaneous frequency and amplitude of each IMF can be obtained; these constitute the time-frequency-energy distribution of the signal, called the Hilbert spectrum. The Hilbert spectrum provides higher resolution and concentration in the time-frequency plane, and avoids the false high frequencies and energy dispersion existent in the Fourier spectrum.

**Figure 3.** (a) Noisy PCG signal (b) Phase space diagram of the noisy signal

**Figure 4.** (a) PCG signal after denoising (b) Phase space diagram of denoised signal

Figure 5 shows a classical IMF. The IMFs represent the oscillatory modes embedded in the signal. Each IMF is actually a zero-mean monocomponent AM-FM signal of the following form:

$$x(t) = a(t)\cos\phi(t) \tag{15}$$

with time-varying amplitude envelope $a(t)$ and phase $\phi(t)$. The amplitude and phase both have physical and mathematical meaning.

Most signals include more than one oscillatory mode, so they are not IMFs. EMD is a numerical sifting process to disintegrate empirically a signal into a finite number of hidden fundamental intrinsic oscillatory modes, that is, IMFs. The sifting process can be separated into the following steps:

**1.** Finding all the local extrema, including maxima and minima; then connecting all the maxima and all the minima of the signal x(t) using smooth cubic splines to get its upper envelope $x_{up}(t)$ and lower envelope $x_{low}(t)$.

**2.** Subtracting the mean of these two envelopes, $m_1(t) = (x_{up}(t) + x_{low}(t))/2$, from the signal to get their difference: $h_1(t) = x(t) - m_1(t)$.

**3.** Regarding $h_1(t)$ as the new data and repeating steps 1 and 2 until the resulting signal meets the two criteria of an IMF; this first IMF is denoted $c_1(t)$ and contains the highest frequency component of the signal. The residual signal is given by $r_1(t) = x(t) - c_1(t)$.

**4.** Regarding $r_1(t)$ as the new data and repeating steps 1-3 to extract all the IMFs. The sifting procedure is terminated when the M-th residue $r_M(t)$ becomes smaller than a predetermined small number or becomes monotonic.
The original signal x(t) can thus be expressed as follows:

$$\mathbf{x}(t) = \sum\_{j=1}^{M} c\_j(t) + r\_M(t) \tag{16}$$

<sup>0</sup> <sup>50</sup> <sup>100</sup> <sup>150</sup> <sup>200</sup> <sup>250</sup> -0.2

Adaptive Analysis of Diastolic Murmurs for Coronary Artery Disease Based on Empirical Mode Decomposition

http://dx.doi.org/10.5772/55690

101

<sup>0</sup> <sup>50</sup> <sup>100</sup> <sup>150</sup> <sup>200</sup> <sup>250</sup> -0.1

<sup>0</sup> <sup>50</sup> <sup>100</sup> <sup>150</sup> <sup>200</sup> <sup>250</sup> -0.05

<sup>0</sup> <sup>50</sup> <sup>100</sup> <sup>150</sup> <sup>200</sup> <sup>250</sup> -0.1

<sup>0</sup> <sup>50</sup> <sup>100</sup> <sup>150</sup> <sup>200</sup> <sup>250</sup> -0.1

<sup>0</sup> <sup>50</sup> <sup>100</sup> <sup>150</sup> <sup>200</sup> <sup>250</sup> -0.05

<sup>0</sup> <sup>50</sup> <sup>100</sup> <sup>150</sup> <sup>200</sup> <sup>250</sup> -0.01


0 0.1

0 0.05

> 0 0.1

> 0 0.1

0 0.05

0 0.01

**Figure 7.** IMFs of diastolic murmurs from the normal people

**Figure 6.** Diastolic murmurs of a normal object



0

0.05

0.1

0.15

*x*(*t*)=∑ *j*=1 *M cj* (*t*) + *rM* (*t*) is an IMF where j represents the number of corresponding IMFs and *cj* (*t*)

is residue. The EMD decomposes non-stationary signals into narrow-band components with decreasing frequency. The decomposition is complete, almost orthogonal, local and adap‐ tive. All IMFs form a completely and nearly orthogonal basis for the original signal. The ba‐ sis comes directly from the signal, which guarantees the inherent characteristic of signal and avoids the diffusion and leakage of signal energy. The sifting process eliminates riding waves, so each IMF is more symmetrical and is actually a zero mean AM-FM component.

**Figure 5.** A classical IMF

Heart sounds are recorded from the chest of normal objects and CAD patients using a specially designed high sensitivity cardiac microphone. The ECG signals are also recorded as a time reference to aid in locating the diastolic phase. For each cycle, the central portion of diastole is digitized (sample frequency equals 2.0 kHz).

Figure6 shows the diastolic murmurs of a normal object. Figure7 shows the IMFs of the murmur obtained by EMD. The diastolic murmurs can be decomposed into six IMFs. The Hilbert spectrum is shown in figure 8. The vertical bars on the right of the panel give the relative amplitude scale. Figure6 provides more distinct information on the time-frequen‐ cy contents of diastolic murmurs, which reveals clearly the dynamic characteristic of murmurs in the time-frequency plane. The Hilbert spectrum contains no energy with frequency above 350Hz. The spectrum appears in the skeleton form and can provide the frequency variations from one instance to the next. Figure 9 shows the marginal spec‐ trum of the diastolic murmurs. It can be clearly seen that the energy mainly concentrates on the lower frequency domain.

Adaptive Analysis of Diastolic Murmurs for Coronary Artery Disease Based on Empirical Mode Decomposition http://dx.doi.org/10.5772/55690 101

**Figure 6.** Diastolic murmurs of a normal object

1 () () () *M*

*j xt c t r t* =

*x*(*t*)=∑ *j*=1 *M cj*

100 Adaptive Filtering - Theories and Applications


**Figure 5.** A classical IMF

where $c_j(t)$, $j = 1, \dots, M$, are the IMFs and $r_M(t)$ is the residue. The EMD decomposes non-stationary signals into narrow-band components with decreasing frequency. The decomposition is complete, almost orthogonal, local and adaptive. All IMFs form a complete and nearly orthogonal basis for the original signal. The basis comes directly from the signal, which guarantees the inherent characteristics of the signal and avoids diffusion and leakage of signal energy. The sifting process eliminates riding waves, so each IMF is more symmetrical and is actually a zero-mean AM-FM component.
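The decomposition of Eq. (16) can be sketched in a few lines of Python. This is a simplified illustration, not the authors' implementation: the extrema handling, the SD stopping tolerance, and the function names `envelope_mean` and `emd` are our assumptions. It does, however, verify the completeness property: the IMFs plus the residue reconstruct the signal.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def envelope_mean(x):
    """Mean of upper/lower cubic-spline envelopes; the end points are
    treated as both maxima and minima (simple boundary handling)."""
    n = len(x)
    i = np.arange(1, n - 1)
    maxima = i[(x[1:-1] > x[:-2]) & (x[1:-1] >= x[2:])]
    minima = i[(x[1:-1] < x[:-2]) & (x[1:-1] <= x[2:])]
    if len(maxima) < 2 or len(minima) < 2:
        return None  # too few extrema left: treat x as the residue
    up = CubicSpline(np.r_[0, maxima, n - 1], np.r_[x[0], x[maxima], x[-1]])
    lo = CubicSpline(np.r_[0, minima, n - 1], np.r_[x[0], x[minima], x[-1]])
    t = np.arange(n)
    return 0.5 * (up(t) + lo(t))

def emd(x, max_imfs=6, sd_tol=0.25, max_sift=50):
    """Minimal EMD sketch implementing Eq. (16): x = sum of IMFs + residue."""
    r = np.asarray(x, dtype=float).copy()
    imfs = []
    for _ in range(max_imfs):
        if envelope_mean(r) is None:
            break                      # residue has too few extrema
        h = r.copy()
        for _ in range(max_sift):      # sifting iterations
            m = envelope_mean(h)
            if m is None:
                break
            sd = np.sum(m ** 2) / (np.sum(h ** 2) + 1e-12)
            h = h - m
            if sd < sd_tol:
                break
        imfs.append(h)
        r = r - h
    return imfs, r
```

Decomposing a two-tone signal with this sketch yields fast-oscillation IMFs first, and summing all IMFs with the residue recovers the input to floating-point accuracy, which is the completeness claimed above.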

**Figure 7.** IMFs of diastolic murmurs from a normal subject







**Figure 8.** Hilbert spectrum of the diastolic murmurs

**Figure 9.** Marginal spectrum of the diastolic murmurs

Figure 10 shows the diastolic murmurs of a CAD patient, diagnosed by coronary arteriography. The left anterior descending artery is stenosed about 60% and the right coronary artery about 85%. Figure 11 shows the IMFs of the murmur obtained by EMD. The diastolic cardiac cycle can be decomposed into six IMFs. The Hilbert spectrum is illustrated in figure 12, and figure 13 shows the marginal spectrum of the diastolic murmurs. The HHT spectrum has superior temporal and frequency resolution, and the spectra give a precise time-frequency representation of the signal. The energies spread over a much wider frequency range, and much higher spectral energies are concentrated at high frequencies compared with those of normal subjects. More energy is distributed in the frequency band above 200 Hz, and a peak also lies around 350 Hz, which usually does not appear in the diastolic murmurs of normal subjects. This can be explained as follows: for the CAD patient, the narrowed coronary arteries cause the blood flow in the coronary artery to change from laminar to turbulent, from simplicity to complexity. Coronary arterial stenosis gives rise to high-frequency components of diastolic murmurs. The EMD method makes no assumption about the linearity or stationarity of the signal, and the IMFs are usually easy to interpret and relevant to the underlying dynamic processes being studied.

**Figure 10.** Diastolic Murmurs of CAD patient



**Figure 11.** Six IMFs of diastolic murmurs from patient


**Figure 12.** Hilbert spectrum of the diastolic murmurs from patient

**Figure 13.** Marginal spectrum of the diastolic murmurs from patient

#### **4. A new method for processing end effect in empirical mode decomposition**

In the procedure of EMD, cubic spline interpolation creates the top and bottom envelopes in the first step of the above sifting process. It is difficult to interpolate data near the beginning or end of the record, where the cubic splines can exhibit large swings. The common method of dealing with the end effect is to treat the end points as both maximum and minimum locations with their values unchanged, but this gives a distorted view of the local mean near the boundaries. We propose a simpler method to restrict the end effect in spline interpolation [11]. The key is to determine the values and locations of the extrema near the end points. Suppose the length of the data x is N; the steps can be implemented as follows:

**1.** Find all the maxima and minima, and treat the end points as both a maximum and a minimum, i.e., maxima = [1, maxima, N] and minima = [1, minima, N].

**2.** The end points are still treated as both maximum and minimum, but their values are adapted. Take the end maximum values *δ*1 and *δN* as the mean of all maxima except the first and last maximum (the subscript denotes the location). Similarly, take the end minimum values *γ*1 and *γN* as the mean of all minima except the first and last minimum.

**3.** Compare *δ*1 and *γ*1 with x(1), and *δN* and *γN* with x(N): if *δ*1 < x(1) then *δ*1 = x(1); if *δN* < x(N) then *δN* = x(N); if *γ*1 > x(1) then *γ*1 = x(1); if *γN* > x(N) then *γN* = x(N).

**4.** Use cubic spline interpolation to obtain the top and bottom envelopes, and repeat the second step of the above sifting process to extract the IMFs.

The performance of the proposed method is compared with the traditional method, in which the end points are treated as both maximum and minimum with their values unchanged. As an example, we decompose a sinusoid signal by the sifting process. Figure 14 shows the signal.

**Figure 14.** A sinusoid signal
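Steps 1-3 of the boundary treatment can be sketched as follows. This is a minimal illustration under our own reading of the steps; the helper name `end_extrema` and the exact clamping logic are assumptions, and the returned knot lists are meant to be fed to a cubic spline (step 4):

```python
import numpy as np

def end_extrema(x):
    """Predict endpoint extrema values (steps 1-3): end points act as both
    maximum and minimum, with values set to the mean of interior maxima /
    minima and then clamped so the envelopes still enclose x(1) and x(N).
    Assumes x has at least a few interior extrema."""
    n = len(x)
    i = np.arange(1, n - 1)
    max_i = i[(x[1:-1] > x[:-2]) & (x[1:-1] >= x[2:])]
    min_i = i[(x[1:-1] < x[:-2]) & (x[1:-1] <= x[2:])]
    # step 2: mean of interior maxima/minima (excluding first and last)
    delta = x[max_i[1:-1]].mean() if len(max_i) > 2 else x[max_i].mean()
    gamma = x[min_i[1:-1]].mean() if len(min_i) > 2 else x[min_i].mean()
    # step 3: the predicted end values must enclose the signal endpoints
    d1, dN = max(delta, x[0]), max(delta, x[-1])
    g1, gN = min(gamma, x[0]), min(gamma, x[-1])
    upper = (np.r_[0, max_i, n - 1], np.r_[d1, x[max_i], dN])
    lower = (np.r_[0, min_i, n - 1], np.r_[g1, x[min_i], gN])
    return upper, lower
```

By construction the upper-envelope knots sit at or above the signal endpoints and the lower-envelope knots at or below them, which is exactly what prevents the spline swings described above.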




**Figure 15.** Cubic splines interpolation in sifting process using the traditional method

Firstly, we treat the end points as both maximum and minimum with their values unchanged. Figure 15 shows the top and bottom envelopes calculated by cubic spline interpolation in the sifting process; the top and bottom red dash-dot lines represent the envelopes. The sinusoid signal is decomposed into six IMFs and one residue by the sifting process, as depicted in figure 16. From figure 16, it can easily be seen that swings appear near both ends, propagate inwards, and produce superfluous IMFs.

**Figure 16.** IMFs of the sinusoid signal

Secondly, applying the proposed method to restrict the end effect, figure 17 shows the top and bottom envelopes calculated by cubic spline interpolation in the sifting process; the red circles represent the predicted end values. The sinusoid signal is now decomposed into one IMF and a residue, as depicted in figure 18. The IMF is just the sinusoid, and the value of the residue is smaller than 10⁻⁶. The sinusoid signal is in fact an IMF itself, since it has the same number of zero-crossings and extrema and is locally symmetric, so the sifting process should extract only one IMF. The results indicate that the proposed method is effective.

**Figure 17.** Cubic splines interpolation in sifting process using the proposed method

**Figure 18.** IMF and residue of the sinusoid signal using the proposed method

### **5. Instantaneous frequency estimation of diastolic murmurs based on EMD and TEO**


Diastolic murmurs can provide clinicians with valuable diagnostic and prognostic information about the function of heart valves. Quantitative analysis of instantaneous frequency (IF) of the murmurs can aid diagnosis [1, 13].

Instantaneous frequency (IF) is an important signal characteristic that captures transients and fast changes in frequency as time progresses. The IF of diastolic murmurs is used to describe the time-varying spectral content of the characteristic frequency bands of interest in cardiovascular research. The IF of a signal is traditionally obtained by taking the first derivative of the phase of the signal with respect to time, using the Hilbert transform. However, this definition is questionable and can mislead the interpretation of instantaneous frequency, for example by producing negative frequencies. Instantaneous frequency can also be obtained from a time-frequency distribution (TFD) as the first conditional moment in frequency, i.e., the average frequency at each time; however, the cross terms present in TFDs degrade performance rapidly and severely pollute the IF estimate [14].

TEO is a powerful nonlinear operator that has been successfully used in a number of applications, including speech signal processing and image processing [15]. TEO can track the modulation energy and estimate the instantaneous amplitude and frequency of AM-FM signals of the form

$$x(t) = a(t)\cos\left[2\pi\int_0^t \omega(\tau)\,d\tau\right] \tag{17}$$

where $a(t)$ and $\omega(\tau)$ are the instantaneous amplitude and frequency, respectively.

In continuous time domain, TEO is defined by

$$\Psi(\mathbf{x}(t)) = \left[\dot{\mathbf{x}}(t)\right]^2 - \mathbf{x}(t)\ddot{\mathbf{x}}(t) \tag{18}$$

where $x(t)$ is the continuous signal and $\dot{x}(t)$ and $\ddot{x}(t)$ are its first and second order time derivatives, respectively.

For example, for a sinusoid signal $x(t) = A\cos(\omega t + \theta)$, the TEO gives

$$\Psi(x(t)) = A^2 \omega^2 \tag{19}$$

For a monochromatic signal, the output of the TEO is thus proportional to the squared product of amplitude and frequency. Applying the TEO to the first order derivative $\dot{x}(t)$ produces the output:

$$\Psi(\dot{x}(t)) = A^2 \omega^4 \tag{20}$$

The two results above can be combined to estimate the frequency and amplitude of the signal as follows [14]:

$$\hat{\omega}^2(t) = \frac{\Psi(\dot{x}(t))}{\Psi(x(t))} \tag{21}$$

$$|\hat{A}(t)|^2 = \frac{\Psi^2(x(t))}{\Psi(\dot{x}(t))} \tag{22}$$

The estimates of instantaneous frequency and amplitude above are also suitable for AM, FM and AM-FM signals.

The discrete-time counterpart of TEO can be defined as:

**5. Instantaneous frequency estimation of diastolic murmurs based on EMD**

Diastolic murmurs can provide clinicians with valuable diagnostic and prognostic information about the function of heart valves. Quantitative analysis of instantaneous frequency (IF) of the

Instantaneous Frequency (IF) is an important signal characteristic, which characterizes the transients and fast changes in frequency as time progresses. The IF of diastolic murmur is used to describe the time-varying spectral contents of the characteristic frequency bands that are of interest for cardiovascular research. The IF of a signal is traditionally obtained by taking the first derivative of the phase of the signal with respect to time using the Hilbert transform. However, this definition is questionable and will mislead interpretation of instantaneous frequency, such as negative frequency. Instantaneous frequency can also be obtained from a time–frequency distribution (TFD) as the first conditional moment in the frequency, suggesting that the instantaneous frequency is the average frequency at each time, whereas the cross terms existing in TFD will lead to a very rapid degradation of performance and severely pollute the

TEO is a powerful nonlinear operator and has been successfully used in a number of applica‐ tions including speech signal processing, image processing, etc. [15]. TEO can track the modulation energy and estimate the instantaneous amplitude and frequency of AM-FM

<sup>0</sup> ( ) ( )cos[2 ( ) ] *<sup>t</sup>*

p wt t

¨(*t*) respectively.

w

2 2 Y = ( ( )) *xt A*

*ω*(*τ*)*dτ* and *a*(*t*) are the instantaneous amplitude and frequency respectively.

¨(*t*) corresponds to continuous signal, *x*(*t*) and *x*˙(*t*) are the first order

<sup>2</sup> Y= - ( ( )) [ ( )] ( ) ( ) *xt xt xtxt* & && (18)

*<sup>d</sup>* ò (17)

(19)

*xt at* =

**and TEO**

murmurs can aid diagnosis [1, 13].

108 Adaptive Filtering - Theories and Applications

instantaneous frequency estimation [14].

signals with the form

*x*(*t*)=*a*(*t*)cos 2*π∫*

*Ψ*(*x*(*t*))= *x*˙(*t*) <sup>2</sup> − *x*(*t*)*x*

0 *t*

and second order time derivatives of *x*

For example, for a sinusoid signal *x*(*t*), the TEO gives

In continuous time domain, TEO is defined by

$$\Psi(\mathbf{x}(n)) = \mathbf{x}^2(n) - \mathbf{x}(n-1)\mathbf{x}(n+1) \tag{23}$$

A discrete-time real value AM-FM signal that is usually used to model time-varying amplitude and frequency patterns can be expressed as:

$$x(n) = a(n)\cos(\phi(n)) = a(n)\cos\left(\omega_c n + \omega_m \int_0^n q(k)\,dk + \theta\right) \tag{24}$$

where $a(n)$ is the time-varying amplitude modulation, $\omega_c$ is the carrier frequency, $\omega_m$ (with $0 < \omega_m < \omega_c$) is the maximum frequency deviation from the carrier, $q(n)$ with $|q(n)| \le 1$ is the frequency deviation function, and $\theta$ is the initial phase. The derivative of the phase $\phi(n)$, that is, the FM part of the signal, is called the instantaneous frequency:

$$\omega(n) = \frac{d\phi(n)}{dn} = \omega_c + \omega_m q(n) \tag{25}$$

The instantaneous frequency $\omega(n)$ and amplitude $a(n)$ of the AM-FM modulated signal $x(n)$ at any time instant can be demodulated by applying the TEO to $x(n)$ and to its difference, a procedure called the Discrete Energy Separation Algorithm (DESA):

$$\mathbf{y}(n) = \mathbf{x}(n) - \mathbf{x}(n-1) \tag{26}$$


$$\omega(n) = \arccos\left(1 - \frac{\Psi\{y(n)\} + \Psi\{y(n+1)\}}{4\Psi\{x(n)\}}\right) \tag{27}$$

$$|a(n)| = \sqrt{\frac{\Psi\{x(n)\}}{\sin^2(\omega(n))}} \tag{28}$$

or

$$\omega(n) = \frac{1}{2}\arccos\left(1 - \frac{\Psi\{x(n+1) - x(n-1)\}}{2\Psi\{x(n)\}}\right) \tag{29}$$

$$|a(n)| = \frac{2\Psi\{x(n)\}}{\sqrt{\Psi\{x(n+1) - x(n-1)\}}} \tag{30}$$

The estimates above are valid under the assumption that the signal amplitude and frequency do not vary too fast or too much compared with the carrier frequency. The first demodulation algorithm, (26)-(28), is called DESA-1, where '1' indicates that derivatives are approximated with single-sample differences: the signal derivative is approximated by the average of the forward and backward one-point differences. The second demodulation algorithm, (29)-(30), is called DESA-2, where '2' indicates a difference between samples whose time indices differ by 2. Both DESA-1 and DESA-2 yield very small errors and give accurate estimates of the instantaneous frequency. The DESA-2 algorithm is less computationally complex and has excellent, almost instantaneous, time resolution, which also leads to a simpler mathematical analysis. In this chapter, we focus on the instantaneous frequency rather than the instantaneous amplitude, using DESA-2.

Figure 19 shows an AM-FM signal $x(n) = a(n)\cos(\phi(n))$, where

$$a(n) = 1 + 0.6\cos(0.01\pi n)$$

$$\phi(n) = \frac{\pi}{10}n + \cos\frac{\pi}{80}n\tag{31}$$

The theoretic instantaneous frequency is shown in figure 20. The estimated instantaneous frequency by DESA-2 is shown in figure 21, and the estimated amplitude envelope is illustrated in figure 22. Note that there are no apparent discrepancies between the true values and the DESA-2 estimates; the errors are very small, although the estimates are somewhat less smooth. The results indicate that DESA-2 can track the instantaneous frequency and amplitude accurately.
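The DESA-2 estimator of Eqs. (23), (29) and (30) is short enough to sketch directly. The function names and the `arccos` clipping guard are our additions; the test signal is the AM-FM example of Eq. (31):

```python
import numpy as np

def teo(x):
    """Discrete Teager energy operator, Eq. (23); defined for n = 1..N-2."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def desa2(x):
    """DESA-2, Eqs. (29)-(30): instantaneous frequency (rad/sample) and
    amplitude envelope from the TEO of x(n) and of x(n+1) - x(n-1)."""
    px = teo(x)                      # Psi{x(n)},        n = 1..N-2
    y = x[2:] - x[:-2]               # x(n+1) - x(n-1),  n = 1..N-2
    py = teo(y)                      # Psi{y(n)},        n = 2..N-3
    pxc = px[1:-1]                   # align Psi{x} with Psi{y}
    arg = np.clip(1.0 - py / (2.0 * pxc), -1.0, 1.0)   # guard arccos domain
    omega = 0.5 * np.arccos(arg)                        # Eq. (29)
    amp = 2.0 * pxc / np.sqrt(np.maximum(py, 1e-12))    # Eq. (30)
    return omega, amp
```

Running this on $x(n) = (1 + 0.6\cos(0.01\pi n))\cos(\frac{\pi}{10}n + \cos\frac{\pi}{80}n)$ tracks the theoretic IF $\omega(n) = \pi/10 - (\pi/80)\sin(\pi n/80)$ closely, consistent with figures 20-22.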

**Figure 19.** Original AM-FM signal

**Figure 20.** Theoretic instantaneous frequency


**Figure 21.** Estimated instantaneous frequency by DESA-2

**Figure 22.** Estimated amplitude envelope by DESA-2

Another test signal is a mixture of two linear swept-frequency (chirp) signals, shown in figure 23. The frequency of one chirp varies from 1 Hz to 0.1 Hz and the other from 2 Hz to 0.1 Hz. The estimated IF is shown in figure 24. The two chirp signals are well identified and localized, except near the boundaries.

**Figure 23.** A mixture signal of two chirp signals


**Figure 24.** Estimated IF of two IMFs by DESA-2

In this chapter, we present a novel method to estimate the IF of diastolic murmurs using Empirical Mode Decomposition (EMD) and the nonlinear Teager Energy Operator (TEO). EMD, analysed in section 3, decomposes the diastolic murmurs into a series of Intrinsic Mode Functions (IMFs); accurate IF estimates are then acquired by the TEO.

**Figure 25.** Block diagram of Instantaneous Frequency (IF) estimate based on EMD-TEO

The block diagram of the instantaneous frequency estimate based on EMD-TEO is shown in figure 25 (IF refers to the instantaneous frequency in the block diagram).

The instantaneous frequency of the original signal can be obtained in the following steps:

**a.** Decompose the original signal into IMFs $c_j(t)$, $j = 1, \dots, M$, by EMD.

**b.** Calculate the instantaneous frequency $IF_j(t)$ of each IMF $c_j(t)$ by DESA-2.

**c.** Calculate the average instantaneous frequency of the original signal:

$$\omega(t) = \sum_{j=1}^{M} IF_j(t)/M \tag{32}$$
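The averaging of Eq. (32) is a one-liner once the per-IMF IF tracks exist; the function name `average_if` below is our own, and the tracks would in practice come from EMD followed by DESA-2:

```python
import numpy as np

def average_if(if_tracks):
    """Eq. (32): average the per-IMF instantaneous-frequency tracks.
    `if_tracks` is an M x T array-like, one row per retained IMF."""
    return np.mean(np.asarray(if_tracks, dtype=float), axis=0)
```

For example, averaging two tracks `[100, 200]` and `[300, 400]` Hz gives `[200, 300]` Hz, the average IF at each time instant.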


This is the average frequency of the main IMFs at each time instant.

Next, we estimate the IF of diastolic murmurs from a clinical coronary artery disease (CAD) patient using the EMD-Teager method. The left anterior descending artery is stenosed about 40% and the right coronary artery about 55%, as diagnosed by catheterization. Figure 26 shows the diastolic murmurs, and figure 27 shows the IMFs obtained by EMD. The diastolic murmurs can be decomposed into six IMFs and one residue. The amplitudes of IMF5 and IMF6 are small compared with the original signal, so IMF5 and IMF6 are discarded. Figure 28 shows the IF of each effective IMF by DESA-2, and figure 29 shows the average IF of the diastolic murmurs. Features such as the mean and standard deviation can then be extracted from the average IF. For the normal subject, figure 30 shows the IF of each effective IMF and figure 31 the average IF of the diastolic murmurs.


**Figure 26.** Diastolic murmurs of CAD patient


**Figure 27.** Six IMFs and one residue by EMD
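The steps above can be sketched numerically. The following is a minimal illustration (not the chapter's code) of step b and equation (32): it assumes the IMFs are already available from some EMD implementation, estimates each IMF's instantaneous frequency with the Teager energy operator and the DESA-2 formula, and averages the results. All function and variable names here are our own.

```python
import numpy as np

def teager(x):
    # Teager energy operator: psi[x](n) = x(n)^2 - x(n-1)*x(n+1)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def desa2_if(x):
    # DESA-2: Omega(n) = 0.5*arccos(1 - psi[y]/(2*psi[x])),
    # with y(n) = x(n+1) - x(n-1); returns rad/sample.
    y = x[2:] - x[:-2]
    psi_x = teager(x)[1:-1]   # trimmed to align with psi_y below
    psi_y = teager(y)
    ratio = 1.0 - psi_y / (2.0 * psi_x + 1e-12)
    return 0.5 * np.arccos(np.clip(ratio, -1.0, 1.0))

def average_if(imfs):
    # Eq. (32): IF(t) = (1/M) * sum_j IF_j(t)
    return np.mean([desa2_if(c) for c in imfs], axis=0)

# Toy "IMFs": two pure tones at 0.1*pi and 0.25*pi rad/sample
n = np.arange(500)
imfs = [np.cos(0.1 * np.pi * n), np.cos(0.25 * np.pi * n)]
avg = average_if(imfs)   # ~0.175*pi at every instant
```

Multiplying by fs/(2π) converts the result from radians per sample to Hz for a given sampling rate fs.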

**Figure 28.** Estimated IF of four selective IMFs by DESA-2

**Figure 29.** The average instantaneous frequency of diastolic murmurs

**Figure 30.** Estimated instantaneous frequency of the normal subject by the DESA-2 algorithm

**Figure 31.** Estimated IF of the normal subject

For the CAD subject, both the IF of each IMF and the average IF are higher than those of the normal subject; the diastolic murmurs are rich in higher frequencies. The mean of the average IF is 185 Hz and the standard deviation is 40 Hz. For the normal subject, the mean of the average IF is 140 Hz and the standard deviation is 26 Hz. This can be explained as follows: for the CAD subject, the narrowed coronary arteries cause the blood flow in the coronary artery to change from laminar to turbulent, from simplicity to complexity. Coronary arterial stenosis gives rise to high-frequency diastolic murmurs. The instantaneous frequency features effectively reveal whether the arteries are blocked and capture the frequency change of the diastolic murmurs when the coronary arteries are occluded.

#### **References**

[1] Akay, M., et al. Harmonic decomposition of diastolic heart sounds associated with coronary artery disease. Signal Processing (1995), 41(1), 79-90.

[2] Akay, M., et al. Comparative study of advanced signal processing techniques for detection of Coronary Artery Disease. Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (1991), 2139-2140.

[3] Djebbari, A., & Bereksi Reguig, F. Short-time Fourier transform analysis of the phonocardiogram signal. The 7th IEEE International Conference on Electronics, Circuits and Systems (2002), 844-847.

[4] Debbal, S. M., & Bereksi Reguig, F. Time-frequency analysis of the first and the second heartbeat sounds. Applied Mathematics and Computation (2007), 184(2), 1041-1052.

[5] Khadr, L., Matalgah, M., et al. The wavelet transform and its applications to phonocardiogram signal analysis. Medical Informatics (1991), 16(3), 221-227.

[6] Huang, N. E., et al. The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. Proceedings of the Royal Society of London A (1998).

[7] Cheng, J., et al. Research on the intrinsic mode function (IMF) criterion in EMD method. Mechanical Systems and Signal Processing (2006), 20(4), 817-824.

[8] Gao, H. Y., & Bruce, A. G. Understanding waveshrink: variance and bias estimation. Biometrika (1996), 83(4), 727-745.

[9] Gao, H. Y. Wavelet shrinkage denoising using the non-negative garrote. Journal of Computational and Graphical Statistics (1998), 7(4), 469-488.

[10] Stein, C. Estimation of the mean of a multivariate normal distribution. Annals of Statistics (1981), 9(6), 1135-1151.

[11] Zhao, Z., & Wang, Y. A new method for processing end effect in Empirical Mode Decomposition. International Conference on Communications, Circuits and Systems (2007), 841-845.

[12] Gauthier, D., et al. Spectral Analysis of Heart Sounds Associated With Coronary Occlusions. 6th International Special Topic Conference on Information Technology Applications in Biomedicine (2007), 49-52.

[13] Oliveira, P. M., & Barroso, V. Definitions of Instantaneous Frequency under physical constraints. Journal of the Franklin Institute (2000).

[14] Maragos, P., et al. On separating amplitude from frequency modulations using energy operators. ICASSP (1992).

[15] Zhao, Z. D., Zhao, Z. J., et al. Time-frequency analysis of heart sound based on HHT. International Conference on Communications, Circuits and Systems (2005).

[16] Zhao, Z. D., & Pan, M. Instantaneous Frequency Estimation of Diastolic Murmurs Based on EMD and TEO. 1st International Conference on Bioinformatics and Biomedical Engineering (2007), 829-832.

[17] Zhao, Z. D. Wavelet shrinkage denoising by generalized threshold function. International Conference on Machine Learning and Cybernetics (2005), 5501-5506.

[18] Yoshida, H., Shino, H., & Yana, K. Instantaneous frequency analysis of systolic murmur for phonocardiogram. 19th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (1997).

#### **6. Conclusion**

Diastolic murmurs contain information about coronary artery occlusions, which gives the basis of CAD diagnosis. The Hilbert-Huang Transform (HHT) is a powerful adaptive method for analysing nonlinear and non-stationary time series; its essential part is the Empirical Mode Decomposition (EMD). In this paper, we first studied wavelet shrinkage denoising using the generalized threshold function and the data-driven SURE threshold value, which successfully removed noise from the PCG signal. Secondly, we obtained the Hilbert spectrum and the marginal spectrum of diastolic murmurs for normal subjects and CAD patients after EMD; they provide higher resolution and energy concentration in the time-frequency plane, effectively reveal whether the arteries are blocked, and provide a reliable indicator of CAD. To restrict the end effect of EMD, a simple and effective method is presented. Finally, the IF estimation algorithm based on EMD-TEO is studied. The results indicate that the IF of diastolic murmurs effectively reveals whether the arteries are blocked and provides a reliable indicator of coronary artery disease.
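As a hedged illustration of the wavelet shrinkage idea mentioned above: the sketch below uses a single-level Haar transform and the universal (VisuShrink) threshold with soft thresholding, which is a simpler stand-in for the chapter's generalized threshold function and SURE threshold value (not reproduced here). The signals and names are our own toy example.

```python
import numpy as np

def haar_dwt(x):
    # Single-level orthonormal Haar transform (length of x must be even)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    # Exact inverse of haar_dwt
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft_threshold(d, t):
    # Soft thresholding: shrink towards zero by t
    return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)

rng = np.random.default_rng(1)
n = np.arange(1024)
clean = np.sin(2 * np.pi * n / 128)
noisy = clean + 0.3 * rng.standard_normal(n.size)

a, d = haar_dwt(noisy)
sigma = np.median(np.abs(d)) / 0.6745        # noise level from detail coefficients
t = sigma * np.sqrt(2 * np.log(noisy.size))  # universal threshold
denoised = haar_idwt(a, soft_threshold(d, t))
```

A multi-level transform and a data-driven threshold such as SURE would shrink more of the noise than this one-level sketch.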

#### **Acknowledgements**

This work was partly supported by the National Natural Science Foundation of China (grant no. 61102133), the Key Project of the Science Technology Department of Zhejiang Province (grant no. 2010C11065) and the project of the Hangzhou Science and Technology Committee (grant no. 20110833B31).

#### **Author details**

Zhidong Zhao1\*, Yi Luo1, Fangqin Ren1, Li Zhang1 and Changchun Shi2

\*Address all correspondence to: mailzzd@yahoo.com.cn

1 Hangzhou Dianzi University, Hangzhou, China

2 Hangzhou Normal University, Hangzhou, China



**Chapter 5**

### **Performance of Adaptive Hybrid System in Two Scenarios: Echo Phone and Acoustic Noise Reduction**

Edgar Omar Lopez-Caudana and Hector Manuel Perez-Meana

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/51517

© 2013 Lopez-Caudana and Perez-Meana; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### **1. Introduction**

Adaptive noise cancellation has proved to be a very efficient method in various practical applications such as voice clearance, voice recognition systems, hands-free telephony, and medical applications such as hearing aids and fetal electrocardiography [1]. Figure 1 [1] depicts the basic principle of noise cancellation (understanding that noise is an unwanted signal, d(n)), described by the main signals that feed the system.

**Figure 1.** Adaptive noise cancelling approach

Acoustic noise has been studied in recent years due to growing interest in cancelling acoustic noise through active control, since it is increasingly common to find noise sources in many industrial processes. Basic outlines of noise cancellation were based on the application

of passive attenuators, which were used for many years without much success [2]; however, with the development of digital signal processing, active noise cancellation systems have become increasingly feasible. Active noise cancellation systems cancel unwanted acoustic noise based on the superposition principle: an acoustic noise of equal amplitude but opposite phase is generated in order to cancel out the unwanted noise.
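The principle of Figure 1 can be sketched as follows (our own toy example with an assumed noise path, not the authors' implementation): an LMS filter fed by the noise reference learns to reproduce the unwanted component d(n) picked up by the primary sensor, so the residual e(n) approximates the wanted signal.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20000
noise = rng.standard_normal(N)                     # reference noise source
d = np.convolve(noise, [0.7, 0.4, 0.2])[:N]        # unwanted signal d(n): noise through a toy path
signal = np.sin(2 * np.pi * 0.01 * np.arange(N))   # wanted signal
primary = signal + d                               # primary sensor picks up both

L, mu = 8, 0.005
w = np.zeros(L)
buf = np.zeros(L)
e = np.zeros(N)
for n in range(N):
    buf = np.roll(buf, 1); buf[0] = noise[n]
    y = w @ buf              # adaptive estimate of d(n)
    e[n] = primary[n] - y    # residual: approximately the wanted signal
    w += mu * e[n] * buf     # LMS coefficient update
```

After convergence the filter taps approximate the assumed path [0.7, 0.4, 0.2], and e(n) tracks the sinusoid.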

#### **2. Adaptive systems as a solution to problems of signal cancellation**

An adaptive filter responds to changes in its parameters, for example its resonance frequency, an input signal or a transfer function that varies with time. This behavior is possible because the adaptive filter coefficients vary over time and are updated automatically by an adaptive algorithm. Therefore, these filters can be used in applications where the input signal is unknown or not necessarily stationary. An adaptive filter is composed of two parts: a digital filter and an adaptive algorithm.

One of the most important applications of this kind of system is active noise control (ANC). ANC systems must respond to changes in the frequency of the primary noise they want to cancel out; in other words, the primary non-stationary noise varies, hence we must use some kind of adaptive system that carries out many operations at high speed to obtain acceptable cancellation. The ability of an adaptive filter to operate and respond satisfactorily in an unknown environment, and to track variations in the reference signal, makes the adaptive filter powerful for signal processing and control applications. There are several types of adaptive filters, but generally all share the characteristic of working with an input signal (input vector) and a desired response (output vector). These two signals are used to compute an estimate of the error (error signal), which allows control of the coefficients of the adjustable filter.

In other words, ANC is an approach to noise reduction in which a secondary noise source that destructively interferes with the unwanted noise is introduced. In general, active noise control systems rely on multiple sensors to measure the unwanted noise field and the effect of the cancellation. The noise field is modeled as a stochastic process, and an adaptive algorithm is used to adaptively estimate the parameters of the process. Thus, active noise control involves an electroacoustic or electromechanical system that cancels the primary (unwanted) noise based on the principle of superposition; specifically, an anti-noise of equal amplitude and opposite phase is generated and combined with the primary noise, thus resulting in the cancellation of both noises. ANC is developing rapidly because it permits improvements in noise control, often with potential benefits in size, weight, volume, and cost. Thus, active noise control has been the object of intense research and the central subject of many scientific articles in the last 10 years.

On the other hand, unwanted acoustic noise is a by-product of many industrial processes and systems. This problem has become more and more evident as the applications of electronic communication systems increase, since its effects represent an important source of annoyance for the end user and can considerably reduce the efficiency, quality and reliability of this type of system. These ANC systems use an active form of noise control which includes a second sound source that generates a signal with the same characteristics as the echo but with a different phase. This allows the signal to be cancelled because sound waves propagate linearly, which is known as the superposition effect. Also, since the characteristics of the signal to cancel, in this case the echo, change constantly, the system requires a great capacity of adaptation. These adaptive systems represent a feasible alternative for echo cancellation in telephone lines due to their processing capacity and lower cost.

#### **2.1. Adaptive Filtering: Active Noise Control**

This work discusses a scheme of active noise cancellation using adaptive algorithms for the digital filters required for the correct operation of the proposed system. The generation of the "anti-noise" signal to cancel the primary noise source faces a problem beyond changes of environment: since the signal is generated by electrical means and must be propagated acoustically to have the desired effect, it is delayed in generation and propagation, so this modification must be taken into account when calculating the required signal. This work considers the estimation of this modification done "offline" [2].

Hybrid ANC systems correspond to a combination of control structures from the feedback and feedforward systems, where the cancelling signal is generated based on the outputs of both the reference sensor and the error sensor. While the feedforward system attenuates the primary noise, which is correlated with the reference signal, the feedback system cancels the predictable components of the primary noise signal that are not observed by the reference sensor.

As an example of the efficiency of adaptive hybrid systems, this work evaluates a Hybrid Active Noise Control (HANC) system under an acoustic feedback situation. The objective of the proposed scheme is to compare the performance of HANC against common references: feedback, feedforward and neutralization systems. The inner nature of HANC gives it two main characteristics: online modeling of the secondary path and good performance under acoustic feedback conditions. In the evaluated system, two least mean square (LMS) adaptive filters are used in the noise control process: one for the feedforward stage and the other for the feedback stage; both of them use the same error signal as used in the adaptation of the modeling filter. The combination of the feedback and feedforward stages then results in solid robustness of the system in an acoustic feedback situation.

This chapter discusses a vital application in telecommunications processes, the echo in the telephone line, and at the same time a new proposal: the hybrid structure proposed as a solution to this problem. Finally, computer simulations are presented to show the success of the proposed system. So, this chapter presents an adaptive hybrid system to resolve the problems described: noise cancellation using adaptive filtering and a proposal for an echo cancellation system. Furthermore, we present a hybrid structure which consists of a feedforward structure, used to estimate the noise path, and a feedback structure, used to cancel the noise, i.e., the unwanted signal: echo in telephony systems or noise signals like conversations, snoring or engines. Hybrid active noise cancellation systems are a good solution to these two important problems, since they have the properties of both the feedforward and feedback systems.
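The hybrid idea can be sketched as follows. This is our own simplified toy, not the HANC system evaluated in the chapter: it ignores the secondary path (i.e., assumes S(z) = 1), so the feedback stage can regenerate its reference as r(n) = e(n) + y(n). The feedforward LMS filter cancels the component correlated with the reference x(n), the feedback LMS predictor cancels a periodic component not seen by the reference sensor, and both stages share the same error signal.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 30000
x = rng.standard_normal(N)                            # reference sensor signal
periodic = 0.5 * np.sin(0.05 * np.pi * np.arange(N))  # component not seen by the reference
d = np.convolve(x, [1.0, 0.5, 0.25])[:N] + periodic   # primary noise at the error sensor

Lf, Lb, mu = 8, 40, 0.002
wf = np.zeros(Lf); xf = np.zeros(Lf)   # feedforward filter and its state
wb = np.zeros(Lb); rb = np.zeros(Lb)   # feedback predictor and its state (past r only)
e = np.zeros(N)
for n in range(N):
    xf = np.roll(xf, 1); xf[0] = x[n]
    y = wf @ xf + wb @ rb              # combined anti-noise from both stages
    e[n] = d[n] - y                    # shared error signal
    r = e[n] + y                       # regenerated primary noise (= d[n] when S(z) = 1)
    wf += mu * e[n] * xf               # feedforward LMS update
    wb += mu * e[n] * rb               # feedback LMS update
    rb = np.roll(rb, 1); rb[0] = r     # predictor state holds past values of r
```

With a real secondary path, both updates would use a filtered reference as in the FXLMS algorithm discussed below, which is why the chapter models that path online.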

Figure 4 shows, in a simplified way, an ANC Feedforward System, in which the digital filter W(z) is used to estimate the unknown plant P(z). It is assumed that both the plant and the filter have the same input signal x(n). Moreover, a Filtered LMS (Filtered-X Least Mean Square, FXLMS) algorithm is introduced, which is a varying form of the LMS algorithm [2]. FXLMS algorithm solves the secondary path problem, described as the set of transforma‐ tions that the filter signal and the adaptive error signal go through, on their way from an electric to an acoustic domain. During this electro-acoustic process, the signal may be de‐ layed or altered in such a way that it is necessary to minimize such effects. The FXLMS algo‐ rithm technique consists of placing a filter, with the same properties as the secondary path, in the reference signal going towards the adaptive least mean square filter (LMS), as shown

From Figure 3, filter Ŝ(z) is the model of the secondary path, defined by filter S(z). Taking

*<sup>w</sup>*¯(*<sup>n</sup>* + 1)=*w*¯(*n*) <sup>+</sup> *μx*^¯(*n*)*e*(*n*) (1)

which has been successfully applied in the solution of several practical problems [3].

**3. ANC Systems: types and problematic**

**Figure 3.** ANC Feedforward system with FXLMS algorithm

this into consideration, the update of filter W(z) is given as follows:

**3.1. Types of ANC Systems**

*3.1.1. A priori (Feedforward)*

in figure 3.

Where

#### **2.2. Cancelling Telephone Echo**

**2.2. Cancelling Telephone Echo**

Telephone echo is a phenomenon produced by the impedance mismatch of the hybrid circuit used to couple the two-wire line with the four-wire sections of long-distance communication systems, and it considerably degrades the quality of telecommunication systems. Several systems have been proposed in the literature over the last several years to solve this problem, such as adaptive echo cancelers. Figure 2 depicts the basic structure of the described system.

**Figure 2.** Echo cancelling in long distance telephone systems

An echo canceler generates a replica of the echo signal and subtracts it from the signal to be transmitted, generating the so-called pseudo echo, which is then used to update the echo canceler coefficients so that the mean square value of the residual echo becomes a minimum. However, the real-time estimation of the hybrid impulse response is a difficult task for several reasons:

**1.** The echo path impulse response is non-stationary, so the convergence of the adaptation algorithm must be fast enough to track these changes.

**2.** The power spectral density of speech signals is not flat, which results in a slower convergence rate when gradient-search-based adaptive algorithms are used.

**3.** In most cases the echo canceler requires one hundred or more taps for an accurate estimation of a hybrid impulse response, and several thousand taps in the acoustic echo path case, which makes the use of efficient adaptation algorithms difficult.

**4.** The simultaneous presence of both near-end and far-end speakers often occurs, which requires robust mechanisms or adaptation algorithms to handle it. Thus the development of low-complexity, fast-converging echo canceler structures has received a lot of attention, resulting in several efficient echo canceler structures and adaptation algorithms.

The most suitable tool for solving the aforementioned problems is adaptive filtering, which has been successfully applied in the solution of several practical problems [3].

#### **3. ANC Systems: types and problems**

#### **3.1. Types of ANC Systems**

[…] requires a great capacity of adaptation. These adaptive systems represent a feasible alternative for echo cancellation in telephone lines due to their processing capacity and lower cost.

#### *3.1.1. A priori (Feedforward)*

Figure 3 shows, in a simplified way, a Feedforward ANC system, in which the digital filter W(z) is used to estimate the unknown plant P(z). It is assumed that both the plant and the filter have the same input signal x(n). Moreover, the Filtered-X Least Mean Square (FXLMS) algorithm is introduced, which is a variant of the LMS algorithm [2]. The FXLMS algorithm addresses the secondary path problem, the secondary path being the set of transformations that the filter output and the error signal undergo on their way from the electric to the acoustic domain. During this electro-acoustic process the signal may be delayed or altered, so it is necessary to minimize such effects. The FXLMS technique consists of placing a filter with the same properties as the secondary path in the reference-signal branch feeding the least mean square (LMS) adaptation block, as shown in Figure 3.

**Figure 3.** ANC Feedforward system with FXLMS algorithm

In Figure 3, the filter Ŝ(z) is the model of the secondary path S(z). Taking this into consideration, the update of filter W(z) is given as follows:

$$
\bar{w}(n+1) = \bar{w}(n) + \mu \hat{\bar{x}}(n)e(n) \tag{1}
$$

where

$$\hat{\bar{x}}(n) = \hat{s}(n) * \bar{x}(n) \tag{2}$$
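Equations (1) and (2) can be exercised in a short simulation. The sketch below is illustrative only: the primary path P(z), the secondary path S(z), and the perfect secondary-path model ŝ = s are hypothetical choices, not values from the chapter. The reference is first filtered through ŝ as in Eq. (2), and that filtered reference drives the weight update of Eq. (1):

```python
import random

def fxlms(x, p, s, s_hat, taps=16, mu=0.01):
    """Feedforward ANC with FXLMS: adapt W(z) so that its output, after
    passing through the secondary path S(z), cancels the primary noise
    P(z)*x. The reference is pre-filtered by s_hat (Eq. 2) before the
    LMS update (Eq. 1)."""
    def fir(h, buf):
        return sum(hi * bi for hi, bi in zip(h, buf))

    w = [0.0] * taps
    xb = [0.0] * max(taps, len(p), len(s_hat))  # reference history
    yb = [0.0] * len(s)                         # filter-output history
    xf = [0.0] * taps                           # filtered-reference history
    errs = []
    for xn in x:
        xb.insert(0, xn); xb.pop()
        d = fir(p, xb)                          # primary noise at the error mic
        yb.insert(0, fir(w, xb)); yb.pop()
        e = d - fir(s, yb)                      # residual after secondary path
        xf.insert(0, fir(s_hat, xb)); xf.pop()  # filtered reference, Eq. (2)
        w = [wi + mu * e * xfi for wi, xfi in zip(w, xf)]  # update, Eq. (1)
        errs.append(e)
    return errs

random.seed(1)
x = [random.gauss(0, 1) for _ in range(20000)]
p = [0.9, 0.4, -0.2]   # hypothetical primary path P(z)
s = [1.0, 0.5]         # hypothetical secondary path S(z)
errs = fxlms(x, p, s, s_hat=s)
early = sum(e * e for e in errs[:1000]) / 1000
late = sum(e * e for e in errs[-1000:]) / 1000
print(late < 0.01 * early)   # residual power collapses once S(z)W(z) approximates P(z)
```

Without the pre-filtering of Eq. (2), the gradient estimate would be misaligned by the response of S(z) and the same loop could diverge; that misalignment is exactly what FXLMS compensates.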

Performance of Adaptive Hybrid System in Two Scenarios: Echo Phone and Acoustic Noise Reduction

http://dx.doi.org/10.5772/51517


#### *3.1.2. A posteriori (Feedback)*

There are some situations in which it is not possible to pick up a reference signal from the primary noise source in a Feedforward ANC system, due to difficult access to the source or other reasons that make it hard to capture a specific signal through the reference microphone. A solution to this problem is to introduce a system that predicts the behavior of the input signal; this system is known as *a posteriori* ANC (Feedback ANC), and it uses only an error sensor and a secondary sound source to achieve noise control.

Figure 4 describes a Feedback ANC system with the FXLMS algorithm, in which d(n) is the noise signal and e(n) is the error signal, defined as the difference between d(n) and the signal y'(n), which is the adaptive filter's output once the secondary path has been crossed. Finally, the adaptive filter's input signal is generated by the sum of the error signal and the result of the convolution between the secondary path model Ŝ(z) and the output of the adaptive filter, y(n).

**Figure 4.** Feedback ANC system with FXLMS algorithm

#### *3.1.3. What is a Hybrid system?*

A hybrid ANC system is made up of an identification stage (*feedforward*) and a prediction stage (*feedback*). The combination of both stages needs two reference sensors: one close to the primary noise source and the other picking up the residual error signal. Figure 5 shows the detailed block diagram of a hybrid ANC system, in which it is possible to observe the basic systems (*Feedforward*, *Feedback*) involved in the design. The attenuation signal, given by y(n), results from the addition of both adaptive filter outputs, W(z) and M(z). Filter M(z) represents the *Feedback* process of the adaptive filter, while filter W(z) represents the *Feedforward* process. The secondary path of the basic ANC system is also taken into consideration in the hybrid system, and is given by the transfer function S(z).

Among the advantages of hybrid ANC systems we can mention:

**1.** Lower order filters may be used to achieve the same performance;

**2.** The other two systems present much more significant plant noise than the hybrid system;

**3.** The combination of both systems allows much more design flexibility; and,

**4.** Cancellation of both narrowband and broadband noise.

**Figure 5.** Hybrid ANC system with FXLMS Algorithm

The block diagram of the hybrid ANC system in Figure 5 also includes the FXLMS algorithm, to make up for the possible delays or distortions induced by the secondary path [4].

#### **3.2. Main problems in ANC Systems**

#### *3.2.1. Secondary Path Modeling*

As mentioned previously, the process that transforms the adaptive filter output y(n) into the signal e(n) is defined as the secondary path. It comprises the digital-to-analog converter, the reconstruction filter, the sound source, the amplifier, the acoustic path from the sound source to the error sensor, the error microphone, and the analog-to-digital converter. There are two techniques to estimate the secondary path, each more suitable in certain situations: offline secondary path modeling and online secondary path modeling. The first method is performed with a *Feedforward* system, where the plant is now S(z) and the coefficients of the adaptive filter become the secondary path estimate, as shown in Figure 6 [4].
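The offline method just described is plain LMS system identification with the plant set to S(z): excite the loudspeaker with training noise, and the converged filter coefficients are the estimate Ŝ(z). A minimal sketch, assuming a short hypothetical secondary path and white training noise (not the chapter's actual setup):

```python
import random

def identify_secondary_path_offline(s_true, n_samples=5000, taps=8, mu=0.05):
    """Offline secondary path modeling: drive the (simulated) secondary
    path with white noise and adapt an FIR filter by LMS; its final
    coefficients are the estimate S_hat(z)."""
    random.seed(2)
    s_hat = [0.0] * taps
    buf = [0.0] * taps
    for _ in range(n_samples):
        v = random.gauss(0, 1)                           # white training noise
        buf.insert(0, v); buf.pop()
        d = sum(si * bi for si, bi in zip(s_true, buf))  # error-mic measurement
        y = sum(si * bi for si, bi in zip(s_hat, buf))   # model output
        e = d - y
        s_hat = [si + mu * e * bi for si, bi in zip(s_hat, buf)]
    return s_hat

s_true = [1.0, 0.6, -0.3, 0.1]   # hypothetical secondary path
s_hat = identify_secondary_path_offline(s_true)
print(all(abs(a - b) < 0.05 for a, b in zip(s_hat, s_true)))
```

In the noiseless simulation the estimate matches the true path closely; in a real room, measurement noise and the excitation level bound the achievable accuracy.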

**Figure 6.** Offline Secondary Path Modeling

#### *3.2.2. Acoustic Feedback*

This problem is typical of feedforward systems. Figure 7 shows the contribution of the attenuation signal y(n), which degrades the system because of its presence in the reference microphone signal.

**Figure 7.** *Feedforward* ANC process with acoustic feedback

Two possible solutions for the acoustic feedback problem are acoustic feedback neutralization and the use of a hybrid system, which has a better performance in the frequency range and attenuation level of interest [4]. To evaluate this approach, we used a hybrid system as shown in Figure 8, where F(z) is the transfer function of the feedback process.

The system proposed in [5] will be analyzed; this system was completely evaluated, with a set of signals and experimental conditions, in [6].

#### *3.2.3. Online Acoustic Feedback Path Modeling*

The most common way to eliminate acoustic feedback is to perform online path modeling, as indicated in [3] and, more recently, in relevant papers [7, 8]. However, one of the main characteristics of the hybrid system presented by the authors in [9] is that it does not take the secondary path modeling into consideration. Instead, the proposed hybrid system takes advantage of the inherent robustness of hybrid systems against acoustic feedback, Figure 8.

**Figure 8.** Hybrid ANC system with acoustic feedback

The system in Figure 9, proposed by [10], was used to compare the robustness of the HANC system against the neutralization system.

**Figure 9.** Kuo's Neutralization System

The details of the system in Figure 9 can be consulted in [10]. An important fact about this system is that it uses additive noise for modeling, as mentioned in [7] regarding predictable noise sources.

#### **4. Echo Cancellation**

#### **4.1. Definition and general review**

Echo is a problem that significantly degrades the quality of telecommunication systems. In telephone lines it occurs due to the impedance mismatch of the hybrid coils used to couple the subscriber channel with the long-distance channels. There is also the so-called acoustic echo, which occurs in teleconferencing and hands-free telephone systems. This type of echo is due to acoustic coupling between the loudspeakers and microphones used in these communication systems.

Several systems that try to solve this problem have appeared in the literature in recent years, among them directional microphone arrangements [11], echo suppressors, and adaptive echo cancellers [11, 12]. Among these, adaptive echo cancellation seems to be the best way to reduce the echo problem [13, 14]. An echo canceller generates an echo replica and subtracts it from the signal to be transmitted, generating the so-called residual echo. The residual echo is then used to adapt the coefficients of the system, using in most cases a gradient-based algorithm, so that the mean square value of the residual echo is progressively minimized [11, 12, 13, 14]. However, the real-time estimate of the impulse response of the hybrid, or echo channel, is a complex problem for several reasons:

1. The duration of the impulse response of a typical echo channel in teleconferencing systems is in the order of several hundred milliseconds, which means that a transversal filter of several thousand coefficients would be needed to reduce the echo to acceptable levels. The impulse response of a typical acoustic echo channel is shown in Figure 10.

2. The impulse response of the echo channel is non-stationary, because it changes with the movement of the interlocutors or the number of active subscribers at a given time. Thus the adaptive algorithm should be fast enough to track those changes.

3. The power density spectrum of the voice is not flat, which in many cases reduces the speed of convergence of the adaptive algorithm. Correctly estimating the echo channel with structures of the least possible complexity, while keeping the convergence speed of the adaptation algorithm relatively high, are non-trivial problems which have received considerable attention in recent years; among the proposed echo cancellation systems we can mention transversal echo cancellers, echo cancellers in the frequency domain, infinite impulse response echo cancellers, subband echo cancellers, etc. [11, 12, 13, 14].

Besides reducing the complexity of the canceller, to allow correct estimation of the echo channel, and developing adaptive algorithms with rapid convergence, another major problem is handling the simultaneous presence of the echo and the near-end speaker's voice. The situation we want to avoid is interpreting the near-end speaker's voice as echo and making large changes in the estimated echo channel in an unsuccessful attempt to cancel it. An unchecked algorithm could operate incorrectly when the near-end partner is present, so it is necessary to incorporate certain mechanisms within the system to avoid this effect [11, 12, 13, 14].

**Figure 10.** A typical impulse response of acoustic echo channel

There are few references on the convenience of using adaptive hybrid schemes for solving the echo cancellation problem. Given the results obtained in applications to acoustic noise cancellation [15], a hybrid scheme is proposed here for cancellation of the electrical echo, since it is on telephone lines where the described problem occurs. How this is done, and the results achieved, are detailed later.

#### **4.2. Telephone Systems**

A long-distance telephone system basically consists of a 2-wire portion, known as the subscriber circuit, which connects the subscriber to the local exchange, and the long-distance circuit itself, which consists of a transmission channel and a receiving channel, each of two wires. A hybrid transformer is used to couple the long-distance circuit to the subscriber circuit and, ideally, to isolate the transmission and reception channels of the long-distance circuit. However, due to the impedance mismatch they are not completely isolated, so that a portion of the received signal comes back, delayed, in the form of echo. A similar problem arises in teleconferencing systems with the so-called acoustic echo, which occurs due to coupling between the microphone and speaker of the teleconference system: a delayed and distorted replica of the signal produced by the loudspeaker is fed back into the microphone.

In both cases there is a deterioration of the communication system, which motivated the appearance of echo cancellers. These cancellers have proved to be the best way to solve this problem [11, 12]. The basic principle of echo cancellation, illustrated in Figure 11, is to generate an echo replica and subtract it from the signal to be transmitted, resulting in the so-called residual echo, which consists of the part of the echo signal that could not be cancelled plus the near-end speaker's voice, if present [11, 12]. The residual echo is then used to adapt the parameters of the canceller in such a way that the residual echo power is progressively minimized.
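The principle just described, generate an echo replica, subtract it from the signal to be transmitted, and adapt on the residual, can be sketched in a few lines. This is a minimal illustration under assumed conditions (a toy 4-tap hybrid path and white-noise excitation; at an 8 kHz telephone sampling rate, a real 250 ms echo channel would instead need 2000 taps), not the chapter's canceller:

```python
import random

def nlms_echo_canceler(x, d, taps=32, mu=0.5, eps=1e-8):
    """Adaptive echo canceller: form the echo replica y(n), subtract it
    from d(n) to get the residual echo e(n), and use e(n) to update the
    weights so the residual echo power is progressively minimized
    (normalized-LMS step for robustness to the input level)."""
    w = [0.0] * taps
    buf = [0.0] * taps                # recent far-end samples, buf[0] = x(n)
    residual = []
    for n in range(len(x)):
        buf = [x[n]] + buf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, buf))    # echo replica
        e = d[n] - y                                  # residual echo
        norm = eps + sum(xi * xi for xi in buf)
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, buf)]
        residual.append(e)
    return w, residual

random.seed(0)
h = [0.6, -0.3, 0.15, -0.05]          # hypothetical hybrid echo path
x = [random.gauss(0, 1) for _ in range(4000)]
d = [sum(h[k] * x[n - k] for k in range(len(h)) if n >= k)
     for n in range(len(x))]

w, residual = nlms_echo_canceler(x, d)
early = sum(e * e for e in residual[:500]) / 500
late = sum(e * e for e in residual[-500:]) / 500
print(late < 1e-3 * early)            # residual echo power drops as w approaches h
```

With speech instead of white noise the convergence slows down, which is exactly the non-flat-spectrum problem noted in Section 4.1.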

**Figure 11.** Echo cancelling in long distance telephone systems

The echo canceller consists of two main parts: an adaptive filter, which generates the echo replica that is subtracted from the signal being transmitted, and a system commonly known as the double-talk detector, which prevents distortion due to the presence of the near-end speaker's voice or the absence of the far-end partner. The first component is the structure of the adaptive filter together with its adaptation algorithm.

Research on the first component has produced various structures, such as transversal filters, subband structures, structures in the frequency domain, etc., and various adaptive algorithms, mostly based on gradient descent search. The second component, despite its importance, has received much less attention than the first. Thus, research aimed at developing highly reliable mechanisms to avoid distortion due to the simultaneous presence of both parties, the "double-talk detector," is of great importance, especially when algorithms based on gradient descent search are used.

#### **5. Delimitation of Proposed ANC System and its Application**

#### **5.1. Evaluated ANC Structure**

Figure 12 shows the block diagram of the evaluated hybrid ANC structure with online secondary path modeling. This hybrid ANC structure consists of a feedforward stage, W(z), which is used to estimate the noise path, P(z), and a predictive structure, M(z), which is used to cancel the distortion due to the acoustic feedback path, F(z). Since the samples of the feedback distortion are strongly correlated among themselves, they can be predicted [15].

As shown in Figure 12, the signal *a(n)* is used simultaneously:

**1.** As the error signal to update the adaptive filter *W(z)*, which corresponds to the feedforward stage used to identify the noise path;

**2.** To update the linear predictive filter *M(z)*, which intends to cancel the distortion produced by the feedback propagation from the canceling loudspeaker to the input microphone through the system *F(z)*; and,

**3.** To estimate Ŝ(z), which represents the online secondary path modeling adaptive filter.

**Figure 12.** Evaluated hybrid ANC structure

The hybrid ANC combines the advantages of the feedback and feedforward systems. The model presented in [16] was modified to adapt the system to a specific objective: reducing the residual echo. This system uses two input signals, *x(n)* and *din(n)*, one for each talker. The plant that models the echo represents the effect of the impedance mismatch present in the telephone circuit. The echo signal is *d(n)*, and the residual echo plus the far-end signal is represented by *e(n)*. This system incorporates both the feedforward signal and the feedback effect, meaning that both subsystems contribute to generating the cancelling signal, which approximates the echo signal. This system also includes a switch on the feedback branch: when the echo signal and the far-end signal are highly correlated, the feedback system would cancel part of the far-end signal even if the hybrid system has already converged [17].
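The double-talk detector mentioned above is, in its simplest form, an energy comparator. As one classic possibility, the Geigel algorithm, shown here purely for illustration and not necessarily the detector used by the authors, near-end speech is declared whenever the microphone sample exceeds a fixed fraction of the recent far-end peak, and adaptation is frozen while it does:

```python
def geigel_dtd(x_recent, d_n, threshold=0.5):
    """Geigel double-talk detector: declare near-end speech (double talk)
    when |d(n)| > threshold * max|x(n-k)| over the recent far-end samples.
    While this returns True, the canceller's coefficient update is paused."""
    return abs(d_n) > threshold * max(abs(v) for v in x_recent)

# Far-end-only echo is attenuated by the hybrid: no double talk flagged.
print(geigel_dtd([1.0, 0.8, 0.6], 0.3))   # → False
# A near-end talker adds energy on top of the echo: freeze adaptation.
print(geigel_dtd([1.0, 0.8, 0.6], 0.9))   # → True
```

The threshold of 0.5 (about 6 dB) assumes the hybrid attenuates the echo by at least that much; in practice the value is tuned to the expected echo return loss.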

**Figure 13.** Adapted Hybrid ANC for Active Echo Cancellation

To analyze the system it is necessary to consider the correlation between the signals, as shown in equation (3):

$$\bar{R} = E\left[\bar{x}(n)\bar{x}^{T}(n)\right] \tag{3}$$

wires). The acoustic echo is the direct or indirect feedback of reflected signals to the micro‐ phone during a conversation. There are two controls applied to echo: suppressor and cancel‐ ler systems. Echo Cancellation systems need to consider the disturbances in the far-end talker's signal and the superposition of the near-end talker's that generates double-talk. Two general approaches are the use of suppressors and the use of cancellers. The echo suppres‐ sor has a sensor that measures the voice signal power in each part of the circuit to decrease the impact of the echo. The echo suppressor changes the full duplex channel to a half-duplex channel [14, 18]. This characteristic is a disadvantage of this type of control because it can‐ cels part of the speech. Echo cancellers use the superposition principle that means this sys‐ tem generates a similar signal with delay and attenuation similar to the transmitted signal. It is recommended to train the system to approach the characteristics of the echo signal. For this problem some authors [19, 20], offered different solutions based on Double-Talk Detec‐ tor (DTD) [21]; this principle detects the presence of simultaneous speech of both talkers and pause the coefficient updating of the adaptive filter. It is known that the adaptive filter is the key to treat echo problems. It is necessary to consider the speed of convergence and robust‐ ness of the system. Most of echo cancellation systems use transversal filters and the LMS al‐

Performance of Adaptive Hybrid System in Two Scenarios: Echo Phone and Acoustic Noise Reduction

http://dx.doi.org/10.5772/51517

135


The cross correlation vector between the entrance and the echo is given by:

$$\bar{p} = E\left[d(n)\bar{x}(n)\right] \tag{4}$$

and the correlation matrix can be written as follows:

$$
\bar{R}\bar{w}_0 = \bar{p} \tag{5}
$$

where $\bar{w}_0$ is the optimum vector of the transversal filter. In the selected algorithm, LMS, the reference signal *x*(*n*) is processed by an adaptive filter *W(z)*. In this case, the coefficients of the filter are updated by adding to the previous coefficients the gradient of the error signal power, scaled by the step size μ:

$$w(n+1) = w(n) + \mu \mathbf{x}(n)e(n) \tag{6}$$
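As an illustration, the LMS recursion of equation (6) can be sketched for a short hypothetical echo path (the plant taps, signal length, and step size below are illustrative assumptions, not values from the chapter). With a white reference signal, $\bar{R}$ is a scaled identity, so the Wiener solution $\bar{w}_0 = \bar{R}^{-1}\bar{p}$ of equation (5) coincides with the plant itself, and the LMS weights should approach it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "plant" (echo path) of 4 taps -- it stands in for the
# impedance-mismatch echo of the telephone circuit; the chapter uses 20 taps.
w_true = np.array([0.8, -0.4, 0.2, -0.1])
L = len(w_true)

N = 20000
x = rng.standard_normal(N)           # reference (far-end) signal
w = np.zeros(L)                      # adaptive filter, initialized to zero
mu = 0.01                            # step size (illustrative)

for n in range(L, N):
    xn = x[n - L + 1:n + 1][::-1]    # most recent L samples, newest first
    d = w_true @ xn                  # echo signal d(n)
    y = w @ xn                       # filter output y(n)
    e = d - y                        # residual echo e(n)
    w = w + mu * e * xn              # LMS update, eq. (6)

# For white input, R is a scaled identity and p = E[d(n)x(n)] is
# proportional to w_true, so w should land on the plant taps.
print(np.round(w, 3))
```

Since the identification here is noise-free, the weights converge essentially exactly to the plant; with observation noise they would instead fluctuate around it with a variance set by μ.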

#### **5.2. Active Echo Cancellation in Telephone Lines**

There are two kinds of echo: electric and acoustic. The electric echo is present in traditional telephony lines because of the impedance mismatch of the conversion (from two to four wires). The acoustic echo is the direct or indirect feedback of reflected signals to the microphone during a conversation. Two general approaches are applied to control echo: suppressors and cancellers. Echo cancellation systems need to consider the disturbances in the far-end talker's signal and the superposition of the near-end talker's signal, which generates double-talk. The echo suppressor has a sensor that measures the voice signal power in each part of the circuit to decrease the impact of the echo; it changes the full-duplex channel to a half-duplex channel [14, 18]. This characteristic is a disadvantage of this type of control, because it cancels part of the speech. Echo cancellers use the superposition principle, which means the system generates a signal with delay and attenuation similar to the transmitted signal. It is recommended to train the system to approach the characteristics of the echo signal. For the double-talk problem, some authors [19, 20] offered different solutions based on the Double-Talk Detector (DTD) [21]; this principle detects the presence of simultaneous speech of both talkers and pauses the coefficient updating of the adaptive filter. The adaptive filter is the key to treating echo problems, and it is necessary to consider the speed of convergence and the robustness of the system. Most echo cancellation systems use transversal filters and the LMS algorithm, or variations of it, to adjust the coefficients [22].

The result is an error signal, named the residual echo signal, due to the estimation of the adaptive filter [21]; this scenario, adapted to an ANC system, is shown in Figure 14 [3].

**Figure 14.** System identification viewpoint of ANC


134 Adaptive Filtering - Theories and Applications


From Figure 14, the residual echo *e(n)* is defined as

$$e(n) = d(n) - y(n) \tag{7}$$

where *d(n)* is the echo signal and *y(n)* is the response generated by the adaptive filter after processing the algorithm. Also, [3] presents the Mean Square Error (MSE) criterion to find the convergence point of the system. To analyze the performance of the echo cancellation system, the Echo Return Loss Enhancement (ERLE) criterion was developed; it is described in equation (8).

$$ERLE = 10\log_{10}\left\{\frac{E\left[d^2(n)\right]}{E\left[e^2(n)\right]}\right\}\tag{8}$$

The ERLE parameter was used to evaluate the proposed system.
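A minimal sketch of the ERLE computation of equation (8), estimating the expectations with sample means (the signals below are toy data, not taken from the experiments):

```python
import numpy as np

def erle_db(d, e):
    """Echo Return Loss Enhancement, eq. (8): echo power over
    residual-echo power, in dB. Higher values mean better cancellation."""
    return 10 * np.log10(np.mean(np.asarray(d) ** 2)
                         / np.mean(np.asarray(e) ** 2))

# If the canceller removes 99% of the echo amplitude, the residual power
# drops by a factor of 10^4, i.e. ERLE = 40 dB -- the same level used as
# the training target in Section 6.2.
d = np.sin(np.linspace(0, 20 * np.pi, 1000))
e = 0.01 * d
print(round(erle_db(d, e), 1))  # 40.0
```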


#### **6. Performance Parameters and Several Aspects Considered**

#### **6.1. Parameters and issues**

The proposed system has different parameters to consider. These parameters determine whether the system converges or not:

**1.** *Step size (μ):* controls the system stability and speed of convergence; there is one for each part of the system (feedback and feedforward).

**2.** *Plant:* simulates the echo effect.

**3.** *Adaptive filter W(z):* length and values for the established plants.

**4.** *Number of blocks and iterations:* reflected in the number of samples observed.

**5.** *Entrance signals:* including the near-end and the far-end.


Step size values were taken from [16, 23]. The plant simulates the echo effect that the near-end suffers because of the impedance mismatch, as proposed by [24].

The input signals utilized are sorted into one of three types, considering the classification proposed by [3, 25], as well as by companies such as [26]:

**1.** *Continuous:* the level of sound remains constant or nearly constant, with small fluctuations. For echo cancellation, the selected signals were vacuum, four tones, and silence.

**2.** *Intermittent:* the level of sound presents some fluctuations that can be periodic or random. The selected signals are real voices recorded in a computer for the echo considerations.

**3.** *Impulsive:* the level of noise presents impulses in a brief period of time.


For Acoustic Noise Reduction applications, the system was tested with several real sound signals taken from an Internet database [27]. The sound files were selected taking into account that the system is to be implemented in a duct-like environment. Six different types of signals were used for the analyzed system:

**1.** A sinusoidal reference signal with a frequency of 300 Hz and 30 dB SNR;

**2.** A reference signal composed of the sum of narrow band sinusoidal signals of 100, 200, 400, and 600 Hz; and

**3.** The rest of the reference signals are .wav audio files with recordings of real noise sources, which are "*motor"* and "*airplane"*, as in [16].


The most important values are the modeling error, as defined by [28], and the MSE, given by the ratio between the power of the error signal and the power of the reference signal:

$$
\Delta S(\mathrm{dB}) = 10 \log_{10} \left[ \frac{\sum_{i=0}^{M-1} \left[ s_i(n) - \hat{s}_i(n) \right]^2}{\sum_{i=0}^{M-1} \left[ s_i(n) \right]^2} \right] \tag{9}
$$

$$MSE(\mathrm{dB}) = 10\log_{10}\left[\frac{\sum_{i=0}^{M-1} \left[e_i(n)\right]^2}{\sum_{i=0}^{M-1} \left[x_i(n)\right]^2}\right] \tag{10}$$
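Equations (9) and (10) can be sketched directly as sample-power ratios (the vectors below are toy data, used only to illustrate the formulas):

```python
import numpy as np

def delta_s_db(s, s_hat):
    # Relative modeling error, eq. (9): power of the mismatch between the
    # secondary path s and its model s_hat, over the power of s, in dB.
    s, s_hat = np.asarray(s), np.asarray(s_hat)
    return 10 * np.log10(np.sum((s - s_hat) ** 2) / np.sum(s ** 2))

def mse_db(e, x):
    # MSE, eq. (10): residual error power over reference power, in dB.
    e, x = np.asarray(e), np.asarray(x)
    return 10 * np.log10(np.sum(e ** 2) / np.sum(x ** 2))

s = np.array([1.0, 0.5, 0.25, 0.125])
# A uniform 10% amplitude error in every tap gives a power ratio of
# 0.1**2 = 0.01, i.e. a modeling error of exactly -20 dB.
print(round(delta_s_db(s, 0.9 * s), 1))  # -20.0
```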

#### **6.2. System Training**

From experience, we need to train the system before it starts to work [16]. So, we have two considerations:

**1.** For echo cancellation, we adapt the plant for 20 representative coefficients instead of the 1000 given by [24]. The adaptive filter was a vector of 20 coefficients initialized to zero. The near-end voice was a female voice, with silence in the far-end. The step size value was changed until the highest level of ERLE was reached; after running the simulation of the system in Matlab®, with a software interface developed specifically for this purpose, the results of the adaptive filter were fed back to repeat the processing. When 40 dB of cancellation was achieved, the training was stopped. The training scenario was single-talk, with a single voice signal in the near-end.

**2.** For Acoustic Noise Reduction, the secondary path was modeled offline; the modeling was stopped when the error was reduced to -35 dB, similar to [15]. The excitation signal v(n) used was white Gaussian noise with a variance of 0.05.


#### **7. Analysis of Results**

#### **7.1. Echo Cancellation in Phone Lines**

To approximate a real system, we present the results of processing voice echo with the proposed hybrid system, using a female voice signal (Figure 15) in the near-end and two different masculine voice signals in the far-end (Figure 16 and Figure 17).

The echo signal generated by the adaptation of the plant is represented in Figure 18.


Applying the function with the parameters of Table 1, the obtained results are shown in Figure 19 and Figure 20. Both figures show that the system achieves cancellation of the echo signal.

| Parameters | Value |
|---|---|
| Step size | 0.1 |
| Plant | From [24] |
| Blocks | 1000 |
| Iterations | 80 |

**Figure 19.** ERLE using female voice in the near-end and masculine voice 1 in far-end

**Figure 18.** Echo of the female voice signal with adapted plant

**Table 1.** Analysis Parameters

**Figure 15.** Female voice signal

**Figure 16.** First masculine voice signal

**Figure 17.** Second masculine voice signal



| Parameters | Value |
|---|---|
| Step size | 0.01 |
| Plant | From [24] |
| Blocks | 1000 |
| Iterations | 80 |


**Table 2.** Analysis Parameters for Additional Test


**Figure 20.** ERLE using female voice in the near-end and masculine voice 2 in far-end

**Figure 21.** Cancelling voice signal, system with masculine voice 1

**Figure 22.** Cancelling voice signal, system with masculine voice 2

Looking in detail at the cancelling signal (Figure 21), which imitates the echo signal, we see that for the first masculine signal the system begins to diverge. This occurs because of the high correlation between the two entrance voices; this effect is produced by the feedback, because even when the system has already converged it starts to cancel the far-end signal [29].

Then, instead of the first male signal, another signal was used and the system converged better, as can be seen in Figure 22; this is because the correlation between this signal and the female voice is smaller.

As mentioned before, the step size factor has a major impact on the development of the system, and proved to be the main reason to make the system converge; additional simulations were performed using the parameters in Table 2, which means a smaller step size and the first male voice.



The system improves its performance using the parameters of Table 2. The generated cancelling signal (Figure 23) does not have impulsive periods.

**Figure 23.** Cancelling voice signal, system with masculine voice 1 and adjusted step size

#### **7.2. Active Noise Cancellation**

#### *7.2.1. General results (MSE and Modelling Error)*

This section presents the simulation experiments performed for acoustic noise reduction. First, offline modeling was used to obtain FIR representations of tap weight length 20 for *P*(*z*) and of tap weight length 20 for *S*(*z*). The control filter *W*(*z*) and the modeling filter *Ŝ*(*z*) are both FIR filters of tap weight length *L* = 20. A null vector initializes the control filter *W*(*z*). To initialize *Ŝ*(*z*), offline secondary path modeling is performed, which is stopped when the modeling error has been reduced to -5 dB. The step size parameters are adjusted by trial and error for fast and stable convergence.
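The offline secondary-path modeling step can be sketched as a plain LMS identification driven by white noise, stopped once the relative modeling error of equation (9) reaches the target level (the path taps, step size, and signal length below are illustrative assumptions, not the chapter's values):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical secondary path S(z): 20 taps, as in this section.
s_true = rng.standard_normal(20) * np.exp(-0.2 * np.arange(20))
L = len(s_true)

s_hat = np.zeros(L)              # modeling filter S^(z), null initialization
mu = 0.01                        # step size (illustrative)
target_db = -5.0                 # stopping threshold used in this section

v = rng.standard_normal(50000)   # white-noise excitation v(n)
err_db = 0.0
for n in range(L, len(v)):
    vn = v[n - L + 1:n + 1][::-1]             # newest-first excitation window
    e = s_true @ vn - s_hat @ vn              # modeling error signal
    s_hat += mu * e * vn                      # LMS update of S^(z)
    err_db = 10 * np.log10(np.sum((s_true - s_hat) ** 2)
                           / np.sum(s_true ** 2))
    if err_db <= target_db:                   # eq. (9) target reached: stop
        break

print(err_db <= target_db)
```

The same loop with `target_db = -35.0` would reproduce the tighter stopping rule used for the comparison experiments in Section 7.2.2.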


Various articles on the subject of ANC were taken into consideration as references before establishing the main analysis parameters to determine the hybrid system's performance:

a) Filter order: it is important to evaluate the system under filters of different orders. In this case, 20 coefficients were selected (we considered the fact that the distance between the noise source and the control system is not supposed to be very large).

b) Nature of the filter coefficients: in a first stage, the coefficients were set according to real values taken from a previous study made on a specific air duct [2]. These coefficients were taken from the work done in [16] to determine the values of the primary and secondary path filters for an air duct.

The simulation results are presented according to the following parameters:

**1.** Mean Square Error (MSE); and

**2.** Modeling error from online secondary path modeling.


Table 3 shows the values used for the feedforward and feedback step sizes, as well as the range of step sizes used for the secondary path filter. The values were set by trial and error, starting with the values that were determined with the previous test.


| Signal | Step size μw, μm | Step size μs |
|---|---|---|
| Continuous | 0.000001 | 0.0001 – 0.001 |
| Intermittent | 0.000001 | 0.0001 – 0.001 |
| Impulsive | 0.000001 | 0.0001 – 0.001 |

**Table 3.** Filters Step Size Used in Proposed Analysis

Also, white noise with zero mean and variance equal to 0.05 was used in the system. Since there were not enough resources to implement an abrupt secondary path change (which means there was only one set of values available for the secondary path filter from [15]), a gradual change was made, given by the sum of a sinusoidal function to the secondary path coefficients, from iteration 1000 to 1100. The best response was shown by the continuous signal; Figure 24 shows the modeling error for this case, while Figure 25 shows the MSE.
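One way to realize the gradual change just described is to add a sinusoidal perturbation to the path coefficients only inside the transition window; the amplitude and exact shape below are assumptions for illustration, not taken from the experiments:

```python
import numpy as np

s0 = np.full(20, 0.5)            # placeholder secondary-path taps

def secondary_path(n, start=1000, stop=1100, amp=0.1):
    """Return the secondary-path taps at iteration n, with a sinusoidal
    bump added to every coefficient during the transition window."""
    if start <= n < stop:
        phase = (n - start) / (stop - start)     # 0 -> 1 across the window
        return s0 + amp * np.sin(np.pi * phase)  # rises, peaks, returns to 0
    return s0

# The path is unchanged before iteration 1000 and from iteration 1100 on:
print(np.allclose(secondary_path(999), s0), np.allclose(secondary_path(1100), s0))
```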

**Figure 24.** Relative modeling error for continuous signal


**Figure 25.** MSE for continuous signal

From Table 3, it can be noticed that the step sizes had to be considerably reduced, by a factor on the order of 1000, in comparison to the values established for the echo cancellation tests. This is due to the fact that the coefficient values are not necessarily within the range -1 to 1, so the secondary path modeling needs a smaller step size to be able to achieve convergence.
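This scaling is consistent with the usual LMS rule of thumb (a textbook bound, not stated in the chapter) that the step size must shrink inversely with the filter length times the input power, μ < 2 / (L · Px); a quick numerical check under that assumption:

```python
import numpy as np

def lms_mu_bound(x, L):
    """Rule-of-thumb LMS stability bound: mu < 2 / (L * mean input power)."""
    return 2.0 / (L * np.mean(np.asarray(x) ** 2))

rng = np.random.default_rng(2)
weak = 0.05 * rng.standard_normal(10000)    # low-power reference signal
strong = 5.0 * rng.standard_normal(10000)   # 100x larger amplitude

# A 100x amplitude increase is a 10,000x power increase, so the usable
# step size shrinks by roughly the same factor.
ratio = lms_mu_bound(weak, 20) / lms_mu_bound(strong, 20)
print(5000 < ratio < 20000)
```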

For the intermittent signal, the effects of the small step sizes were similar: the system took more time to converge and the level of noise cancellation was reduced. Nonetheless, the response achieved stability during the simulation. Figure 26 and Figure 27 correspond to the modeling error and MSE for the intermittent signal, respectively.


**Figure 26.** Relative modelling error for intermittent signal

**Figure 27.** MSE for intermittent signal

Finally, for the impulsive input signal the results were not as good as expected. This can be explained by the very abrupt changes in the signal amplitude combined with the very small step size. Hence, the values of the coefficients tend to infinity and the simulation stops abruptly.

#### *7.2.2. Comparison versus Neutralization and Feedforward Systems*


In this section, three paths were used: the main or primary path P(s), the secondary path S(s), and the acoustic feedback path F(s). All the filters used in the evaluated proposals are finite impulse response (FIR) filters. The values of these paths are based on [2], and represent the experimental values of a given duct. A total of 25 coefficients is used in all paths, so as to report an extreme condition for the real duct under analysis.

To initialize Ŝ(z), the offline secondary path modeling is stopped when the modeling error has been reduced to -35 dB, similar to [15]. The excitation signal *v(n)* is white Gaussian noise with variance equal to 0.05.

The values of the step size are adjusted by trial and error to achieve faster convergence and stability, following the guidelines from previous work on Hybrid Active Noise Control [16], and the values selected in [7] for neutralization. A summary of the selected values of μ is shown in Table 4.


| System | Primary Path μP | Secondary Path μS | Feedback Path μF |
|---|---|---|---|
| Neutralization System | 0.000001 | 0.00005 | 0.00005 |
| Hybrid System | 0.001 | 0.001 | NA |

**Table 4.** Filters Step Size Used in Proposed Analysis

**Figure 28.** MSE with "*sinusoidal"* reference signal for Feedforward System

Figure 28 to Figure 41 show the results of the analysis of the systems with the previously mentioned set of signals. All results are shown in dB, measuring the error power at the output (Mean Square Error).

First, we show the main signal for ANC systems, the sinusoidal signal. Figures 28 to 30 show the MSE value obtained.


**Figure 29.** MSE with "*sinusoidal"* reference signal for Neutralization System

**Figure 30.** MSE with "*sinusoidal"* reference signal for Hybrid System

Another important signal is the narrow band signal which, as explained before, is composed of the sum of narrow band sinusoidal signals of 100, 200, 400, and 600 Hz. Figures 31 to 33 show the results for this case.


**Figure 31.** MSE with "*4 tones"* reference signal for Feedforward System


**Figure 32.** MSE with "*4 tones"* reference signal for Neutralization System

**Figure 33.** MSE with "*4 tones"* reference signal for Hybrid System

Finally, we use the two recorded signals, one corresponding to a "plane" and one to a "motor", which are the test cases most relevant to our system. Figures 34 through 41 show the convergence achieved with the proposed system.

It is also important to consider that an ANC system should respond successfully to a change in the state of the secondary path, which corresponds, for example, to a possible movement of the microphone in a pipeline, or any vibration or change in the system. Figures 24 and 25 show the effect of the change in the secondary path introduced at iteration 1000 [5]. We can observe that the behavior of both remains stable.

Figures 40 and 41 show selected results for the neutralization and hybrid systems, which are of greatest interest.


**Figure 34.** MSE with "*Motor"* reference signal for Feedforward System

**Figure 35.** MSE with "*Motor"* reference signal for Neutralization System


**Figure 36.** MSE with "*Motor"* reference signal for Hybrid System


**Figure 37.** MSE with "*Airplane"* reference signal for Feedforward System

**Figure 38.** MSE with "*Airplane"* reference signal for Neutralization System

#### **8. Conclusions**


Adaptive filtering is a powerful tool that offers solutions to many fields of science today. This chapter shows the efficiency of the hybrid system in reducing electrical and acoustic noise present in conventional systems, where noise becomes a significant cause of health problems, or a situation that can affect Internet or phone communications, to name a few.


Adaptive filtering, which has been successfully applied in the solution of several practical problems, the main kinds of which are described in this chapter, has relied mainly on transversal filter structures. However, when the filter order becomes large, the transversal filter's computational complexity and convergence rate may limit its capability for solving practical problems.

In particular, there are few references about hybrid systems, those conjoined feature more traditional patterns such as *a priori* and *a posteriori* systems. Of course they inherit the prob‐ lems of these two, but the advantage they offer is based on the robustness of such systems for signals of different characteristics as continuous, intermittent and impulsive, and we tested a hybrid system in two interesting and relevant scenarios: unwanted signals in the

The proposed system works in an acceptable way for telephone echo problems, but it is nec‐ essary to consider and adjust the different parameters. The system is capable of cancelling echo of voice signals and can be applied to simulated scenarios of double talk without use the Double Talk Detector. Also it is necessary to evaluate the correlation between input sig‐ nals since this correlation has a great impact of the performance of the system. If both sig‐ nals are highly correlated, it is necessary to use a small step size for both feedback and feedforward systems. We established the double talk situation in telephony conversations as the test system for our Hybrid system including some talks simulating a real conversation.

With respect to Acoustic Noise Reduction, it must be notice that the results presented for a real-value filter coefficients refer to only one specific type of duct. This means that the re‐ sponse could probably improve in a different environment or in a duct with different prop‐ erties. This situation represents a problem for the designer of a hybrid ANC, because for each environment where the system is to be applied would be no need to identify accurately the parameters to achieve the desired response. However difficult, this may not be impossi‐

This chapter discusses a new Hybrid Active Noise Control system and the impact adaptive filtering has on this field. The objective is to achieve improved performance at a reasonable computational cost in a Hybrid ANC system that considers two of the more important trou‐ bles of the ANC. We show two examples to prove the contribution of this system, one is a little generalist about cancelling several kinds of noise, and one very specific, which repre‐ sents one persistent problem like telephone echo on telecommunications nowadays: net‐ works have been modified by the use of new technologies and constant innovations have

ble to do, so there is still a lot of work to be done with hybrid ANC systems.

problems. This chapter presented an overview of the Hybrid System.

**Figure 39.** MSE with "*Airplane"* reference signal for Hybrid System

**Figure 40.** MSE with "*4 tones"* reference signal for Neutralization System, considering changing secondary path

**Figure 41.** MSE with "*4 tones"* reference signal for Hybrid System, considering changing secondary path

#### **8. Conclusions**


Adaptive filtering is a powerful tool that offers solutions in many fields of science today. This chapter has shown the efficiency of the hybrid system in reducing electrical and acoustic noise present in conventional systems, where noise can become a significant cause of health problems or can degrade Internet and telephone communications, to name a few examples.

Adaptive filtering, which has been successfully applied to several practical problems whose main kinds are described in this chapter, has relied mainly on transversal filter structures. However, when the filter order becomes large, the computational complexity and convergence rate of the transversal structure may limit its ability to solve practical problems. This chapter presented an overview of the Hybrid System.
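The complexity argument can be made concrete. For a transversal (FIR) LMS filter of order N, each sample requires N multiply-accumulates for the output and roughly N more for the coefficient update, so the per-sample cost grows linearly with filter order; the counting below is a standard back-of-the-envelope estimate, not a figure from this chapter.

```python
def lms_macs_per_sample(N: int) -> int:
    """Approximate multiply-accumulate operations per sample for a
    transversal LMS filter of order N."""
    # N MACs for the output y[n] = w . x[n],
    # N multiplies/adds for the update w <- w + mu*e[n]*x[n],
    # plus one multiply for mu*e[n].
    return 2 * N + 1

# Cost grows linearly with filter order, which is why long acoustic
# paths (large N) strain both complexity and convergence speed.
for N in (32, 256, 4096):
    print(N, lms_macs_per_sample(N))
```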

In particular, there are few references on hybrid systems, which combine more traditional schemes such as *a priori* and *a posteriori* systems. Of course they inherit the problems of these two, but the advantage they offer lies in their robustness to signals of different characteristics, whether continuous, intermittent or impulsive. We tested a hybrid system in two interesting and relevant scenarios: unwanted signals in the fields of acoustics and telephony.

The proposed system works acceptably for telephone echo problems, but it is necessary to consider and adjust the different parameters. The system is capable of cancelling the echo of voice signals and can be applied to simulated double-talk scenarios without using a Double Talk Detector. It is also necessary to evaluate the correlation between the input signals, since this correlation has a great impact on the performance of the system. If both signals are highly correlated, it is necessary to use a small step size for both the feedback and feedforward systems. We established the double-talk situation in telephone conversations as the test case for our hybrid system, including some talks simulating a real conversation.
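The role of a small step size under double talk can be sketched as follows. This is a generic NLMS echo canceller, not the chapter's hybrid system: the echo path, signal lengths, and step size are invented stand-ins. During the near-end burst the small step size keeps the adaptation stable, and the residual returns to a low level once the burst ends, without any double-talk detector.

```python
import numpy as np

rng = np.random.default_rng(1)

echo_path = np.array([0.0, 0.5, 0.3, -0.2, 0.1])  # hypothetical echo path
L = len(echo_path)
far = rng.standard_normal(4000)                    # far-end speech stand-in
near = np.zeros(4000)
near[2000:3000] = 0.5 * rng.standard_normal(1000)  # double-talk burst

w = np.zeros(L)
mu = 0.05          # deliberately small step size, as argued in the text
eps = 1e-6
buf = np.zeros(L)
err = np.empty(4000)
for n in range(4000):
    buf = np.roll(buf, 1)
    buf[0] = far[n]
    mic = echo_path @ buf + near[n]            # echo plus near-end talker
    e = mic - w @ buf                          # residual sent back far-end
    w += mu * e * buf / (eps + buf @ buf)      # NLMS update
    err[n] = e

# Residual power: small before double talk, dominated by the near-end
# signal during the burst, and small again after re-convergence.
print(np.mean(err[1500:2000]**2), np.mean(err[2000:3000]**2), np.mean(err[3500:]**2))
```

With a large step size the coefficients would be driven strongly by the near-end signal during the burst, which is exactly the divergence the small step size avoids.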

With respect to Acoustic Noise Reduction, it must be noted that the results presented for real-valued filter coefficients refer to only one specific type of duct. This means that the response could probably improve in a different environment or in a duct with different properties. This situation represents a problem for the designer of a hybrid ANC, because for each environment where the system is to be applied it would be necessary to identify the parameters accurately to achieve the desired response. However difficult, this may not be impossible to do, so there is still a lot of work to be done on hybrid ANC systems.

This chapter discussed a new Hybrid Active Noise Control system and the impact adaptive filtering has on this field. The objective is to achieve improved performance at a reasonable computational cost in a Hybrid ANC system that addresses two of the most important problems of ANC. We show two examples to demonstrate the contribution of this system: one rather general, concerning the cancellation of several kinds of noise, and one very specific, representing a persistent problem in today's telecommunications, telephone echo: networks have been modified by the use of new technologies, and constant innovations have led to the automation of subscriber interconnection and the inclusion of streaming media.

Therefore, a rigorous analysis of the results and their parameters was carried out under the above considerations. The results show the relevance of hybrid systems for removing acoustic noise or echo in telephony with the tools of adaptive systems. The advisability of this hybrid system is a matter that must be analyzed in depth.

### **Acknowledgements**

This work was supported by the Department of Mechatronics, part of the School of Design, Architecture and Engineering, Tecnológico de Monterrey, Campus Ciudad de Mexico.

### **Author details**

Edgar Omar Lopez-Caudana1\* and Hector Manuel Perez-Meana2

1 Tecnológico de Monterrey, Campus Ciudad de México, Mexico

2 SEPI, ESIME Culhuacan, IPN, Mexico

#### **References**

[1] Widrow & Stearns. (1985). *Adaptive Signal Processing*, Prentice Hall, Englewood Cliffs, NJ.

[2] Kuo & Morgan. (1996). *Active Noise Control Systems: Algorithms and DSP Implementations*, Wiley Series in Telecommunications and Signal Processing, New York.

[3] Kuo & Morgan. (1999). Active Noise Control Systems: A tutorial review, *Proceedings of the IEEE*, 87(6), 943-973.

[4] Perez-Meana, H., et al. (2007). Active Noise Canceling: Structures and Adaptation Algorithms, *Advances in Audio and Speech Signal Processing: Technologies and Applications*, Idea Group Publishing, Hershey, 286-308.

[5] Lopez-Caudana, E., et al. (2008). A Hybrid Active Noise Canceling Structure, *International Journal of Circuits, Systems and Signal Processing*, 2(2), 340-346.

[6] Lopez-Caudana, E., et al. (2010). Evaluation for a Hybrid Active Noise Control System with Acoustic Feedback, *53rd IEEE Int'l Midwest Symposium on Circuits & Systems*, 1-4.

[7] Akhtar, E., et al. (2007). Acoustic feedback neutralization in active noise control systems, *IEICE Electronics Express*, 4(7), 221-226.

[8] Akhtar, D. (2007). On Active Noise Control Systems with Online Acoustic Feedback Path Modeling, *IEEE Transactions on Audio, Speech, and Language Processing*, February, 15(2), 593-599.

[9] Lopez-Caudana, E., et al. (2008). A Hybrid Noise Cancelling Algorithm with Secondary Path Estimation, *WSEAS Transactions on Signal Processing*, 4(12).

[10] Kuo, Sen. M. (2002). Active Noise Control System and Method for On-Line Feedback Path Modeling, *US Patent 6,418,227*.

[11] Pérez-Meana, H., et al. (1994). Echo Cancellation in Audio Terminals, *Memoria Técnica, MEXICON 94*, 159-164.

[12] Pérez-Meana, H., & Nakano, M. (1990). Cancelación de Eco en Sistemas de Telecomunicación, *Mundo Electrónico*, 207, 143-150.

[13] Gritton, C. W., & Li, A. W. (1984). Echo Cancellation Algorithms, *IEEE ASSP Magazine*, 30-37.

[14] Murano, K., & Amano, F. (1993). Echo Cancelling Algorithms, *Enciclopedia de Telecomunicaciones*, 6, Marcel Decker Inc, 383-409.

[15] Lopez-Caudana, E., et al. (2008). A hybrid active noise cancelling with secondary path modeling, *Circuits and Systems, 2008. MWSCAS 2008. 51st Midwest Symposium on*, 277-280.

[16] Lopez-Caudana, E., et al. (2009). Evaluation of a Hybrid ANC System with Acoustic Feedback and Online Secondary Path Modeling, *19th International Conference on Electronics, Communications and Computers 2009, Cholula, Puebla*, 26-28.

[17] Mehmood & Tufail. (2009). A new variable step size method for online feedback path modeling in active noise control systems, *Multitopic Conference INMIC 2009. IEEE 13th International*, 1-6.

[18] Lee, E., & Messerschmitt, D. (1993). *Digital Communication*, Kluwer Academic Publisher, Norwell, MA.

[19] Buchner, H., et al. (2006). Robust extended multidelay filter and double-talk detector for acoustic echo cancellation, *Audio, Speech, and Language Processing, IEEE Transactions on*, 14(5), 1633-1644.

[20] Kun & Xiaoli. (2008). A double-talk detector based on generalized mutual information for stereophonic acoustic echo cancellation systems with nonlinearity, *Signals, Systems and Computers, 2008 42nd Asilomar Conference*, 2161-2164.

[21] Jae & Dong. (2005). Network echo canceller based on the practical adaptive filter, *Intelligent Signal Processing and Communication Systems, 2005. ISPACS 2005. Proceedings of 2005 International Symposium on*, 693-696.

[22] Tandon, et al. (2004). An efficient, low-complexity, normalized LMS algorithm for echo cancellation, *Circuits and Systems, NEWCAS 2004. The 2nd Annual IEEE Northeast Workshop on*, 161-164.

[23] Shoureshi, R. (1994). Active noise control: a marriage of acoustics and control, *American Control Conference*, 3, 3444-3448.

[24] Paleologu, et al. (2010). An Efficient Proportionate Affine Projection Algorithm for Echo Cancellation, *Signal Processing Letters, IEEE*, February, 17(2), 165-168.

[25] Romero, A., et al. (2008). A Hybrid Active Noise Canceling Structure, *International Journal of Circuits, Systems and Signal Processing*, 2(2), 340-346.

[26] Brüel & Kjær Sound & Vibration Measurement A/S. (2008). Environmental Noise Booklet. http://www.nonoise.org/library/envnoise/index.htm.

[27] Free Sound Effects, Samples & Music, Free Sound Effects Categories. (2011). http://www.freesfx.co.uk/soundeffectcats.html.

[28] Akhtar, M. T., et al. (2006). A new variable step size LMS algorithm-based method for improved online secondary path modeling in active noise control systems, *IEEE Transactions on Audio, Speech, and Language Processing*, 14(2), 720-726.

[29] Mohammadzaheri, M., et al. (2009). A design approach for feedback-feedforward control systems, *Control and Automation, 2009. ICCA 2009. IEEE International Conference on*, 2266-2271.

### *Edited by Lino García Morales*

Adaptive filtering can be used to characterize unknown systems in time-variant environments. The main objective of this approach is to meet a difficult compromise: maximum convergence speed with maximum accuracy. Each application requires a certain approach that determines the filter structure, the cost function to minimize the estimation error, the adaptive algorithm, and other parameters; and each selection involves a certain computational cost, which in any case should consume less time than that available to an application working in real time. Theory and application are not, therefore, isolated entities but an imbricated whole that requires a holistic vision. This book collects theoretical approaches and practical applications in different areas that support the expansion of adaptive systems.

Photo by MihailUlianikov / iStock
