number of inputs/outputs, causing an unreasonable computational complexity. This led to the introduction of Wave Domain Adaptive Filtering (WDAF) (Buchner, Spors & Rabenstein, 2004). WDAF is an extension of Frequency Domain Adaptive Filtering (FDAF) (Shynk, 1992): the filtering is performed not only in the temporal frequency domain (as in FDAF), but also in the angular frequency domain, which takes into account the spatial components of the acoustic field. Several WDAF applications can be found in the recent literature (Peretti et al., 2007; 2008; Spors et al., 2005).

In order to evaluate the performance of WDAF-based algorithms, all the points of the sound field have to be analyzed, so as to give a complete view of the acoustic scene at several time instants. The main topic of this chapter is a detailed explanation of the numerical sound field simulations carried out while applying WDAF-based algorithms, in order to analyze their performance in terms of sound field reconstruction. In section 2 the theory of WFS and WFA is summarized and their discrete versions are derived. In section 3 the basic concepts of WDAF are reviewed and the involved transformations are described. In section 4 the overall simulation processing is presented: the discussion starts from the derivation of the sound field of a simple monopole source; then the sound field of a static source is virtually reproduced by an array of loudspeakers through the WFS algorithm; in the next step, starting from the simulation of sound field recording with an array of microphones, the PC simulation of the WDAF algorithm is performed, taking into account variations of the whole set of involved parameters. Finally, a WDAF application to the attenuation of undesired sources is derived from the well-known mono-channel approach, and the results of the cancellation of the entire sound field of a virtual source through a WDAF-based adaptive algorithm are shown. Conclusions are reported in section 5.

**2. Wave field analysis/synthesis**

WFA and WFS are techniques based on arrays of microphones and loudspeakers arranged along a closed curve. The process that obtains the sound field in the area enclosed by the microphone array, starting from the microphone signals, is called *sound field extrapolation*. Essentially, sound field extrapolation and the subsequent reproduction can be done in three ways:

• In the reproduction room the loudspeakers are positioned at the same locations as the microphones in the recording room (Fig. 1(a)). This technique is called *holophony* (Nicol & Emerit, 1999);

• The sound field is extrapolated at arbitrary positions starting from the microphone signals. In this way the loudspeakers need not be placed at the microphone positions (Verheijen, 1997);

• The room impulse response is recorded with the microphone array. In the reproduction stage this response is convolved with the desired signal, previously recorded in an anechoic chamber. In this way, any signal can be reproduced with the acoustic characteristics of the recording room (Hulsebos, 2004).

The holophonic technique does not require intermediate processing between the recorded and reproduced signals. However, its main limitation is that the recorded sound field can only be reproduced through loudspeakers placed exactly at the microphone positions. For this reason, holophony cannot be used in practical applications.

Fig. 1. Schematic representation of audio recording and reproduction techniques. (a) Holophony. (b) WFA/WFS.

On the other hand, the second and third approaches require processing of the recorded signals. In these cases the loudspeakers can be positioned arbitrarily, and the number of loudspeakers need not equal the number of microphones. Although they require a higher computational load, these techniques are more flexible and can readily be used in real applications. Sound field extrapolation can be obtained by combining the WFA and WFS techniques: the entire sound field is recorded in the recording room (WFA) and subsequently reproduced in the listening room (WFS), more or less accurately depending on the number of loudspeakers/microphones (Fig. 1(b)). Circular microphone arrays have been found to allow very good sound field extrapolation, and the problem is treated more easily in circular coordinates (Hulsebos et al., 2002); hence this geometry is considered in the following.

### **2.1 Sound field reproduction**

The basic idea of WFS derives from Huygens' principle (1678), which states that, at any time *t*, every point on the wave front due to a point source can be taken as a point source for the production of secondary wavelets. Following this principle, the sound field of an arbitrary primary source *p* can be reproduced by a series of secondary sources *s* positioned on the primary wave front. These sources can be obtained by considering the local vibrations of the medium at *s* due to *p*. Kirchhoff's theorem is the generalization of Huygens' principle:

*Considering a source-free volume V, the sound pressure inside the volume generated by external sources can be calculated if the pressure and the normal particle velocity on the enclosing surface S are known.*

The Kirchhoff-Helmholtz integral is the mathematical formulation of Kirchhoff's theorem. With reference to Fig. 2, considering sound propagation in a volume *V* enclosed by a surface *S*, the Kirchhoff-Helmholtz integral is given by (Williams, 1999)

$$P(\vec{r}\_0, \omega) = \frac{1}{4\pi} \oint\_S \left( G\_{\vec{r}\_0}(\vec{r}, \omega) \frac{\partial P(\vec{r}, \omega)}{\partial n} - P(\vec{r}, \omega) \frac{\partial G\_{\vec{r}\_0}(\vec{r}, \omega)}{\partial n} \right) dS, \tag{1}$$

where $\vec{r}\_0$ and $\vec{r}$ denote the generic positions inside *V* and on *S*, respectively, *ω* is the angular frequency, $G\_{\vec{r}\_0}(\vec{r}, \omega)$ is the Green function which synthesizes the secondary sources located in $\vec{r}\_0$, and *P* denotes the sound pressure.


Fig. 2. Geometry used for the Kirchhoff-Helmholtz integral.

It is possible to write (1) in terms of sound pressure *P* and particle velocity *Vn* normal to *S*. From Euler's equation we have that

$$\frac{\partial P(\vec{r},\omega)}{\partial n} = j\rho\_0 ck V\_n(\vec{r},\omega),\tag{2}$$

where *c* is the sound velocity, *k* = *ω*/*c* is the wave number, and *ρ*<sup>0</sup> is the medium density. Therefore, (1) becomes

$$P(\vec{r}\_0, \omega) = \frac{1}{4\pi} \oint\_S \left( j\rho\_0 ck V\_n(\vec{r}, \omega) G\_{\vec{r}\_0}(\vec{r}, \omega) - P(\vec{r}, \omega) \frac{\partial G\_{\vec{r}\_0}(\vec{r}, \omega)}{\partial n} \right) dS.\tag{3}$$

The Green function is not unique. The simplest Green functions are the monopole solutions for a source at the position $\vec{r}\_0$

$$G\_{\vec{r}\_0}(\vec{r}, \omega) = \frac{e^{-jk|\vec{r} - \vec{r}\_0|}}{|\vec{r} - \vec{r}\_0|} \tag{4}$$

$$G\_{\vec{r}\_0}(\vec{r}, \omega) = \frac{e^{jk|\vec{r} - \vec{r}\_0|}}{|\vec{r} - \vec{r}\_0|}. \tag{5}$$

By inserting (4) in (3), the 3D forward Kirchhoff-Helmholtz integral is obtained

$$P(\vec{r}\_0, \omega) = \frac{1}{4\pi} \oint\_S \left(jk\rho\_0 c V\_n(\vec{r}, \omega) + P(\vec{r}, \omega) \frac{1 + jk \left|\vec{r} - \vec{r}\_0\right|}{\left|\vec{r} - \vec{r}\_0\right|} \cos\varphi\right) \frac{e^{-jk|\vec{r} - \vec{r}\_0|}}{\left|\vec{r} - \vec{r}\_0\right|} dS,\tag{6}$$

where *ϕ* is the angle between the vector normal to *S* and the vector $\vec{r}\_0 - \vec{r}$ (Fig. 2). From (6) it can be seen that the sound field inside *V* is obtained by a distribution of monopole and dipole sources (Berkhout et al., 1993). Equation (6) represents the direct integral and indicates a direct propagation due to the presence of the term $e^{-jk|\vec{r}-\vec{r}\_0|}$. Similarly, by inserting (5) in (3), the 3D inverse Kirchhoff-Helmholtz integral is obtained

$$P(\vec{r}\_{0},\omega) = \frac{1}{4\pi} \oint\_{S} \left(jk\rho\_{0}cV\_{n}(\vec{r},\omega) + P(\vec{r},\omega)\frac{1-jk\left|\vec{r}-\vec{r}\_{0}\right|}{\left|\vec{r}-\vec{r}\_{0}\right|}\cos\varphi\right) \frac{e^{jk\left|\vec{r}-\vec{r}\_{0}\right|}}{\left|\vec{r}-\vec{r}\_{0}\right|}dS,\tag{7}$$

where the term $e^{jk|\vec{r}-\vec{r}\_0|}$ indicates an inverse propagation (Yon et al., 2003).
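The monopole Green function (4) is also the starting point of the numerical simulations of section 4 (the field of a simple monopole source). As a minimal sketch, assuming NumPy is available and using arbitrary example values for the source position and frequency, (4) can be sampled on a plane:

```python
import numpy as np

def monopole_field(grid_x, grid_y, src, k):
    """Free-field monopole Green function G(r) = exp(-jk|r - r0|)/|r - r0| (Eq. 4),
    sampled on a 2D slice of the 3D field."""
    X, Y = np.meshgrid(grid_x, grid_y)
    dist = np.hypot(X - src[0], Y - src[1])
    dist = np.maximum(dist, 1e-9)      # avoid the singularity at the source point
    return np.exp(-1j * k * dist) / dist

c = 343.0                              # speed of sound [m/s] (assumed)
f = 500.0                              # example frequency [Hz]
k = 2 * np.pi * f / c                  # wave number k = omega / c
x = np.linspace(-2, 2, 201)
P = monopole_field(x, x, src=(-1.0, 0.0), k=k)
print(P.shape)                         # (201, 201) complex pressure samples
```

The amplitude decays as 1/|r - r0|, so a point at twice the distance from the source has half the magnitude, which is a convenient sanity check for the sampled field.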


In two dimensions, the Kirchhoff-Helmholtz integral (3) also holds when a closed curve *L* is considered instead of the surface *S* (Hulsebos et al., 2002). In this case, the appropriate Green functions are

$$G\_{\vec{r}\_0}(\vec{r},\omega) = -j\pi H\_0^{(2)}(k|\vec{r} - \vec{r}\_0|) \tag{8}$$

$$G\_{\vec{r}\_0}(\vec{r}, \omega) = j\pi H\_0^{(1)}(k|\vec{r} - \vec{r}\_0|),\tag{9}$$

where $H\_i^{(j)}$ is the Hankel function of kind *j* and order *i*. The Green function (8) leads to the 2D forward Kirchhoff-Helmholtz integral

$$P^{(2)}(\vec{r},\omega) = \frac{-jk}{4} \oint\_{L} \left( j\rho\_0 c V\_n(\vec{r},\omega)H\_0^{(2)}(k|\vec{r}-\vec{r}\_0|) + P(\vec{r},\omega)\cos\varphi H\_1^{(2)}(k|\vec{r}-\vec{r}\_0|) \right) dL. \tag{10}$$

Similarly, the Green function (9) leads to the 2D inverse Kirchhoff-Helmholtz integral

$$P^{(1)}(\vec{r},\omega) = \frac{-jk}{4} \oint\_{L} \left( j\rho\_0 c V\_n(\vec{r},\omega) H\_0^{(1)}(k|\vec{r}-\vec{r}\_0|) + P(\vec{r},\omega) \cos\varphi H\_1^{(1)}(k|\vec{r}-\vec{r}\_0|) \right) dL. \tag{11}$$

It is worth underlining that the forward Kirchhoff-Helmholtz integral is used to derive the sound field generated by a source positioned outside *S*, while the inverse integral is used if the source is located inside *S* (or *L* in the 2D case).

In order to obtain the Kirchhoff-Helmholtz integrals for a circular geometry of radius *R*, (10) and (11) simplify to (Hulsebos et al., 2002)

$$P^{(2)}(\vec{r},\omega) = \frac{-jk}{4} \oint\_0^{2\pi} \left( j\rho\_0 c V\_n(\theta,\omega) H\_0^{(2)}(k|\vec{r} - \vec{r}\_0|) + P(\theta,\omega) \cos\varphi H\_1^{(2)}(k|\vec{r} - \vec{r}\_0|) \right) R d\theta \tag{12}$$

$$P^{(1)}(\vec{r},\omega) = \frac{-jk}{4} \oint\_0^{2\pi} \left( j\rho\_0 c V\_n(\theta,\omega) H\_0^{(1)}(k|\vec{r} - \vec{r}\_0|) + P(\theta,\omega) \cos\varphi H\_1^{(1)}(k|\vec{r} - \vec{r}\_0|) \right) R d\theta. \tag{13}$$

The discrete forms of (12) and (13), necessary for a practical implementation, are obtained through a sampling operation at the loudspeaker positions

$$P^{(2)}(\vec{r},\omega) = \frac{-jk}{4} \sum\_{i=1}^{N\_L} \left( P(\theta\_i,\omega) \cos\varphi H\_1^{(2)}(k|\vec{r}-\vec{r}\_i|) + j\rho c V\_n(\theta\_i,\omega) H\_0^{(2)}(k|\vec{r}-\vec{r}\_i|) \right) R\Delta\theta \tag{14}$$

$$P^{(1)}(\vec{r},\omega) = \frac{-jk}{4} \sum\_{i=1}^{N\_L} \left( P(\theta\_i,\omega) \cos\varphi H\_1^{(1)}(k|\vec{r}-\vec{r}\_i|) + j\rho c V\_n(\theta\_i,\omega) H\_0^{(1)}(k|\vec{r}-\vec{r}\_i|) \right) R\Delta\theta. \tag{15}$$

Fig. 3 explains the meaning of $\vec{r}\_i$, $\vec{r}$, *ϕ* and Δ*θ*. In section 4 it is shown how to implement (14) and (15) in an experimental configuration.
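A direct transcription of (14) might look as follows. This is a minimal sketch for a uniform circular array, assuming SciPy is available: the outward normals and the definition of cos *ϕ* are one interpretation of the geometry of Fig. 3, and the sign conventions should be checked against the actual array setup before use.

```python
import numpy as np
from scipy.special import hankel2

def kh_forward_discrete(r, P_arr, Vn_arr, R, k, rho=1.2, c=343.0):
    """Discrete 2D forward Kirchhoff-Helmholtz sum (Eq. 14) at one field point r.

    P_arr, Vn_arr: pressure and normal-velocity samples at the N_L equiangular
    array positions theta_i on a circle of radius R. The normal/cos(phi)
    convention below is an assumption (chapter's Fig. 3 not reproduced here).
    """
    N = len(P_arr)
    theta = 2 * np.pi * np.arange(N) / N
    dtheta = 2 * np.pi / N
    r_i = R * np.stack([np.cos(theta), np.sin(theta)], axis=1)  # array points
    n_i = r_i / R                                               # outward unit normals
    d = np.asarray(r, dtype=float) - r_i                        # vectors r - r_i
    dist = np.linalg.norm(d, axis=1)
    cos_phi = np.einsum('ij,ij->i', n_i, d) / dist              # cos of angle phi
    terms = (P_arr * cos_phi * hankel2(1, k * dist)
             + 1j * rho * c * Vn_arr * hankel2(0, k * dist))
    return (-1j * k / 4) * np.sum(terms) * R * dtheta

# Example call with dummy array data (illustrative values only):
p_hat = kh_forward_discrete((0.2, 0.1), np.ones(64), 0.1 * np.ones(64),
                            R=1.5, k=2 * np.pi * 500 / 343)
```

The inverse sum (15) is obtained by replacing `hankel2` with `scipy.special.hankel1`.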

### **2.2 Sound field recording**

Initially, in (Berkhout et al., 1993) the sound field recording was performed through 2D microphone matrices. However, this technique was complicated and its practical realization was very difficult, since it required extremely high-performance hardware. In order to reduce the number of signals to be processed, the advantages of Huygens' principle were exploited in the recording stage as well, and microphone arrays enclosing the area of interest were used as an alternative to microphone matrices (Berkhout et al., 1997).


Fig. 3. Geometry used for the driving function calculation relative to the circular array.

In section 2.1 the Kirchhoff-Helmholtz integrals for the circular array configuration were presented. The global sound field is given by the sum of *P*(1)(�*r*, *ω*) and *P*(2)(�*r*, *ω*), which represent the inverse and forward extrapolated sound fields, respectively (Hulsebos et al., 2002). Using the Kirchhoff-Helmholtz integrals, only the sound field within the closed curve can be extrapolated. To overcome this limitation, the sound field extrapolation through a circular microphone array can be performed by using the cylindrical harmonics decomposition (Hulsebos, 2004).

Cylindrical harmonics of order *k<sup>θ</sup>* are defined as solutions of the homogeneous wave equation in terms of cylindrical coordinates (Blackstock, 2000; Williams, 1999)

$$\mathcal{P}^{(1)}(r,\theta,\omega) = H\_{k\_\theta}^{(1)}(kr)e^{jk\_\theta\theta}e^{jk\_z z} \tag{16}$$

$$\mathcal{P}^{(2)}(r,\theta,\omega) = H\_{k\_\theta}^{(2)}(kr)e^{jk\_\theta\theta}e^{jk\_z z},\tag{17}$$

where $H\_m^{(1)}$ and $H\_m^{(2)}$ are the Hankel functions of kind 1 and 2, respectively, and *m* represents their order. $k\_\theta$ and $k\_z$ are the wave numbers along the azimuth and *z* components of the cylindrical coordinate system defined by (*r*, *θ*, *z*) (Fig. 3). The far-field approximation ($kr \gg 1$) of the Hankel functions is given by

$$H\_{k\_\theta}^{(1)}(kr) \approx (-j)^{k\_\theta} \frac{1 - j}{\sqrt{\pi}} \frac{e^{jkr}}{\sqrt{kr}} \tag{18}$$

$$H\_{k\_\theta}^{(2)}(kr) \approx j^{k\_\theta} \frac{1+j}{\sqrt{\pi}} \frac{e^{-jkr}}{\sqrt{kr}}.\tag{19}$$
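These asymptotic forms are easy to sanity-check numerically. The snippet below (assuming SciPy is available; the chosen order and argument are arbitrary example values) compares (19) against `scipy.special.hankel2` for $kr \gg 1$:

```python
import numpy as np
from scipy.special import hankel2

k_theta, kr = 3, 1000.0
exact = hankel2(k_theta, kr)
# Far-field form (19): H^(2)_{k_theta}(kr) ~ j^{k_theta} (1+j)/sqrt(pi) e^{-j kr}/sqrt(kr)
approx = 1j**k_theta * (1 + 1j) / np.sqrt(np.pi) * np.exp(-1j * kr) / np.sqrt(kr)
rel_err = np.abs(approx - exact) / np.abs(exact)
print(rel_err)   # the next-order correction is O(k_theta^2 / kr), well below 1% here
```

For smaller *kr*, or orders comparable with *kr*, the approximation degrades, which is why it is restricted to the far field.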

In case of planar geometry, we have *kz* = 0, hence (16) and (17) become

$$\mathcal{P}^{(1)}(\vec{r},\omega) = H\_{k\_{\theta}}^{(1)}(kr)e^{jk\_{\theta}\theta} \tag{20}$$

$$\mathcal{P}^{(2)}(\vec{r},\omega) = H\_{k\_{\theta}}^{(2)}(kr)e^{jk\_{\theta}\theta},\tag{21}$$

where the superscripts (1) and (2) refer to incoming and outgoing cylindrical harmonics, respectively, while $k\_\theta$ can be any signed integer.

The objective is to decompose the entire sound field recorded by the circular microphone array into cylindrical harmonics components. To this end, both the pressure and the normal particle velocity must be known. The radial velocity can be calculated starting from Newton's second law

$$-\nabla P = j\omega\rho \vec{V}.\tag{22}$$


In order to obtain the normal velocity in terms of cylindrical components for a circular configuration, the ∇ operator can be reduced to the partial derivative with respect to the radial component

$$\frac{\partial P}{\partial r} = j\omega\rho V\_n.\tag{23}$$

The application of Newton's second law to (20) and (21) results in

$$j\rho c \mathcal{V}\_{k\_\theta}^{(1)}(r,\theta,\omega) = H\_{k\_\theta}^{'(1)}(kr)e^{jk\_\theta\theta} = \frac{1}{2}\left\{H\_{k\_\theta-1}^{(1)}(kr) - H\_{k\_\theta+1}^{(1)}(kr)\right\}e^{jk\_\theta\theta} \tag{24}$$

$$j\rho c \mathcal{V}\_{k\_\theta}^{(2)}(r,\theta,\omega) = H\_{k\_\theta}^{'(2)}(kr)e^{jk\_\theta\theta} = \frac{1}{2}\left\{H\_{k\_\theta-1}^{(2)}(kr) - H\_{k\_\theta+1}^{(2)}(kr)\right\}e^{jk\_\theta\theta},\tag{25}$$

taking into account the definition of the derivative of the Hankel functions (Abramowitz & Stegun, 1970). Therefore, the sound field can be decomposed into cylindrical harmonics components

$$P(\theta,\omega) = \sum\_{k\_\theta} \left( M^{(1)}(k\_\theta,\omega) \mathcal{P}\_{k\_\theta}^{(1)}(R,\theta,\omega) + M^{(2)}(k\_\theta,\omega) \mathcal{P}\_{k\_\theta}^{(2)}(R,\theta,\omega) \right) \tag{26}$$

$$V\_n(\theta,\omega) = \sum\_{k\_\theta} \left( M^{(1)}(k\_\theta,\omega) \mathcal{V}\_{k\_\theta}^{(1)}(R,\theta,\omega) + M^{(2)}(k\_\theta,\omega) \mathcal{V}\_{k\_\theta}^{(2)}(R,\theta,\omega) \right), \tag{27}$$

where *M*(1) and *M*(2) are the incoming and outgoing expansion coefficients of the sound field in terms of cylindrical harmonics. To determine *M*(1) and *M*(2), a spatial Fourier transform $\mathcal{F}\_\theta$ is performed with respect to *θ*:

$$\tilde{P}(k\_{\theta},\omega) = \mathcal{F}\_{\theta}(P) = \frac{1}{2\pi} \int\_{0}^{2\pi} P(\theta,\omega) e^{-jk\_{\theta}\theta} d\theta \tag{28}$$

$$\tilde{V}\_n(k\_\theta,\omega) = \mathcal{F}\_\theta(V\_n) = \frac{1}{2\pi} \int\_0^{2\pi} V\_n(\theta,\omega) e^{-jk\_\theta\theta} d\theta \tag{29}$$

that are the Fourier coefficients of the series

$$P(\theta,\omega) = \sum\_{k\_\theta} \tilde{P}(k\_\theta,\omega)e^{jk\_\theta\theta} \tag{30}$$

$$V\_n(\theta,\omega) = \sum\_{k\_\theta} \tilde{V}\_n(k\_\theta,\omega)e^{jk\_\theta\theta}.\tag{31}$$
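For an array of *N* equiangular microphones, the integrals (28)-(29) reduce to a discrete Fourier transform over the angle samples. A minimal sketch (NumPy assumed; the mapping of negative orders $k\_\theta$ to FFT bins is the usual aliased one):

```python
import numpy as np

def spatial_fourier(p_theta):
    """Discrete version of Eq. (28): P~(k_theta) = (1/2pi) * int P(theta) e^{-j k_theta theta} dtheta.
    np.fft.fft already computes sum_n P(theta_n) e^{-2j pi k n / N}, so dividing
    by N gives the Fourier-series coefficient for each order k_theta."""
    return np.fft.fft(p_theta) / len(p_theta)

N = 32
theta = 2 * np.pi * np.arange(N) / N
p = np.exp(2j * theta)               # a single angular harmonic with k_theta = 2
coeffs = spatial_fourier(p)
print(np.round(abs(coeffs[2]), 6))   # -> 1.0 (all other coefficients vanish)
```

The same routine applies to the normal velocity samples to obtain (29); the series (30)-(31) are recovered with the inverse FFT.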

By substituting (30) in (26) and (31) in (27), the following relations hold for each $k\_\theta$

$$\tilde{P}(k\_\theta,\omega) = M^{(1)}(k\_\theta,\omega)H\_{k\_\theta}^{(1)}(kR) + M^{(2)}(k\_\theta,\omega)H\_{k\_\theta}^{(2)}(kR)\tag{32}$$

$$j\rho c \tilde{V}\_n(k\_\theta, \omega) = M^{(1)}(k\_\theta, \omega) H\_{k\_\theta}^{'(1)}(kR) + M^{(2)}(k\_\theta, \omega) H\_{k\_\theta}^{'(2)}(kR). \tag{33}$$

Therefore, it is possible to solve these equations for *M*(1) and *M*(2):

$$M^{(1)}(k\_{\theta},\omega) = \frac{H\_{k\_{\theta}}^{'(2)}(kR)\tilde{P}(k\_{\theta},\omega) - H\_{k\_{\theta}}^{(2)}(kR)j\rho c\tilde{V}\_{n}(k\_{\theta},\omega)}{H\_{k\_{\theta}}^{(1)}(kR)H\_{k\_{\theta}}^{'(2)}(kR) - H\_{k\_{\theta}}^{(2)}(kR)H\_{k\_{\theta}}^{'(1)}(kR)}\tag{34}$$

$$M^{(2)}(k\_{\theta},\omega) = \frac{H\_{k\_{\theta}}^{'(1)}(kR)\tilde{P}(k\_{\theta},\omega) - H\_{k\_{\theta}}^{(1)}(kR)j\rho c\tilde{V}\_{n}(k\_{\theta},\omega)}{H\_{k\_{\theta}}^{(2)}(kR)H\_{k\_{\theta}}^{'(1)}(kR) - H\_{k\_{\theta}}^{(1)}(kR)H\_{k\_{\theta}}^{'(2)}(kR)}. \tag{35}$$
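As an illustrative numerical sketch (not code from the chapter), the solution of (32)-(33) for the expansion coefficients, i.e. (34)-(35), can be written directly with SciPy's Hankel functions; the array layout and the physical constants below are assumptions:

```python
# Sketch: solving (32)-(33) for M1 (incoming) and M2 (outgoing) as in (34)-(35).
import numpy as np
from scipy.special import hankel1, hankel2, h1vp, h2vp

def expansion_coefficients(P_tilde, V_tilde, orders, k, R, rho=1.2, c=343.0):
    """Return M1, M2 for each angular order k_theta in `orders`."""
    H1, H2 = hankel1(orders, k * R), hankel2(orders, k * R)
    dH1, dH2 = h1vp(orders, k * R), h2vp(orders, k * R)   # first derivatives
    det = H1 * dH2 - H2 * dH1                              # denominator of (34)
    M1 = (dH2 * P_tilde - H2 * 1j * rho * c * V_tilde) / det
    M2 = (dH1 * P_tilde - H1 * 1j * rho * c * V_tilde) / (-det)  # denominator of (35)
    return M1, M2
```

A quick self-consistency check is to synthesize P̃ and Ṽ from known coefficients through (32)-(33) and verify that they are recovered exactly.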


Finally, the sound field can be reconstructed:

$$P(r,\theta,\omega) = \sum\_{k\_{\theta}} \left( M^{(1)}(k\_{\theta},\omega) \mathcal{P}\_{k\_{\theta}}^{(1)}(r,\theta,\omega) + M^{(2)}(k\_{\theta},\omega) \mathcal{P}\_{k\_{\theta}}^{(2)}(r,\theta,\omega) \right) \tag{36}$$

$$V\_{\mathbb{M}}(r,\theta,\omega) = \sum\_{k\_{\theta}} \left( M^{(1)}(k\_{\theta},\omega) \mathcal{V}\_{k\_{\theta}}^{(1)}(r,\theta,\omega) + M^{(2)}(k\_{\theta},\omega) \mathcal{V}\_{k\_{\theta}}^{(2)}(r,\theta,\omega) \right). \tag{37}$$

By using cylindrical harmonics decomposition it is possible to extrapolate also the sound field external to the microphones array. Therefore, the sound field extrapolation can be summarized in 4 steps:

1. Signals recording by a circular pressure and velocity microphones array, to obtain *P*(*θ*, *ω*) and *Vn*(*θ*, *ω*);
2. Fourier transform with respect to the angle *θ* of *P*(*θ*, *ω*) and *Vn*(*θ*, *ω*), to obtain *P*˜(*k<sup>θ</sup>*, *ω*) and *V*˜(*k<sup>θ</sup>*, *ω*);
3. Calculation of the expansion coefficients *M*(1)(*k<sup>θ</sup>*, *ω*) and *M*(2)(*k<sup>θ</sup>*, *ω*) in terms of cylindrical harmonics;
4. Sound field extrapolation at each point of the area through (36) and (37).
Moreover, it has been seen that the plane wave decomposition is a flexible format that can be used in order to recreate the sound field at any position (Hulsebos et al., 2002; Spors et al., 2005). The mathematical derivation of this decomposition (exploiting far field approximation of Hankel functions (18) and (19)) is given by

$$s^{(1),(2)}(\theta,\omega) = \frac{1}{2\pi} \sum\_{k\_\theta} j^{k\_\theta} M^{(1),(2)}(k\_{\theta},\omega) e^{jk\_\theta\theta} \tag{38}$$

$$P(r,\theta,\omega) = \int\_0^{2\pi} s^{(1),(2)}(\theta',\omega)e^{-jkr\cos(\theta-\theta')}d\theta',\tag{39}$$

where *s*(1) and *s*(2) are the incoming and outgoing part of the wave field, respectively. The decomposition into incoming and outgoing waves can be used in order to distinguish between sources inside and outside the recording area.
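The synthesis in (38) can be sketched numerically as a truncated harmonic sum; the angular sampling and truncation order below are illustrative assumptions, not values from the chapter:

```python
# Sketch of the plane-wave signature (38): s(theta) = (1/2pi) sum_k j^k M(k) e^{jk theta}.
import numpy as np

def plane_wave_signature(M, orders, theta):
    """Evaluate (38) at the angles `theta` for coefficients M over `orders`."""
    # outer product: rows -> theta samples, columns -> angular orders k_theta
    phases = np.exp(1j * np.outer(theta, orders))
    return (phases @ ((1j ** orders) * M)) / (2 * np.pi)
```

The vectorized form is equivalent to the naive double loop over angles and orders.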

#### **3. Wave domain adaptive filtering**

A practical implementation of digital signal processing algorithms (room equalization, noise reduction, echo cancellation) to WFA/WFS needs efficient solutions with the aim of decreasing the high computational cost due to the extremely high number of microphones/loudspeakers. To this end, Wave Domain Adaptive Filtering (WDAF) has been introduced (Buchner, Spors & Kellermann, 2004) as an extension in the spatial domain of Fast LMS (FLMS) (Haykin, 1996; Shynk, 1992). A more detailed explanation of this technique can also be found in (Buchner, 2008; Buchner & Spors, 2008; Spors, 2005).

It is known that FLMS optimal performance arises from the orthogonality property of the DFT basis functions. Therefore, in order to apply adaptive filtering to WFA/WFS systems, a proper set of orthogonal basis functions has been identified, allowing a combined spatio-temporal transform, denoted with T and T<sup>−1</sup>, directly based on sound field extrapolation. FLMS has been derived from time-domain block-LMS (Shynk, 1992). However, in the FLMS case, the adaptation process is performed in the temporal frequency domain. By operating in the transformed domain, the computational load is considerably reduced because the convolution operation becomes a simple multiplication. Overlap and Save (OLS) and Overlap and Add (OLA) are used for avoiding circular convolution problems (Oppenheim et al., 1999). The general scheme of the FLMS algorithm is shown in Fig. 4(a). Lowercase letters represent time domain signals while uppercase letters represent frequency domain signals. **u** is the input signal, **d** is the reference signal and **y** is the output signal. **W** is the adaptive filter, whose coefficients are updated in real time so that the output signal **y** matches **d** as closely as possible.

Fig. 4. Typical configuration of adaptive algorithms. (a) FLMS. (b) WDAF.
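The FLMS structure just described can be sketched as follows; this is a hedged illustration with assumed block length, step size and regularization, not the chapter's implementation:

```python
# Constrained FLMS sketch: overlap-save frequency-domain block LMS identifying
# an unknown filter from input u and reference d (cf. Fig. 4(a)).
import numpy as np

def flms(u, d, L, mu=0.5):
    """Adapt a length-L filter so that filtering u approximates d; return taps."""
    W = np.zeros(2 * L, dtype=complex)                  # frequency-domain weights
    for b in range(1, len(u) // L):
        X = np.fft.fft(u[(b - 1) * L:(b + 1) * L])      # 2L-sample OLS window
        y = np.real(np.fft.ifft(X * W))[L:]             # keep only the last L samples
        e = d[b * L:(b + 1) * L] - y                    # block error
        E = np.fft.fft(np.concatenate([np.zeros(L), e]))
        g = np.fft.ifft(np.conj(X) * E / (np.abs(X) ** 2 + 1.0))  # normalized gradient
        g[L:] = 0                                       # gradient constraint (FLMS)
        W = W + mu * np.fft.fft(g)
    return np.real(np.fft.ifft(W))[:L]                  # time-domain filter taps
```

The gradient constraint (zeroing the last L time-domain samples of the update) is what distinguishes constrained FLMS from the unconstrained FDAF variant.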

Therefore, WDAF derives directly from FLMS. Assuming that the sound field is recorded with a circular array composed of *M* pressure and *M* particle velocity microphones (section 2.2), the basic scheme of WDAF is shown in Fig. 4(b), where **u** represents the 2*M* input signals, **d** represents the 2*M* desired signals, **y** represents the 2*Q* outputs relative to the microphone positions (in all three cases both pressure and particle velocity signals are considered) and **W** represents the adaptive process, composed of 2*N* filters and based on the least mean square approach. In this case, the Fourier transform F*<sup>t</sup>* has to be replaced by a wave domain transform T. This transform derives directly from the sound field extrapolation introduced in section 2.2. Therefore, starting from the pressure *p*(*θ*, *t*) and particle velocity *vn*(*θ*, *t*) signals, the T-transform can be obtained by two Fourier transforms F*t*, F*<sup>θ</sup>* (in the temporal and spatial frequency domains) followed by the cylindrical harmonics decomposition M (Fig. 5(a)). The first Fourier transform is performed in the temporal frequency domain and is given by

$$P(\theta,\omega) = \mathcal{F}\_t(p) = \int\_{-\infty}^{\infty} p(\theta, t) e^{-j\omega t} dt\tag{40}$$

$$V\_{n}(\theta,\omega) = \mathcal{F}\_{t}(v\_{n}) = \int\_{-\infty}^{\infty} v\_{n}(\theta,t)e^{-j\omega t}dt\tag{41}$$

As regards the second Fourier transform and the cylindrical harmonics decomposition, they have already been introduced in (28), (29), (34) and (35).
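In discrete form, the first two stages of the T-transform are just two FFTs; the sketch below assumes equally spaced microphones on the circle and illustrative array sizes, and omits the final cylindrical-harmonics stage (34)-(35):

```python
# Sketch of the discrete spatio-temporal transform: F_t as in (40)-(41),
# then F_theta as in (28)-(29), with (1/2pi) * integral -> (1/M) * sum over mics.
import numpy as np

def spatio_temporal_transform(p):
    """p: shape (M_mics, N_samples), mics equally spaced on the circle.

    Returns P_tilde[k_theta, omega]: angular order (FFT bin ordering) on axis 0,
    temporal frequency on axis 1.
    """
    P = np.fft.fft(p, axis=1)            # F_t:     t     -> omega
    M = p.shape[0]
    P_tilde = np.fft.fft(P, axis=0) / M  # F_theta: theta -> k_theta
    return P_tilde
```

As a sanity check, a single spatial harmonic modulated by a single tone should concentrate its energy in the matching (k_theta, omega) bins.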

Fig. 5. Schemes of wave domain transforms. (a) Direct transform T. (b) Inverse transform T<sup>−1</sup>.

Fig. 6. Sound field of an ideal monopole with reproducing frequency 500 Hz at [0, −2].

As for the inverse transform (T<sup>−1</sup>), we can obtain the temporal frequency domain acoustic field at any point by applying M<sup>−1</sup> (Fig. 5(b)), given by (36) and (37). Finally, an inverse Fourier transform F*<sup>t</sup>*<sup>−1</sup> is needed to come back to the time domain

$$p(\theta, t) = \mathcal{F}\_t^{-1}(P) = \frac{1}{2\pi} \int\_{-\infty}^{\infty} P(\theta, \omega) e^{j\omega t} d\omega \tag{42}$$

$$v\_{n}(\theta, t) = \mathcal{F}\_{t}^{-1}(V\_{n}) = \frac{1}{2\pi} \int\_{-\infty}^{\infty} V\_{n}(\theta, \omega) e^{j\omega t} d\omega. \tag{43}$$

As stated, the number of loudspeakers and their positions in the area are totally independent of the number and positions of the microphones, and the generalization is extremely easy: it suffices to apply a T<sup>−1</sup> transform at the loudspeakers' locations.

In (Peretti et al., 2008) a frame-by-frame WDAF implementation for streaming scenarios has been proposed. Furthermore, the computational cost of the WDAF implementation can be reduced by taking advantage of filter banks theory. Moreover, the signal decomposition into subbands allows these adaptive algorithms to achieve a faster convergence rate and a lower mean-square error.

#### **4. Numerical simulations**

Adaptive algorithms in WFA/WFS strictly depend on the quality of the sound field that can be obtained through the aforementioned techniques. In fact, the transformations T and T<sup>−1</sup>, on which WDAF is based, have been directly derived from the sound field extrapolation steps. Subsequently, the adaptive processing is the same as in the mono-channel case; hence its performance depends on the numerical implementations of T and T<sup>−1</sup>, and it is necessary to understand how the parameters introduced in the previous sections weigh on the sound field extrapolation.

The reference is an ideal monopole source pulsing at a certain frequency *f* = *ω*/(2*π*). Fig. 6 shows the sound field behaviour of an ideal 500 Hz monopole obtained with PC simulations. It is derived by the evaluation of the sound pressure through

$$p(\omega, \vec{r}) = A \frac{e^{-jk|\vec{r} - \vec{r}\_s|}}{|\vec{r} - \vec{r}\_s|} \tag{44}$$

at the desired position *r* (virtual microphone) in the environment; *r<sub>s</sub>* is the monopole position and *A* is a fixed gain. Therefore, a matrix of virtual microphones is considered in order to obtain the monopole behaviour.
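The virtual-microphone grid evaluation of (44) can be sketched as follows; the grid extent, resolution, gain and frequency are illustrative assumptions:

```python
# Sketch: complex monopole pressure (44) sampled on an n-by-n grid of
# virtual microphones.
import numpy as np

def monopole_field(f, r_s, extent=2.0, n=101, A=1.0, c=343.0):
    """Return the complex pressure of a monopole at r_s = (x, y) on a grid."""
    k = 2 * np.pi * f / c
    x = np.linspace(-extent, extent, n)
    X, Y = np.meshgrid(x, x)                 # X varies along columns, Y along rows
    dist = np.hypot(X - r_s[0], Y - r_s[1])
    dist = np.maximum(dist, 1e-6)            # guard against the source singularity
    return A * np.exp(-1j * k * dist) / dist
```

The magnitude of the returned field decays as 1/distance from the source, as (44) prescribes.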

#### **4.1 Sound field reproduction**


By considering (12) and (13) it is possible to obtain the sound field of Fig. 6 through the knowledge of the sound pressure and the normal particle velocity on the perimeter of a circle of radius *R*. The sound pressure can be easily evaluated through (44) at the positions *r* = [*R*, *θ*] for each *θ*, while the particle velocity normal to the circumference can be obtained by the difference quotient between the sound pressure levels at *r<sub>R−h</sub>* = [*R* − *h*, *θ*] and *r<sub>R+h</sub>* = [*R* + *h*, *θ*] (circular coordinates)

$$V\_n(\omega, \theta) = \frac{p\_{R-h}(\omega, \theta) - p\_{R+h}(\omega, \theta)}{2jh\omega\rho}.\tag{45}$$
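The difference quotient (45) can be checked against the analytic radial particle velocity obtained from Euler's equation; this comparison is an added sanity check, not part of the chapter, and the geometry (source on the positive axis outside the array) and constants are assumptions:

```python
# Sketch: finite-difference normal velocity (45) vs. the analytic radial
# velocity of a monopole, v_n = -(1/(j w rho)) * dp/dR (outward normal).
import numpy as np

f, c, rho, A = 500.0, 343.0, 1.2, 1.0
w = 2 * np.pi * f
k = w / c
R, h = 1.0, 1e-4
r_src = 3.0                                   # source distance from the origin

def p(r):                                     # 1-D cut of the monopole field (44)
    d = abs(r_src - r)
    return A * np.exp(-1j * k * d) / d

# eq. (45): difference quotient between pressures at R-h and R+h
V_fd = (p(R - h) - p(R + h)) / (2j * h * w * rho)

# analytic: dp/dR for p(R) = A exp(-jk(r_src - R)) / (r_src - R)
d = r_src - R
dp_dr = A * np.exp(-1j * k * d) * (1j * k * d + 1) / d ** 2
V_an = -dp_dr / (1j * w * rho)
```

With a small step *h*, the central difference in (45) matches the analytic value to high accuracy.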

The results of the Kirchhoff-Helmholtz integral application are shown in Fig. 7(a) and Fig. 7(b). A circle of radius 1 m with 48 sound pressure sensors and 48 sound velocity sensors has been taken into account, and forward and inverse extrapolation have been performed. Inside the circle the sound field can be accurately reconstructed. Good performance can also be obtained considering a sound source inside the sensors array (Fig. 7(c) and Fig. 7(d)). As stated in section 2.1, it is possible to obtain correct sound fields only inside the circular array. In order to obtain suitable signals for loudspeakers, dipole sources have to be eliminated. Recently, an analytic secondary source selection criterion has been introduced, where the selection depends on the sound source position in the area of interest (Spors, 2007): if the local propagation direction of the signal to be reproduced coincides with the loudspeaker axis direction *n*ˆ, then the loudspeaker is turned on (Fig. 8). Fig. 9 shows results of simulations considering only monopoles as secondary sources, for sound sources both outside and inside the array. As happens in time sampling, there is a minimal distance Δ*d* between two adjacent loudspeakers that prevents aliasing (in this case, spatial aliasing) (Spors & Rabenstein, 2006). Therefore, the maximum frequency reproducible without artefacts is *c*/Δ*d*. Fig. 10 shows sound fields generated by a circular array that does not respect the spatial aliasing condition.

Fig. 7. Sound fields obtained through forward and inverse Kirchhoff-Helmholtz integrals by taking into account a sensors array of 48 elements and radius 1 m. (a) and (b) Forward and inverse sound field referred to a 500 Hz-sound source positioned outside the array [0, −2]. (c) and (d) Forward and inverse sound field referred to a 500 Hz-sound source positioned inside the array [0.5, 0.5].

#### **4.2 Wave Domain Transformations**

Wave Domain Transformations T and T<sup>−1</sup> are based on sound field extrapolation, i.e., on cylindrical harmonics or plane wave components. Sound field reconstruction depends on the maximum order *k<sup>θ</sup>* used in (36). Fig. 11 shows how the sound field of a 500 Hz monopole can be extrapolated depending on *k<sup>θ</sup>*: the higher the order, the larger the area, but there is a limit for *k<sup>θ</sup>* beyond which the area does not increase anymore. Furthermore, the sound field near the origin of the circle cannot be extrapolated because of the considerable floating point error arising from multiplications between numbers of different orders of magnitude. It should be noted that, *k<sup>θ</sup>* being equal, the width of the area is inversely proportional to frequency (Fig. 12).
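The trade-off between truncation order, usable area and frequency can be captured by a common Fourier-acoustics rule of thumb, |*k<sup>θ</sup>*| ≲ *kr*. This rule is an assumption added here for illustration, not a formula stated in the chapter:

```python
# Hedged rule-of-thumb sketch (an assumption, not the chapter's formula):
# orders up to |k_theta| ~ k*r are needed to cover radius r at frequency f.
import numpy as np

def required_order(f, r, c=343.0):
    """Smallest truncation order covering radius r at frequency f."""
    return int(np.ceil(2 * np.pi * f * r / c))

def usable_radius(f, k_theta_max, c=343.0):
    """Approximate radius covered by orders up to k_theta_max at frequency f."""
    return k_theta_max * c / (2 * np.pi * f)
```

Under this rule the usable radius grows linearly with the maximum order and shrinks as frequency increases, consistent with the behaviour reported for Fig. 11 and Fig. 12.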

By considering the plane waves decomposition, the problem of floating point precision can be avoided and the sound field can be extrapolated also near the origin. The width of the area to be extrapolated depends on the time frequency *ω* and the spatial frequency *k<sup>θ</sup>*, as in the cylindrical harmonics decomposition (Fig. 13).

#### **4.3 Application example: Adaptive source cancellation**

An application example of the WDAF algorithm can be found in (Peretti et al., 2007), with a novel approach to adaptive source cancellation. In a realistic scenario, undesired noise sources can be present in the recording room. Therefore, it would be desirable to reduce the negative effect they have on the listener's acoustic experience. The algorithm for WFA/WFS systems can be obtained by using the spatio-temporal transforms T and T<sup>−1</sup> presented in the previous section. Let us first consider an adaptive noise cancellation scheme used in traditional systems, in which the noise signal is denoted with **n**, the global signal (superposition of noise and desired fields) is denoted with **u**, and the noise-free signal is **e** (Fig. 14(a)). The adaptation is carried out in the frequency domain through the FLMS algorithm. Therefore, it can be applied to WFS/WFA systems through the introduction of WDAF (by employing transforms T and T<sup>−1</sup>) as shown in Fig. 14(b).

Fig. 8. Application example of the analytic secondary source selection criterion. Filled loudspeakers represent active secondary sources.

Fig. 9. Sound fields obtained through forward Kirchhoff-Helmholtz integrals by taking into account a sensors array of radius 1 m and 48 monopole sources activated following the secondary source criterion, referred to a 500 Hz sound source. (a) Source positioned outside the array. (b) Source positioned inside the array.

Some results of this WDAF application are reported in Fig. 15. Fig. 15(a) shows the desired sound field, produced by a 700 Hz monopole coming from the top of the figure, and the noise field, generated by a 400 Hz monopole coming from the bottom-right (denoted by **u** in the figures). In order to analyze the performance of the algorithm (Fig. 14(b)), the sound field related to the expansion coefficients of cylindrical harmonics **e** has been reconstructed. Sound field extrapolation has been carried out using the plane waves decomposition (with 32 microphones and 32 loudspeakers arranged in a circular array of radius 1 m) as previously reported. Fig. 15(b) shows the resulting sound field relative to **e** after the steady-state condition is reached.
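The wave-domain cancellation idea can be illustrated with a toy model: after the T-transform, each coefficient channel is treated as an independent complex signal, and a one-tap complex NLMS per channel removes the component coherent with a noise reference. The channel count, signals and coupling below are illustrative assumptions, not the chapter's simulation:

```python
# Toy sketch of adaptive source cancellation in the wave domain: per-channel
# one-tap complex NLMS subtracting the noise reference from the recorded
# coefficients, leaving the desired-field component as the error signal e.
import numpy as np

rng = np.random.default_rng(3)
n_ch, n_steps, mu = 8, 2000, 0.5
# complex noise reference and (smaller) desired-field component per channel
n_ref = rng.normal(size=(n_steps, n_ch)) + 1j * rng.normal(size=(n_steps, n_ch))
g = rng.normal(size=n_ch) + 1j * rng.normal(size=n_ch)     # unknown noise coupling
desired = 0.1 * (rng.normal(size=(n_steps, n_ch))
                 + 1j * rng.normal(size=(n_steps, n_ch)))
u = desired + n_ref * g                                    # recorded coefficients

w = np.zeros(n_ch, dtype=complex)                          # adaptive gains
for t in range(n_steps):
    e = u[t] - w * n_ref[t]                                # noise-free estimate
    w += mu * np.conj(n_ref[t]) * e / (np.abs(n_ref[t]) ** 2 + 1e-9)
```

At convergence the adaptive gains approach the unknown coupling, so the residual **e** is dominated by the desired field, mirroring the behaviour shown in Fig. 15.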

(a) (b)

<sup>557</sup> Performance Evaluation of Adaptive Algorithms

(a) (b)

Fig. 13. Sound fields obtained through plane waves decomposition by taking into account a sensors array of 48 elements and radius 1 m. A 500 Hz-monopole generated at [0, −2] has been considered. Simulations have been performed taking into account different order for *k<sup>θ</sup>* .

Fig. 12. Sound fields obtained through cylindrical harmonics derivation by taking into account a sensors array of 48 elements and radius 1 m. A 1000 Hz-monopole generated at [0, −2] has been considered. Simulations have been performed taking into account different

order for *k<sup>θ</sup>* . (a) *k<sup>θ</sup>* = −15 : +15. (b) *k<sup>θ</sup>* = −20 : +20.

for Wave Field Analysis/Synthesis Using Sound Field Simulations

(a) *k<sup>θ</sup>* = −15 : +15. (b) *k<sup>θ</sup>* = −20 : +20.

Fig. 10. Sound fields obtained through Kirchhoff-Helmholtz integrals by taking into account a sensors array of 8 elements and radius 1 m. They are referred to 500 Hz-sound source generated outside the array [0, −2]. (a) Forward integral. (b) Inverse integral.

Fig. 11. Sound fields obtained through cylindrical harmonics derivation by taking into account a sensors array of 48 elements and radius 1 m. A 500 Hz-monopole generated at [0, −2] has been considered. Simulations have been performed taking into account different order for *k<sup>θ</sup>* . (a) *k<sup>θ</sup>* = −9 : +9. (b) *k<sup>θ</sup>* = −12 : +12. (c) *k<sup>θ</sup>* = −15 : +15. in (d) *k<sup>θ</sup>* = −20 : +20.

14 Will-be-set-by-IN-TECH

(a) (b)

Fig. 10. Sound fields obtained through Kirchhoff-Helmholtz integrals by taking into account a sensors array of 8 elements and radius 1 m. They are referred to 500 Hz-sound source

(a) (b)

(c) (d)

Fig. 11. Sound fields obtained through cylindrical harmonics derivation by taking into account a sensors array of 48 elements and radius 1 m. A 500 Hz-monopole generated at [0, −2] has been considered. Simulations have been performed taking into account different order for *k<sup>θ</sup>* . (a) *k<sup>θ</sup>* = −9 : +9. (b) *k<sup>θ</sup>* = −12 : +12. (c) *k<sup>θ</sup>* = −15 : +15. in (d) *k<sup>θ</sup>* = −20 : +20.

generated outside the array [0, −2]. (a) Forward integral. (b) Inverse integral.

Fig. 12. Sound fields obtained through cylindrical harmonics derivation by taking into account a sensors array of 48 elements and radius 1 m. A 1000 Hz-monopole generated at [0, −2] has been considered. Simulations have been performed taking into account different order for *k<sup>θ</sup>* . (a) *k<sup>θ</sup>* = −15 : +15. (b) *k<sup>θ</sup>* = −20 : +20.

Fig. 13. Sound fields obtained through the plane wave decomposition, considering a sensor array of 48 elements and radius 1 m. A 500 Hz monopole generated at [0, −2] has been considered. Simulations have been performed for different orders of *k<sup>θ</sup>*: (a) *k<sup>θ</sup>* = −15 : +15; (b) *k<sup>θ</sup>* = −20 : +20.
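The plane-wave route of Fig. 13 can be sketched in the same framework (again an illustrative Python sketch under our assumed conventions, not the chapter's implementation): the cylindrical-harmonic coefficients measured on the array are mapped to a plane-wave signature via the Jacobi-Anger factor j<sup>m</sup>, and the field is resynthesised as a superposition of plane waves, which behaves smoothly near the origin of the array.

```python
import numpy as np
from scipy.special import hankel2, jv

C, F = 343.0, 500.0
K = 2 * np.pi * F / C

def monopole(x, y, xs=0.0, ys=-2.0):
    """2-D monopole field, -j/4 * H0^(2)(kr), e^{jwt} convention."""
    return -1j / 4 * hankel2(0, K * np.hypot(x - xs, y - ys))

def extrapolate_pwd(x, y, m_max, n_sens=48, radius=1.0, n_pw=360):
    """Extrapolate the interior field at (x, y): map the cylindrical-harmonic
    coefficients measured on the circular array to a plane-wave signature,
    then resynthesise the field as a superposition of n_pw plane waves."""
    ang = 2 * np.pi * np.arange(n_sens) / n_sens
    p = monopole(radius * np.cos(ang), radius * np.sin(ang))
    theta = 2 * np.pi * np.arange(n_pw) / n_pw      # plane-wave directions
    pw = np.zeros(n_pw, dtype=complex)              # plane-wave signature
    for m in range(-m_max, m_max + 1):
        a = np.sum(p * np.exp(-1j * m * ang)) / n_sens / jv(m, K * radius)
        pw += (1j) ** m * a * np.exp(1j * m * theta)
    # plane wave travelling in direction theta: exp(-j k . x)
    waves = np.exp(-1j * K * (x * np.cos(theta) + y * np.sin(theta)))
    return np.sum(pw * waves) / n_pw
```

The discretised direction integral converges quickly because the plane-wave signature is band-limited to the chosen maximum order.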

**5. Conclusions**

Wave Field Analysis and Wave Field Synthesis are two techniques that permit direct sound field recording and reproduction using microphone and loudspeaker arrays. In order to use these techniques in real-world applications (e.g., cinema, home theatre, teleconferencing), it is necessary to apply multi-channel digital signal processing algorithms already developed for traditional systems. Since a straightforward implementation entails an extremely high computational complexity, Wave Domain Adaptive Filtering (WDAF) has been introduced as an extension of the well-known Fast LMS algorithm. WDAF is based on the cylindrical harmonics decomposition and, being derived from the transforms related to WFA/WFS, it inherits their strengths: an extended optimal listening area and a better localization of sources in the environment. In this chapter, the numerical implementation of the WDAF transformations has been described and the results of simulations have been presented. In particular, for circular arrays, the area over which the sound field is correctly extrapolated becomes larger as the maximum order of the cylindrical harmonics increases and as the reproduced frequency decreases. When the cylindrical harmonics decomposition is used alone, numerical problems can arise near the origin of the circular array; the plane wave decomposition, based on far-field approximations, can be used to overcome this problem. Finally, numerical simulations of the application of WDAF to adaptive noise cancellation show the effectiveness of the algorithm.

**6. References**

Abramowitz, M. & Stegun, I. (1970). *Handbook of Mathematical Functions*, Dover Publications Inc., United States.

Berkhout, A. J. (1988). A holographic approach to acoustic control, *J. Audio Eng. Soc.* 36(12): 977–995.

Berkhout, A. J., de Vries, D. & Sonke, J. J. (1997). Array Technology for Acoustic Wave Field Analysis in Enclosures, *J. Acoust. Soc. Amer.* 102(5): 2757–2770.

Berkhout, A. J., De Vries, D. & Vogel, P. (1993). Acoustic control by wave field synthesis, *J. Acoust. Soc. Amer.* 93(5): 2764–2778.

Blackstock, D. T. (2000). *Fundamentals of Physical Acoustics*, Wiley-Interscience, United States.

Buchner, H. (2008). Acoustic Echo Cancellation for Multiple Reproduction Channels: from first principles to Real-Time Solutions, *Proc. ITG Conference on Speech Communication*, Aachen, Germany.

Buchner, H. & Spors, S. (2008). A general derivation of Wave-Domain Adaptive Filtering and application to Acoustic Echo Cancellation, *Proc. IEEE 42nd Asilomar Conference on Signals, Systems, and Computers*, Pacific Grove, CA, USA, pp. 816–823.

Buchner, H., Spors, S. & Kellermann, W. (2004). Wave-domain adaptive filtering: Acoustic echo cancellation for full-duplex systems based on wave-field synthesis, *Proc. IEEE International Conference on Acoustics, Speech and Signal Processing*, Vol. 4, Montreal, Canada, pp. 117–120.

Buchner, H., Spors, S. & Rabenstein, R. (2004). Efficient active listening room compensation for wave field synthesis, *Proc. 116th AES Conv.*, Berlin, Germany.

Daniel, J., Nicol, R. & Moreau, S. (2003). Further investigations of high order ambisonics and wavefield synthesis for holophonic sound imaging, *Proc. of 114th AES Conv.*, Amsterdam, The Netherlands.

Fig. 14. Adaptive noise cancellation schemes. (a) Traditional mono-channel approach. (b) Wave-domain extension for WFA/WFS systems; when the loudspeaker positions differ from the microphone positions, a further extrapolation is needed for the signal **u**.

Fig. 15. Adaptive noise cancellation with the WDAF approach. (a) Corrupted input sound field. (b) Noise-free output sound field.
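The traditional mono-channel scheme of Fig. 14(a) can be sketched as follows (an illustrative NLMS noise canceller in Python; the filter length, step size and noise path are arbitrary assumptions, not the chapter's configuration). The wave-domain extension of Fig. 14(b) applies the same kind of adaptation to each transformed component of the sound field.

```python
import numpy as np

def nlms_anc(d, x, n_taps=32, mu=0.1, eps=1e-8):
    """Mono-channel adaptive noise canceller: d(n) = signal + filtered noise,
    x(n) = noise reference; an NLMS filter estimates the noise path so that
    the error signal e(n) = d(n) - y(n) converges to the clean signal."""
    w = np.zeros(n_taps)
    e = np.zeros(len(d))
    for n in range(n_taps, len(d)):
        u = x[n - n_taps + 1:n + 1][::-1]        # most recent reference taps
        y = w @ u                                 # noise estimate
        e[n] = d[n] - y
        w += mu * e[n] * u / (u @ u + eps)        # NLMS update
    return e

rng = np.random.default_rng(0)
N = 4000
s = np.sin(2 * np.pi * 0.05 * np.arange(N))       # desired signal (tone)
v = rng.standard_normal(N)                        # noise reference
h = np.array([0.8, -0.4, 0.2])                    # hypothetical noise path
d = s + np.convolve(v, h)[:N]                     # corrupted observation
e = nlms_anc(d, v)                                # e approaches s over time
```

After convergence, the error signal retains the desired tone while the correlated noise is cancelled, which is the behaviour extended per harmonic component in the WDAF scheme of Fig. 15.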
