**Array Processing: Underwater Acoustic Source Localization**

Salah Bourennane, Caroline Fossati and Julien Marot *Institut Fresnel, Ecole Centrale Marseille France*

#### **1. Introduction**


Array processing is used in diverse areas such as radar, sonar, communications and seismic exploration. Usually the parameters of interest are the directions of arrival of the radiating sources. High-Resolution subspace-based methods for direction-of-arrival (DOA) estimation have been a topic of great interest. The subspace-based methods developed so far rely on a fundamental assumption: the background noise is uncorrelated from sensor to sensor, or known to within a multiplicative scalar. In practice this assumption is rarely fulfilled, and the noise received by the array may be a combination of multiple noise sources such as flow noise, traffic noise, or ambient noise, which is often correlated along the array (Reilly & Wong, 1992; Wu & Wong, 1994). Conventionally, the spatial noise is estimated by measuring the spectrum of the received data when no signal is present, and the data used for parameter estimation are then pre-whitened using the measured noise. The problem with this method is that in many applications the actual noise covariance matrix varies as a function of time. At low signal-to-noise ratio (SNR), the deviations from the assumed noise characteristics are critical, and the degradation of the localization result may be severe. In this chapter, we present an algorithm to estimate noise with a band covariance matrix. This algorithm is based on the noise subspace spanned by the eigenvectors associated with the smallest eigenvalues of the covariance matrix of the recorded data. The goal of this study is to investigate how perturbations in the assumed noise covariance matrix affect the accuracy of the narrow-band signal DOA estimates (Stoica et al., 1994). A maximum likelihood algorithm is presented in (Wax, 1991), where the spatial noise covariance is modeled as a function of certain unknown parameters. A maximum likelihood estimator is also analyzed in (Ye & DeGroat, 1995).
The problem of incomplete pre-whitening due to colored noise is circumvented by modeling the noise with a simple descriptive model. There are other approaches to the problem of spatially correlated noise: one is based on the assumption that the correlation structure of the noise field is invariant under a rotation or a translation of the array, while another is based on a certain linear transformation of the sensor output vectors (Zhang & Ye, 2008; Tayem et al., 2006). These methods do not require the estimation of the noise correlation function, but they may be quite sensitive to deviations from the invariance assumption, and they are not applicable when the signals also satisfy the invariance assumption.

#### **2. Problem formulation**

Consider an array of *N* sensors which receive the signals in one wave field generated by *P* (*P* < *N*) sources in the presence of an additive noise. The received signal vector is sampled
and the DFT algorithm is used to transform the data into the frequency domain. We represent these samples by:

$$\mathbf{r}(f) = \mathbf{A}(f)\mathbf{s}(f) + \mathbf{n}(f) \tag{1}$$

where **r**(*f*), **s**(*f*) and **n**(*f*) are respectively the Fourier transforms of the array outputs, the source signals and the noise vectors. The **A**(*f*), matrix of dimensions (*N* × *P*) is the transfer matrix of the source-sensor array systems with respect to some chosen reference point. The sensor noise is assumed to be independent of the source signals and partially spatially correlated. The sources are assumed to be uncorrelated. The covariance matrix of the data can be defined by the (*N* × *N*)-dimensional matrix.

$$\mathbf{\Gamma}(f) = E[\mathbf{r}(f)\mathbf{r}^+(f)] \tag{2}$$

$$\mathbf{\Gamma}(f) = \mathbf{A}(f)\mathbf{\Gamma}\_s(f)\mathbf{A}^+(f) + \mathbf{\Gamma}\_n(f) \tag{3}$$

where *E*[.] denotes the expectation operator, superscript + represents conjugate transpose, **Γ***n*(*f*) = *E*[**n**(*f*)**n**<sup>+</sup>(*f*)] is the (*N* × *N*) noise covariance matrix, and **Γ***s*(*f*) = *E*[**s**(*f*)**s**<sup>+</sup>(*f*)] is the (*P* × *P*) signal covariance matrix. The above assumption that the sources are uncorrelated means that **Γ***s*(*f*) is full rank.

The High-Resolution algorithms of array processing assume that the matrix **Γ***n*(*f*) is diagonal; the subspace-based techniques rely on this property. For example, the MUSIC (Multiple Signal Classification) (Cadzow, 1998) null-spectrum *Pmusic*(*θ*) is defined by:

$$P\_{music}(\theta) = \frac{1}{|\mathbf{a}^+(\theta)\hat{\mathbf{V}}\_N(f)\hat{\mathbf{V}}\_N^+(f)\mathbf{a}(\theta)|}\tag{4}$$

and it is expected that *Pmusic*(*θ*) has maxima around *θ* ∈ {*θ*1, ..., *θP*}, where *θ*1, ..., *θP* are the directions of arrival of the sources and **V**ˆ*N*(*f*) = [**v***P*+1(*f*), ..., **v***N*(*f*)] spans the noise subspace. Therefore, we can estimate the DOAs by taking the local maxima of *Pmusic*(*θ*).
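As an illustration, the sketch below evaluates the null-spectrum of Eq. (4) on simulated snapshots. The array geometry (half-wavelength uniform linear array), source angles, noise level and snapshot count are illustrative assumptions, not values from this chapter.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, T = 10, 2, 1000                    # sensors, sources, snapshots
true_doas_deg = np.array([5.0, 20.0])    # assumed source directions

def steering(theta_rad, n_sensors=N, d_over_lambda=0.5):
    """Steering vector a(theta) for a half-wavelength ULA."""
    n = np.arange(n_sensors)
    return np.exp(-2j * np.pi * d_over_lambda * n * np.sin(theta_rad))

# Simulate r(f) = A(f) s(f) + n(f) as in Eq. (1), with spatially white noise
A = np.column_stack([steering(t) for t in np.deg2rad(true_doas_deg)])
S = (rng.standard_normal((P, T)) + 1j * rng.standard_normal((P, T))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T)))
R = A @ S + noise

Gamma = R @ R.conj().T / T               # sample covariance, Eq. (2)
_, V = np.linalg.eigh(Gamma)             # eigenvalues in ascending order
Vn = V[:, :N - P]                        # noise subspace: N-P smallest eigenpairs

# Null-spectrum of Eq. (4) over an angular grid
grid = np.deg2rad(np.arange(-90.0, 90.0, 0.25))
P_music = np.array([1.0 / np.abs(steering(t).conj() @ Vn @ Vn.conj().T @ steering(t))
                    for t in grid])

# Estimated DOAs: the P largest local maxima of the null-spectrum
interior = np.arange(1, grid.size - 1)
peaks = interior[(P_music[interior] > P_music[interior - 1]) &
                 (P_music[interior] > P_music[interior + 1])]
doa_est = np.sort(np.rad2deg(grid[peaks[np.argsort(P_music[peaks])[-P:]]]))
```

With well-separated sources and high SNR, the two largest peaks of the spectrum fall on the simulated directions to within the grid resolution.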

In this chapter, we consider that the matrix **Γ***n*(*f*) is not diagonal, because the noise realizations are spatially correlated; the performance of these methods is then considerably degraded.

#### **3. Modeling the noise field**

A fundamental limitation of the standard parametric array processing algorithms is that the covariance matrix of the background noise cannot, in general, be estimated along with the signal parameters. This leads to an unidentifiable parametrization: the measured data could always be regarded as consisting of noise alone, with a covariance matrix equal to that of the observed sample. This is a reason for imposing a model on the background noise. Several parametric noise models have been proposed in the literature. Here, as in (Zhang & Ye, 2008), a descriptive model is used: the spatial noise covariance matrix is assumed to be a linear combination of some unknown parameters, weighted by known basis matrices. There are two different noise phenomena to be described. We can model the noise as:




$$
\Gamma\_n^B(f) = \sum\_{k=1}^{K} \alpha\_k \beta\_k
$$

where *α<sup>k</sup>* are unknown parameters and *β<sup>k</sup>* are complex weighting matrices, chosen such that **Γ***<sup>B</sup> <sup>n</sup>* (*f*) is positive definite and of band structure.

Consequently, the additive noise is the sum of these two noise terms and the spatial covariance matrix is

$$
\Gamma\_n(f) = \Gamma\_n^S(f) + \Gamma\_n^B(f) \tag{5}
$$
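As a sketch of this descriptive model, the snippet below forms the band component as a weighted sum of basis matrices, using single-off-diagonal basis matrices *β<sup>k</sup>*. This particular basis is an assumption made for illustration; the chapter only requires that the combination be a positive-definite band matrix, which additionally depends on the chosen weights.

```python
import numpy as np

def band_noise_model(alpha, n_sensors):
    """Gamma_B = sum_k alpha_k beta_k with single-off-diagonal bases beta_k.

    alpha[0] weights the main diagonal (real-valued noise variance);
    alpha[k], k >= 1, weights the k-th off-diagonal pair (Hermitian closure).
    """
    gamma = np.real(alpha[0]) * np.eye(n_sensors, dtype=complex)
    for k in range(1, len(alpha)):
        beta_k = np.eye(n_sensors, k=k, dtype=complex)   # ones on k-th off-diagonal
        gamma += alpha[k] * beta_k + np.conj(alpha[k]) * beta_k.T
    return gamma
```

With *K* = `len(alpha)`, all entries with |*i* − *m*| ≥ *K* are zero, matching the band structure described in Section 4.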

#### **4. Modeling the covariance matrix of the band noise**

In many applications where a uniform linear array antenna system is used, it is reasonable to assume that the noise correlation decreases along the array (see Fig. 1). This is a widely used model for colored noise. We can then obtain a specific model for noise correlation under the following assumptions:

- the correlation length is *K*, which means that the spatial correlation extends up to the *K*-th sensor;
- the noise realizations received by sensors separated by a distance of at least *Kd*, where *d* is the distance between sensors, are considered uncorrelated.


The resulting noise correlation model is represented in Fig. 1.

Fig. 1. Noise correlation along a uniform linear array with *N* sensors; *ρ* is the noise spatial correlation coefficient.

In this chapter, the noise covariance matrix is modeled as a Hermitian, positive-definite band matrix Γ*n*(*f*), with half-bandwidth *K*. The (*i*, *m*)-th element of Γ*n*(*f*) is *ρmi*, with:

$$\rho\_{mi} = 0 \qquad \text{for} \quad |i - m| \ge K, \quad i, m = 1, \dots, N$$


and

$$
\Gamma\_n(f) = \begin{pmatrix}
\sigma\_1^2(f) & \rho\_{12}(f) & \cdots & \rho\_{1K}(f) & \cdots & 0 \\
\rho\_{12}^\*(f) & \sigma\_2^2(f) & \ddots & & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & & \rho\_{(N-K+1)N}(f) \\
\rho\_{1K}^\*(f) & & \ddots & \ddots & \ddots & \vdots \\
\vdots & \ddots & & \ddots & \ddots & \rho\_{(N-1)N}(f) \\
0 & \cdots & \rho\_{(N-K+1)N}^\*(f) & \cdots & \rho\_{(N-1)N}^\*(f) & \sigma\_N^2(f)
\end{pmatrix}
$$

where *ρmi* = *ρ*¯*mi* + *jρ*˜*mi*, *i*, *m* = 1, ..., *N*, are complex variables, *j*<sup>2</sup> = −1, *σ*<sup>2</sup><sub>*i*</sub> is the noise variance at the *i*-th sensor, and ∗ denotes complex conjugation.

In the following section, an algorithm to estimate the band noise covariance matrix is developed for narrow-band signals.

#### **5. Estimation of the noise covariance matrix**

#### **5.1 Proposed algorithm**

Several methods have been proposed for estimating the directions of arrival of multiple sources in unknown noise fields. In some of them, the noise spectral matrix is first measured when the signals of interest are not present. Other techniques (Abeidaa & Delmas, 2007), based on the maximum likelihood algorithm, incorporate a noise model to reduce the bias when estimating both the noise covariance matrix and the directions of arrival of the sources.

Our approach proceeds in two steps: the noise covariance matrix is estimated with an iterative algorithm, and this estimate is then subtracted from the covariance matrix of the received signals.

The proposed algorithm for estimating the noise covariance matrix can be summarized as follows:

**Step 1 :** Estimate the covariance matrix Γ(*f*) of the received signals by averaging *T* time measures, the result being noted Γˆ(*f*):

$$
\hat{\Gamma}(f) = \frac{1}{T} \sum\_{t=1}^{T} \mathbf{r}\_t(f) \mathbf{r}\_t^+(f)
$$

The eigendecomposition of this matrix is given by:

Γˆ(*f*) = **V**(*f*)Λ(*f*)**V**<sup>+</sup>(*f*) with Λ(*f*) = *diag*[*λ*1(*f*), ..., *λN*(*f*)] and **V**(*f*) = [**v**1(*f*), **v**2(*f*), ..., **v***N*(*f*)], where *λi*(*f*), *i* = 1, ..., *N* (*λ*1 ≥ *λ*2 ≥ ··· ≥ *λN*), and **v***i*(*f*) are respectively the *i*-th eigenvalue and the corresponding eigenvector.

We initialize the noise covariance matrix by Γ<sup>0</sup><sub>*n*</sub>(*f*) = **0**.

**Step 2 :** Calculate **W***P* = **V***S*(*f*)Λ<sup>1/2</sup><sub>*S*</sub>(*f*), where **V***S*(*f*) = [**v**1(*f*), **v**2(*f*), ..., **v***P*(*f*)] is the matrix of the *P* eigenvectors associated with the *P* largest eigenvalues of Γˆ(*f*). Let Δ<sup>1</sup> = **W***P*(*f*)**W**<sup>+</sup><sub>*P*</sub>(*f*).

**Step 3 :** Calculate the (*i*, *j*)th element of the current noise covariance matrix

$$[\Gamma^1\_n(f)]\_{ij} = [\hat{\Gamma}(f) - \Delta^1]\_{ij} \quad \text{if } \ |\ i - j \ | < K, \quad i, j = 1, \dots, N$$

and


$$[\Gamma^1\_n(f)]\_{ij} = 0 \quad \text{if} \quad |\, i - j\,| \ge K$$

$$
\Gamma\_n^l(f) = \begin{pmatrix}
\Gamma\_{11}^l(f) - \Delta\_{11}^l(f) & \cdots & \Gamma\_{1K}^l(f) - \Delta\_{1K}^l(f) & \cdots & 0 \\
\vdots & \ddots & \vdots & \ddots & \vdots \\
\Gamma\_{K1}^l(f) - \Delta\_{K1}^l(f) & \cdots & \Gamma\_{KK}^l(f) - \Delta\_{KK}^l(f) & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
0 & \cdots & \Gamma\_{N(N-K+1)}^l(f) - \Delta\_{N(N-K+1)}^l(f) & \cdots & \Gamma\_{NN}^l(f) - \Delta\_{NN}^l(f)
\end{pmatrix},
$$

**Step 4 :** Eigendecomposition of the matrix [Γˆ(*f*) − Γ<sup>1</sup><sub>*n*</sub>(*f*)]. The new matrices Δ<sup>2</sup>(*f*) and Γ<sup>2</sup><sub>*n*</sub>(*f*) are calculated using the previous steps. Repeat the algorithm until no significant improvement of the estimated noise covariance matrix is obtained.

**Stop test :** The iteration is stopped when ‖Γ<sup>*l*+1</sup><sub>*n*</sub>(*f*) − Γ<sup>*l*</sup><sub>*n*</sub>(*f*)‖<sub>*F*</sub> < *ε*, with *ε* a fixed threshold. The Frobenius norm of the difference between the matrices Γ<sup>*l*+1</sup><sub>*n*</sub>(*f*) and Γ<sup>*l*</sup><sub>*n*</sub>(*f*) is given by:

$$\left\| \Gamma\_n^{l+1}(f) - \Gamma\_n^l(f) \right\|\_F = \left[\sum\_{i,j=1}^N t\_{ij}^2(f)\right]^{1/2}$$

where *tij*(*f*) = [Γ<sup>*l*+1</sup><sub>*n*</sub>(*f*) − Γ<sup>*l*</sup><sub>*n*</sub>(*f*)]<sub>*ij*</sub>.
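Steps 1 to 4 and the stop test can be sketched as follows. This is a minimal sketch, not the authors' code: the function name `estimate_band_noise`, the default stopping parameters, and the use of `numpy.linalg.eigh` for the eigendecomposition are implementation assumptions.

```python
import numpy as np

def estimate_band_noise(R, P, K, eps=1e-5, max_iter=200):
    """Iterative band-noise covariance estimation.

    R holds T narrow-band snapshots r_t(f) as columns (N x T); the number of
    sources P and the correlation length K are assumed known.
    """
    N, T = R.shape
    gamma_hat = R @ R.conj().T / T                    # Step 1: sample covariance
    in_band = np.abs(np.subtract.outer(np.arange(N), np.arange(N))) < K
    gamma_n = np.zeros((N, N), dtype=complex)         # initialization Gamma_n^0 = 0
    for _ in range(max_iter):
        # Steps 2/4: P dominant eigenpairs of (Gamma_hat - Gamma_n^l)
        w, V = np.linalg.eigh(gamma_hat - gamma_n)    # ascending eigenvalues
        Vs, ws = V[:, -P:], w[-P:]
        delta = (Vs * ws) @ Vs.conj().T               # Delta = W_P W_P^+
        # Step 3: keep the band of the residual, zero elsewhere
        gamma_next = np.where(in_band, gamma_hat - delta, 0.0)
        converged = np.linalg.norm(gamma_next - gamma_n) < eps  # stop test
        gamma_n = gamma_next
        if converged:
            break
    return gamma_n
```

For the special case *K* = 1 (spatially white noise), the estimate reduces to the per-sensor noise variances on the principal diagonal.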

#### **5.2 Spatial correlation length**

In the previously proposed iterative algorithm, the spatial correlation length of the noise is supposed to be known. In practice, it is not known beforehand, so a criterion for estimating *K* is necessary. In (Tayem et al., 2006), an algorithm which jointly estimates the number of sources and the spatial correlation length of the noise is presented. We propose to vary the value of *K* until the result stabilizes, that is, until the noise covariance matrix no longer varies when *K* varies. The algorithm incorporating the choice of the correlation length *K* is presented in Fig. 2. In the stop test, we check whether ‖[Γ<sup>*K*+1</sup><sub>*n*</sub>]<sub>1</sub>(*f*) − [Γ<sup>*K*</sup><sub>*n*</sub>]<sub>1</sub>(*f*)‖<sub>*F*</sub> < *ε* or not.

#### **6. Simulation results**

In the following simulations, a uniform linear array of *N* = 10 omnidirectional sensors with equal inter-element spacing *d* = *c*/(4*fo*) is used, where *fo* is the mid-band frequency and *c* is the velocity of propagation. The number of independent realizations used for estimating the


covariance matrix of the received signals is 1000. The signal sources are temporally stationary zero-mean white Gaussian processes with the same frequency *fo* = 115 Hz. Three equi-power uncorrelated sources impinge on the array from different angles with *SNR* = 10 dB. The noise power is taken as the average of the diagonal elements of the noise covariance matrix

$$
\sigma^2 = \frac{1}{N} \sum\_{i=1}^N \sigma\_i^2.
$$

Fig. 2. Integration of the choice of *K* in the algorithm, where [Γ<sup>*K*</sup><sub>*n*</sub>]<sub>1</sub>(*f*) indicates the principal diagonal of the banded noise covariance matrix Γ*n*(*f*) with spatial correlation length *K*.

To demonstrate the performance of the proposed algorithm, three situations are considered:

- a band-Toeplitz noise covariance matrix, with each element given by a modeling function;
- a band-Toeplitz noise covariance matrix with fixed complex parameters;
- a band noise covariance matrix with random elements.

In each case, two spatial correlation lengths are studied: *K* = 3 and *K* = 5.

#### **6.1 Noise covariance matrix estimation and results obtained**

To localize the directions of arrival of the sources and to evaluate the performance of the proposed algorithm, High-Resolution methods such as MUSIC (Kushki & Plataniotis, 2009; Hawkes & Nehorai, 2001) are used after the preprocessing, with the exact number of sources (*P* = 3). The detection problem itself is not considered in this study.

#### **Example 1 :** *Band-Toeplitz noise covariance matrix:*

In this example, the spatial correlation between the noise realizations decreases exponentially along the antenna array, and the elements of the noise covariance matrix are expressed as:

$$[\Gamma\_n(f)]\_{i,m} = \sigma^2 \rho^{|i-m|} e^{j\pi(i-m)/2} \quad \text{if} \quad |i-m| < K$$

and,


$$[\Gamma\_n(f)]\_{i,m} = 0 \qquad \text{if} \quad |i-m| \ge K$$

where *σ*<sup>2</sup> is the noise variance, equal for every sensor, and *ρ* is the spatial correlation coefficient. The values retained are *σ*<sup>2</sup> = 1 and *ρ* = 0.7.
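For reference, this Example 1 noise model can be generated directly. The helper below is a sketch using the values retained in the text (*N* = 10, *σ*<sup>2</sup> = 1, *ρ* = 0.7, *K* = 3); the function name is our own.

```python
import numpy as np

def example1_noise_cov(n_sensors=10, K=3, sigma2=1.0, rho=0.7):
    """Band-Toeplitz noise covariance of Example 1:
    [Gamma_n]_{i,m} = sigma^2 * rho^|i-m| * exp(j*pi*(i-m)/2) for |i-m| < K,
    and 0 otherwise."""
    i, m = np.meshgrid(np.arange(n_sensors), np.arange(n_sensors), indexing="ij")
    gamma = sigma2 * rho ** np.abs(i - m) * np.exp(1j * np.pi * (i - m) / 2)
    gamma[np.abs(i - m) >= K] = 0.0      # enforce the band structure
    return gamma
```

The phase term exp(*jπ*(*i* − *m*)/2) changes sign with the lag direction, so the construction is Hermitian by design, with unit variance on the diagonal.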

In each of the two studied cases (*K* = 3 and *K* = 5), the noise covariance matrix is estimated with a fixed threshold value *ε* = 10<sup>−5</sup> after a few iterations, and we notice that the number of iterations for *K* = 5 is greater than that for *K* = 3.

#### **Example 2 :** *Band-Toeplitz noise covariance matrix*

In this example, the covariance matrix elements are chosen such that their values decrease along the antenna array. The noise covariance matrix has the same structure as in Example 1:

$$
\Gamma_n = \begin{pmatrix}
\sigma^2 & \rho_2 & \cdots & \rho_K & \cdots & 0 \\
\rho_2^* & \sigma^2 & \rho_2 & \cdots & \cdots & 0 \\
\vdots & \ddots & \ddots & \ddots & \ddots & \vdots \\
\rho_K^* & \ddots & \ddots & \ddots & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & \ddots & \vdots \\
0 & \cdots & \rho_K^* & \cdots & \rho_2^* & \sigma^2
\end{pmatrix}
$$

The parameters used in the case of *K* = 3 are: *σ*<sup>2</sup> = 1, *ρ*<sub>2</sub> = 0.4 + 0.3*j* and *ρ*<sub>3</sub> = 0.1 + 0.07*j*. Using the proposed algorithm, the three complex parameters of the noise covariance matrix are estimated perfectly.
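This Hermitian band-Toeplitz matrix is fully determined by its first row, so it can be assembled directly. A small sketch using SciPy's `toeplitz`, with the example's *K* = 3 values (the function name is ours):

```python
import numpy as np
from scipy.linalg import toeplitz

def band_toeplitz_covariance(N, rhos, sigma2=1.0):
    """Hermitian band-Toeplitz noise covariance whose first row is
    [sigma^2, rho_2, ..., rho_K, 0, ..., 0]; rhos = [rho_2, ..., rho_K]."""
    row = np.zeros(N, dtype=complex)
    row[0] = sigma2
    row[1:1 + len(rhos)] = rhos
    # First column is the conjugate of the first row, which makes it Hermitian.
    return toeplitz(row.conj(), row)

Gamma = band_toeplitz_covariance(10, [0.4 + 0.3j, 0.1 + 0.07j])  # K = 3
assert np.allclose(Gamma, Gamma.conj().T)
assert Gamma[3, 0] == 0  # zero outside the band
```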


#### **Example 3 :** *Band noise covariance matrix with random elements*

Each element of the band noise covariance matrix is obtained by averaging several realizations of random numbers uniformly distributed in the interval (0, 1).

For *K* = 3, Fig. 3 shows the differences between the 10 elements of the principal diagonal of the simulated matrix and those of the estimated matrix.

For *K* = 5, Fig. 4 shows the obtained results. Comparing these two results, we note that when *K* increases the estimation error increases. This behavior is observed over many simulations.

Figures 5, 6, 7, 8, 9 and 10 show the localization results for the three sources before and after the preprocessing. Before the preprocessing, we apply the MUSIC method directly to localize the sources. Once the noise covariance matrix is estimated with the proposed algorithm, this matrix is subtracted from the initial covariance matrix of the received signals, and the MUSIC method is then used to localize the sources. The three simulated sources are at 5◦, 10◦ and 20◦ for Figs. 5 and 6; 5◦, 15◦ and 20◦ for Figs. 7 and 8; and 5◦, 15◦ and 25◦ for Figs. 9 and 10. For Figs. 7 and 8, the simulated SNR is greater than that of Figs. 5 and 6.
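The procedure can be sketched with a minimal MUSIC implementation for a half-wavelength uniform linear array. The covariance below is idealized and the values are illustrative only, not the chapter's simulation data; with correlated noise, the preprocessing step described above amounts to passing `R - Gamma_n_hat` (the estimated noise covariance, hypothetical name) instead of `R`:

```python
import numpy as np

def steering(N, theta_deg, d_over_lambda=0.5):
    # Steering vector of a uniform linear array with half-wavelength spacing.
    theta = np.deg2rad(theta_deg)
    return np.exp(-2j * np.pi * d_over_lambda * np.arange(N) * np.sin(theta))

def music_spectrum(R, P, angles_deg):
    # MUSIC pseudo-spectrum: project candidate steering vectors onto the noise
    # subspace spanned by the eigenvectors of the N - P smallest eigenvalues.
    N = R.shape[0]
    _, eigvec = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = eigvec[:, :N - P]
    return np.array([1.0 / np.linalg.norm(En.conj().T @ steering(N, a)) ** 2
                     for a in angles_deg])

# Idealized covariance: three unit-power sources at 5, 10 and 20 degrees plus
# white noise of power 0.1 on a 10-sensor array.
N, doas = 10, [5.0, 10.0, 20.0]
A = np.column_stack([steering(N, t) for t in doas])
R = A @ A.conj().T + 0.1 * np.eye(N)
grid = np.arange(0.0, 30.0, 0.1)
spectrum = music_spectrum(R, P=3, angles_deg=grid)
peaks = np.sort(grid[np.argsort(spectrum)[-3:]])
```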


Fig. 3. Variations of the estimation error along the principal diagonal of the noise covariance matrix for *K* = 3.

Fig. 4. Variations of the estimation error along the principal diagonal of the noise covariance matrix for *K* = 5.

Comparing the results of Figs. 5 and 6 with those of Figs. 7 and 8 leads to the conclusion that the MUSIC method cannot separate close sources without the preprocessing when the SNR is low; thus in Fig. 5 only two sources can be detected before the preprocessing. In each case, the results obtained with the preprocessing are improved. Comparing the results for *K* = 3 with those for *K* = 5 in each figure, we also confirm that when *K* increases, the estimation error of the whitening increases, so the preprocessing yields better results for *K* = 3 than for *K* = 5.


Fig. 5. Localization of the three sources at 5◦, 10◦ and 20◦ without and with noise pre-processing for *K* = 3.

Fig. 6. Localization of the three sources at 5◦, 10◦ and 20◦ without and with noise pre-processing for *K* = 5.

In order to evaluate the performances of this algorithm, we study, below, the influence of the involved parameters.


Fig. 7. Localization of the three sources at 5◦, 15◦ and 20◦ without and with noise pre-processing with greater SNR than figure 5 for *K* = 3.

Fig. 8. Localization of the three sources at 5◦, 15◦ and 20◦ without and with noise pre-processing with greater SNR than figure 6 for *K* = 5.

#### **6.2 Choice of the parameters**

#### **6.2.1 Spatial correlation length of the noise**

Figure 11 shows the variations of the estimation error of the noise covariance matrix when the spatial correlation length of the noise *K* increases from 2 to *N* and the number of sources is fixed to *P*, with *P* = 1 or *P* = 9.


Fig. 9. Localization of the three sources at 5◦, 15◦ and 25◦ without and with noise pre-processing for *K* = 3.

Fig. 10. Localization of the three sources at 5◦, 15◦ and 25◦ without and with noise pre-processing for *K* = 5.

This error is defined by:

$$EE = \left\| \Gamma_n^{simulated} - \Gamma_n^{estimated} \right\|_F$$

Figure 11 shows that for *P* = 1, the estimation error is null up to *K* = 5. On the other hand, for *P* = 9, an estimation error of the noise covariance matrix appears as soon as *K* = 2, and this error is greater than the error for *P* = 1.


Fig. 12. Bias of the estimated direction according to spatial correlation length of the noise.



Fig. 11. Estimation error of the covariance matrix of the noise according to its spatial correlation length with P=1 and P=9.

To study the influence of *K* on the localization, we plot in Fig. 12 the variations of the bias of the azimuth estimate, according to the spatial correlation length *K* of the noise, in the case of a single source located at 10◦.

The bias of the *P* estimated directions of arrival of the sources is calculated by:

$$Bias = \frac{1}{P} \sum_{p=1}^{P} bias(p)$$

where

$$bias(p) = \mathrm{E}\left[\theta(p) - \hat{\theta}(p)\right] = \frac{1}{T} \sum_{i=1}^{T} \left|\theta(i) - \hat{\theta}(i)\right|$$

The experimental results presented in Figs. 11 and 12 show that the correlation length and the number of sources influence the estimate of the covariance matrix of the noise and then the estimate of the DOA values. We study, below, the influence of the signal-to-noise ratio *SNR* on the estimate of the covariance matrix of the noise.
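The bias defined above is a double average: over the *T* independent trials for each source, then over the *P* sources. A minimal sketch (function name ours):

```python
import numpy as np

def doa_bias(true_doas, estimated_doas):
    """Bias of the P estimated DOAs: for each source p, average the absolute
    error over the T trials, then average over the P sources."""
    true_doas = np.asarray(true_doas, dtype=float)            # shape (P,)
    estimated_doas = np.asarray(estimated_doas, dtype=float)  # shape (T, P)
    per_source = np.mean(np.abs(estimated_doas - true_doas), axis=0)  # bias(p)
    return per_source.mean()                                  # Bias

# Example: one source at 10 degrees, three trials.
bias = doa_bias([10.0], [[10.2], [9.9], [10.1]])
```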

#### **6.2.2 Signal-to-noise ratio influence**

In order to study the performance of the proposed algorithm according to the signal-to-noise ratio, we plot in Fig. 13 the estimation error on the noise covariance matrix (as defined in Section 6.2.1) according to the spatial correlation length *K*, for *SNR* = 0 *dB* and *SNR* = 10 *dB*.

We conclude that the choice of the value of *K* influences the speed and the efficiency of this algorithm. Indeed, many simulations show that this algorithm estimates the matrix quickly if *K* ≪ *N*. On the other hand, if *K* is close to *N*, the algorithm requires a great number of iterations. This is due to the increase in the number of unknown factors to estimate.


The efficiency and the speed also depend on the signal-to-noise ratio, the number of sensors, the number of sources to be localized and the angular difference between the sources.

The spatial correlation length *K* authorized by the algorithm is a function of the number of sensors and the number of sources. Indeed, the number of parameters of the signal to be estimated is *P*<sup>2</sup> and the number of parameters of the noise is *Nber*(*K*). In order to estimate them it is necessary that *N*<sup>2</sup> ≥ *P*<sup>2</sup> + *Nber*(*K*) and that *K* ≤ *N*. In the limit case *P* = *N* − 1, we have *Nber*(*K*) ≤ 2*N* − 1, which corresponds to a bi-diagonal noise covariance matrix. If the model of the noise covariance matrix is band-Toeplitz (Tayem et al., 2006; Werner & Jansson, 2007), the convergence of the proposed algorithm is fast, and the correlation length of the noise can reach *N*.
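This counting argument can be made concrete. Assuming *Nber*(*K*) counts the distinct entries of the band (the diagonal plus *K* − 1 sub-diagonals, which is consistent with the limit case *Nber*(2) = 2*N* − 1 quoted above), a quick feasibility check reads:

```python
def nber(N, K):
    """Number of distinct entries in one triangle of an N x N Hermitian band
    matrix of correlation length K: the diagonal plus K - 1 sub-diagonals."""
    return sum(N - d for d in range(K))  # = K*N - K*(K-1)/2

def feasible(N, P, K):
    """Identifiability condition of the chapter: N^2 >= P^2 + Nber(K), K <= N."""
    return K <= N and N ** 2 >= P ** 2 + nber(N, K)

# Limit case P = N - 1: only K = 2 (bi-diagonal) fits, since Nber(2) = 2N - 1.
assert feasible(10, 9, 2) and not feasible(10, 9, 3)
```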

#### **7. Experimental data**

The studied signals were recorded during an underwater acoustic experiment. The experiment is carried out in an acoustic tank under conditions similar to those of a marine environment. The bottom of the tank is covered with sand. The experimental device is presented in Fig. 14. A source emits a narrow-band signal (*fo* = 350 *kHz*). In addition to the signal source, a spatially correlated Gaussian noise is emitted. The signal-to-noise ratio is 5 *dB*. Our objective is to estimate the directions of arrival of the signals during the experiment. The signals are received on a uniform linear array. The observed signals come from various reflections on the objects in the tank. Generally, the aim of acousticians is the detection, localization and identification of these objects.


Fig. 13. Estimation error of the covariance matrix of the noise according to the spatial correlation length *K* and the signal-to-noise ratio *SNR*.

In this experiment, we recorded the reflected signals with a single receiver. This receiver is moved along a straight line between position *Xmin* = 50 *mm* and position *Xmax* = 150 *mm* with a step of *α* = 1 *mm* in order to create a uniform linear array (see Fig. 15).

Fig. 14. Experimental device.

Two objects are placed at the bottom of the tank, and the emitting source describes a circular motion with a step of 0.5◦, covering the angular band from 1◦ to 8.5◦. The signals received when the angle of emission is *θ* = 5◦ are shown in Fig. 16. This figure shows that there exist two paths, which may correspond to the signals reflected on the two objects. The localization results are given in Figs. 17 and 18. We note that, in spite of the presence of the correlated noise, our algorithm efficiently estimates the DOA of the reflected signals during the experiment.

Figure 17 shows the localization results obtained using the MUSIC method on the covariance matrices. The DOA of the signals reflected on the two objects are not estimated. This is due to the fact that the noise is correlated.

Fig. 15. Experimental setup.


Fig. 16. Received signals.

Figure 18 shows the obtained results using our algorithm. The two objects are localized.

#### **8. Cumulant based coherent signal subspace method for bearing and range estimation in noisy environment**

In the rest of this chapter, we consider an array of *N* sensors which receives the signals emitted by *P* wide-band sources (*N* > *P*) in the presence of an additive Gaussian noise. The received signal vector, in the frequency domain, is given by

$$\mathbf{r}(f_n) = \mathbf{A}(f_n)\mathbf{s}(f_n) + \mathbf{n}(f_n), \quad \text{for } n = 1, \ldots, M \tag{6}$$


where **A**(*fn*) = [**a**(*fn*, *θ*1), **a**(*fn*, *θ*2), ..., **a**(*fn*, *θP*)], **s**(*fn*)=[*s*1(*fn*),*s*2(*fn*), ...,*sP*(*fn*)]*T*, and **n**(*fn*) = [*n*1(*fn*), *n*2(*fn*), ..., *nN*(*fn*)] *T*.

**r**(*fn*) is the Fourier transform of the array output vector; **s**(*fn*) is the vector of zero-mean complex random non-Gaussian source signals, assumed to be stationary over the observation interval; **n**(*fn*) is the vector of zero-mean complex white Gaussian noise, statistically independent of the signals; and **A**(*fn*) is the transfer matrix (steering matrix) of the source-sensor array system, composed of the source steering vectors **a***k*(*fn*) (*k* = 1, ..., *P*) and assumed to have full column rank. In addition to the model (6), we also assume that the signals are statistically independent. In this case, a fourth order cumulant is given by

$$\begin{aligned} \mathrm{Cum}(r_{k_1}, r_{k_2}, r_{l_1}, r_{l_2}) &= \mathrm{E}\{r_{k_1} r_{k_2} r_{l_1}^* r_{l_2}^*\} - \mathrm{E}\{r_{k_1} r_{l_1}^*\}\, \mathrm{E}\{r_{k_2} r_{l_2}^*\} \\ &\quad - \mathrm{E}\{r_{k_1} r_{l_2}^*\}\, \mathrm{E}\{r_{k_2} r_{l_1}^*\} \end{aligned} \tag{7}$$

where *r<sub>k1</sub>* is the *k*<sub>1</sub>-th element of the vector **r**; the indices *k*<sub>2</sub>, *l*<sub>1</sub>, *l*<sub>2</sub> are similarly defined. The cumulant matrix consisting of all possible permutations of the four indices {*k*<sub>1</sub>, *k*<sub>2</sub>, *l*<sub>1</sub>, *l*<sub>2</sub>} is given in (Yuen & Friedlander, 1997) as
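Equation (7) translates directly into a sample estimator over snapshots. The sketch below (function name ours) assumes zero-mean circular complex data; for Gaussian data the estimate tends to zero as the number of snapshots grows, which is precisely the property exploited later in this section:

```python
import numpy as np

def cum4(r, k1, k2, l1, l2):
    """Sample estimate of equation (7): Cum(r_k1, r_k2, r_l1, r_l2) for
    zero-mean circular complex data r of shape (N_sensors, T_snapshots)."""
    a, b = r[k1], r[k2]
    c, d = r[l1].conj(), r[l2].conj()
    return (np.mean(a * b * c * d)
            - np.mean(a * c) * np.mean(b * d)
            - np.mean(a * d) * np.mean(b * c))

# For circular complex Gaussian noise the fourth-order cumulant tends to zero
# with the number of snapshots, so it is insensitive to Gaussian noise.
rng = np.random.default_rng(0)
noise = (rng.standard_normal((4, 100000))
         + 1j * rng.standard_normal((4, 100000))) / np.sqrt(2)
```

For a non-Gaussian unit-modulus signal such as QPSK, the same estimator returns the source kurtosis (−1 for unit power), which is what populates the cumulant matrix.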

$$\mathbf{C}_g(f_n) \stackrel{\triangle}{=} \sum_{g=1}^{P} \left( \mathbf{a}_g(f_n) \otimes \mathbf{a}_g^*(f_n) \right) u_g(f_n) \left( \mathbf{a}_g(f_n) \otimes \mathbf{a}_g^*(f_n) \right)^{+} \tag{8}$$

where *ug*(*fn*) is the source kurtosis (i.e., the fourth order analog of the variance) of the *g*th complex source amplitude, defined by *ug*(*fn*) = Cum(*sg*(*fn*), *s*∗*g*(*fn*), *sg*(*fn*), *s*∗*g*(*fn*)); ⊗ is the Kronecker product and Cum(·) denotes the cumulant.


Fig. 17. Localization results with MUSIC.

Fig. 18. Localization results with proposed algorithm.


When there are *N* array sensors, **C***g*(*fn*) is an (*N*<sup>2</sup> × *N*<sup>2</sup>) matrix. The rows of **C***g*(*fn*) are indexed by (*k*<sub>1</sub> − 1)*N* + *l*<sub>1</sub>, and the columns by (*l*<sub>2</sub> − 1)*N* + *k*<sub>2</sub>. In terms of the vector **r**(*fn*), the cumulant matrix **C***g*(*fn*) is organized compatibly with the matrix E{(**r**(*fn*) ⊗ **r**∗(*fn*))(**r**(*fn*) ⊗ **r**∗(*fn*))<sup>+</sup>}. In other words, the elements of **C***g*(*fn*) are given by **C***g*((*k*<sub>1</sub> − 1)*N* + *l*<sub>1</sub>, (*l*<sub>2</sub> − 1)*N* + *k*<sub>2</sub>) for *k*<sub>1</sub>, *k*<sub>2</sub>, *l*<sub>1</sub>, *l*<sub>2</sub> = 1, 2, ..., *N* and

$$\mathbf{C}_g\left( (k_1 - 1)N + l_1, (l_2 - 1)N + k_2 \right) = \mathrm{Cum}(r_{k_1}, r_{k_2}, r_{l_1}, r_{l_2}) \tag{9}$$

where *ri* is the *i*th element of the vector **r**. In order to reduce the computing time, instead of using the (*N*<sup>2</sup> × *N*<sup>2</sup>) cumulant matrix **C***g*(*fn*), an (*N* × *N*) cumulant slice matrix of the observation vector at frequency *fn* can be calculated; it offers the same algebraic properties as **C***g*(*fn*). This matrix is denoted **C**1(*fn*). We consider a cumulant slice obtained, for example, by using the first row of **C***g*(*fn*) and reshaping it into an (*N* × *N*) Hermitian matrix, i.e.

$$\begin{aligned} \mathbf{C}_1(f_n) & \stackrel{\triangle}{=} \mathrm{Cum}\left(r_1(f_n), r_1^*(f_n), \mathbf{r}(f_n), \mathbf{r}^+(f_n)\right) \\ &= \begin{bmatrix} c_{1,1} & c_{1,N+1} & \cdots & c_{1,N^2-N+1} \\ c_{1,2} & c_{1,N+2} & \cdots & c_{1,N^2-N+2} \\ \vdots & \vdots & \vdots & \vdots \\ c_{1,N} & c_{1,2N} & \cdots & c_{1,N^2} \end{bmatrix} \\ &= \mathbf{A}(f_n)\mathbf{U}_s(f_n)\mathbf{A}^+(f_n) \end{aligned} \tag{10}$$

where *c*1,*j* is the (1, *j*) element of the cumulant matrix **C***g*(*fn*) and **U***s*(*fn*) is the diagonal kurtosis matrix, whose *i*th diagonal element is Cum(*si*(*fn*), *s*∗*i*(*fn*), *si*(*fn*), *s*∗*i*(*fn*)), with *i* = 1, ..., *P*.
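As an illustration, the slice matrix **C**1(*fn*) of equation (10) can be estimated from *T* snapshots of the array output by replacing the expectations in the fourth order cumulant with sample means. The following is only a minimal sketch; the function name and the snapshot layout are our own choices, and zero-mean complex data is assumed:

```python
import numpy as np

def cumulant_slice(r):
    """Sample estimate of C1 = Cum(r1, r1*, r, r^+) for zero-mean complex
    snapshots r of shape (N, T); element (k, l) is Cum(r1, r1*, rk, rl*)."""
    N, _ = r.shape
    a, b = r[0], r[0].conj()          # r1(fn) and its conjugate
    C1 = np.zeros((N, N), dtype=complex)
    for k in range(N):
        for l in range(N):
            c, d = r[k], r[l].conj()
            # cum(a,b,c,d) = E[abcd] - E[ab]E[cd] - E[ac]E[bd] - E[ad]E[bc]
            C1[k, l] = ((a * b * c * d).mean()
                        - (a * b).mean() * (c * d).mean()
                        - (a * c).mean() * (b * d).mean()
                        - (a * d).mean() * (b * c).mean())
    return C1
```

For purely Gaussian inputs the entries converge to zero as *T* grows, which is precisely the noise insensitivity exploited in equation (10).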

**C**1(*fn*) can be compared with the classical covariance or spectral matrix of the received data

$$\mathbf{\Gamma}\_r(f\_n) = \mathrm{E}\left[\mathbf{r}(f\_n)\mathbf{r}^+(f\_n)\right] = \mathbf{A}(f\_n)\mathbf{\Gamma}\_{\mathbf{s}}(f\_n)\mathbf{A}^+(f\_n) + \mathbf{\Gamma}\_{\mathbf{n}}(f\_n) \tag{11}$$

where **Γ***n*(*fn*) = E � **n**(*fn*)**n**+(*fn*) � is the spectral matrix of the noise vector and **Γ***s*(*fn*) = E � **s**(*fn*)**s**+(*fn*) � is the spectral matrix of the complex amplitudes **s**(*fn*).

If the noise is white, then **Γ***n*(*fn*) = *σ*<sup>2</sup>*n*(*fn*)**I**, where *σ*<sup>2</sup>*n*(*fn*) is the noise power and **I** is the (*N* × *N*) identity matrix. The signal subspace is then spanned by the *P* eigenvectors corresponding to the *P* largest eigenvalues of the data spectral matrix **Γ***r*(*fn*). In practice, however, the noise is often not white and its spatial structure is unknown; hence the interest of the higher order statistics used in equation (10): the fourth order cumulants are not affected by additive Gaussian noise, so no assumption on the noise spatial structure is necessary. Let the eigenvalues and the corresponding eigenvectors of **C**1(*fn*) be denoted by {*λi*(*fn*)}*i*=1..*N* and {**v***i*(*fn*)}*i*=1..*N*. The eigendecomposition of the cumulant matrix **C**1(*fn*) is then

$$\mathbf{C\_{1}(f\_{n})} = \sum\_{i=1}^{N} \lambda\_{i}(f\_{n}) \mathbf{v\_{i}(f\_{n})} \mathbf{v\_{i}^{+}(f\_{n})} \tag{12}$$

In matrix representation, equation (12) can be written

$$\mathbf{C}\_1(f\_n) = \mathbf{V}(f\_n)\mathbf{\Lambda}(f\_n)\mathbf{V}^+(f\_n) \tag{13}$$

where **V**(*fn*) = [**v**1(*fn*), ..., **v***N*(*fn*)] and **Λ**(*fn*) = *diag*(*λ*1(*fn*), ..., *λN*(*fn*)).

Assuming that the columns of **A**(*fn*) are all different and linearly independent, it follows that for nonsingular **U***s*(*fn*) the rank of **A**(*fn*)**U***s*(*fn*)**A**+(*fn*) is *P*. This rank property implies that:

- the smallest eigenvalue of **C**1(*fn*) has multiplicity (*N* − *P*): *λP*+1(*fn*) = ... = *λN*(*fn*) ∼= 0;
- the eigenvectors associated with these smallest eigenvalues are orthogonal to the columns of the matrix **A**(*fn*), namely, the steering vectors of the signals:


$$\mathbf{V}\_{\boldsymbol{n}}(f\_{\boldsymbol{n}}) \stackrel{\triangle}{=} \{ \mathbf{v}\_{P+1}(f\_{\boldsymbol{n}}) \dots \mathbf{v}\_{N}(f\_{\boldsymbol{n}}) \} \perp \{ \mathbf{a}(\theta\_{1}, f\_{\boldsymbol{n}}) \dots \mathbf{a}(\theta\_{P}, f\_{\boldsymbol{n}}) \} \tag{14}$$

The eigenstructure based techniques exploit these properties. The directions of arrival of the sources are then obtained, at the frequency *fn*, from the peak positions of a so-called spatial spectrum (MUSIC)

$$\mathbf{Z}(f\_{\boldsymbol{n}},\boldsymbol{\theta}) = \frac{1}{\mathbf{a}^{+}(f\_{\boldsymbol{n}},\boldsymbol{\theta})\mathbf{V}\_{\boldsymbol{n}}(f\_{\boldsymbol{n}})\mathbf{V}\_{\boldsymbol{n}}^{+}(f\_{\boldsymbol{n}})\mathbf{a}(f\_{\boldsymbol{n}},\boldsymbol{\theta})}\tag{15}$$
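The spectrum (15) can be sketched for a uniform linear array as follows. This is an illustrative implementation only; the helper names, the grid, and the sound speed value are our own assumptions:

```python
import numpy as np

def steering(theta_deg, f, N, d, c=1500.0):
    """ULA steering vector a(f, theta) for sensor spacing d (meters)."""
    tau = d * np.sin(np.deg2rad(theta_deg)) / c   # inter-sensor delay
    return np.exp(-2j * np.pi * f * tau * np.arange(N))

def music_spectrum(Gamma, P, f, d, grid_deg, c=1500.0):
    """Z(f, theta) of eq. (15) from a spectral (or cumulant slice) matrix."""
    N = Gamma.shape[0]
    _, V = np.linalg.eigh(Gamma)      # eigenvalues in ascending order
    Vn = V[:, :N - P]                 # noise subspace: N - P smallest
    Z = np.empty(len(grid_deg))
    for i, th in enumerate(grid_deg):
        a = steering(th, f, N, d, c)
        Z[i] = 1.0 / (np.real(a.conj() @ Vn @ Vn.conj().T @ a) + 1e-12)
    return Z
```

With an exact Γ built from two sources, the spectrum peaks sharply at the true angles, since the steering vectors are then exactly orthogonal to the noise subspace.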

For locating wide band sources, several solutions have been proposed in the literature; in this chapter they are regrouped into two groups:

- the incoherent methods, which apply the narrow band processing above for each considered frequency bin separately;
- the coherent subspace methods, which combine the subspaces obtained at the different frequency bins; the obtained subspace is used to estimate the source parameters.



In the following section, the coherent subspace methods are described.

#### **9. Coherent subspace methods**

In the high resolution algorithms, the signal subspace is defined as the column span of the steering matrix **A**(*fn*), which is a function of the frequency *fn* and of the angles-of-arrival. Thus, the signal subspaces at different frequency bins are different. The coherent subspace methods (Hung & Kaveh, 1988) combine the different subspaces in the analysis band by means of focusing matrices. The focusing matrices **T**(*fo*, *fn*) compensate the variations of the transfer matrix with the frequency, so these matrices verify

$$\mathbf{T}(f\_0, f\_n)\mathbf{A}(f\_n) = \mathbf{A}(f\_0), \quad \text{for} \quad n = 1, \ldots, M \tag{16}$$

where *fo* is the focusing frequency.

Initially, Hung et al. (Hung & Kaveh, 1988) developed solutions for equation (16); the solution proposed under the constraint **T**(*fo*, *fn*)**T**+(*fo*, *fn*) = **I** is

$$\mathbf{\hat{T}}(f\_{\rm o}, f\_{\rm n}) = \mathbf{V}\_{\rm l}(f\_{\rm o}, f\_{\rm n}) \mathbf{V}\_{\rm r}^{+}(f\_{\rm o}, f\_{\rm n}) \tag{17}$$

where the columns of **V***l*(*fo*, *fn*) and of **V***r*(*fo*, *fn*) are the left and right singular vectors of the matrix **A**(*fn*, *θi*)**A**+(*fo*, *θi*) where *θ<sup>i</sup>* is an initial vector of the estimates of the angles-of-arrival, given by an ordinary beamforming preprocess.
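This construction can be sketched as below; the function name is ours, and the initial steering matrices are assumed to come from the beamforming preprocess:

```python
import numpy as np

def hung_kaveh_focusing(A_fn, A_fo):
    """T(fo, fn) = Vl Vr^+ (eq. 17), with Vl, Vr the left/right singular
    vectors of A(fn) A^+(fo); the result is unitary, so T T^+ = I."""
    Vl, _, Vr_h = np.linalg.svd(A_fn @ A_fo.conj().T)
    return Vl @ Vr_h                  # numpy returns Vr_h = Vr^+ directly
```

The unitarity of **T** follows from the product of two unitary singular-vector matrices, which is what the constraint on equation (17) requires.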

It has been shown that the performance of these methods depends on *θ<sup>i</sup>* (Bourennane et al., 1997). In practice, it is very difficult to obtain accurate initial estimates of the DOAs. In order to resolve this initialization problem, the Two-Sided Correlation Transformation (TCT) algorithm was proposed in (Valaee & Kabal, 1995). The focusing matrices **T**(*fo*, *fn*) are obtained by minimizing

$$\begin{cases} \min\_{\mathbf{T}(f\_o, f\_n)} \|\mathbf{P}(f\_o) - \mathbf{T}(f\_o, f\_n)\mathbf{P}(f\_n)\mathbf{T}^{+}(f\_o, f\_n)\|\_{F} \\ \text{s.t.} \quad \mathbf{T}^{+}(f\_o, f\_n)\mathbf{T}(f\_o, f\_n) = \mathbf{I} \end{cases}$$

where **P**(.) is the array spectral matrix in a noise free environment, **P**(.) = **A**(.)**Γ***s*(.)**A**+(.), and ‖.‖*F* is the Frobenius matrix norm. The solution (Valaee & Kabal, 1995) is

$$\mathbf{T}(f\_o, f\_n) = \mathbf{V}(f\_o)\mathbf{V}^+(f\_n) \tag{18}$$

where **V**(*fo*) and **V**(*fn*) are the eigenvector matrices of **P**(*fo*) and **P**(*fn*), respectively.

In order to reduce the computation time needed to construct the operator of equation (18), an improved solution is developed in (Bendjama et al., 1998), where only the signal subspace is used:

$$\mathbf{T}(f\_o, f\_n)\mathbf{V}\_s(f\_n) = \mathbf{V}\_s(f\_o) \tag{19}$$

so the operator becomes


$$\mathbf{T}(f\_o, f\_n) = \mathbf{V}\_s(f\_o)\mathbf{V}\_s^+(f\_n) \tag{20}$$

where **V***s*(*fn*) = [**v**1(*fn*), **v**2(*fn*), ..., **v***P*(*fn*)] contains the eigenvectors corresponding to the *P* largest eigenvalues of the spectral matrix **Γ***r*(*fn*).

Once **Γ***r*(*fn*) and **T**(*fo*, *fn*), *n* = 1, ..., *M*, are formed, the estimate **Γˆ***r*(*fo*) can be written

$$\hat{\mathbf{\Gamma}}\_r(f\_o) = \frac{1}{M} \sum\_{n=1}^{M} \mathbf{T}(f\_o, f\_n) \mathbf{\Gamma}\_r(f\_n) \mathbf{T}^+(f\_o, f\_n) \tag{21}$$

In the particular case when equation (20) is used,

$$\begin{split} \hat{\mathbf{\Gamma}}\_{r}(f\_{o}) &= \frac{1}{M} \sum\_{n=1}^{M} \mathbf{V}\_{s}(f\_{o}) \mathbf{\Lambda}\_{s}(f\_{n}) \mathbf{V}\_{s}^{+}(f\_{o}) \\ &= \mathbf{V}\_{s}(f\_{o}) \hat{\mathbf{\Lambda}}\_{s}(f\_{o}) \mathbf{V}\_{s}^{+}(f\_{o}) \end{split} \tag{22}$$

where **Λˆ***s*(*fo*) = (1/*M*) ∑*n*=1..*M* **Λ***s*(*fn*) is the arithmetic mean of the largest eigenvalues of the spectral matrices **Γ***r*(*fn*) (*n* = 1, ..., *M*).

Note that the number of sources, *P*, can be obtained as the number of non-zero eigenvalues of **Γˆ***r*(*fo*).

The efficiency of the different focusing algorithms depends on prior knowledge of the noise. All the transform matrices that solve equation system (16) are obtained under the assumption of white noise, or of noise with a known spatial correlation structure.
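The focusing-and-averaging scheme of equations (20) to (22) can be sketched as follows. Noise-free spectral matrices are assumed here purely for illustration, and the function names are ours:

```python
import numpy as np

def signal_eigvecs(Gamma, P):
    """Eigenvectors of the P largest eigenvalues (the columns of Vs)."""
    _, V = np.linalg.eigh(Gamma)
    return V[:, ::-1][:, :P]

def focused_spectral_matrix(Gammas, P, ref=0):
    """Focus every Gamma_r(fn) onto the bin `ref` (fo) via eq. (20)
    and average as in eq. (21)."""
    Vs_o = signal_eigvecs(Gammas[ref], P)
    acc = np.zeros_like(Gammas[0])
    for G in Gammas:
        T = Vs_o @ signal_eigvecs(G, P).conj().T   # eq. (20)
        acc = acc + T @ G @ T.conj().T             # summand of eq. (21)
    return acc / len(Gammas)
```

In the noise-free case the averaged matrix has rank *P*, so the number of sources can indeed be read off its non-zero eigenvalues.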


In practice, as in underwater acoustics, this assumption is not fulfilled, and the performances of the subspace algorithms are degraded. To improve these methods in the noisy data case, an algorithm is proposed in this chapter to remove the noise. The basic idea is to combine the higher order statistics of the received data, multidimensional filtering and frequential smoothing in order to eliminate the noise contributions.

#### **10. Cumulant based coherent signal subspace**

The higher order statistics of the received data are used to eliminate the Gaussian noise. For this, the cumulant slice matrix of expression (10) is computed. Then a (*P* × *N*) matrix denoted **H**(*fn*) is formed in order to transform the received data, with the aim of obtaining a perfect orthogonalization and eliminating the orthogonal noise component.

$$\mathbf{H}(f\_n) = \mathbf{\Lambda}\_{\mathbf{s}}(f\_n)^{-1/2}\mathbf{V}\_{\mathbf{s}}^{+}(f\_n) \tag{23}$$

where **V***s*(*fn*) and **Λ***s*(*fn*) contain the *P* largest eigenvectors and the corresponding eigenvalues of the slice cumulant matrix **C**1(*fn*), respectively. We note that the eigenvalues in **Λ***s*(*fn*) must be distinct; this is the case when the source kurtoses are different. If they are not, the proposed algorithm will not provide correct estimates of the signal subspace. The (*P* × 1) vector of the transformed received data is

$$\mathbf{r}\_t(f\_n) = \mathbf{H}(f\_n)\mathbf{r}(f\_n) = \mathbf{H}(f\_n)\mathbf{A}(f\_n)\mathbf{s}(f\_n) + \mathbf{H}(f\_n)\mathbf{n}(f\_n) \tag{24}$$
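The transformation of equation (23) can be sketched numerically. This assumes a Hermitian slice matrix whose *P* largest eigenvalues are positive (they must also be distinct for the overall algorithm); the function name is ours:

```python
import numpy as np

def transform_matrix(C1, P):
    """H(fn) = Lambda_s^{-1/2} Vs^+ (eq. 23), built from the P largest
    eigenvalues/eigenvectors of the slice matrix C1(fn)."""
    w, V = np.linalg.eigh(C1)                 # ascending eigenvalues
    ws, Vs = w[::-1][:P], V[:, ::-1][:, :P]   # P largest, descending
    return np.diag(ws ** -0.5) @ Vs.conj().T
```

By construction **H** **C**1 **H**<sup>+</sup> = **I***P*, which is the "perfect orthogonalization" sought above.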

Afterwards, the corresponding (*P*<sup>2</sup> × *P*<sup>2</sup>) cumulant matrix can be expressed as

$$\begin{aligned} \mathbf{C}\_{t}(f\_{n}) &= \text{Cum}\left(\mathbf{r}\_{t}(f\_{n}), \mathbf{r}\_{t}^{+}(f\_{n}), \mathbf{r}\_{t}(f\_{n}), \mathbf{r}\_{t}^{+}(f\_{n})\right) \\ &= \left( (\mathbf{HA})(f\_{n}) \otimes (\mathbf{HA})^{\*}(f\_{n}) \right) \mathbf{U}\_{s}(f\_{n}) \left( (\mathbf{HA})(f\_{n}) \otimes (\mathbf{HA})^{\*}(f\_{n}) \right)^{+} \end{aligned} \tag{25}$$

or, with **B**(*fn*) = (**HA**)(*fn*),

$$\mathbf{C}\_{t}(f\_n) = \underbrace{\left(\mathbf{B}(f\_n) \otimes \mathbf{B}^\*(f\_n)\right)}\_{=\mathbf{B}\_{\otimes}(f\_n)} \mathbf{U}\_{\mathbf{s}}(f\_n) \left(\mathbf{B}(f\_n) \otimes \mathbf{B}^\*(f\_n)\right)^{+} \tag{26}$$

Using the property (Mendel, 1991)

$$\mathbf{WX} \otimes \mathbf{YZ} = (\mathbf{W} \otimes \mathbf{Y})(\mathbf{X} \otimes \mathbf{Z}) \tag{27}$$

We can show that

$$\begin{aligned} \mathbf{C}\_{t}(f\_{n}) &= \left(\mathbf{H}(f\_{n}) \otimes \mathbf{H}^{\*}(f\_{n})\right) \left(\mathbf{A}(f\_{n}) \otimes \mathbf{A}^{\*}(f\_{n})\right) \mathbf{U}\_{\mathbf{s}}(f\_{n}) \left(\mathbf{A}(f\_{n}) \otimes \mathbf{A}^{\*}(f\_{n})\right)^{+} \left(\mathbf{H}(f\_{n}) \otimes \mathbf{H}^{\*}(f\_{n})\right)^{+} \\ &= \left(\mathbf{H}(f\_{n}) \otimes \mathbf{H}^{\*}(f\_{n})\right) \mathbf{A}\_{\otimes}(f\_{n}) \mathbf{U}\_{s}(f\_{n}) \mathbf{A}\_{\otimes}^{+}(f\_{n}) \left(\mathbf{H}(f\_{n}) \otimes \mathbf{H}^{\*}(f\_{n})\right)^{+} \end{aligned} \tag{28}$$

where **A**⊗(*fn*) = **A**(*fn*) ⊗ **A**∗(*fn*).

Using (26) and (28), we have

$$\mathbf{B}\_{\otimes}(f\_{n}) = \left(\mathbf{H}(f\_{n}) \otimes \mathbf{H}^{\*}(f\_{n})\right) \mathbf{A}\_{\otimes}(f\_{n}) \tag{29}$$
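The mixed-product property (27) and the factorization (29) are easy to check numerically; the dimensions below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
P, N = 3, 5
H = rng.standard_normal((P, N)) + 1j * rng.standard_normal((P, N))
A = rng.standard_normal((N, P)) + 1j * rng.standard_normal((N, P))

B = H @ A                                            # B(fn) = (HA)(fn)
lhs = np.kron(B, B.conj())                           # B ⊗ B*
rhs = np.kron(H, H.conj()) @ np.kron(A, A.conj())    # (H ⊗ H*)(A ⊗ A*)
# eq. (27) with W = H, X = A, Y = H*, Z = A* gives lhs == rhs, i.e. eq. (29)
```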

Using the eigenvectors focusing operator defined as

$$\mathbf{T}(f\_o, f\_n) = \mathbf{V}\_{ts}(f\_o)\mathbf{V}\_{ts}^+(f\_n) \tag{30}$$

where **V***ts*(.) are the eigenvectors of the largest eigenvalues of the cumulant matrix **C***t*(.). The average cumulant matrix is

$$\begin{split} \hat{\mathbf{C}}\_{t}(f\_{o}) &= \frac{1}{M} \sum\_{n=1}^{M} \mathbf{T}(f\_{o}, f\_{n}) \mathbf{C}\_{t}(f\_{n}) \mathbf{T}^{+}(f\_{o}, f\_{n}) \\ &= \mathbf{V}\_{ts}(f\_{o}) \hat{\mathbf{\Lambda}}\_{ts}(f\_{o}) \mathbf{V}\_{ts}^{+}(f\_{o}) \end{split} \tag{31}$$

where **Λˆ***ts*(*fo*) = (1/*M*) ∑*n*=1..*M* **Λ***ts*(*fn*) is the arithmetic mean of the first largest eigenvalues of the cumulant matrices **C***t*(*fn*). It is easy to show that

$$\hat{\mathbf{C}}\_{t}(f\_o) = \mathbf{V}\_{ts}(f\_o)\hat{\mathbf{\Lambda}}\_{ts}(f\_o)\mathbf{V}\_{ts}^+(f\_o) = \mathbf{B}\_{\otimes}(f\_o)\hat{\mathbf{U}}\_{s}(f\_o)\mathbf{B}\_{\otimes}^+(f\_o) \tag{32}$$

where **B**⊗(*fo*) = **T**(*fo*, *fn*)**B**⊗(*fn*) and **Uˆ***s*(*fo*) = (1/*M*) ∑*n*=1..*M* **U***s*(*fn*).

Multiplying both sides by **B**⊗<sup>+</sup>(*fo*), we get

$$\mathbf{B}\_{\otimes}^{+}(f\_{o})\hat{\mathbf{C}}\_{t}(f\_{o}) = \mathbf{B}\_{\otimes}^{+}(f\_{o})\mathbf{B}\_{\otimes}(f\_{o})\hat{\mathbf{U}}\_{s}(f\_{o})\mathbf{B}\_{\otimes}^{+}(f\_{o}) \tag{33}$$

Because the columns of **B**⊗(*fo*) are orthogonal and the sources are decorrelated, **B**⊗<sup>+</sup>(*fo*)**B**⊗(*fo*)**Uˆ***s*(*fo*) is a diagonal matrix, which we denote by **D**(*fo*), so that we have

$$\mathbf{B}\_{\otimes}^{+}(f\_o)\hat{\mathbf{C}}\_t(f\_o) = \mathbf{D}(f\_o)\mathbf{B}\_{\otimes}^{+}(f\_o) \tag{34}$$

or, taking the conjugate transpose,


$$\hat{\mathbf{C}}\_{t}(f\_{o})\mathbf{B}\_{\otimes}(f\_{o}) = \mathbf{B}\_{\otimes}(f\_{o})\mathbf{D}(f\_{o}) \tag{35}$$

This equation tells us that the columns of **B**⊗(*fo*) are the left eigenvectors of **Cˆ***t*(*fo*) corresponding to the eigenvalues on the diagonal of the matrix **D**(*fo*); however, since **Cˆ***t*(*fo*) is Hermitian, they are also the (right) eigenvectors of **Cˆ***t*(*fo*). Furthermore, the columns of **V***ts*(*fo*) are also eigenvectors of **Cˆ***t*(*fo*), corresponding to the same (non-zero) eigenvalues of the diagonal matrix **Uˆ***s*(*fo*). Provided the eigenvalues of **Cˆ***t*(*fo*) are all different, the orthonormal eigenvectors are unique up to a phase term; this will likely be the case if the source kurtoses are different. Hence the only difference between **V***ts*(*fo*) and **B**⊗(*fo*) is that the columns may be reordered and each column multiplied by a complex constant.

The information we want is **A**⊗(*fo*), which is given by (29)

$$\mathbf{A}\_{\otimes}(f\_o) = \left(\mathbf{H}(f\_o) \otimes \mathbf{H}^\*(f\_o)\right)^\dagger \mathbf{B}\_{\otimes}(f\_o) \tag{36}$$


with † denoting the matrix pseudo-inverse. We do not have the matrix **B**⊗(*fo*), but we do have the matrix **V***ts*(*fo*). Hence we can obtain a matrix **A**′⊗(*fo*) such that

$$\mathbf{A}\_{\otimes}^{'}(f\_o) = \left(\mathbf{H}(f\_o) \otimes \mathbf{H}^\*(f\_o)\right)^{\dagger} \mathbf{V}\_{ts}(f\_o) \tag{37}$$

Furthermore, we obtain **Aˆ**(*fo*) by extracting the first *N* rows and the first *P* columns of **A**′⊗(*fo*). The estimate **Aˆ**(*fo*) is a permuted and (column-wise) scaled version of **A**(*fo*).

This algorithm leads to the estimation of the transfer matrix without prior knowledge of the steering vector or the propagation model such as in the classical techniques.

Therefore, the present algorithm for locating the wide band acoustic sources in the presence of unknown noise can be formulated as the following sequence of steps:

1) Form **C**1(*fn*) (equation (10)) and perform its eigendecomposition;

2) Form **H**(*fn*) (equation (23)), *n* = 1, ..., *M*;

3) Calculate **r***t*(*fn*) (equation (24));

4) Form **C***t*(*fn*) (equation (25)) and perform its eigendecomposition;

5) Form **T**(*fo*, *fn*) (equation (30)), *n* = 1, ..., *M*;

6) Form **Cˆ***t*(*fo*) (equation (31)) and perform its eigendecomposition;

7) Determine **A**′⊗(*fo*) (equation (37)) and **Aˆ**(*fo*).

Naturally, one can use the high resolution algorithm to estimate the azimuths of the sources. The orthogonal projector **Π**(*fo*) is

$$\mathbf{\Pi}(f\_o) = \mathbf{I} - \hat{\mathbf{A}}(f\_o)[\hat{\mathbf{A}}^+(f\_o)\hat{\mathbf{A}}(f\_o)]^{-1}\hat{\mathbf{A}}^+(f\_o) \tag{38}$$

where **I** is the (*N* × *N*) identity matrix. Hence, narrow band MUSIC can be directly applied to estimate the DOAs of the wide band sources according to

$$Z(f\_o, \theta) = \frac{1}{\mathbf{a}^+(f\_o, \theta)\mathbf{\Pi}(f\_o)\mathbf{a}(f\_o, \theta)} \tag{39}$$

with *θ* ∈ [−*π*/2,*π*/2].
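The projector of equation (38) and the spectrum of equation (39) can be sketched as below, assuming an estimate **Aˆ**(*fo*) is available; the helper names are ours:

```python
import numpy as np

def projector(A_hat):
    """Pi(fo) = I - A_hat (A_hat^+ A_hat)^{-1} A_hat^+ (eq. 38):
    orthogonal projector onto the noise subspace."""
    N = A_hat.shape[0]
    G = A_hat.conj().T @ A_hat
    return np.eye(N) - A_hat @ np.linalg.solve(G, A_hat.conj().T)

def spectrum(A_hat, a):
    """Z(fo, theta) of eq. (39) for one steering vector a."""
    Pi = projector(A_hat)
    return 1.0 / np.real(a.conj() @ Pi @ a)
```

Since **Π** annihilates the columns of **Aˆ**(*fo*), the denominator of (39) vanishes, and *Z* peaks, whenever **a**(*fo*, *θ*) aligns with an estimated steering vector.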

#### **11. Numerical results**

In the following simulations, a linear antenna of *N* = 10 equispaced sensors with interelement spacing *d* = *c*/(2*fo*) is used, where *fo* is the mid band frequency and *c* is the velocity of propagation. Eight (*P* = 8) wide band source signals arrive from the directions *θ*1 = 2◦, *θ*2 = 5◦, *θ*3 = 10◦, *θ*4 = 12◦, *θ*5 = 30◦, *θ*6 = 32◦, *θ*7 = 40◦ and *θ*8 = 42◦; they are temporally stationary, zero-mean and band pass, with the same central frequency *fo* = 110 Hz and the same bandwidth *B* = 40 Hz. The additive noise is uncorrelated with the signals. The Signal-to-Noise Ratio (SNR), defined here as the ratio of the power of each source signal to the average power of the noise variances, is equal to *SNR* = 10 dB in all examples.

#### *Experiment 1: Improvement of source localization*

In order to point out the improvement of the localization of the sources based on the higher order statistics, our first experiment is carried out in the presence of Gaussian noise. Figures 19 and 20 show the localization results obtained for the eight simulated sources. The number of sources is taken equal to 8. We can conclude that the fourth order statistics


of the received data have improved the spatial resolution. Indeed, the eight sources are perfectly separated. This improvement is due to the fact that the noise is removed using the cumulants.
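As an aside, the simulated band pass sources of this section can be sketched as follows. Only *fo* = 110 Hz, *B* = 40 Hz and *SNR* = 10 dB come from the text; the sampling frequency, the record length and the use of FFT-domain filtering are our own assumptions about how such signals may be generated.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 1000.0          # sampling frequency in Hz (an assumption; not given in the chapter)
fo, B = 110.0, 40.0  # central frequency and bandwidth of the sources
T = 4096             # number of samples (an assumption)

# zero-mean band pass source: white noise filtered to the band [fo - B/2, fo + B/2]
spectrum = np.fft.rfft(rng.standard_normal(T))
freqs = np.fft.rfftfreq(T, 1.0 / fs)
spectrum[(freqs < fo - B / 2) | (freqs > fo + B / 2)] = 0.0
source = np.fft.irfft(spectrum, n=T)

# additive noise scaled so that SNR = 10 log10(P_signal / P_noise) = 10 dB
snr_db = 10.0
noise = rng.standard_normal(T)
noise *= np.sqrt(np.mean(source**2) / (np.mean(noise**2) * 10**(snr_db / 10)))
received = source + noise

measured_snr = 10 * np.log10(np.mean(source**2) / np.mean(noise**2))
```

The same construction, repeated per source with independent noise realizations, gives sensor signals matching the stated simulation conditions.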

Fig. 19. Spectral matrix-Incoherent signal subspace, 8 uncorrelated sources.

Fig. 20. Cumulant-Incoherent signal subspace, 8 uncorrelated sources.
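The projector-based spectrum of equations (38) and (39) can be sketched as follows. For illustration, the true steering matrix of the simulated array stands in for the estimate **Aˆ** (*fo*); the sound speed drops out because only the ratio *d*/*λ* = 1/2 enters the far-field model, and the small guard against division by zero at the peaks is our own addition.

```python
import numpy as np

def steering(theta_deg, N=10):
    """Far-field ULA steering vectors for half-wavelength spacing
    (d = c / (2 fo), hence d / lambda = 1/2 at the mid band frequency)."""
    theta = np.deg2rad(np.asarray(theta_deg, dtype=float))
    return np.exp(-1j * np.pi * np.arange(N)[:, None] * np.sin(theta))

def music_spectrum(A_hat, grid_deg):
    """Equations (38)-(39): Pi = I - A (A^+ A)^-1 A^+ and
    Z(theta) = 1 / (a^+(theta) Pi a(theta))."""
    N = A_hat.shape[0]
    G = np.linalg.inv(A_hat.conj().T @ A_hat)
    Pi = np.eye(N) - A_hat @ G @ A_hat.conj().T          # orthogonal projector, eq. (38)
    a = steering(grid_deg, N)                            # (N, n_grid) steering vectors
    den = np.real(np.sum(a.conj() * (Pi @ a), axis=0))   # a^+ Pi a for every grid angle
    return 1.0 / np.maximum(den, 1e-12)                  # eq. (39), guarded at the peaks

# the true steering matrix of the simulated scenario stands in for the
# estimate A_hat(fo) that steps 1)-7) would produce
azimuths = np.array([2, 5, 10, 12, 30, 32, 40, 42], dtype=float)
A_hat = steering(azimuths)
grid = np.arange(-90.0, 90.0, 0.1)
Z = music_spectrum(A_hat, grid)
peaks_at_sources = music_spectrum(A_hat, azimuths)
```

Since the columns of **Aˆ** (*fo*) span the signal subspace, the denominator vanishes at the true azimuths and *Z* exhibits sharp peaks there, which is exactly what figures 19 and 20 display.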

#### *Experiment 2: Localization of the correlated sources*

The localization of correlated sources is a delicate problem. Spatial smoothing is one solution (Pillai & Kwon, 1989), but it applies to narrow band sources and is limited to a linear antenna. In the presence of white noise, it is well known that frequential smoothing makes it possible to localize wide band correlated sources (Valaee & Kabal, 1995). In this experiment, the eight sources form two fully correlated groups in the presence of Gaussian noise with an unknown spectral matrix. Figures 21 and 22 give the source localization results. They show that even when the coherent signal subspace is used, the performance of the high resolution algorithm is degraded (figure 21). Figure 22 shows that the cumulant matrix improves the localization; this effectiveness is due to the fact that the noise contribution is null. It follows that the SNR after focusing is improved.

Fig. 21. Spectral matrix-Coherent signal subspace, two groups of correlated sources and Gaussian noise.

Fig. 22. Cumulant matrix-Coherent signal subspace, two groups of correlated sources and Gaussian noise.

#### *Experiment 3: Noise reduction - signal subspace projection*

In this part, our algorithm is applied. The noise is modeled as the sum of Gaussian noise and spatially correlated noise. The eight sources are uncorrelated. Figure 23 shows that the cumulant matrix alone is not sufficient to localize the sources. But if the received data are preprocessed by projecting them onto the signal subspace, so as to eliminate the noise components that are orthogonal to the signal sources, the DOAs can be estimated, as shown in figure 24.

Fig. 23. Cumulant matrix incoherent signal subspace, 8 uncorrelated sources without whitening data.

Fig. 24. Cumulant matrix incoherent signal subspace, 8 uncorrelated sources with whitening data.

Figure 25 gives the results obtained with the coherent signal subspace when the sources are fully correlated. This last part points out the ability of our algorithm to localize wide band correlated sources in the presence of unknown noise fields. Note that this result can be considered an important contribution of our study to the localization of acoustic sources. Indeed, our algorithm makes it possible to solve several practical problems.

Fig. 25. Coherent cumulant matrices for two groups of correlated sources after whitening data.

*Proposed algorithm performance:*

In order to study and compare the performance of our algorithm with the classical wide band methods, an experiment is carried out with the same signals as before. Figure 26

shows the variation of the standard deviation (std) as a function of the *SNR*. The std is defined as *std* = (1/*k*)[∑<sub>*i*=1</sub><sup>*k*</sup> |*θ*ˆ<sub>*i*</sub> − *θ*<sub>*i*</sub>|<sup>2</sup>]<sup>1/2</sup>, where *k* is the number of independent trials. The *SNR* is varied from −40 dB to +40 dB with *k* = 500 independent trials. One can note the interest of using the cumulant matrix instead of the spectral matrix, and the improvement due to the whitening preprocessing or multidimensional filtering included in our algorithm. Our method after whitening presents the smallest std in all cases.

Fig. 26. Standard deviation comparison: −◦ Wang algorithm; −. TCT algorithm; −� proposed method without whitening; −∗ proposed method after whitening.

#### **12. Conclusion**

In this chapter the problem of estimating the direction-of-arrival (DOA) of sources in the presence of spatially correlated noise has been studied. The spatial covariance matrix of the noise is modeled as a band matrix and is supposed to have a certain structure. In the numerical example, the noise covariance matrix is supposed to be the same for all sources, which covers many practical cases where the sources are enclosed. This algorithm can be applied to the localization of sources when the spatial spectrum of the noise or the spatial correlation function between sensors is known. The obtained results show that the proposed algorithm improves the direction estimates compared to those given by the MUSIC algorithm without preprocessing. Several applications on synthetic and experimental data have been presented to show the limits of these estimators with respect to the signal-to-noise ratio, the spatial correlation length of the noise, the number of sources and the number of sensors of the array. The motivation of this work is to reduce the computational load and the effect of additive spatially correlated Gaussian noise when estimating the DOAs of the sources. We have also presented a DOA estimation algorithm for wide band signals based on fourth order cumulants. This algorithm is also applied to noisy data. Both the cumulants of the received data and multidimensional filtering are used to remove the additive noise. The principal interest of this preprocessing is the improvement of the signal-to-noise ratio at each analysis frequency. Then the signal subspace estimated from the average cumulant matrix resolves the fully correlated wide band acoustic sources. The simulation results are used to evaluate the performance and to compare the different coherent signal subspace algorithms. The obtained results show that the performance of the proposed algorithm is similar to that of the spectral matrix based methods when the noise is white, and better in the presence of Gaussian or unknown noise. The std variations and the localization results indicate that whitening the received data improves the localization in the presence of noise that is not completely Gaussian.

#### **13. Acknowledgment**

The authors would like to thank Dr Jean-Pierre Sessarego for providing real data and for his useful collaboration.

#### **14. References**

Reilly, J. & Wong, K. (1992). Estimation of the direction-of-arrival of signals in unknown correlated noise. Part II: Asymptotic behavior and performance of the MAP approach. *IEEE Trans. on Signal Processing*, Vol. 40, No. 8, (Aug. 1992) (2018-2028), ISSN 1053-587X.

Wu, Q. & Wong, K. (1994). UN-MUSIC and UN-CLE: An application of generalized correlation analysis to the estimation of the direction-of-arrival of signals in unknown correlated noise. *IEEE Trans. on Signal Processing*, Vol. 42, No. 9, (1994) (2331-2343), ISSN 1053-587X.

Wax, M. (1991). Detection and localization of multiple sources in noise with known covariance. *IEEE Trans. on Signal Processing*, Vol. 40, No. 1, (Jan. 1992) (245-249), ISSN 1053-587X.

Stoica, P.; Viberg, M. & Ottersten, B. (1994). Instrumental variable approach to array processing in spatially correlated noise fields. *IEEE Trans. on Signal Processing*, Vol. 42, No. 1, (Jan. 1994) (121-133), ISSN 1053-587X.

Ye, H. & DeGroat, R. (1995). Maximum likelihood DOA estimation and asymptotic Cramer-Rao bounds for additive unknown colored noise. *IEEE Trans. on Signal Processing*, Vol. 43, No. 4, (Apr. 1995) (938-949), ISSN 1053-587X.



**3**

**Localization of Buried Objects in Sediment Using High Resolution Array Processing Methods**

Caroline Fossati, Salah Bourennane and Julien Marot

*Institut Fresnel, Ecole Centrale Marseille, France*

#### **1. Introduction**

Non-invasive range and bearing estimation of buried objects in the underwater acoustic environment has received considerable attention (Granara et al., 1998).

Many studies have been developed recently. Some of them use acoustic scattering to localize objects by analyzing acoustic resonance in the time-frequency domain, but these processes are usually limited to simply shaped objects (Nicq & Brussieux, 1998). In (Guillermin et al., 2000) the inversion of measured scattered acoustical waves is used to image buried objects, but the frequencies used are high and the application in a real environment would be difficult. The acoustic imagery technique uses high frequencies that are too strongly attenuated inside the sediment; it is therefore not suitable. Another method, which uses a low frequency synthetic aperture sonar (SAS), has been applied to partially and shallowly buried cylinders in a sandy seabed (Hetet et al., 2004). Other techniques based on signal processing, such as the time reversal technique (Roux & Fink, 2000), have also been developed for object detection and localization, but their applicability in real life has been proven only on cylinders oriented in certain ways and on point scatterers. Furthermore, devising techniques that operate well for simultaneous range and bearing estimation, using wideband and fully correlated signals scattered from nearfield and farfield objects in a noisy environment, remains a challenging problem.

In this chapter, the proposed method is based on array processing methods combined with an acoustic scattering model. Array processing techniques, such as the MUSIC method, have been widely used for acoustic point source localization. Typically these techniques assume that the point sources are on the seabed and in the farfield of the array, so that the measured wavefronts are all planar. The goal then is to determine the direction of arrival (bearing) of these wavefronts. These techniques have not been used for bearing and range estimation of buried objects, and in this chapter we are interested in extending them to this problem. This extension is a challenging problem because here the objects are not point sources, are buried in the seabed and can be anywhere (in the farfield or in the nearfield of the array). Thus the knowledge of the bearing is not sufficient to localize a buried object. Furthermore, the signals are correlated and the Gaussian noise should be taken into account. In addition, we consider that the objects have known shapes. The principal parameters that disturb the object localization problem are the noise, the lack of knowledge of the scattering model and the presence of correlated signals. In the literature there is no method able to deal with all of these parameters at once. However, a satisfying method can be found to cope with each parameter (noise,

