**1.2 Capon beamformer under norm inequality constraint (NICCB)**

The Capon beamformer can experience significant performance degradation when there is a mismatch between the presumed and actual characteristics of the source or array. The goal of NICCB is to impose an additional inequality constraint on the Euclidean norm of **w** for the purpose of improving the robustness to pointing errors and random perturbations in sensor parameters; here **w** denotes the array weight vector. This requires incorporating a norm inequality constraint on **w** of the form:

$$\|\mathbf{w}\|^2 \le \varsigma \tag{1.1}$$

where ς is the norm constraint parameter. Consequently, the NICCB problem is formulated as follows:

$$\begin{cases} \min\_{\mathbf{w}} \mathbf{w}^H \mathbf{R} \mathbf{w} \\ \text{s.t.} \quad \mathbf{w}^H \overline{\mathbf{s}} = 1 \\ \quad \qquad \left\| \mathbf{w} \right\|^2 \le \varsigma \end{cases} \tag{1.2}$$

where **R** is the data covariance matrix, **s** is the presumed signal steering vector, (⋅)<sup>*H*</sup> denotes the conjugate transpose, and ‖⋅‖ denotes the vector *l*<sub>2</sub> norm. For the convenience of analysis, and for analyzing the choice of the norm constraint parameter, the solution to NICCB [22] is introduced as follows.

#### **1.2.1 Solution to NICCB**

Let *S* be the set defined by the constraints in the above optimization problem, namely:

$$S = \left\{ \mathbf{w} \, \middle| \, \mathbf{w}^H \overline{\mathbf{s}} = 1, \left\|\mathbf{w}\right\|^2 \le \varsigma \right\} \tag{1.3}$$

Define a function:

$$f\_1(\mathbf{w}, \lambda, \mu) = \mathbf{w}^H \mathbf{R} \mathbf{w} + \lambda \left( \left\| \mathbf{w} \right\|^2 - \varsigma \right) + \mu \left( -\mathbf{w}^H \overline{\mathbf{s}} - \overline{\mathbf{s}}^H \mathbf{w} + 2 \right) \tag{1.4}$$

where λ is a real-valued Lagrange multiplier with λ ≥ 0 satisfying **R** + λ**I** > 0, so that *f*<sub>1</sub>(**w**, λ, μ) can be minimized with respect to **w**, and μ is an arbitrary Lagrange multiplier. Then:

$$f\_1(\mathbf{w}, \lambda, \mu) \le \mathbf{w}^H \mathbf{R} \mathbf{w}, \quad \mathbf{w} \in S \tag{1.5}$$

with equality on the boundary of *S* .

For the standard Capon beamformer

$$\begin{cases} \min\_{\mathbf{w}} \mathbf{w}^H \mathbf{R} \mathbf{w} \\ \text{s.t.} \quad \mathbf{w}^H \overline{\mathbf{s}} = 1 \end{cases} \tag{1.6}$$

The optimal solution is:

$$\mathbf{w} = \frac{\mathbf{R}^{-1}\overline{\mathbf{s}}}{\overline{\mathbf{s}}^{H}\mathbf{R}^{-1}\overline{\mathbf{s}}}\tag{1.7}$$


where **R**<sup>−1</sup> is the inverse of **R**, namely (⋅)<sup>−1</sup> denotes matrix inversion. Here, we have:

$$\left\|\mathbf{w}\right\|^{2} = \mathbf{w}^{H}\mathbf{w} = \left(\frac{\mathbf{R}^{-1}\overline{\mathbf{s}}}{\overline{\mathbf{s}}^{H}\mathbf{R}^{-1}\overline{\mathbf{s}}}\right)^{H}\frac{\mathbf{R}^{-1}\overline{\mathbf{s}}}{\overline{\mathbf{s}}^{H}\mathbf{R}^{-1}\overline{\mathbf{s}}} = \frac{\overline{\mathbf{s}}^{H}\mathbf{R}^{-2}\overline{\mathbf{s}}}{\left(\overline{\mathbf{s}}^{H}\mathbf{R}^{-1}\overline{\mathbf{s}}\right)^{2}}\tag{1.8}$$

where **R**<sup>−2</sup> = (**R**<sup>−1</sup>)<sup>2</sup> = **R**<sup>−1</sup> ⋅ **R**<sup>−1</sup>; the above result uses the Hermitian property of **R**. Consider the condition:

$$\frac{\overline{\mathbf{s}}^H \mathbf{R}^{-2} \overline{\mathbf{s}}}{\left(\overline{\mathbf{s}}^H \mathbf{R}^{-1} \overline{\mathbf{s}}\right)^2} \le \varsigma \tag{1.9}$$

When the above condition is satisfied, the standard Capon beamformer solution (1.7) satisfies the norm constraint of NICCB and hence is also the solution to NICCB. For this case, λ = 0 and the norm constraint in NICCB is inactive.
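As a quick numerical check of (1.7)–(1.9), the following sketch forms the standard Capon weights and tests whether the norm constraint would be active. The covariance matrix and the half-wavelength ULA steering vector here are synthetic, hypothetical placeholders, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 8
# Hypothetical Hermitian positive-definite data covariance (synthetic example)
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R = A @ A.conj().T + M * np.eye(M)
# Presumed steering vector of a half-wavelength ULA (assumed scenario)
s = np.exp(1j * np.pi * np.arange(M) * np.sin(0.1))

Rinv_s = np.linalg.solve(R, s)           # R^{-1} s̄ without forming the inverse
w = Rinv_s / (s.conj() @ Rinv_s)         # standard Capon weights, eq. (1.7)

# ||w||^2 via eq. (1.8): s̄^H R^{-2} s̄ / (s̄^H R^{-1} s̄)^2
norm_w_sq = (Rinv_s.conj() @ Rinv_s).real / (s.conj() @ Rinv_s).real ** 2

varsigma = 2.0 * norm_w_sq               # a ς large enough that (1.9) holds
assert np.isclose((w.conj() @ w).real, norm_w_sq)   # (1.8) agrees with the direct norm
assert norm_w_sq <= varsigma             # condition (1.9): constraint inactive, λ = 0
```

Here ς is deliberately chosen above ‖**w**‖², so (1.9) holds and the Capon solution itself solves NICCB; a ς below `norm_w_sq` would instead fall into the case of (1.10).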

Otherwise, we have the condition:

$$\varsigma < \frac{\overline{\mathbf{s}}^H \mathbf{R}^{-2} \overline{\mathbf{s}}}{\left(\overline{\mathbf{s}}^H \mathbf{R}^{-1} \overline{\mathbf{s}}\right)^2} \tag{1.10}$$

which is an upper bound on ς such that NICCB differs from the standard Capon beamformer. To deal with this case, we can rewrite *f*<sub>1</sub>(**w**, λ, μ) as follows:

$$\begin{split} f\_1(\mathbf{w}, \lambda, \mu) &= \left[\mathbf{w} - \mu \left(\mathbf{R} + \lambda\mathbf{I}\right)^{-1} \overline{\mathbf{s}}\right]^H \left(\mathbf{R} + \lambda\mathbf{I}\right) \left[\mathbf{w} - \mu \left(\mathbf{R} + \lambda\mathbf{I}\right)^{-1} \overline{\mathbf{s}}\right] \\ &\quad - \mu^2 \overline{\mathbf{s}}^H \left(\mathbf{R} + \lambda\mathbf{I}\right)^{-1} \overline{\mathbf{s}} - \lambda\varsigma + 2\mu \end{split} \tag{1.11}$$

Hence, the unconstrained minimizer of *f*<sub>1</sub>(**w**, λ, μ), for fixed λ and μ, is given by:

$$
\hat{\mathbf{w}}\_{\lambda,\mu} = \mu \left(\mathbf{R} + \lambda \mathbf{I}\right)^{-1} \overline{\mathbf{s}} \tag{1.12}
$$

Clearly, we have:

$$f\_2\left(\lambda,\mu\right) \triangleq f\_1\left(\hat{\mathbf{w}}\_{\lambda,\mu},\lambda,\mu\right) = -\mu^2 \overline{\mathbf{s}}^H \left(\mathbf{R} + \lambda\mathbf{I}\right)^{-1} \overline{\mathbf{s}} - \lambda\varsigma + 2\mu \le \mathbf{w}^H \mathbf{R} \mathbf{w}, \quad \mathbf{w} \in S \tag{1.13}$$

Maximizing *f*<sub>2</sub>(λ, μ) with respect to μ, the optimal μ̂ is given by:

$$
\hat{\mu} = \frac{1}{\overline{\mathbf{s}}^H \left(\mathbf{R} + \lambda \mathbf{I}\right)^{-1} \overline{\mathbf{s}}} \tag{1.14}
$$

Insert μ̂ into *f*<sub>2</sub>(λ, μ), and let:

$$f\_3\left(\lambda\right) \triangleq f\_2\left(\lambda, \hat{\mu}\right) = -\lambda\varsigma + \frac{1}{\overline{\mathbf{s}}^H \left(\mathbf{R} + \lambda\mathbf{I}\right)^{-1} \overline{\mathbf{s}}} \tag{1.15}$$


The maximization of the above function *f*<sub>3</sub>(λ) with respect to λ gives:

$$\frac{\overline{\mathbf{s}}^H \left(\mathbf{R} + \lambda \mathbf{I}\right)^{-2} \overline{\mathbf{s}}}{\left[\overline{\mathbf{s}}^H \left(\mathbf{R} + \lambda \mathbf{I}\right)^{-1} \overline{\mathbf{s}}\right]^2} = \varsigma \tag{1.16}$$

Hence, the optimal Lagrange multiplier λ̂ can be obtained efficiently via, for example, Newton's method applied to the above equation in λ.

Note that using μ̂ in **ŵ**<sub>λ,μ</sub> yields:

$$\hat{\mathbf{w}} = \frac{\left(\mathbf{R} + \lambda\mathbf{I}\right)^{-1}\overline{\mathbf{s}}}{\overline{\mathbf{s}}^{H}\left(\mathbf{R} + \lambda\mathbf{I}\right)^{-1}\overline{\mathbf{s}}}\tag{1.17}$$

which satisfies the constraints of NICCB, namely:

$$\hat{\mathbf{w}}^H \overline{\mathbf{s}} = 1 \tag{1.18}$$

and

$$\left\|\hat{\mathbf{w}}\right\|^2 = \varsigma \tag{1.19}$$

Hence, **ŵ** belongs to the boundary of *S*. Therefore, **ŵ** is our sought solution to the NICCB optimization problem, which has the same form as the Capon beamformer with a diagonal loading term λ**I** added to **R**; namely, NICCB also belongs to the class of diagonal loading approaches.

From the above analysis, we can see that once the Lagrange multiplier λ is obtained, the optimal weight vector for NICCB follows directly. In order to obtain λ, we must solve equation (1.16) via Newton's method; to this end, let:

$$h(\lambda) = \frac{\overline{\mathbf{s}}^H \left(\mathbf{R} + \lambda\mathbf{I}\right)^{-2} \overline{\mathbf{s}}}{\left[\overline{\mathbf{s}}^H \left(\mathbf{R} + \lambda\mathbf{I}\right)^{-1} \overline{\mathbf{s}}\right]^2} \tag{1.20}$$

Hence, the key problem of NICCB is finding the optimal Lagrange multiplier from the above equation (1.20). In this chapter, we give a complete investigation of NICCB, and the existence of its solution is analyzed as follows.
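A minimal end-to-end sketch of solving NICCB is given below. The covariance **R** and presumed steering vector are hypothetical synthetic inputs, and for simplicity the root of h(λ) = ς is bracketed and bisected instead of running the Newton iteration mentioned above (valid because h is monotonic for λ ≥ 0):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 8
# Hypothetical Hermitian positive-definite covariance and assumed ULA steering vector
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R = A @ A.conj().T + M * np.eye(M)
s = np.exp(1j * np.pi * np.arange(M) * np.sin(0.1))

def h(lam):
    """h(λ) of eq. (1.20)."""
    x = np.linalg.solve(R + lam * np.eye(M), s)   # (R + λI)^{-1} s̄
    return (x.conj() @ x).real / (s.conj() @ x).real ** 2

gamma = (np.abs(s) ** 2).sum()           # ||s̄||^2; h(λ) tends to 1/γ as λ grows
varsigma = 0.5 * (1.0 / gamma + h(0.0))  # a ς with 1/γ < ς < h(0): constraint active

lo, hi = 0.0, 1.0
while h(hi) > varsigma:                  # bracket the root of h(λ) = ς
    hi *= 2.0
for _ in range(100):                     # bisect eq. (1.16)
    mid = 0.5 * (lo + hi)
    if h(mid) > varsigma:
        lo = mid
    else:
        hi = mid
lam_hat = 0.5 * (lo + hi)

x = np.linalg.solve(R + lam_hat * np.eye(M), s)
w_hat = x / (s.conj() @ x)               # NICCB weights, eq. (1.17)

assert np.isclose(w_hat.conj() @ s, 1.0)                   # eq. (1.18)
assert np.isclose((w_hat.conj() @ w_hat).real, varsigma)   # eq. (1.19)
```

The final assertions confirm that ŵ satisfies both NICCB constraints, i.e., it lies on the boundary of *S* as the analysis above states.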

#### **1.2.2 Solution to the optimal Lagrange multiplier**

In order to solve the equation (1.20), we perform an eigenvalue decomposition (EVD) of the sample covariance matrix as follows:

$$\mathbf{R} = \mathbf{U} \boldsymbol{\Gamma} \mathbf{U}^H = \sum\_{i=1}^{M} \lambda\_i \mathbf{u}\_i \mathbf{u}\_i^H \tag{1.21}$$

where **Γ** = diag(λ<sub>1</sub>, λ<sub>2</sub>, …, λ<sub>*M*</sub>) is a diagonal matrix, **U** = (**u**<sub>1</sub>, **u**<sub>2</sub>, …, **u**<sub>*M*</sub>) is a unitary matrix, λ<sub>*i*</sub> (*i* = 1, 2, …, *M*) and **u**<sub>*i*</sub> (*i* = 1, 2, …, *M*) are the eigenvalues and eigenvectors of **R**, respectively, and *M* is the total number of degrees of freedom. For the convenience of analysis, we assume that the eigenvalues/eigenvectors of **R** are sorted in descending order, i.e.,

$$
\lambda\_1 \ge \lambda\_2 \ge \cdots \ge \lambda\_M \tag{1.22}
$$

We can have:


$$h(\lambda) = \frac{\sum\_{i=1}^{M} \frac{\overline{\mathbf{s}}^{H} \mathbf{u}\_{i} \mathbf{u}\_{i}^{H} \overline{\mathbf{s}}}{\left(\lambda\_{i} + \lambda\right)^{2}}}{\left[\sum\_{i=1}^{M} \frac{\overline{\mathbf{s}}^{H} \mathbf{u}\_{i} \mathbf{u}\_{i}^{H} \overline{\mathbf{s}}}{\lambda\_{i} + \lambda}\right]^{2}} = \frac{\sum\_{i=1}^{M} \frac{\left\|\overline{\mathbf{s}}^{H} \mathbf{u}\_{i}\right\|^{2}}{\left(\lambda\_{i} + \lambda\right)^{2}}}{\left[\sum\_{i=1}^{M} \frac{\left\|\overline{\mathbf{s}}^{H} \mathbf{u}\_{i}\right\|^{2}}{\lambda\_{i} + \lambda}\right]^{2}} \tag{1.23}$$

Therefore, *h*(λ) is a monotonically decreasing function of λ ≥ 0 [22]; then:

$$h(\lambda) = \frac{\sum\_{i=1}^{M} \frac{\left\| \overline{\mathbf{s}}^{H} \mathbf{u}\_{i} \right\|^{2}}{\left(\lambda\_{i} + \lambda\right)^{2}}}{\left[ \sum\_{i=1}^{M} \frac{\left\| \overline{\mathbf{s}}^{H} \mathbf{u}\_{i} \right\|^{2}}{\lambda\_{i} + \lambda} \right]^{2}} \leq \frac{\sum\_{i=1}^{M} \frac{\left\| \overline{\mathbf{s}}^{H} \mathbf{u}\_{i} \right\|^{2}}{\left( \lambda\_{M} + \lambda \right)^{2}}}{\left[ \sum\_{i=1}^{M} \frac{\left\| \overline{\mathbf{s}}^{H} \mathbf{u}\_{i} \right\|^{2}}{\lambda\_{1} + \lambda} \right]^{2}} = \left( \frac{\lambda\_{1} + \lambda}{\lambda\_{M} + \lambda} \right)^{2} \frac{1}{\sum\_{i=1}^{M} \left\| \overline{\mathbf{s}}^{H} \mathbf{u}\_{i} \right\|^{2}} \tag{1.24}$$

and

$$h(\lambda) = \frac{\sum\_{i=1}^{M} \frac{\left\| \overline{\mathbf{s}}^{H} \mathbf{u}\_{i} \right\|^{2}}{\left(\lambda\_{i} + \lambda\right)^{2}}}{\left[ \sum\_{i=1}^{M} \frac{\left\| \overline{\mathbf{s}}^{H} \mathbf{u}\_{i} \right\|^{2}}{\lambda\_{i} + \lambda} \right]^{2}} \ge \frac{\sum\_{i=1}^{M} \frac{\left\| \overline{\mathbf{s}}^{H} \mathbf{u}\_{i} \right\|^{2}}{\left( \lambda\_{1} + \lambda \right)^{2}}}{\left[ \sum\_{i=1}^{M} \frac{\left\| \overline{\mathbf{s}}^{H} \mathbf{u}\_{i} \right\|^{2}}{\lambda\_{M} + \lambda} \right]^{2}} = \left( \frac{\lambda\_{M} + \lambda}{\lambda\_{1} + \lambda} \right)^{2} \frac{1}{\sum\_{i=1}^{M} \left\| \overline{\mathbf{s}}^{H} \mathbf{u}\_{i} \right\|^{2}} \tag{1.25}$$

and let:

$$\gamma = \sum\_{i=1}^{M} \left\| \overline{\mathbf{s}}^{H} \mathbf{u}\_{i} \right\|^{2} \tag{1.26}$$

Alternatively, the above inequality relationships can be expressed as:

$$
\sqrt{\gamma\varsigma} \le \frac{\lambda\_1 + \lambda}{\lambda\_M + \lambda} \tag{1.27}
$$

and

$$
\sqrt{\gamma\varsigma} \ge \frac{\lambda\_M + \lambda}{\lambda\_1 + \lambda} \tag{1.28}
$$
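The EVD identity (1.23) and the bounds (1.24)–(1.26) can be sanity-checked numerically; the covariance **R** and steering vector below are synthetic, hypothetical placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 8
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R = A @ A.conj().T + M * np.eye(M)           # hypothetical covariance
s = np.exp(1j * np.pi * np.arange(M) * np.sin(0.1))

eigvals, U = np.linalg.eigh(R)               # eigh returns ascending order
eigvals, U = eigvals[::-1], U[:, ::-1]       # sort descending, matching eq. (1.22)
a = np.abs(s.conj() @ U) ** 2                # |s̄^H u_i|^2 for i = 1..M
gamma = a.sum()                              # γ of eq. (1.26)

lam = 0.7                                    # an arbitrary λ ≥ 0
h_evd = (a / (eigvals + lam) ** 2).sum() / (a / (eigvals + lam)).sum() ** 2  # eq. (1.23)

x = np.linalg.solve(R + lam * np.eye(M), s)  # direct form of h(λ), eq. (1.20)
h_direct = (x.conj() @ x).real / (s.conj() @ x).real ** 2

upper = ((eigvals[0] + lam) / (eigvals[-1] + lam)) ** 2 / gamma  # bound (1.24)
lower = ((eigvals[-1] + lam) / (eigvals[0] + lam)) ** 2 / gamma  # bound (1.25)

assert np.isclose(h_evd, h_direct)           # (1.20) and (1.23) agree
assert lower <= h_evd <= upper               # bounds (1.24)-(1.25) hold
```

Since **U** is unitary, γ also equals ‖s̄‖², which is why the code can compute it from either the projections or the steering vector directly.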


Next, we analyze the bound of the Lagrange multiplier λ and its existence.

1. If γς > 1, using (1.27) and (1.28), we can have:

$$\begin{cases} \sqrt{\gamma\varsigma} \left( \lambda\_{M} + \lambda \right) \le \lambda\_1 + \lambda \\ \sqrt{\gamma\varsigma} \left( \lambda\_1 + \lambda \right) \ge \lambda\_{M} + \lambda \end{cases} \implies \begin{cases} \lambda \le \dfrac{\lambda\_1 - \sqrt{\gamma\varsigma}\, \lambda\_{M}}{\sqrt{\gamma\varsigma} - 1} \\ \lambda \ge \dfrac{\lambda\_{M} - \sqrt{\gamma\varsigma}\, \lambda\_1}{\sqrt{\gamma\varsigma} - 1} \end{cases} \tag{1.29}$$

Since $\gamma\varsigma > 1$, $\lambda\_M - \sqrt{\gamma\varsigma}\, \lambda\_1 < 0$ and $\lambda \ge 0$, so that $\lambda\_1 - \sqrt{\gamma\varsigma}\, \lambda\_M > 0 \Leftrightarrow \sqrt{\gamma\varsigma} < \lambda\_1 / \lambda\_M$. Therefore, the bound of the Lagrange multiplier λ under $1 < \sqrt{\gamma\varsigma} < \lambda\_1 / \lambda\_M$ is given as follows:

$$
\lambda\_{\min}^{(1)} \triangleq 0 \le \lambda \le \frac{\lambda\_1 - \sqrt{\gamma\varsigma}\, \lambda\_M}{\sqrt{\gamma\varsigma} - 1} \triangleq \lambda\_{\max}^{(1)} \tag{1.30}
$$

Then, we have:

$$h\left(\lambda\_{\min}^{(1)}\right) = h\left(0\right) = \frac{\overline{\mathbf{s}}^H \mathbf{R}^{-2} \overline{\mathbf{s}}}{\left[\overline{\mathbf{s}}^H \mathbf{R}^{-1} \overline{\mathbf{s}}\right]^2} > \varsigma \tag{1.31}$$

and

$$h\left(\lambda\_{\max}^{(1)}\right) = h\left(\lambda\right)\Big|\_{\lambda\_{\max}^{(1)}} \le \left(\frac{\lambda\_1 + \lambda}{\lambda\_M + \lambda}\right)^2 \frac{1}{\gamma} \Bigg|\_{\lambda\_{\max}^{(1)}} = \varsigma \tag{1.32}$$

Hence, when $1 < \sqrt{\gamma\varsigma} < \lambda\_1 / \lambda\_M$, there is a unique solution $\lambda \in \left[\lambda\_{\min}^{(1)}, \lambda\_{\max}^{(1)}\right]$ satisfying $h(\lambda) = \varsigma$.

2. If γς < 1, using (1.27) and (1.28), we can have:

$$\begin{cases} \sqrt{\gamma\varsigma} \left( \lambda\_{M} + \lambda \right) \le \lambda\_1 + \lambda \\ \sqrt{\gamma\varsigma} \left( \lambda\_1 + \lambda \right) \ge \lambda\_{M} + \lambda \end{cases} \implies \begin{cases} \lambda \ge \dfrac{\sqrt{\gamma\varsigma}\, \lambda\_{M} - \lambda\_1}{1 - \sqrt{\gamma\varsigma}} \\ \lambda \le \dfrac{\sqrt{\gamma\varsigma}\, \lambda\_1 - \lambda\_{M}}{1 - \sqrt{\gamma\varsigma}} \end{cases} \tag{1.33}$$

Since $\gamma\varsigma < 1$, $\sqrt{\gamma\varsigma}\, \lambda\_M - \lambda\_1 < 0$ and $\lambda \ge 0$, so that $\sqrt{\gamma\varsigma}\, \lambda\_1 - \lambda\_M > 0 \Leftrightarrow \sqrt{\gamma\varsigma} > \lambda\_M / \lambda\_1$. Therefore, the bound of the Lagrange multiplier λ under $\lambda\_M / \lambda\_1 < \sqrt{\gamma\varsigma} < 1$ is given as follows:

$$
\lambda\_{\min}^{(2)} \triangleq 0 \le \lambda \le \frac{\sqrt{\gamma\varsigma}\, \lambda\_1 - \lambda\_M}{1 - \sqrt{\gamma\varsigma}} \triangleq \lambda\_{\max}^{(2)} \tag{1.34}
$$

Then, we have:

$$h\left(\lambda\_{\min}^{(2)}\right) = h\left(0\right) = \frac{\overline{\mathbf{s}}^H \mathbf{R}^{-2} \overline{\mathbf{s}}}{\left[\overline{\mathbf{s}}^H \mathbf{R}^{-1} \overline{\mathbf{s}}\right]^2} > \varsigma \tag{1.35}$$

and


$$h\left(\lambda\_{\max}^{(2)}\right) = h\left(\lambda\right)\Big|\_{\lambda\_{\max}^{(2)}} \ge \left(\frac{\lambda\_M + \lambda}{\lambda\_1 + \lambda}\right)^2 \frac{1}{\gamma} \Bigg|\_{\lambda\_{\max}^{(2)}} = \varsigma \tag{1.36}$$

Hence, when $\lambda\_M / \lambda\_1 < \sqrt{\gamma\varsigma} < 1$, there is no solution $\lambda \in \left[\lambda\_{\min}^{(2)}, \lambda\_{\max}^{(2)}\right]$ satisfying $h(\lambda) = \varsigma$. In a word, we can conclude that when $1 < \sqrt{\gamma\varsigma} < \lambda\_1 / \lambda\_M$, there is a unique solution $\lambda \in \left[\lambda\_{\min}^{(1)}, \lambda\_{\max}^{(1)}\right]$ satisfying $h(\lambda) = \varsigma$.
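Case 1 above can be illustrated numerically. With a synthetic **R** and steering vector (hypothetical placeholders) and a ς chosen strictly between 1/γ and h(0), the bracket [0, λ_max^(1)] from (1.30) does contain the root of h(λ) = ς, as (1.31)–(1.32) promise:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 8
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R = A @ A.conj().T + M * np.eye(M)           # hypothetical covariance
s = np.exp(1j * np.pi * np.arange(M) * np.sin(0.1))

def h(lam):
    x = np.linalg.solve(R + lam * np.eye(M), s)
    return (x.conj() @ x).real / (s.conj() @ x).real ** 2

eigvals = np.linalg.eigvalsh(R)
lam_1, lam_M = eigvals[-1], eigvals[0]       # largest and smallest eigenvalues
gamma = (np.abs(s) ** 2).sum()               # Σ|s̄^H u_i|^2 = ||s̄||^2

varsigma = 0.5 * (1.0 / gamma + h(0.0))      # 1/γ < ς < h(0): the active-constraint case
sq = np.sqrt(gamma * varsigma)
assert 1.0 < sq < lam_1 / lam_M              # condition 1 < sqrt(γς) < λ1/λM of case 1

lam_max = (lam_1 - sq * lam_M) / (sq - 1.0)  # λ_max^(1), eq. (1.30)
assert h(0.0) > varsigma                     # eq. (1.31): h starts above ς
assert h(lam_max) <= varsigma                # eq. (1.32): h ends at or below ς
```

Because h is continuous on the bracket and crosses ς exactly once, any one-dimensional root finder restricted to [0, λ_max^(1)] recovers the unique multiplier.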

#### **1.3 Norm inequality constraint parameter selection**

From the above analysis, we can see that it is important to select the norm inequality constraint parameter ς for NICCB properly. If ς is too large, the constraint is inactive; on the contrary, if ς is too small, there is no solution satisfying NICCB.

We have analyzed that when $1 < \sqrt{\gamma\varsigma} < \lambda\_1 / \lambda\_M$, there is a unique solution $\lambda \in \left[\lambda\_{\min}^{(1)}, \lambda\_{\max}^{(1)}\right]$ satisfying $h(\lambda) = \varsigma$. Hence, we can obtain the selection bound of the norm inequality constraint parameter ς as follows:

$$1 < \sqrt{\gamma\varsigma} < \frac{\lambda\_1}{\lambda\_M} \tag{1.37}$$

Namely:

$$\frac{1}{\gamma} < \varsigma < \frac{1}{\gamma} \cdot \left(\frac{\lambda\_1}{\lambda\_M}\right)^2 \tag{1.38}$$

Adding the condition $\varsigma < \varsigma\_0 \triangleq \overline{\mathbf{s}}^H \mathbf{R}^{-2} \overline{\mathbf{s}} \big/ \left(\overline{\mathbf{s}}^H \mathbf{R}^{-1} \overline{\mathbf{s}}\right)^2$ from (1.10), we can obtain:

$$\varsigma\_{\min} \triangleq \frac{1}{\gamma} < \varsigma < \min \left\{ \varsigma\_0,\ \frac{1}{\gamma} \left( \frac{\lambda\_1}{\lambda\_M} \right)^2 \right\} \triangleq \varsigma\_{\max} \tag{1.39}$$

If the norm inequality constraint parameter ς lies outside the above bounds, there is no solution to NICCB. Hence, ς should be chosen in the interval defined by the above inequalities.
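A short sketch of the selection rule (1.39); the covariance **R** and steering vector are again synthetic, hypothetical inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 8
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R = A @ A.conj().T + M * np.eye(M)           # hypothetical covariance
s = np.exp(1j * np.pi * np.arange(M) * np.sin(0.1))

eigvals = np.linalg.eigvalsh(R)
lam_1, lam_M = eigvals[-1], eigvals[0]
gamma = (np.abs(s) ** 2).sum()               # γ of eq. (1.26)

Rinv_s = np.linalg.solve(R, s)
varsigma_0 = (Rinv_s.conj() @ Rinv_s).real / (s.conj() @ Rinv_s).real ** 2  # ς0 from (1.10)

varsigma_min = 1.0 / gamma                                     # lower bound of eq. (1.39)
varsigma_max = min(varsigma_0, (lam_1 / lam_M) ** 2 / gamma)   # upper bound of eq. (1.39)

assert varsigma_min < varsigma_max               # a nonempty interval of valid ς exists
varsigma = 0.5 * (varsigma_min + varsigma_max)   # any ς in the open interval is admissible
```

Choosing ς near the middle of the interval keeps a safe margin from both failure modes: an inactive constraint (ς too large) and a nonexistent solution (ς too small).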

#### **1.4 Capon beamformer under norm equality constraint (NECCB)**

From the above analysis, we can see that the norm inequality constraint can enhance the robustness of NICCB. However, since the inequality admits a wide range, the norm of the


weight vector will vary over a correspondingly wide range. If the weight-vector norm fluctuates sharply, the performance improvement will be weakened greatly. Because the norm equality constraint (NEC) is stronger than the norm inequality constraint (NIC), NECCB will have more robust performance than NICCB. Hence, NECCB is proposed and solved effectively in this chapter.

The goal of NECCB is to impose an additional equality constraint on the Euclidean norm of **w**. The NECCB problem is formulated as follows:

$$\begin{cases} \min\_{\mathbf{w}} \mathbf{w}^H \mathbf{R} \mathbf{w} \\ \text{s.t.} \quad \mathbf{w}^H \overline{\mathbf{s}} = 1 \\ \quad \qquad \left\| \mathbf{w} \right\|^2 = \varsigma \end{cases} \tag{1.40}$$

Comparing NECCB with NICCB, we can draw the following conclusions: (1) The solution to NICCB is obtained on the boundary of its constraint; similarly, the solution to NECCB is also obtained on its constraint boundary. (2) The solution methods for the two beamformers (i.e., the two optimization problems) differ. In the aforementioned solution to NICCB, the Lagrange multiplier is restricted to positive real values only, whereas for NECCB the Lagrange multiplier may be an arbitrary real value, that is, either positive or negative. Hence, viewed as an optimization problem, NECCB admits two candidates for the optimal Lagrange multiplier, one positive and one negative. The positive one is in fact the solution to NICCB; to distinguish the two, it is the negative solution that is of interest for NECCB. To solve NECCB, we must build on the results derived for NICCB, since some of the inequality manipulations used there (such as bounding from above and below) are valid only for positive real values.

Similar to NICCB, the solution to NECCB can also be obtained by the Lagrange multiplier methodology, and the optimal weight vector of NECCB has the same form as that of NICCB. The only difference between NECCB and NICCB lies in the Lagrange multiplier λ: for NICCB, λ ≥ 0, whereas here λ is an arbitrary real value.

Although the solution to NECCB has the same form as that of NICCB, the bound on the Lagrange multiplier is different. In order to reuse the NICCB analysis for NECCB, we replace the Lagrange multiplier by its absolute value; the bound on the Lagrange multiplier $\bar{\lambda}$ for NECCB is then given by:

$$\sqrt{\gamma\varsigma} \le \frac{\lambda_1 + \left|\bar{\lambda}\right|}{\lambda_M + \left|\bar{\lambda}\right|} \tag{1.41}$$

and

$$\sqrt{\gamma\varsigma} \ge \frac{\lambda_M + \left|\bar{\lambda}\right|}{\lambda_1 + \left|\bar{\lambda}\right|} \tag{1.42}$$

1. If γς > 1, then:


$$\frac{\lambda_M - \sqrt{\gamma\varsigma}\,\lambda_1}{\sqrt{\gamma\varsigma} - 1} \le \left|\bar{\lambda}\right| \le \frac{\lambda_1 - \sqrt{\gamma\varsigma}\,\lambda_M}{\sqrt{\gamma\varsigma} - 1} \tag{1.43}$$

If $\lambda_1 - \sqrt{\gamma\varsigma}\,\lambda_M > 0$, then $\sqrt{\gamma\varsigma} < \lambda_1 / \lambda_M$; moreover $\lambda_M - \sqrt{\gamma\varsigma}\,\lambda_1 < 0$ while $\left|\bar{\lambda}\right| > 0$, so the lower bound in (1.43) is inactive. Therefore, if $1 < \sqrt{\gamma\varsigma} < \lambda_1 / \lambda_M$, we have:

$$\bar{\lambda}_{\min}^{(1)} = -\frac{\lambda_1 - \sqrt{\gamma\varsigma}\,\lambda_M}{\sqrt{\gamma\varsigma} - 1} \le \bar{\lambda} \le \frac{\lambda_1 - \sqrt{\gamma\varsigma}\,\lambda_M}{\sqrt{\gamma\varsigma} - 1} = \bar{\lambda}_{\max}^{(1)} \tag{1.44}$$

Since $\bar{\lambda}_{\max}^{(1)} > 0$ and $\bar{\lambda}_{\min}^{(1)} = -\bar{\lambda}_{\max}^{(1)} < 0$, when $1 < \sqrt{\gamma\varsigma} < \lambda_1 / \lambda_M$ the solution to NECCB in the interval $\left[0, \bar{\lambda}_{\max}^{(1)}\right]$ is the same as that of NICCB, but the solution in the interval $\left[\bar{\lambda}_{\min}^{(1)}, 0\right]$ is the true solution to NECCB.

2. If γς < 1, then:

$$\frac{\sqrt{\gamma\varsigma}\,\lambda_M - \lambda_1}{1 - \sqrt{\gamma\varsigma}} \le \left|\bar{\lambda}\right| \le \frac{\sqrt{\gamma\varsigma}\,\lambda_1 - \lambda_M}{1 - \sqrt{\gamma\varsigma}} \tag{1.45}$$

If $\sqrt{\gamma\varsigma}\,\lambda_1 - \lambda_M > 0$, then $\sqrt{\gamma\varsigma} > \lambda_M / \lambda_1$; moreover $\sqrt{\gamma\varsigma}\,\lambda_M - \lambda_1 < 0$ while $\left|\bar{\lambda}\right| > 0$, so the lower bound in (1.45) is inactive. Therefore, if $\lambda_M / \lambda_1 < \sqrt{\gamma\varsigma} < 1$, we have:

$$\bar{\lambda}_{\min}^{(2)} = -\frac{\sqrt{\gamma\varsigma}\,\lambda_1 - \lambda_M}{1 - \sqrt{\gamma\varsigma}} \le \bar{\lambda} \le \frac{\sqrt{\gamma\varsigma}\,\lambda_1 - \lambda_M}{1 - \sqrt{\gamma\varsigma}} = \bar{\lambda}_{\max}^{(2)} \tag{1.46}$$

Since $\bar{\lambda}_{\max}^{(2)} > 0$ and $\bar{\lambda}_{\min}^{(2)} = -\bar{\lambda}_{\max}^{(2)} < 0$, the above analysis of NICCB shows that when $\lambda_M / \lambda_1 < \sqrt{\gamma\varsigma} < 1$ there is no solution to NECCB in the interval $\left[0, \bar{\lambda}_{\max}^{(2)}\right]$, but the solution in the interval $\left[\bar{\lambda}_{\min}^{(2)}, 0\right]$ is the true solution to NECCB.
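As a quick numerical check of the two cases, the sketch below evaluates the upper bounds $\bar{\lambda}_{\max}^{(1)}$ and $\bar{\lambda}_{\max}^{(2)}$ from (1.44) and (1.46) for hypothetical eigenvalue extremes $\lambda_1$ and $\lambda_M$ (the values are chosen for illustration only and do not come from the chapter's simulations):

```python
# Hypothetical largest / smallest eigenvalues of R (assumed values)
lam1, lamM = 9.0, 1.0

def lam_max_bound(gz):
    """Upper bound on |lambda_bar|; gz stands for sqrt(gamma * varsigma).
    Case 1 (gz > 1) uses Eq. (1.44); case 2 (gz < 1) uses Eq. (1.46)."""
    if gz > 1:
        return (lam1 - gz * lamM) / (gz - 1)
    return (gz * lam1 - lamM) / (1 - gz)

gz1 = 2.0                    # case 1: 1 < gz < lam1/lamM = 9
lmax1 = lam_max_bound(gz1)   # (9 - 2*1)/(2 - 1) = 7
gz2 = 0.4                    # case 2: lamM/lam1 ~ 0.111 < gz < 1
lmax2 = lam_max_bound(gz2)   # (0.4*9 - 1)/(1 - 0.4)
lmin1, lmin2 = -lmax1, -lmax2   # lower bounds are the negated upper bounds
```

In both cases the bound is positive, so the interval $\left[\bar{\lambda}_{\min}, 0\right]$ containing the NECCB solution is nonempty, as the analysis requires.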

From the above analysis, we can conclude as follows:

1. When $1 < \sqrt{\gamma\varsigma} < \lambda_1 / \lambda_M$, the solution in the interval $\left[\bar{\lambda}_{\min}^{(1)}, 0\right]$ is the true solution to NECCB, and the norm equality constraint parameter should be chosen in the bound given by (1.44).
2. When $\lambda_M / \lambda_1 < \sqrt{\gamma\varsigma} < 1$, the solution in the interval $\left[\bar{\lambda}_{\min}^{(2)}, 0\right]$ is the true solution to NECCB, and the norm equality constraint parameter should be chosen in the bound given by (1.46).



## **1.5 Simulation analysis**

In order to validate the correctness and efficiency of the proposed algorithms, we conduct the following analysis. In our simulations, we assume a uniform linear array with N=10 omnidirectional sensors spaced half a wavelength apart. Throughout all examples, we assume there is one desired source, namely a signal from direction 0º with a Signal-to-Noise Ratio (SNR) of -5 dB. The presumed signal direction is 5º (i.e., there is a 5º direction mismatch).

For comparison, we include the benchmark standard Capon beamforming algorithm corresponding to the ideal case in which the covariance matrix is estimated by the maximum likelihood estimator (MLE) and the actual steering vector is used. This algorithm does not correspond to any real situation, but it is included in our simulations for the sake of comparison only; it is denoted Ideal-SCB in the figures. The other algorithms are the standard Capon beamformer (SCB), NICCB, and NECCB. For NICCB and NECCB, the constraint parameter is selected as the median of the allowable bound.
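The simulation scenario described above can be reproduced with a short sketch; the snapshot count K is our own assumption, as it is not fixed in the text:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 10, 500                    # sensors; snapshot count K is assumed

def steer(theta_deg):
    """Half-wavelength ULA steering vector."""
    n = np.arange(N)
    return np.exp(2j * np.pi * 0.5 * n * np.sin(np.deg2rad(theta_deg)))

a_true = steer(0.0)               # actual signal direction: 0 deg
s_bar = steer(5.0)                # presumed direction: 5 deg mismatch

snr_db = -5.0
amp = 10 ** (snr_db / 20)         # signal amplitude for unit-power noise
sig = amp * (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
noise = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
X = np.outer(a_true, sig) + noise
R_hat = X @ X.conj().T / K        # ML sample covariance used by SCB/NICCB/NECCB
```

The sample covariance `R_hat` then feeds the SCB, NICCB, and NECCB weight computations, while Ideal-SCB would use `a_true` in place of `s_bar`.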

#### **1.5.1 Effectiveness analysis**

In order to show the effectiveness of the proposed algorithms, we first compare the patterns of the aforementioned Capon beamforming algorithms. The Capon beamformer patterns are given in Fig. 1. Because of the signal direction mismatch, the mainlobe of SCB departs from the signal direction. The performance of NICCB is slightly better than that of SCB, and NECCB is the best of all: the direction mismatch is overcome commendably, and NECCB also has a lower sidelobe level. Here, NICCB uses the positive optimal loading level, while NECCB uses the negative optimal loading level. From the comparison, we can see that NECCB performs better than NICCB.

Fig. 1. Capon beamformer pattern comparison


The variation of the beamformer output SNR versus the number of samples is given in Fig. 2. We can see that the SNR varies with the number of samples. The SNR of NICCB is very close to that of SCB and lower than that of Ideal-SCB, but NECCB is the best of all; especially for small sample numbers it has preferable performance. Hence, the norm constraint can improve the SNR, and NECCB has the highest SNR among the listed algorithms.

Fig. 2. Output SNR versus samples number

Fig. 3. Output SNR versus angle mismatch

Robust Beamforming and DOA Estimation 101

**-6.09**



From the above simulation results, we can see that the loading level has a great impact on the performance of the Capon beamformer, and NECCB has the best pointing performance; namely, the optimal negative loading is the best. This is also consistent with the theoretical analysis: for a robust beamformer with diagonal loading, the improvement is determined by the loading level, and when the loading level is optimal, the performance improvement is the greatest.
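Such a loading-level sweep can be sketched on a simplified scenario of our own construction (known covariance, single source, no interference; not the chapter's exact setup), evaluating the output SNR of the diagonally loaded Capon beamformer over both negative and positive loading levels:

```python
import numpy as np

N = 10
n_idx = np.arange(N)

def steer(theta_deg):
    """Half-wavelength ULA steering vector."""
    return np.exp(2j * np.pi * 0.5 * n_idx * np.sin(np.deg2rad(theta_deg)))

a_true, s_bar = steer(0.0), steer(5.0)   # actual vs presumed direction
sigma_s2 = 10 ** (-5 / 10)               # -5 dB signal power, unit noise
Q = np.eye(N)                            # noise covariance
R = sigma_s2 * np.outer(a_true, a_true.conj()) + Q

def out_snr_db(lmb):
    """Output SNR of the diagonally loaded Capon beamformer at loading lmb."""
    x = np.linalg.solve(R + lmb * np.eye(N), s_bar)
    w = x / (s_bar.conj() @ x)
    sig = sigma_s2 * abs(w.conj() @ a_true) ** 2
    return 10 * np.log10(sig / np.real(w.conj() @ Q @ w))

# Sweep negative and positive loading; R + lmb*I stays positive definite
# because the smallest eigenvalue of R is 1 here.
levels = np.linspace(-0.9, 20.0, 400)
snrs = np.array([out_snr_db(l) for l in levels])
best_level = levels[int(np.argmax(snrs))]
```

The sign and value of the best loading level depend on the scenario; the point of the sweep is that the optimum need not be zero or positive, which is what motivates NECCB's negative loading.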

Fig. 4. Output SNR versus loading level (Diagonal Loading Level λ vs. Beamformer SNR in dB; curves: Ideal-SCB, SCB, NCCB, Optimal λ; the optimal loading level is marked at -6.09)

Fig. 5. Weight vector norm versus loading level (Diagonal Loading Level λ vs. Weight Vector Norm)

The variation of the Capon beamformer output signal-to-noise ratio (SNR) versus signal direction mismatch is given in Fig. 3. We can see that the SNR varies with the signal direction mismatch; when the angle error is in the range [-7º, 7º], NECCB has a higher SNR than SCB and NICCB. The higher SNR of NECCB can be explained by the beam pattern comparison in Fig. 1: NECCB not only has good pointing performance but also a lower sidelobe level; that is, for the same desired signal output, the output noise of NECCB is lower. The simulation results can also be explained as follows. For the scenario used, the Signal-to-Noise Ratio is -5 dB, and for NECCB the optimal Lagrange multiplier, and hence the optimal loading level, is negative, whereas for the others the loading level is zero or positive. Therefore, for the NECCB beamformer the output noise power is decreased, while for the other beamformers it is increased; hence NECCB has a higher output SNR than the others. For the sake of saving space, the corresponding beam pattern comparison is not given, but in the simulation the NECCB pattern also points exactly to the actual signal direction. Hence, NECCB has better robustness in the signal direction mismatch case.

From the above analysis, we can see that NECCB has the best robustness against signal direction mismatch.
