Fundamentals of Signal Processing and Adaptive Filters

#### **Chapter 1**

## Fundamentals of Narrowband Array Signal Processing

*Zeeshan Ahmad*

#### **Abstract**

Array signal processing is an actively developing research area closely connected to progress in optimization theory, and it remains a key technology attracting widespread attention in signal processing. This chapter provides an overview of the fundamental concepts and essential terminology employed in narrowband array signal processing. We first develop a general signal model for narrowband adaptive arrays and discuss the beamforming operation. We next introduce the basic performance parameters of adaptive arrays and the second-order statistics of the array data. We then formulate various optimal weight vector solution criteria. Finally, we discuss various types of adaptive filtering algorithms. Throughout, the chapter emphasizes the theory of narrowband array signal processing as employed in narrowband beamforming and direction-of-arrival (DOA) estimation algorithms.

**Keywords:** Adaptive algorithms, Adaptive arrays, Array signal processing, Beamforming

#### **1. Introduction**

Array signal processing [1, 2] is an indispensable technique in signal processing with ubiquitous applications. Its fundamental principles and techniques are applicable in various fields such as sonar, radar, and wireless communications. Antenna array processing manipulates and processes each sensor output according to a certain algorithm to achieve better system performance than a single antenna can, and estimates signal parameters from the data accumulated over the spatial aperture of the antenna array [3, 4]. These parameters of interest include the signal content itself, the DOAs, and the power. To obtain this information, the sensor array data is processed using statistical and adaptive signal processing techniques, including parameter estimation and adaptive filtering applied to array signal processing. Array processing also plays an important role in multiple-input multiple-output (MIMO) communication systems and waveform-diversity MIMO radar systems, improving performance, reducing clutter, and increasing array resolution [1–4].

All in all, array signal processing techniques offer numerous potential advantages, such as improved system capacity, wider signal bandwidth, space-division multiple access (SDMA), higher signal-to-noise ratio (SNR), a larger frequency reuse factor, side-lobe nulling, more degrees of freedom, and finer array resolution [5]. In this chapter, we introduce the basic principles of array signal processing techniques to further understand their implementation and applications. We begin by formulating the mathematical signal model used as a basis for discussing array signal processing in beamforming and direction-of-arrival (DOA) estimation algorithms. We also provide some introductory material about beamforming techniques, performance analysis parameters, and a brief overview of some basic beamforming algorithms.

#### **2. Adaptive array signal model**

Since the real signal transmission environment is complex, a rigorous mathematical model is the basis for adaptive beamforming and lays the groundwork for the discussion of beamforming algorithms. To simplify the analysis, the signal source used in this chapter is a narrowband signal; that is, the bandwidth of the received array signal is much smaller than the carrier frequency of the signal, assuming that [6]

$$B \ll f\_c,$$

where *B* is the signal bandwidth and *f<sub>c</sub>* is the carrier frequency.

Although the above assumption is not valid for wideband signal sources, the fundamental model used for them is very similar. Therefore, this chapter focuses on the mathematical model based on the narrowband beamforming principle.

Adaptive antenna arrays may have different geometrical configurations. Different spatial distributions of array elements lead to different array configurations, such as linear, circular, rectangular, and triangular arrays [7, 8]. For an arbitrary array structure with *M* elements as shown in **Figure 1**, *θ* and *ϕ* denote the elevation angle and the azimuth angle, respectively. The vector **a** and **p**<sub>*i*</sub> respectively denote the direction vector of the signal and the coordinates of the *i*-th array element. Since the signal received by each array element has a certain delay relative to the origin of the coordinates, the delay time [9] for the signal received at the *i*-th array element is

$$\tau\_i = \frac{\mathbf{a}^T \mathbf{p}\_i}{c} \tag{1}$$

where *c* is the speed of light, and

$$\mathbf{a} = \begin{bmatrix} -\sin\theta\cos\phi \\ -\sin\theta\sin\phi \\ -\cos\theta \end{bmatrix}. \tag{2}$$

*Fundamentals of Narrowband Array Signal Processing DOI: http://dx.doi.org/10.5772/intechopen.98702*

**Figure 1.** *Geometry of array.*

$$\mathbf{p}\_i = \begin{bmatrix} x\_i \\ y\_i \\ z\_i \end{bmatrix}, \quad i = 1, 2, \cdots, M. \tag{3}$$
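As an illustration of Eqs. (1)-(3), the per-element delays for an arbitrary geometry can be computed directly; the sketch below is a minimal NumPy example, with the array positions and arrival angles chosen purely for illustration:

```python
import numpy as np

C = 3e8  # speed of light, m/s

def direction_vector(theta, phi):
    """Unit propagation vector a of Eq. (2) for elevation theta, azimuth phi (radians)."""
    return np.array([-np.sin(theta) * np.cos(phi),
                     -np.sin(theta) * np.sin(phi),
                     -np.cos(theta)])

def element_delays(positions, theta, phi):
    """Delay tau_i = a^T p_i / c of Eq. (1) for each row p_i of `positions` (metres)."""
    return positions @ direction_vector(theta, phi) / C

# Example: a 4-element line of sensors along the x-axis with 0.5 m spacing
pos = np.stack([np.arange(4) * 0.5, np.zeros(4), np.zeros(4)], axis=1)
taus = element_delays(pos, theta=np.pi / 3, phi=0.0)
```

The element at the origin has zero delay, and the remaining delays grow linearly along the line, which matches the uniform-linear-array special case treated below.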

The signal received by the first sensor, located at the origin of the coordinates, is

$$\tilde{x}\_1(t) = x\_1(t)e^{j\omega t}.\tag{4}$$

The overall signal received by the array can be expressed as

$$\mathbf{x}(t) = \begin{bmatrix} x\_1(t) \\ x\_2(t) \\ \vdots \\ x\_M(t) \end{bmatrix} = \begin{bmatrix} x\_1(t-\tau\_1)e^{j\omega(t-\tau\_1)} \\ x\_2(t-\tau\_2)e^{j\omega(t-\tau\_2)} \\ \vdots \\ x\_M(t-\tau\_M)e^{j\omega(t-\tau\_M)} \end{bmatrix}. \tag{5}$$

If the received signal is narrowband, we can ignore its amplitude variation across elements and consider the phase change only [10]; the array received signal then simplifies to

$$\mathbf{x}(t) = \begin{bmatrix} x\_1(t) \\ x\_2(t) \\ \vdots \\ x\_M(t) \end{bmatrix} = x\_1(t) \begin{bmatrix} e^{j\omega(t-\tau\_1)} \\ e^{j\omega(t-\tau\_2)} \\ \vdots \\ e^{j\omega(t-\tau\_M)} \end{bmatrix} \tag{6}$$

Let us consider a uniform linear array (ULA) composed of *M* elements with inter-element spacing *d*, as shown in **Figure 2**. Take the first array element, located at the origin of the coordinates, as the reference element. Consider *P* far-field signals *s*<sub>0</sub>(*t*), *s*<sub>1</sub>(*t*), … , *s*<sub>*P*−1</sub>(*t*) with the same center carrier frequency *f<sub>c</sub>*. The narrowband signal *s<sub>i</sub>*(*t*) impinges on the array at an angle *θ<sub>i</sub>* relative to broadside, the direction normal to the array, where *i* = 0, … , *P* − 1 (only the elevation angle is considered; the azimuth angle is ignored).

Because of the differing propagation paths to the elements, each element receives the same signal with a different time delay. Since the incident signal is a narrowband signal,

**Figure 2.** *Structure of uniform linear array antenna.*

the amplitude variation is negligible and only the phase delay is considered. This delay is determined by the element spacing *d* and the elevation angle of incidence. Taking the signal received by the first array element as the reference signal, the analytical expression for the *i*-th signal received with respect to the reference element is

$$s\_i(t) = m\_i(t)e^{j2\pi f\_c t}, \quad i = 0, \ldots, P - 1 \tag{7}$$

where *m<sub>i</sub>*(*t*) is the complex envelope of the *i*-th modulated signal, and *f<sub>c</sub>* is the carrier frequency.

The propagation delay of the received signal from the reference array element to the *m*-th array element can be expressed as

$$\tau\_m(\theta\_i) = \frac{d}{c}(m-1)\sin\theta\_i, \quad m = 1, \ldots, M. \tag{8}$$

According to Eq. (7), the signal received at the *m*-th array element can be expressed as the superposition of all the signals, that is,

$$x\_m(t) = \sum\_{i=0}^{P-1} m\_i(t - \tau\_m(\theta\_i))\, e^{j2\pi f\_c(t - \tau\_m(\theta\_i))} + n\_m(t),\tag{9}$$

where *n<sub>m</sub>*(*t*) is the Gaussian noise at the *m*-th array element, with zero mean and variance *σ*<sup>2</sup>.

Since we consider a narrowband signal source located in the far field, the bandwidth *B* of the signal satisfies *B* ≪ *f<sub>c</sub>*, and *m<sub>i</sub>*(*t*) changes relatively slowly because the delay satisfies *τ<sub>m</sub>*(*θ<sub>i</sub>*) ≪ 1/*B*. Therefore, the complex envelope of the signal can be approximated as *m<sub>i</sub>*(*t* − *τ<sub>m</sub>*(*θ<sub>i</sub>*)) ≈ *m<sub>i</sub>*(*t*); that is, the difference in the complex envelope across the array can be neglected. Thus, Eq. (9) simplifies to

$$x\_m(t) = \sum\_{i=0}^{P-1} m\_i(t)\, e^{j2\pi f\_c(t-\tau\_m(\theta\_i))} + n\_m(t). \tag{10}$$

Since the carrier component does not affect the analysis, and the adaptive algorithm typically operates on the baseband (complex envelope) signal, the carrier term *e*<sup>*j*2*π f<sub>c</sub>t*</sup> in Eq. (10) can be dropped. Eq. (10) can then be expressed as

$$x\_m(t) \approx \sum\_{i=0}^{P-1} m\_i(t)\, e^{-j(m-1)kd\sin\theta\_i} + n\_m(t),\tag{11}$$

where *k* is the free-space wavenumber [11], given by

$$k = 2\pi f\_c/c.\tag{12}$$

At time *t*, the overall received signal can be expressed as

$$\begin{aligned} \mathbf{x}(t) &= \begin{bmatrix} x\_1(t) \\ x\_2(t) \\ \vdots \\ x\_M(t) \end{bmatrix} = \mathbf{A}\mathbf{m}(t) + \mathbf{n}(t) \\ &= \begin{bmatrix} 1 & 1 & \cdots & 1 \\ e^{-jkd\sin\theta\_0} & e^{-jkd\sin\theta\_1} & \cdots & e^{-jkd\sin\theta\_{P-1}} \\ \vdots & \vdots & \ddots & \vdots \\ e^{-j(M-1)kd\sin\theta\_0} & e^{-j(M-1)kd\sin\theta\_1} & \cdots & e^{-j(M-1)kd\sin\theta\_{P-1}} \end{bmatrix} \begin{bmatrix} m\_0(t) \\ m\_1(t) \\ \vdots \\ m\_{P-1}(t) \end{bmatrix} + \begin{bmatrix} n\_1(t) \\ n\_2(t) \\ \vdots \\ n\_M(t) \end{bmatrix} \end{aligned} \tag{13}$$

where **A** = [**a**(*θ*<sub>0</sub>) **a**(*θ*<sub>1</sub>) ⋯ **a**(*θ*<sub>*P*−1</sub>)] is the direction matrix (also called the array manifold matrix), **a**(*θ<sub>i</sub>*) is the direction vector for the *i*-th signal *s<sub>i</sub>*(*t*), and **n**(*t*) is the noise vector, expressed as

$$\mathbf{a}(\theta\_i) = \begin{bmatrix} 1 & e^{-j2\pi f\_c \tau\_2(\theta\_i)} & \cdots & e^{-j2\pi f\_c \tau\_M(\theta\_i)} \end{bmatrix}^T \tag{14}$$

$$\mathbf{n}(t) = \begin{bmatrix} n\_1(t) & n\_2(t) & \cdots & n\_M(t) \end{bmatrix}^T \tag{15}$$

where the superscript (·)<sup>*T*</sup> denotes the transpose operation.
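To make the model of Eqs. (13)-(15) concrete, the sketch below builds the ULA direction matrix and simulates noisy snapshots; the element count, spacing, source angles, and noise level are illustrative assumptions, not values from the text:

```python
import numpy as np

def ula_steering(M, d_over_lambda, theta):
    """ULA steering vector of Eq. (14) with the delays of Eq. (8):
    [1, e^{-j k d sin(theta)}, ..., e^{-j(M-1) k d sin(theta)}]^T, kd = 2*pi*d/lambda."""
    m = np.arange(M)
    return np.exp(-1j * 2 * np.pi * d_over_lambda * m * np.sin(theta))

def array_snapshots(M, d_over_lambda, thetas, envelopes, noise_std, rng):
    """Simulate x(t) = A m(t) + n(t) of Eq. (13) for K snapshots (columns)."""
    A = np.stack([ula_steering(M, d_over_lambda, th) for th in thetas], axis=1)  # M x P
    K = envelopes.shape[1]
    noise = noise_std * (rng.standard_normal((M, K))
                         + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
    return A @ envelopes + noise

rng = np.random.default_rng(0)
M, K = 8, 200
thetas = np.deg2rad([0.0, 40.0])   # one desired signal, one interferer (assumed angles)
m_t = rng.standard_normal((2, K)) + 1j * rng.standard_normal((2, K))
X = array_snapshots(M, 0.5, thetas, m_t, noise_std=0.1, rng=rng)
```

With *d*/*λ* = 0.5 (half-wavelength spacing) the steering vectors are unambiguous over (−90°, 90°), which is why this spacing is the common default.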

#### **3. Adaptive beamforming**

Beamforming is a concept originating in array signal processing. Its fundamental aim is to estimate the desired signal by adjusting the complex weights applied to the received signal at each sensor, which enhances the desired signal and places nulls in the directions of interference. Adaptive arrays are capable of adjusting their weights automatically according to the environment.

Beamforming can be classified into two types: analog beamforming and digital beamforming [12].

Analog beamforming is performed in the analog domain. The block diagram of an analog beamformer is shown in **Figure 3**. The analog RF signal received by the antenna array is converted to an intermediate frequency (IF) by the RF front end. The weight vector is calculated by the weight-update algorithm, a weighted sum of the analog IF signals is formed, and the array received signal is synthesized. At this point the signal is still analog; an analog-to-digital converter (A/D) then samples and quantizes it, converting the analog IF signal to a digital IF signal, which is passed on to the next-level processing.

**Figure 3.** *The structure of analog beamforming.*

**Figure 4.** *Structure diagram of adaptive beamforming.*

Digital beamforming is carried out in the digital domain, as shown in **Figure 4**.

Adaptive beamforming is a subclass of digital beamforming. An adaptive beamformer [13] usually comprises an RF front end, an A/D converter module, and a signal processing (beam-formation control) module. A basic adaptive beamformer, composed of antenna array elements and an adaptive signal processor, is shown in **Figure 4**.

The antenna array elements receive the spatially propagating desired and interference signals at the array aperture. In the RF front end, the received signal is down-converted to baseband [14] and then transformed into a digital signal by the A/D converter, after which it is processed by the adaptive processor. In the adaptive processor, a suitable adaptive filtering algorithm, chosen according to the requirements, computes the optimal weight vector. The weights are applied to the received signal at each array element, and the weighted signals are combined to form the beamformer output, which directs the main lobe toward the desired signal and places nulls in the directions of the interferers. The interference and noise are suppressed,

and the output signal-to-interference-plus-noise ratio (SINR) of beamformer is thus improved.

Clearly, based on the adaptive beamformer structure shown in **Figure 4**, the output of each element is multiplied by a complex weight and summed to form the array output *y*(*t*), expressed as

$$\mathbf{y}(t) = \mathbf{w}^H \mathbf{x}(t) = \sum\_{m=1}^{M} w\_m^\* \mathbf{x}\_m(t) \tag{16}$$

where the superscript (·)<sup>*H*</sup> represents the Hermitian (conjugate) transpose, (·)<sup>∗</sup> denotes the complex conjugate, and **w** is the *M* × 1 optimal weight vector computed by an adaptive filtering algorithm, given as

$$\mathbf{w} = \begin{bmatrix} w\_1 & w\_2 & \cdots & w\_M \end{bmatrix}^T. \tag{17}$$

In this way, the array output *y*(*t*) is obtained as the weighted sum of the sensor signals. Different weight vectors yield different responses to signals from different directions, thereby pointing the beam toward the desired signal while suppressing the interference.
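A minimal sketch of the weighted-sum operation of Eqs. (16)-(17); the conventional (delay-and-sum) weight choice **w** = **a**(*θ*<sub>0</sub>)/*M* used here is an illustrative assumption, not the adaptive solution derived later:

```python
import numpy as np

M = 8
m = np.arange(M)
steer = lambda th: np.exp(-1j * np.pi * m * np.sin(th))  # d = lambda/2 steering vector

theta0 = np.deg2rad(20.0)
w = steer(theta0) / M                 # delay-and-sum weights, unit gain toward theta0

x = steer(theta0) * (1.5 + 0.5j)      # one snapshot of a source arriving from theta0
y = np.vdot(w, x)                     # y = w^H x, Eq. (16); recovers the complex envelope
```

Note that `np.vdot` conjugates its first argument, so it computes exactly the **w**<sup>H</sup>**x** of Eq. (16).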

The array output signal power is expressed as

$$P\_{out} = E[y^\*(t) y(t)] = \mathbf{w}^H \mathbf{R} \mathbf{w}, \tag{18}$$

where

$$\mathbf{R} = E\left[\mathbf{x}(t)\mathbf{x}^H(t)\right],\tag{19}$$

is the covariance matrix of the received signal, and *E*[·] denotes the expectation operator. Substituting Eq. (13) into Eq. (19), the covariance matrix can be expressed as

$$\mathbf{R} = \sum\_{i=0}^{P-1} p\_i \mathbf{a}(\theta\_i) \mathbf{a}(\theta\_i)^H + \sigma^2 \mathbf{I},\tag{20}$$

where *p<sub>i</sub>* is the power of signal *s<sub>i</sub>*(*t*), and **I** is the identity matrix. If the input contains only one desired signal *s*<sub>0</sub>(*t*) and *P* − 1 interference signals, then the covariance matrix can be expressed as

$$\begin{split} \mathbf{R} &= p\_0 \mathbf{a}(\theta\_0) \mathbf{a}(\theta\_0)^H + \sum\_{i=1}^{P-1} p\_i \mathbf{a}(\theta\_i) \mathbf{a}(\theta\_i)^H + \sigma^2 \mathbf{I} \\ &= \mathbf{R}\_s + \mathbf{R}\_i + \mathbf{R}\_n, \end{split} \tag{21}$$

where **R**<sub>*s*</sub> is the covariance matrix of the desired signal, **R**<sub>*i*</sub> is the interference covariance matrix, and **R**<sub>*n*</sub> is the noise covariance matrix. Substituting Eq. (21) into Eq. (18), the output power can be expressed as the sum of the desired signal power *P<sub>os</sub>*, the interference power *P<sub>oi</sub>*, and the noise power *P<sub>on</sub>*:

$$P\_{os} = \boldsymbol{w}^{H} \mathbf{R}\_{\boldsymbol{s}} \boldsymbol{w} \tag{22}$$

$$P\_{oi} = \boldsymbol{w}^H \mathbf{R}\_i \boldsymbol{w} \tag{23}$$

$$P\_{on} = \mathbf{w}^H \mathbf{R}\_n \mathbf{w} \tag{24}$$

The output SINR, a key performance parameter of the beamformer, is defined as the ratio of the output desired-signal power to the output interference-plus-noise power, and can be expressed as

$$\text{SINR}\_{out} = \frac{P\_{os}}{P\_{oi} + P\_{on}} = \frac{\mathbf{w}^H \mathbf{R}\_s \mathbf{w}}{\mathbf{w}^H \mathbf{R}\_i \mathbf{w} + \mathbf{w}^H \mathbf{R}\_n \mathbf{w}}. \tag{25}$$

An adaptive antenna array takes the output SINR as an index and computes the optimal weights by maximizing it [15].
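The second-order statistics above can be estimated and used as follows; this is a small sketch in which the rank-one desired-signal covariance and the white noise covariance are example assumptions:

```python
import numpy as np

def sample_covariance(X):
    """Sample estimate of R = E[x x^H], Eq. (19); X is M x K, one snapshot per column."""
    return X @ X.conj().T / X.shape[1]

def output_sinr(w, R_s, R_i, R_n):
    """SINR_out = (w^H R_s w) / (w^H R_i w + w^H R_n w), Eq. (25)."""
    num = np.real(w.conj() @ R_s @ w)
    den = np.real(w.conj() @ (R_i + R_n) @ w)
    return num / den

# Single desired signal from broadside, no interference (illustrative values):
M, p0, sigma2 = 4, 1.0, 0.5
a0 = np.ones(M, dtype=complex)                  # broadside ULA steering vector
R_s = p0 * np.outer(a0, a0.conj())              # rank-one desired-signal covariance
R_n = sigma2 * np.eye(M)                        # white noise covariance
sinr = output_sinr(a0, R_s, np.zeros((M, M)), R_n)   # equals p0 * M / sigma2 here
```

For this interference-free case the SINR reduces to *p*<sub>0</sub>*M*/*σ*<sup>2</sup>, exhibiting the factor-of-*M* array gain discussed in Section 4.3.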

The most important performance indicator of beamforming is the beampattern. From it, one can readily judge whether the resolution of a beamforming method is sufficient to enhance the desired signal and whether the interference is suppressed strongly enough. The array beampattern is defined as

$$B(\theta) = \left| \mathbf{w}^H \mathbf{a}(\theta) \right|. \tag{26}$$

Analog beamforming requires complex hardware circuitry and offers low accuracy. In digital beamforming, the phase shifting and amplitude scaling for each antenna element, and the summation of the received signals, are done digitally in a general-purpose DSP or dedicated beamforming chips. Digital beamforming is therefore more flexible and does not require modification of the hardware structure.

Compared with analog beamforming, the digital beamforming has the following advantages:


#### **4. Basic parameters of adaptive array antenna**

The performance parameters of an adaptive array antenna are basically the same as those of a single antenna, but because of the array weighting, the specific value of each parameter depends on the element characteristics, the weight vector, and the geometry of the array [16].

#### **4.1 Array pattern**

The array pattern is the most visual performance parameter of an antenna array. According to the pattern multiplication theorem for array antennas, the overall array pattern is the product of the element pattern *P<sub>E</sub>*(*φ*, *θ*) and the array factor *P<sub>A</sub>*(*φ*, *θ*), that is,

$$P(\varphi,\theta) = P\_E(\varphi,\theta)P\_A(\varphi,\theta). \tag{27}$$

Generally, it is assumed that the array elements are identical and omnidirectional, hence

$$P\_E(\varphi, \theta) = 1.\tag{28}$$

Thus, most adaptive array antenna patterns defined in the literature refer to the array factor only, and the relationship between the received signal and the output signal is given as

$$\boldsymbol{y}(t) = \boldsymbol{w}^{\mathrm{H}} \boldsymbol{x}(t). \tag{29}$$

Assuming a single input signal of unit power, the output signal power can be expressed as

$$P(\varphi, \theta) = \left| \mathbf{w}^{H} \mathbf{a}(\varphi, \theta) \right|^{2}. \tag{30}$$

The above expression defines the power pattern of the array antenna. As can be seen from Eq. (30), the beampattern is determined on the one hand by the weight vector, and on the other by the direction vector, which is set by the array geometry. Since the power pattern *P*(*φ*, *θ*) is the squared magnitude of the beampattern, we have

$$B(\theta) = \left| \mathbf{w}^{H}(\theta\_0)\, \mathbf{a}(\theta) \right|. \tag{31}$$
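A brief sketch of evaluating the power pattern of Eq. (30) over a grid of elevation angles for a half-wavelength ULA; the element count and look direction are example assumptions:

```python
import numpy as np

M = 8
m = np.arange(M)
steer = lambda th: np.exp(-1j * np.pi * m * np.sin(th))  # d = lambda/2

theta0 = np.deg2rad(0.0)
w = steer(theta0) / M                       # conventional weights, unit main-lobe gain

grid = np.deg2rad(np.linspace(-90.0, 90.0, 361))
P = np.array([np.abs(np.vdot(w, steer(th))) ** 2 for th in grid])  # Eq. (30)
```

The peak of `P` sits at the look direction with unit value; plotting `10 * np.log10(P)` against `grid` reveals the familiar main-lobe and side-lobe structure.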

#### **4.2 Array directivity and directivity index**

The directivity of an adaptive array is closely related to the pattern of the array, which can be expressed as follows

$$D = \frac{4\pi P\_{\max}(\varphi\_0, \theta\_0)}{\int\_0^{\pi} d\theta \int\_0^{2\pi} \sin\theta\, P(\varphi, \theta)\, d\varphi},\tag{32}$$

where *P*<sub>max</sub>(*φ*<sub>0</sub>, *θ*<sub>0</sub>) is the pattern maximum, which points in the direction of the main lobe.

The directivity is usually expressed in dB and is called array directivity index (*DI*) given by

$$DI = 10\log\_{10} D.\tag{33}$$
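The double integral of Eq. (32) can be approximated numerically; the sketch below uses a simple midpoint rule over the sphere, and for an isotropic pattern the result should be *D* = 1, i.e. *DI* = 0 dB:

```python
import numpy as np

def directivity(pattern):
    """Estimate D of Eq. (32); `pattern(phi, theta)` returns power in that direction."""
    dth, dph = np.pi / 180, 2 * np.pi / 360
    th = (np.arange(180) + 0.5) * dth          # midpoints in theta, (0, pi)
    ph = (np.arange(360) + 0.5) * dph          # midpoints in phi, (0, 2*pi)
    P = np.array([[pattern(p, t) for p in ph] for t in th])
    total = np.sum(P * np.sin(th)[:, None]) * dth * dph  # solid-angle integral
    return 4 * np.pi * P.max() / total

D = directivity(lambda phi, theta: 1.0)        # isotropic test pattern
DI = 10 * np.log10(D)                          # Eq. (33)
```

Any pattern function, for example the ULA power pattern of Eq. (30), can be passed in place of the isotropic test pattern.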

#### **4.3 Array gain**

The purpose of an antenna array is to improve the *G*/*T* ratio of the antenna (its gain divided by its system temperature). The array gain *G* is the main measure of the SNR improvement of the array, defined as the ratio of the output signal-to-noise ratio *SNR<sub>o</sub>* to the input signal-to-noise ratio *SNR<sub>i</sub>*:

$$G = \frac{\text{SNR}\_o}{\text{SNR}\_i}.\tag{34}$$

#### **4.4 Sensitivity**

The array beampattern is a function of the weight vector and the direction vector. In practice, both are subject to various errors: sensor position errors, covariance matrix estimation errors, channel mismatch, and mutual coupling between the array elements all cause weight vector errors. Suppose the error-free weight of the *m*-th element is

$$w\_m^0 = g\_m^0 e^{j\varphi\_m^0}.\tag{35}$$

The *m*-th element weight with error is

$$w\_m = \left(g\_m^0 + \Delta g\_m\right)e^{j\left(\varphi\_m^0 + \Delta \varphi\_m\right)},\tag{36}$$

where the errors Δ*g<sub>m</sub>* and Δ*φ<sub>m</sub>* are zero-mean Gaussian random variables with variances

$$\operatorname{Var}\left(\Delta g\_m\right) = \sigma\_g^2\tag{37}$$

$$\operatorname{Var}\left(\Delta \varphi\_m\right) = \sigma\_\varphi^2 \tag{38}$$

For the direction vector, the error mainly derives from the element position errors. For the *m*-th element, if there is no error in the position coordinates, then

$$\mathbf{p}\_m^0 = \begin{bmatrix} p\_{mx} & p\_{my} & p\_{mz} \end{bmatrix}^T. \tag{39}$$

The coordinates with error can be expressed as

$$\mathbf{p}\_m = \begin{bmatrix} p\_{mx} + \Delta p\_{mx} & p\_{my} + \Delta p\_{my} & p\_{mz} + \Delta p\_{mz} \end{bmatrix}^T,\tag{40}$$

where the errors are zero-mean Gaussian random variables with variance

$$\operatorname{Var}\left(\Delta p\_{mx}\right) = \operatorname{Var}\left(\Delta p\_{my}\right) = \operatorname{Var}\left(\Delta p\_{mz}\right) = \sigma\_p^2. \tag{41}$$

The array pattern in the presence of these errors is

$$P(\varphi, \theta) = P^0 e^{-\left(\sigma\_{\varphi}^2 + \sigma\_{\lambda}^2\right)} + \sum\_{m=1}^{M} \left(g\_m^0\right)^2 \left(1 + \sigma\_g^2 - e^{-\left(\sigma\_{\varphi}^2 + \sigma\_{\lambda}^2\right)}\right),\tag{42}$$

where *λ* is the wavelength, and *P*<sup>0</sup> denotes the error-free pattern given by

$$P^0 = \left| \left(\mathbf{w}^0\right)^H \mathbf{a}^0 \right|^2,\tag{43}$$

and the variance is

$$
\sigma\_{\lambda}^{2} = \left(\frac{2\pi}{\lambda}\right)^{2} \sigma\_{p}^{2}.\tag{44}
$$

From Eq. (42), it is seen that the actual pattern consists of two parts: the first term is the error-free pattern, and the second term is the error contribution. In the second term, the coefficient *g<sub>m</sub>*<sup>0</sup> amplifies the error, so the sensitivity of the array is defined as

$$T\_s = \sum\_{m=1}^{M} \left(\mathbf{g}\_m^0\right)^2. \tag{45}$$

#### **5. Optimal beamforming**

In beamforming, the weight vector is computed by optimizing a cost function. Different cost functions correspond to different criteria. Some of the most frequently used performance criteria include minimum mean squared error (MMSE), maximum signal-to-interference-plus-noise ratio (MSINR), maximum likelihood (ML), minimum noise variance (MV), minimum output power (MP), and maximum gain [17]. These criteria are often expressed as cost functions that are typically inversely associated with the quality of the signal at the array output. As the weights are iteratively adjusted, the cost function becomes smaller and smaller. When the cost function is minimized, the performance criterion is met and the algorithm is said to have converged.

#### **5.1 Maximum signal-to-interference-plus-noise ratio**

As can be seen from Eq. (21), the array output power consists of the desired signal power, the interference power, and the noise power, which are mutually uncorrelated. Since the interference and the noise are independent, zero-mean, and mutually uncorrelated, **R**<sub>*i*</sub> + **R**<sub>*n*</sub> is a full-rank, Hermitian positive definite matrix. By a unitary transformation it can be whitened as

$$\begin{aligned} \mathbf{U}^\*(\mathbf{R}\_i + \mathbf{R}\_n)\mathbf{U}^T &= \mathbf{U}^\* E\left[ \left( \sum\_{i=1}^{P-1} m\_i(t)\mathbf{a}(\theta\_i) \right) \left( \sum\_{i=1}^{P-1} m\_i(t)\mathbf{a}(\theta\_i) \right)^H \right] \mathbf{U}^T + \sigma^2 \mathbf{I} \\ &= E\left[ \left( \mathbf{U}^\* \sum\_{i=1}^{P-1} m\_i(t)\mathbf{a}(\theta\_i) \right) \left( \mathbf{U}^\* \sum\_{i=1}^{P-1} m\_i(t)\mathbf{a}(\theta\_i) \right)^H \right] + \sigma^2 \mathbf{I} \\ &= \sigma^2 \mathbf{I}. \end{aligned} \tag{46}$$

If we make

$$
\boldsymbol{w} = \mathbf{U}^T \hat{\boldsymbol{w}},\tag{47}
$$

the output SINR will be

$$\text{SINR}\_{out} = \frac{\hat{\mathbf{w}}^H \mathbf{U}^\* E\left[ \left(m\_0 \mathbf{a}(\theta\_0)\right)\left(m\_0 \mathbf{a}(\theta\_0)\right)^H \right] \mathbf{U}^T \hat{\mathbf{w}}}{\hat{\mathbf{w}}^H \mathbf{U}^\* (\mathbf{R}\_i + \mathbf{R}\_n) \mathbf{U}^T \hat{\mathbf{w}}} = \frac{E\left[|m\_0|^2\right] \left| \hat{\mathbf{w}}^H \mathbf{U}^\* \mathbf{a}(\theta\_0) \right|^2}{\sigma^2 \left\| \hat{\mathbf{w}} \right\|^2}. \tag{48}$$

According to the Cauchy-Schwarz inequality,

*Adaptive Filtering - Recent Advances and Practical Implementation*

$$\text{SINR}\_{out} \le \frac{E\left[|m\_0|^2\right] \left\| \mathbf{U}^\* \mathbf{a}(\theta\_0) \right\|^2}{\sigma^2}. \tag{49}$$

When the equality holds, then

$$
\hat{\mathbf{w}} = \mathbf{U}^\* \boldsymbol{\mathfrak{a}}(\theta\_0). \tag{50}
$$

The optimal weight vector is then

$$\mathbf{w}\_{\text{MSINR}} = \mathbf{U}^T \mathbf{U}^\* \boldsymbol{\mathfrak{a}}(\theta\_0) = (\mathbf{R}\_i + \mathbf{R}\_n)^{-1} \boldsymbol{\mathfrak{a}}(\theta\_0). \tag{51}$$

The MSINR optimal weight vector has the following advantages: only the DOA of the desired signal is required, while DOA information for the interference signals is not needed; **R**<sub>*i*</sub> + **R**<sub>*n*</sub> can be obtained by sampling and estimating the signal at each array element while the desired signal is absent; and, by accounting for the interference and noise, the output attains the maximum SINR.
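A numerical sketch of the MSINR weights of Eq. (51) for a half-wavelength ULA; the desired and interferer directions and powers below are illustrative assumptions:

```python
import numpy as np

M = 8
m = np.arange(M)
steer = lambda th: np.exp(-1j * np.pi * m * np.sin(th))   # d = lambda/2

a0 = steer(np.deg2rad(0.0))                 # desired signal direction
ai = steer(np.deg2rad(20.0))                # interferer direction
R_in = 10.0 * np.outer(ai, ai.conj()) + 0.1 * np.eye(M)   # R_i + R_n

w = np.linalg.solve(R_in, a0)               # w_MSINR = (R_i + R_n)^{-1} a(theta_0)

gain_desired = np.abs(np.vdot(w, a0))       # |w^H a(theta_0)|
gain_interf = np.abs(np.vdot(w, ai))        # |w^H a(theta_i)|
```

The response toward the interferer comes out orders of magnitude below the response toward the desired signal, illustrating the null placement discussed above.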

#### **5.2 Minimum mean square error**

The mean squared error refers to the mean squared difference between the beamformer output and a desired reference signal *d*(*t*). If prior knowledge of the signal is available, the receiver can generate a local reference signal strongly correlated with the desired signal. The main idea of MMSE is to adjust the weight vector in real time so that the mean squared error between the array output and the reference signal is minimized. The estimator is of the form

$$y = \mathbf{w}^H \mathbf{x}.\tag{52}$$

The cost function, i.e., the mean squared value of the error signal, is

$$J(\mathfrak{w}) = E\left[\left|\mathfrak{w}^H \mathfrak{x} - d\right|^2\right]. \tag{53}$$

Expanding the right-hand side of Eq. (53), and taking *w* out of the expectation operator *E*[·] because it is not a statistical variable, we get

$$J(\boldsymbol{\mathfrak{w}}) = \boldsymbol{\mathfrak{w}}^H \boldsymbol{E} \left[ \boldsymbol{\mathfrak{x}} \boldsymbol{\mathfrak{x}}^H \right] \boldsymbol{\mathfrak{w}} - \boldsymbol{E} \left[ d \boldsymbol{\mathfrak{x}}^H \right] \boldsymbol{\mathfrak{w}} - \boldsymbol{\mathfrak{w}}^H \boldsymbol{E} [\boldsymbol{\mathfrak{x}} \boldsymbol{d}^\*] + \boldsymbol{E} [d \boldsymbol{d}^\*].\tag{54}$$

To minimize the mean squared error function, take the derivative of the above expression with respect to *w*:

$$\begin{split} \frac{\partial}{\partial \mathbf{w}} J(\mathbf{w}) &= 2E\left[\mathbf{x}\mathbf{x}^H\right] \mathbf{w} - 2E[\mathbf{x} d^\*] \\ &= 2\mathbf{R}\mathbf{w} - 2\mathbf{r}\_{xd}, \end{split} \tag{55}$$

where **r**<sub>*xd*</sub> is the cross-correlation vector between the input signal and the reference signal. Setting this result equal to zero and solving for **w**, the optimal MMSE weights are

$$
\omega\_{\text{MMSE}} = \mathbf{R}^{-1} \mathbf{r}\_{\text{xd}}.\tag{56}
$$

Since the reference signal is only related to the desired signal, and is not related to the interference signal and noise, therefore

$$\mathbf{r}\_{\text{xd}} = E[\mathbf{x} d^\*] = E[m\_0 \mathfrak{a}(\theta\_0) d^\*] = E[m\_0 d^\*] \mathfrak{a}(\theta\_0) = p\_0 \mathfrak{a}(\theta\_0),\tag{57}$$

and, applying the matrix inversion lemma to **R** of Eq. (21),

$$\mathbf{R}^{-1}\mathbf{a}(\theta\_0) = \frac{\left(\mathbf{R}\_i + \mathbf{R}\_n\right)^{-1}\mathbf{a}(\theta\_0)}{1 + p\_0 \mathbf{a}(\theta\_0)^H (\mathbf{R}\_i + \mathbf{R}\_n)^{-1} \mathbf{a}(\theta\_0)}.\tag{58}$$

Substituting Eq. (57) and Eq. (58) into Eq. (56), we get

$$\mathbf{w}\_{\text{MMSE}} = \frac{p\_0}{\mathbf{1} + p\_0 \mathbf{a}(\theta\_0)^H (\mathbf{R}\_i + \mathbf{R}\_n)^{-1} \mathbf{a}(\theta\_0)} \mathbf{w}\_{\text{MSINR}}.\tag{59}$$

From the above analysis, the received signal is correlated with the desired signal; hence it is not necessary to decompose the received signal into desired and interference components, and the cross-correlation between the received signal and the reference signal can be estimated by sampling, so the MMSE solution is easy to compute.

On the other hand, Eq. (59) shows that the MMSE beamformer **w**<sub>*MMSE*</sub> is a scalar multiple of the Max-SINR beamformer **w**<sub>*MSINR*</sub> of Eq. (51); i.e., the adaptive weights obtained by the MMSE and Max-SINR criteria are proportional to each other. Since multiplicative constants in the adaptive weights do not matter, the two techniques are equivalent.
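The proportionality between the MMSE and Max-SINR weights can be verified numerically; this sketch builds **R** from Eq. (21), forms both solutions, and checks the scale factor of Eq. (59) (all scene parameters are example assumptions):

```python
import numpy as np

M = 6
m = np.arange(M)
steer = lambda th: np.exp(-1j * np.pi * m * np.sin(th))  # d = lambda/2

a0, ai = steer(0.3), steer(-0.7)                 # desired / interferer directions (rad)
p0, pI, s2 = 2.0, 5.0, 0.2                       # signal powers and noise variance
R_in = pI * np.outer(ai, ai.conj()) + s2 * np.eye(M)   # R_i + R_n
R = p0 * np.outer(a0, a0.conj()) + R_in                # Eq. (21)
r_xd = p0 * a0                                         # Eq. (57)

w_mmse = np.linalg.solve(R, r_xd)                # Eq. (56)
w_msinr = np.linalg.solve(R_in, a0)              # Eq. (51)

scale = p0 / (1 + p0 * np.real(a0.conj() @ w_msinr))   # scale factor of Eq. (59)
```

Up to floating-point error, `w_mmse` equals `scale * w_msinr`, confirming the equivalence claimed above.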

#### **5.3 Minimum variance**

In the signal received by the array, the desired signal comes from cooperative communication, while the interference is often unpredictable, so the form of the desired signal and its DOA are assumed known. In this case, to detect the desired signal more efficiently, the clutter background must be eliminated. Eqs. (22)-(24) show that the array output power includes three parts: the desired signal power, the interference power, and the noise power, and the interference-plus-noise power can be regarded as the variance of the desired signal error. The smaller this variance, the closer the output is to the desired signal. The interference-plus-noise power can be expressed as

$$P\_{oi} + P\_{on} = \mathbf{w}^H \mathbf{R}\_i \mathbf{w} + \mathbf{w}^H \mathbf{R}\_n \mathbf{w} \tag{60}$$

For the array main lobe (the desired look direction), unit gain is imposed, giving the constrained problem

$$\begin{cases} \min\_{\mathbf{w}} & \mathbf{w}^H \mathbf{R}\_{i+n} \mathbf{w} \\ \text{s.t.} & \mathbf{w}^H \mathbf{a}(\theta\_0) = 1 \end{cases} \tag{61}$$

Minimizing the interference-plus-noise variance therefore amounts to choosing the **w** that minimizes Eq. (60) subject to the constraint in Eq. (61). This constrained minimization can be carried out with a Lagrange multiplier, forming the modified performance measure

$$L(\mathbf{w}) = \mathbf{w}^H \mathbf{R}\_i \mathbf{w} + \mathbf{w}^H \mathbf{R}\_n \mathbf{w} + \lambda \left(\mathbf{w}^H \mathbf{a}(\theta\_0) - 1\right) \tag{62}$$

Setting the derivative of Eq. (62) with respect to **w** equal to zero yields the condition satisfied by the optimal minimum-variance weight vector **w***MV*:

$$\frac{\partial}{\partial \mathbf{w}} L(\mathbf{w}) = 2(\mathbf{R}\_i + \mathbf{R}\_n)\mathbf{w} + \lambda \mathbf{a}(\theta\_0) = \mathbf{0} \tag{63}$$

Solving this equation, and using the constraint in Eq. (61) to evaluate the scalar constant *μ*, gives the optimal weight vector under the minimum interference-plus-noise variance criterion:

$$\mathbf{w}\_{MV} = \mu(\mathbf{R}\_i + \mathbf{R}\_n)^{-1}\mathbf{a}(\theta\_0) = \mu\mathbf{w}\_{MSINR} \tag{64}$$

Applying the main-beam constraint and using the fact that $(\mathbf{R}\_i + \mathbf{R}\_n)$ is a Hermitian matrix, the constant is obtained as

$$\mu = \frac{1}{\mathbf{a}^{H}(\theta\_{0}) \left(\mathbf{R}\_{i} + \mathbf{R}\_{n}\right)^{-1} \mathbf{a}(\theta\_{0})} \tag{65}$$

When the snapshot data used to estimate **R** contains only noise and interference, this processor is referred to as the minimum variance distortionless response (MVDR) beamformer. If the desired signal is also present in the snapshot data, the same weight vector solution results, but it is sometimes referred to as minimum power distortionless response (MPDR) to indicate the difference in the observed data [2]. In practice, the distinction makes a significant difference in terms of the snapshot support required to achieve good performance [18].
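As a quick numerical sketch of Eqs. (64)-(65), the MVDR weights can be computed directly with NumPy. All scenario values below (array size, look and interferer angles, powers) are assumed purely for illustration:

```python
import numpy as np

def steering(theta_deg, L, d=0.5):
    """ULA steering vector; element spacing d in wavelengths."""
    return np.exp(-2j * np.pi * d * np.arange(L) * np.sin(np.radians(theta_deg)))

L_el = 8
a0 = steering(0.0, L_el)                 # desired look direction (assumed)
ai = steering(40.0, L_el)                # one interferer at 40 degrees (assumed)
Rin = 10.0 * np.outer(ai, ai.conj()) + 0.1 * np.eye(L_el)   # R_i + R_n

Rinv = np.linalg.inv(Rin)
mu = 1.0 / np.real(a0.conj() @ Rinv @ a0)   # Eq. (65)
w_mv = mu * (Rinv @ a0)                     # Eq. (64)

# the distortionless constraint w^H a(theta_0) = 1 holds; the interferer is nulled
print(abs(w_mv.conj() @ a0), abs(w_mv.conj() @ ai))
```

The first printed gain is unity (the constraint), while the gain toward the interferer is strongly attenuated.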

#### **5.4 Minimum power**

The MV formulation can equivalently be derived by minimizing the total output power of the array subject to the same distortionless-response constraint as in Eq. (61). If the gain toward the desired signal is held fixed, minimizing the total output power suppresses the interference and noise power while preserving normal reception of the desired signal. The resulting criterion is the minimum total output power of the array (MP). The cost function is

$$\begin{cases} \min\_{\mathbf{w}} & \mathbf{w}^{H} \mathbf{R} \mathbf{w} \\\\ \text{s.t.} & \mathbf{w}^{H} \mathbf{a}(\theta\_{0}) = 1 \end{cases} \tag{66}$$

Also using the method of Lagrange multiplier, the objective function to be minimized is

$$L(\boldsymbol{w}) = \boldsymbol{w}^H \mathbf{R} \boldsymbol{w} + \lambda \left(\boldsymbol{w}^H \boldsymbol{a}(\theta\_0) - 1\right) \tag{67}$$

Taking the complex gradient with respect to *w* and setting to zero

$$\frac{\partial}{\partial \boldsymbol{w}} L(\boldsymbol{w}) = 2\mathbf{R}\boldsymbol{w} + \lambda \boldsymbol{a}(\theta\_0) = \mathbf{0} \tag{68}$$

*Fundamentals of Narrowband Array Signal Processing DOI: http://dx.doi.org/10.5772/intechopen.98702*

Under this criterion, the optimal weight vector is

$$\mathbf{w}\_{MP} = \kappa \mathbf{R}^{-1} \mathbf{a}(\theta\_0) \tag{69}$$

where the constant (normalize the array main beam gain to unity) is

$$\kappa = \frac{1}{\mathbf{a}^H(\theta\_0) \mathbf{R}^{-1} \mathbf{a}(\theta\_0)}\tag{70}$$

The MP criterion is essentially equivalent to the MV criterion: minimizing the total output power of the beamformer while preserving the desired signal is the same as minimizing the output power due to interference-plus-noise, so the two optimization problems in Eq. (61) and Eq. (66) are equivalent. The practical difference is that the MP weight vector does not require separating the interference from the noise; only the covariance matrix of the received signal needs to be estimated.

#### **5.5 Maximum likelihood criterion**

Assume the space contains one desired signal and *M* interference signals; the input signal can be expressed as

$$\mathbf{x} = m\_0 \mathbf{a}\_0 + \sum\_{i=1}^{M} m\_i \mathbf{a}\_i + \mathbf{n} = m\_0 \mathbf{a}\_0 + \left(\sum\_{i=1}^{M} m\_i \mathbf{a}\_i + \mathbf{n}\right) \tag{71}$$

If the interference and noise are zero-mean Gaussian random processes, then **x** is a Gaussian random process with mean equal to the desired-signal component *m*0**a**0. The negative log-likelihood function is defined as

$$L(\mathbf{x}) = -\ln\left(p\left(\mathbf{x} \mid m\_0\right)\right) \tag{72}$$

Expanding the Gaussian conditional density, this becomes

$$L(\mathbf{x}) = c(\mathbf{x} - m\_0 \mathbf{a}\_0)^H (\mathbf{R}\_i + \mathbf{R}\_n)^{-1} (\mathbf{x} - m\_0 \mathbf{a}\_0) \tag{73}$$

where *c* is a constant independent of **x** and *m*0**a**0. Taking the derivative of the above expression with respect to *m*0 and setting the result equal to zero gives the maximum-likelihood estimate of *m*0:

$$\frac{\partial}{\partial m\_0} L(\mathbf{x}) = -2\mathbf{a}\_0^H (\mathbf{R}\_i + \mathbf{R}\_n)^{-1} \mathbf{x} + 2m\_0 \mathbf{a}\_0^H (\mathbf{R}\_i + \mathbf{R}\_n)^{-1} \mathbf{a}\_0 = 0 \tag{74}$$

$$m\_{0ML}(t) = \frac{a\_0^H (\mathbf{R}\_i + \mathbf{R}\_n)^{-1}}{a\_0^H (\mathbf{R}\_i + \mathbf{R}\_n)^{-1} a\_0} \mathbf{x} \tag{75}$$

The optimal weight vector is obtained by the above equation of the maximum likelihood criterion.

$$\mathbf{w}\_{ML} = \frac{\left(\mathbf{R}\_i + \mathbf{R}\_n\right)^{-1} \mathbf{a}\_0}{\mathbf{a}\_0^H \left(\mathbf{R}\_i + \mathbf{R}\_n\right)^{-1} \mathbf{a}\_0} \tag{76}$$

Compared with the weight vector solution under the maximum signal-to-interference-plus-noise ratio (MSINR) criterion, the above expression can be rewritten as

$$\mathbf{w}\_{ML} = \frac{1}{\mathbf{a}\_0^H \left(\mathbf{R}\_i + \mathbf{R}\_n\right)^{-1} \mathbf{a}\_0} \mathbf{w}\_{MSINR} \tag{77}$$

From Eq. (77) it is clear that the ML beamformer **w***ML* is a scalar multiple of the Max-SINR beamformer **w***MSINR* in Eq. (51), i.e., the adaptive weights obtained using the ML and Max-SINR criteria are proportional to each other. Since multiplicative constants in the adaptive weights have no impact on the array beampattern, these two techniques have no essential difference and are therefore equivalent.
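This proportionality is easy to check numerically. The sketch below (for an assumed two-source scenario on a half-wavelength ULA) forms the Max-SINR, MV, and ML weight vectors of Eqs. (51), (64)-(65), and (76), and confirms that their normalized directions coincide:

```python
import numpy as np

L_el = 6
n_idx = np.arange(L_el)
a0 = np.exp(-1j * np.pi * n_idx * np.sin(np.radians(15.0)))   # desired (assumed)
ai = np.exp(-1j * np.pi * n_idx * np.sin(np.radians(-50.0)))  # interferer (assumed)
Rin = 5.0 * np.outer(ai, ai.conj()) + 0.2 * np.eye(L_el)      # R_i + R_n

Rinv = np.linalg.inv(Rin)
w_msinr = Rinv @ a0                                  # Eq. (51), up to a scalar
w_mv = w_msinr / np.real(a0.conj() @ Rinv @ a0)      # Eqs. (64)-(65)
w_ml = w_msinr / (a0.conj() @ Rinv @ a0)             # Eq. (76)

u = lambda v: v / np.linalg.norm(v)                  # unit-norm direction
# both normalized weight vectors coincide with the Max-SINR direction
print(np.allclose(u(w_mv), u(w_msinr)), np.allclose(u(w_ml), u(w_msinr)))
```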

#### **6. Adaptive filtering algorithms**

The expression for the optimal weight vector is obtained by solving equations derived from optimization theory. In practical engineering, the optimal weight vector is computed with adaptive filtering algorithms. When a reference signal is available (either a training sequence of the desired signal or the DOA information of the desired signal), the resulting technique is categorized as non-blind adaptive spatial filtering. Classical algorithms of this type include Direct Matrix Inversion (DMI) [19], Least Mean Square (LMS) [20–22], Recursive Least Square (RLS) [23–25], and the Conjugate Gradient (CG) method and its improved variants [26, 27]. When no reference signal is available, the optimal weight vector is obtained by exploiting other characteristics of the signal; such techniques are categorized as blind adaptive spatial filtering. Blind algorithms mainly include the Constant Modulus (CM) algorithm [28–30], cyclostationarity-based algorithms [31], and High-Order Cumulant (HOC) algorithms [32].

#### **6.1 Direct matrix inversion algorithm**

The basic idea of the DMI algorithm is to compute the optimal weight vector directly, instead of iteratively, from an estimate of the correlation matrix $\mathbf{R} = E\left\{\mathbf{x}(t)\mathbf{x}^H(t)\right\}$ of the adaptive array output samples [33]. In communication systems the received signal consists of a desired signal, interference, and noise. The maximum SINR, minimum mean square error (MMSE), minimum variance (MV), and maximum likelihood (ML) criteria all require the covariance matrix of the interference and noise alone, excluding the desired signal. These criteria are therefore better suited to radar systems than to communication systems, because a radar can easily observe the interference-plus-noise alone simply by receiving without transmitting.

The MP criterion also requires the desired-signal DOA, from which the steering vector $\mathbf{a}(\theta\_0)$ in Eqs. (68) and (69) is obtained. Unlike the MV criterion, however, the covariance matrix in the MP criterion is that of the full received signal, i.e., the sum of the covariance matrices of the desired signal, the interference, and the noise. The MP criterion is therefore suitable for communication systems.

Assume that there are *P* signals in space: the desired signal is $\mathbf{s}\_0 = m\_0\mathbf{a}(\theta\_0)$ with power $p\_0$, and the interference signals are $\mathbf{s}\_1 = m\_1\mathbf{a}(\theta\_1), \ldots, \mathbf{s}\_{P-1} = m\_{P-1}\mathbf{a}(\theta\_{P-1})$ with powers $p\_1, \ldots, p\_{P-1}$, respectively. The noise vector is $\mathbf{n}$ with power $\sigma^2$. According to the definition of the covariance matrix


$$\begin{split} \mathbf{R} &= E\left[ \left( \sum\_{i=0}^{P-1} \mathbf{s}\_i + \mathbf{n} \right) \left( \sum\_{i=0}^{P-1} \mathbf{s}\_i + \mathbf{n} \right)^H \right] \\ &= E\left[ \left( \sum\_{i=0}^{P-1} m\_i \mathbf{a}(\theta\_i) + \mathbf{n} \right) \left( \sum\_{i=0}^{P-1} m\_i \mathbf{a}(\theta\_i) + \mathbf{n} \right)^H \right] \end{split} \tag{78}$$

Because the angular separation between the signal and the interferers is large enough, the sources are uncorrelated, so that

$$E\left[m\_i m\_j^\*\right]\mathbf{a}(\theta\_i)\mathbf{a}^H\left(\theta\_j\right) = \mathbf{0}, \quad i \neq j \tag{79}$$

At the same time

$$E\left[\left|m\_i\right|^2\right] = p\_i \tag{80}$$

$$E\left[\mathbf{n}\mathbf{n}^H\right] = \sigma^2\mathbf{I}\tag{81}$$

In practical applications it is very difficult to estimate the covariance matrix from the individual signal powers; instead, it is estimated from samples of the received signal. The DMI algorithm assumes that the covariance matrix has been estimated, obtains $\mathbf{R}^{-1}$ by matrix inversion, forms the steering vector $\mathbf{a}(\theta\_0)$ from the known DOA, and computes the optimal weight vector by the MP criterion.

Because the estimated covariance matrix is imperfect, the performance of the DMI algorithm is affected by the eigenvalue spread of the covariance matrix. This spread is determined by the temporal and spatial correlation between the desired signal and the interference, or among the interferers themselves.

The optimal weight vector of the DMI algorithm is computed as follows. The *K* snapshots form the data matrix **X**, from which the covariance matrix **R** is estimated as

$$\mathbf{R} = \frac{\mathbf{X}\mathbf{X}^{\mathrm{H}}}{K} \tag{82}$$

The sample covariance matrix is then inverted to obtain $\mathbf{R}^{-1}$, which is combined with the desired-signal steering vector to give the optimal weight vector according to Eq. (69):

$$\boldsymbol{w} = \frac{\mathbf{R}^{-1} \mathbf{a}\_0}{\mathbf{a}\_0^H \mathbf{R}^{-1} \mathbf{a}\_0} \tag{83}$$

The DMI algorithm needs a suitable number of sampling snapshots *K*. A large *K* makes the covariance matrix estimate **R** more accurate, but also increases the computational load [34]. The major disadvantage of the DMI algorithm is its computational complexity, which makes it difficult to implement on FPGAs and DSPs. Moreover, finite-precision arithmetic can make the matrix inversion numerically unstable.
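A minimal simulation of the DMI procedure of Eqs. (82)-(83) might look as follows; the scenario (8-element half-wavelength ULA, one interferer, 500 snapshots, all powers) is assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def steering(theta_deg, L, d=0.5):
    """ULA steering vector; element spacing d in wavelengths."""
    return np.exp(-2j * np.pi * d * np.arange(L) * np.sin(np.radians(theta_deg)))

L_el, K = 8, 500                          # elements and snapshots (assumed values)
a0, a1 = steering(10.0, L_el), steering(-30.0, L_el)

# simulate K snapshots: desired signal + one interferer + noise
s = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
q = 3 * (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
v = 0.3 * (rng.standard_normal((L_el, K)) + 1j * rng.standard_normal((L_el, K))) / np.sqrt(2)
X = np.outer(a0, s) + np.outer(a1, q) + v

R = X @ X.conj().T / K                    # Eq. (82): sample covariance matrix
Rinv = np.linalg.inv(R)
w = (Rinv @ a0) / (a0.conj() @ Rinv @ a0) # Eq. (83): DMI weight vector

# unit gain toward the desired signal, deep attenuation of the interferer
print(abs(w.conj() @ a0), abs(w.conj() @ a1))
```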


#### **6.2 Least mean square algorithm**

The least mean square (LMS) algorithm proposed by Widrow et al. [20] is the most classical algorithm in signal processing; it is extremely simple and numerically robust. A more detailed description of the LMS algorithm is given in Refs. [18, 35]. The LMS algorithm is based on the method of steepest descent and is therefore sometimes referred to as a stochastic gradient descent (SGD) algorithm. The unconstrained LMS algorithm is a training-sequence-based adaptive spatial filtering algorithm that recursively computes and updates the optimal weight vector. It uses a gradient search to solve for the weight vector, thereby avoiding direct inversion of the covariance matrix. Its iterative equation is given as

$$\mathbf{w}(k+1) = \mathbf{w}(k) - \mu\,\mathbf{g}(\mathbf{w}(k)) \tag{84}$$

where $\mathbf{w}(k+1)$ represents the new weight vector computed at the $(k+1)$-th iteration, $\mathbf{g}(\mathbf{w}(k))$ is the gradient vector of the squared error (objective function) with respect to the weight vector $\mathbf{w}(k)$, and the scalar constant *μ* is the step-size parameter which controls the rate of convergence [33]. The gradient vector is given by

$$\mathbf{g}(\mathbf{w}(k)) = -2\mathbf{x}(k+1)\varepsilon^\*\left(\mathbf{w}(k)\right) \tag{85}$$

where $\mathbf{x}(k+1)$ is the $(k+1)$-th array snapshot (the $(k+1)$-th array sample), and $\varepsilon^\*(\mathbf{w}(k))$ is the conjugate of the error between the array output and the reference signal [33]. Thus, the estimated gradient vector is the product of this error and the array snapshot received at that iteration. The error $\varepsilon(\mathbf{w}(k))$ can be expressed as

$$\varepsilon(\mathbf{w}(k)) = d(k+1) - \mathbf{w}^{H}(k)\mathbf{x}(k+1) \tag{86}$$

where $d(k+1)$ is the reference signal at the $(k+1)$-th iteration. As one of the most classical adaptive filtering algorithms, the unconstrained LMS (ULMS) has the advantages of computational simplicity and simple hardware requirements, but its convergence is relatively slow. To ensure convergence of the algorithm, the iterative step size must satisfy the following condition [18, 20, 33–37].

$$0 < \mu < \frac{2}{\lambda\_{\text{max}}} \tag{87}$$

where *λ*max denotes the largest eigenvalue of the received-signal covariance matrix.

Since the LMS algorithm is gradient-based, the statistical properties of its gradient estimate are important. The mean of the gradient estimate is

$$\overline{\mathbf{g}}(\mathbf{w}(k)) = 2\mathbf{R}\mathbf{w} - 2\mathbf{r}\_{xd} \tag{88}$$

In the iterative process, the gradient vector is obtained by estimation. From the mean of the gradient estimate, the estimate is unbiased. Its variance also affects the performance of the algorithm; the mean-square error is defined as

$$\xi(\mathbf{w}(k)) = E\left\{ \left| d(k+1) - \mathbf{w}^H(k)\mathbf{x}(k+1) \right|^2 \right\} \tag{89}$$

which is the mean-square error between the reference signal and the array output. From this, the misadjustment of the LMS algorithm is


$$\text{MA} = \mu tr \left\{ \left[ \mathbf{I} - \mu \mathbf{R} \right]^{-1} \mathbf{R} \right\} \tag{90}$$

The misadjustment, defined as the ratio of the average excess steady-state MSE to the MMSE, measures how close an adaptive algorithm comes to optimality in the mean-square-error sense: the smaller the misadjustment, the more accurate the steady-state solution. It is a dimensionless performance parameter, and the excess error it quantifies behaves like a noise caused by using a noisy estimate of the gradient [38, 39].

From the above analysis, the LMS algorithm behaves differently for different step sizes and different covariance matrix estimation methods.

The basic steps of the LMS algorithm are as follows:


1. Initialize the weight vector $\mathbf{w}(0)$ and set *k* = 0;

2. Iterative updates, so that *k* = *k* + 1:

$$e(k+1) = d(k+1) - \mathbf{w}^H(k)\mathbf{x}(k+1)$$

$$\mathbf{w}(k+1) = \mathbf{w}(k) + \mu\mathbf{x}(k+1)e^\*(k+1)$$

3. Stop the iteration after the weight vector $\mathbf{w}(k)$ has converged; at that point define *k* = *K*, and $\mathbf{w}(K)$ is the desired weight vector.
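The steps above can be sketched in NumPy as follows. The scenario (4-element half-wavelength ULA, reference signal equal to the desired waveform, step size, noise level, and a fixed iteration count standing in for the convergence test) is assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
L_el, mu = 4, 0.01                       # elements and step size (assumed values)
a0 = np.exp(-1j * np.pi * np.arange(L_el) * np.sin(np.radians(20.0)))

w = np.zeros(L_el, dtype=complex)        # step 1: initialize w(0) = 0, k = 0
for k in range(2000):                    # step 2: iterate over snapshots
    d = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    x = d * a0 + 0.1 * (rng.standard_normal(L_el) + 1j * rng.standard_normal(L_el))
    e = d - w.conj() @ x                 # error against the reference signal
    w = w + mu * x * np.conj(e)          # complex LMS update

print(abs(w.conj() @ a0))                # gain toward the desired direction, near 1
```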

**Figure 5** shows the learning curve of the LMS algorithm for different step-size parameters. When the step size *μ* is small the algorithm converges slowly, while a larger *μ* makes it converge faster.

**Figure 5.** *Learning curve of the LMS algorithm.*

The LMS algorithm requires a training sequence. If the training sequence is replaced by the DOA information of the desired signal, the Frost LMS algorithm is obtained [40].

The iterative equation of the Frost LMS algorithm is

$$\mathbf{w}(k+1) = \mathbf{P}\{\mathbf{w}(k) - \mu \mathbf{g}(\mathbf{w}(k))\} + \frac{\mathbf{a}\_0}{L} \tag{91}$$

where the matrix

$$\mathbf{P} = \mathbf{I} - \mathbf{a}\_0 \left(\mathbf{a}\_0^\text{H} \mathbf{a}\_0\right)^{-1} \mathbf{a}\_0^\text{H} \tag{92}$$

and $\mathbf{g}(\mathbf{w}(k))$ is the gradient vector of the output signal power with respect to the weight vector $\mathbf{w}(k)$, given by

$$\mathbf{g}\left(\mathbf{w}(k)\right) = \mathbf{x}(k+1)y^\*\left(k+1\right) \tag{93}$$

In the above equation, the output signal is given as

$$y(k+1) = \mathbf{w}^{H}(k)\mathbf{x}(k+1) \tag{94}$$

Moreover, the initial value of the weights is given as

$$\mathbf{w}(0) = \frac{\mathbf{a}\_0}{L} \tag{95}$$

To ensure convergence of the iterative algorithm, the step size must again satisfy $\mu < 2/\lambda\_{\max}$, where $\lambda\_{\max}$ is the largest eigenvalue of the covariance matrix of the received signal.

Basic steps for the Frost LMS algorithm are as follows:

1. Initialize $\mathbf{w}(0) = \mathbf{a}\_0/L$ and set *k* = 0;

2. Iterative updates, so that *k* = *k* + 1:

$$\mathbf{y}(k+1) = \mathbf{w}^{\mathrm{H}}(k)\mathbf{x}(k+1);$$

$$\mathbf{w}(k+1) = \left(\mathbf{I} - \mathbf{a}\_{0}(\mathbf{a}\_{0}^{\mathrm{H}}\mathbf{a}\_{0})^{-1}\mathbf{a}\_{0}^{\mathrm{H}}\right)\{\mathbf{w}(k) - \mu\mathbf{x}(k+1)\mathbf{y}^{\ast}(k+1)\} + \frac{\mathbf{a}\_{0}}{L};$$

3. Stop the iteration after the weight vector $\mathbf{w}(k)$ has converged; at that point define *k* = *K*, and $\mathbf{w}(K)$ is the desired weight vector.
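A sketch of the Frost LMS iteration of Eqs. (91)-(95), under an assumed broadside look direction with noise-only training data and a fixed iteration count. Note how the projection **P** together with the **a**0/*L* term keeps the look-direction gain pinned at unity:

```python
import numpy as np

rng = np.random.default_rng(2)
L_el, mu = 4, 0.005                     # elements and step size (assumed values)
a0 = np.ones(L_el, dtype=complex)       # broadside steering vector a(theta_0)
# projection matrix P of Eq. (92)
P = np.eye(L_el) - np.outer(a0, a0.conj()) / np.real(a0.conj() @ a0)

w = a0 / L_el                           # step 1: w(0) = a0 / L, Eq. (95)
for k in range(3000):                   # step 2 (fixed count for brevity)
    # noise-only snapshot standing in for the received data
    x = (rng.standard_normal(L_el) + 1j * rng.standard_normal(L_el)) / np.sqrt(2)
    y = w.conj() @ x                    # array output, Eq. (94)
    w = P @ (w - mu * x * np.conj(y)) + a0 / L_el   # Eq. (91) with Eq. (93)

# the projection structure keeps the look-direction gain at exactly one
print(abs(w.conj() @ a0))
```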

The convergence rate of both the LMS algorithm and the Frost LMS algorithm depends on the step-size parameter. Since the eigenvalues of the received-signal covariance matrix are not easy to obtain, an appropriate step size is not easy to choose.

If the step size exceeds twice the reciprocal of the maximum eigenvalue of the received-signal covariance matrix, the weight vector diverges; the step size must therefore stay below this bound. A large step size *μ* speeds up convergence but lowers the precision of the steady-state solution. Conversely, when *μ* is much smaller than this bound, the steady-state error (offset) is small but the weight vector converges slowly.


**Figure 6.** *Learning curve of the NLMS algorithm.*

Another variant of the LMS family is the normalized LMS (NLMS) algorithm. It replaces the constant step size of the conventional LMS algorithm with a data-dependent normalized step size at each iteration. At the *k*-th iteration, the step size is given by

$$\mu(k) = \frac{\mu\_0}{\mathbf{x}^H(k)\mathbf{x}(k)}\tag{96}$$

where *μ*<sup>0</sup> is a constant. The convergence of the NLMS algorithm is faster than that of the LMS algorithm due to the data-dependent step size. **Figure 6** shows the convergence behavior of the NLMS algorithm for different values of *μ*0.
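A minimal NLMS sketch with the data-dependent step size of Eq. (96); the scenario values (array size, *μ*0, noise level, iteration count) are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
L_el, mu0 = 4, 0.5                       # elements; mu_0 is an assumed constant
a0 = np.ones(L_el, dtype=complex)        # broadside steering vector

w = np.zeros(L_el, dtype=complex)
for k in range(500):
    d = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    x = d * a0 + 0.1 * (rng.standard_normal(L_el) + 1j * rng.standard_normal(L_el))
    mu_k = mu0 / np.real(x.conj() @ x)   # Eq. (96): data-dependent step size
    e = d - w.conj() @ x
    w = w + mu_k * x * np.conj(e)

print(abs(w.conj() @ a0))                # gain toward the desired direction, near 1
```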

One major advantage of the LMS algorithm is its simplicity, and when the step size is selected appropriately, the algorithm is stable (converged properly) and easy to be realized [21]. However, the LMS algorithm is sensitive to eigenvalues of the covariance matrix of received signals, and the convergence of the algorithm is poor when the eigenvalues are dispersed.

Various other variants of the LMS algorithm are briefly discussed in [21]. In recent years, adaptive filtering algorithms have also been extended to DOA estimation; DOA estimation based on adaptive filtering can be found in [41, 42].

#### **6.3 Conjugate gradient method**

The conjugate gradient method (CGM) [43–45], proposed by Hestenes and Stiefel in 1952 as a direct method, is generally applied to symmetric positive-definite linear systems of equations of the form $\mathbf{A}\mathbf{w} = \mathbf{b}$. For antenna arrays, weight vector computation by the conjugate gradient method is discussed in [46]. Here, we briefly outline the CGM as applied to beamforming [47].

In array signal processing, **w** represents the array weight vector, **A** is a matrix whose columns correspond to the consecutive samples obtained from the array elements, and **b** is a vector containing consecutive samples of the desired signal. Thus, the residual vector

$$r = b - Aw\tag{97}$$

refers to the error between the desired signal and array output at each sample, with the sum of the squared error given by *r<sup>H</sup>r*.

The process starts with a weight vector $\mathbf{w}(0)$ as an initial guess, giving the residual

$$r(\mathbf{0}) = \mathbf{b} - \mathbf{A}w(\mathbf{0})\tag{98}$$

and the initial direction vector can be expressed as

$$\mathbf{g(0)} = \mathbf{A}^H \mathbf{r(0)}\tag{99}$$

The algorithm then moves the weights in this direction, yielding the weight update equation

$$\mathbf{w}(k+1) = \mathbf{w}(k) + \mu(k)\mathbf{g}(k) \tag{100}$$

where the step size $\mu(k)$ is

$$\mu(k) = \frac{\left|\mathbf{A}^H r(k)\right|^2}{\left|\mathbf{A}^H \mathbf{g}(k)\right|^2} \tag{101}$$

The residual $\mathbf{r}(k)$ and the direction vector $\mathbf{g}(k)$ are updated using

$$\mathbf{r}(k+1) = \mathbf{r}(k) - \mu(k)\mathbf{A}\mathbf{g}(k)\tag{102}$$

and

$$\mathbf{g}(k+1) = \mathbf{A}^H \mathbf{r}(k+1) + a(k)\mathbf{g}(k) \tag{103}$$

with

$$a(k) = \frac{\left|\mathbf{A}^H \mathbf{r}(k+1)\right|^2}{\left|\mathbf{A}^H \mathbf{r}(k)\right|^2} \tag{104}$$

A pre-determined threshold level is defined and the algorithm is stopped when the residual falls below the threshold level.

It should be noted that the direction vector points along the gradient of the error surface $\mathbf{r}^H(k)\mathbf{r}(k)$ that the algorithm seeks to minimize at the *k*-th iteration. The method converges to the error-surface minimum within at most *K* iterations for a rank-*K* matrix equation, and thus provides the fastest convergence of all iterative methods [46, 48].
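The recursion of Eqs. (98)-(104) can be sketched as below for a small assumed least-squares problem (a random, consistent system; sizes chosen arbitrarily). With the sign conventions used here, where the residual is **r** = **b** - **Aw**, the residual reaches numerical zero within rank(**A**) iterations:

```python
import numpy as np

rng = np.random.default_rng(4)
K, L_el = 50, 4                          # snapshots x elements (assumed sizes)
A = rng.standard_normal((K, L_el)) + 1j * rng.standard_normal((K, L_el))
w_true = rng.standard_normal(L_el) + 1j * rng.standard_normal(L_el)
b = A @ w_true                           # consistent system, so the residual can vanish

w = np.zeros(L_el, dtype=complex)        # initial guess w(0)
r = b - A @ w                            # Eq. (98)
g = A.conj().T @ r                       # Eq. (99)
for k in range(L_el):                    # at most rank(A) iterations needed
    Ag = A @ g
    mu = np.linalg.norm(A.conj().T @ r) ** 2 / np.linalg.norm(Ag) ** 2   # Eq. (101)
    w = w + mu * g                       # Eq. (100)
    r_new = r - mu * Ag                  # Eq. (102)
    a_k = (np.linalg.norm(A.conj().T @ r_new) ** 2
           / np.linalg.norm(A.conj().T @ r) ** 2)                        # Eq. (104)
    g = A.conj().T @ r_new + a_k * g     # Eq. (103)
    r = r_new

print(np.linalg.norm(r))                 # residual is numerically zero
```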

#### **6.4 Recursive least square algorithm**

To further improve the convergence rate, the more sophisticated recursive least square (RLS) algorithm can be used. The RLS algorithm is based on recursive least squares estimation (RLSE), which uses time averages instead of statistical (ensemble) averages or stochastic expectations. The RLS algorithm works well even when the eigenvalue spread of the input-signal correlation matrix is large, i.e., it is insensitive to variations in this spread [49, 50], and it has excellent performance in time-varying environments [49, 50].

**Figure 7.** *Learning curve of the RLS algorithm.*

In practical applications a forgetting factor *μ* is usually introduced, which slightly changes the optimal weight vector solution. Following the optimal weight vector of the MP criterion, the covariance matrix estimate is defined as

$$\boldsymbol{\Phi}(k) = \sum\_{i=1}^{k} \mu^{k-i} \mathbf{x}(i) \mathbf{x}^{H}(i) \tag{105}$$

where the forgetting factor *μ* should be chosen in the range 0 ≪ *μ* ≤ 1. The above equation can also be written recursively as

$$\boldsymbol{\Phi}(k) = \mu \boldsymbol{\Phi}(k-1) + \boldsymbol{\varkappa}(k)\boldsymbol{\varkappa}^H(k) \tag{106}$$

Using Matrix Inversion Lemma [14, 36, 51–54] (See Appendix A)

$$\begin{split} \boldsymbol{P}(k) &= \boldsymbol{\Phi}^{-1}(k) \\ &= \boldsymbol{\mu}^{-1} \boldsymbol{\Phi}^{-1}(k-1) - \frac{\boldsymbol{\mu}^{-2} \boldsymbol{\Phi}^{-1}(k-1) \boldsymbol{\varkappa}(k) \boldsymbol{\varkappa}^{H}(k) \boldsymbol{\Phi}^{-1}(k-1)}{1 + \boldsymbol{\mu}^{-1} \boldsymbol{\varkappa}^{H}(k) \boldsymbol{\Phi}^{-1}(k-1) \boldsymbol{\varkappa}(k)} \end{split} \tag{107}$$

Let

$$\mathbf{g}(k) = \frac{\mu^{-1} \boldsymbol{\Phi}^{-1}(k-1) \boldsymbol{\varkappa}(k)}{1 + \mu^{-1} \boldsymbol{\varkappa}^{H}(k) \boldsymbol{\Phi}^{-1}(k-1) \boldsymbol{\varkappa}(k)} \tag{108}$$

then Eq. (107) can be expressed as

$$\mathbf{P}(k) = \mu^{-1}\mathbf{P}(k-1) - \mu^{-1}\mathbf{g}(k)\mathbf{x}^H(k)\mathbf{P}(k-1) \tag{109}$$

The iterative formula for the weight vector can then be expressed as

$$\begin{split} \mathbf{w}(k) &= \Lambda(k) \left[ \mu^{-1} \mathbf{P}(k-1) - \mu^{-1} \mathbf{g}(k) \mathbf{x}^{H}(k) \mathbf{P}(k-1) \right] \mathbf{a}(\theta\_{0}) \\ &= \frac{\Lambda(k)}{\mu \Lambda(k-1)} \left[ \mathbf{I} - \mathbf{g}(k) \mathbf{x}^{H}(k) \right] \mathbf{w}(k-1) \end{split} \tag{110}$$

where $\Lambda(k) = \left[\mathbf{a}^H(\theta\_0)\mathbf{P}(k)\mathbf{a}(\theta\_0)\right]^{-1}$ normalizes the main-beam gain to unity.

Iterating over the snapshot index *k* yields the recursive expression for the optimal weight vector. Compared with the LMS algorithm, RLS has a faster convergence rate; it too is a closed-loop adaptive algorithm.

The implementation of the RLS algorithm is carried out with different values of the forgetting factor *μ*. **Figure 7** shows the learning curves of the RLS algorithm. With the forgetting factor *μ* = 1, the algorithm requires only 50 iterations to converge to its steady state; with a lower forgetting factor of *μ* = 0.9, it converges in only about 25 adaptation cycles.
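A sketch of the RLS recursion of Eqs. (108)-(109). Rather than the recursive weight update, this version simply renormalizes $\mathbf{P}(k)\mathbf{a}(\theta\_0)$ at the end, and all scenario values (array size, forgetting factor, initialization constant) are assumed:

```python
import numpy as np

rng = np.random.default_rng(5)
L_el, mu = 4, 0.99                      # elements and forgetting factor (assumed)
a0 = np.ones(L_el, dtype=complex)       # broadside steering vector

P = 100.0 * np.eye(L_el, dtype=complex) # P(0) = delta^-1 * I, a common initialization
for k in range(300):
    x = (rng.standard_normal(L_el) + 1j * rng.standard_normal(L_el)) / np.sqrt(2)
    Px = P @ x
    g = (Px / mu) / (1 + (x.conj() @ Px) / mu)    # gain vector, Eq. (108)
    P = (P - np.outer(g, x.conj() @ P)) / mu      # inverse-covariance update, Eq. (109)

w = (P @ a0) / (a0.conj() @ P @ a0)     # MP-criterion weights formed from P(k)
print(np.round(abs(w.conj() @ a0), 3))  # distortionless main-beam gain: 1.0
```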

#### **7. Conclusion**

In this chapter, we have introduced the basic principles and theoretical background of narrowband array signal processing, with emphasis on the fundamentals used in narrowband beamforming and DOA estimation. We reviewed the geometry of adaptive array antennas, the mathematical development of signal models for the receiving array, and the criteria and guidelines for adaptive filtering algorithms that solve for the optimal weights. Taking a far-field narrowband signal received by a uniform linear array as an example, a mathematical model was established for the adaptive array antenna beamforming system. The basic theory of this chapter also lays the foundation for wideband beamforming, making that theory easier to understand.

#### **Appendix A**

Matrix Inversion Lemma [52]: Let *A* and *B* be two positive-definite *N* � *N* matrices, *C* a *N* � *M* matrix, and *D* a positive definite *M* � *M* matrix. If they are related by

$$\mathbf{A} = \mathbf{B} + \mathbf{C}\mathbf{D}^{-1}\mathbf{C}^T,$$

then the inverse of the matrix *A* is

$$\mathbf{A}^{-1} = \mathbf{B}^{-1} - \mathbf{B}^{-1}\mathbf{C} \left(\mathbf{D} + \mathbf{C}^T \mathbf{B}^{-1} \mathbf{C}\right)^{-1} \mathbf{C}^T \mathbf{B}^{-1}$$
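The lemma is easy to verify numerically on random matrices of assumed sizes:

```python
import numpy as np

rng = np.random.default_rng(6)
N, M = 5, 2                              # assumed sizes
# positive-definite B and D, arbitrary coupling matrix C
B = rng.standard_normal((N, N)); B = B @ B.T + N * np.eye(N)
D = rng.standard_normal((M, M)); D = D @ D.T + M * np.eye(M)
C = rng.standard_normal((N, M))

A = B + C @ np.linalg.inv(D) @ C.T
Binv = np.linalg.inv(B)
# right-hand side of the lemma
Ainv = Binv - Binv @ C @ np.linalg.inv(D + C.T @ Binv @ C) @ C.T @ Binv

print(np.allclose(Ainv, np.linalg.inv(A)))   # -> True
```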


### **Author details**

Zeeshan Ahmad School of Electronic and Information Engineering, Ningbo University of Technology, Ningbo, China

\*Address all correspondence to: azee@nbut.edu.cn

© 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **References**

[1] Haykin S (Ed.). Advances in spectrum analysis and array processing (Vol. III). Englewood Cliffs, New Jersey, USA: Prentice Hall, 1995.

[2] Pillai S U. Array signal processing. New York, USA: Springer-Verlag, 2002.

[3] Haykin S, Liu K J R. Handbook on array processing and sensor networks. NJ, Hoboken: Wiley-IEEE Press, 2010.

[4] Haykin S. Adaptive filter theory: International edition. 5th Ed., Pearson Education: Prentice Hall, 2014.

[5] Gross F. Smart antennas with MATLAB, 2nd ed. New York: McGraw-Hill Education; 2015.

[6] Salous S. Radio Propagation measurement and channel modelling. Hoboken, NJ, USA: John Wiley & Sons, 2013.

[7] Shirvani-Moghaddam S., Akbari F. A novel ULA-based geometry for improving AOA estimation. EURASIP Journal on Advances in Signal Processing, 2011, 2011(1): 39.

[8] Haykin S, Justice J H. Array signal processing. Englewood Cliffs, New Jersey, USA: Prentice-Hall signal processing series, 1985.

[9] Han Y. A rao–blackwellized particle filter for adaptive beamforming with strong interference. IEEE Transactions on Signal Processing, 2012, 60(6): 2952-2961.

[10] Hou Y. Research on digital beamforming algorithms [thesis]. Harbin: Harbin Institute of Technology, 2010.

[11] Zhang X. Modern signal processing, 3rd ed. Beijing: Tsinghua University Press, 2015.

[12] Guo Y. Constant modulus algorithm and its application in blind beamforming [thesis]. Xi'an: Xidian University, 2001.

[13] Gooch R, Lundell J. The CM array: An adaptive beamformer for constant modulus signals. ICASSP '86. IEEE International Conference on Acoustics, Speech, and Signal Processing, 1986, 11: 2523-2526.

[14] Krim H, Viberg M. Two decades of array signal processing research: the parametric approach. IEEE Signal Processing Magazine, 1996, 13(4): 67-94.

[15] Rong Y, Eldar Y C, Gershman A B. Performance tradeoffs among beamforming approaches. 2006 IEEE Sensor Array and Multichannel Signal Processing Workshop Proceedings, SAM 2006., 2006, pp.26-30.

[16] Trees H, Di S, Tang J. Optimal array processing technology. Beijing: Tsinghua University Press, 2008.

[17] Compton R T. Adaptive Antennas: Concept and Performance. Englewood Cliffs, New Jersey, USA: Prentice Hall, 1988.

[18] Monzingo R A, Miller T W. Introduction to Adaptive Arrays. New York, USA: John Wiley & Sons, 1980.

[19] Bao C-H, Shui P-L. Fast System Identification Using Direct Matrix Inversion and a critically Sampled Subband Adaptive Filter. Journal of Electronics Information Technology, 2008, 30(1): 139-143.

[20] Widrow B, Stearns S D. Adaptive Signal Processing. Upper Saddle River, New Jersey, USA: Prentice Hall, 1985.

[21] Srar J A, Chung K S, Mansour A. Adaptive Array Beamforming Using a Combined LMS-LMS Algorithm. IEEE Transactions on Antennas and Propagation, 2010, 58(11): 3545-3557.

*Fundamentals of Narrowband Array Signal Processing DOI: http://dx.doi.org/10.5772/intechopen.98702*

[22] Griffiths L J. A simple adaptive algorithm for real-time processing in antenna arrays. Proceedings of the IEEE, 1969, 57(10): 1696-1704.

[23] Bhotto MZA, Antoniou A. Robust Recursive Least-Squares Adaptive-Filtering Algorithm for Impulsive-Noise Environments. IEEE Signal Processing Letters, 2011, 18(3): 185-188.

[24] Eweda E, Macchi O. Convergence of the RLS and LMS adaptive filters. IEEE Transactions on Circuits and Systems, 1987, 34(7): 799-803.

[25] Eleftheriou E, Falconer D D. Tracking properties and steady-state performance of RLS adaptive filter algorithms. IEEE Transactions on Acoustics, Speech and Signal Processing, 1986, 34(5): 1097-1110.

[26] Ahmad N A. A Globally Convergent Stochastic Pairwise Conjugate Gradient-Based Algorithm for Adaptive Filtering. IEEE Signal Processing Letters, 2008, 15: 914-917.

[27] Choi S, Kim D H. Adaptive antenna array utilizing the conjugate gradient method for compensation of multipath fading in a land mobile communication. [1992 Proceedings] Vehicular Technology Society 42nd VTS Conference - Frontiers of Technology, 1992, 1: 33-36.

[28] Treichler J, Agee B. A new approach to multipath correction of constant modulus signals. IEEE Transactions on Acoustics, Speech, and Signal Processing, 1983, 31(2): 459-472.

[29] Elnashar A, Elnoubi S, Elmikati H. Sample-by-sample and block-adaptive robust constant modulus-based algorithms. IET Signal Processing, 2012, 6(8): 805-813.

[30] Shynk J J, Chan C K. Performance surfaces of the constant modulus algorithm based on a conditional Gaussian model. IEEE Transactions on Signal Processing, 1993, 41(5): 1965-1969.

[31] Rebeiz E, Urriza P, Cabric D. Optimizing Wideband Cyclostationary Spectrum Sensing Under Receiver Impairments. IEEE Transactions on Signal Processing, 2013, 61(15): 3931-3943.

[32] Xi J, Han W. Application of high-order cumulant in the phase-space reconstruction of multivariate chaotic series. Proceedings of 2010 International Conference on Intelligent Control and Information Processing (ICICIP 2010), Dalian, 2010, pp. 49-53.

[33] Hanzo L, Blogh J, Ni S. 3G, HSPA and FDD versus TDD Networking: Smart Antennas and Adaptive Modulation. 2nd ed., Wiley-IEEE Press, 2008.

[34] Zaharov VV, Teixeira M. SMI-MVDR Beamformer Implementations for Large Antenna Array and Small Sample Size. IEEE Transactions on Circuits and Systems I: Regular Papers, 2008, 55(10): 3317-3327.

[35] Widrow B, Mantey P E, Griffiths LJ, Goode BB. Adaptive antenna systems. Proceedings of the IEEE, 1967, 55(12): 2143-2159.

[36] Hollemans W, Prasad R, Kegel A. Performance analysis of cellular digital mobile radio systems including diversity techniques. The 8th IEEE International Symposium on Personal, Indoor and Mobile Radio Communications. Waves of the Year 2000. PIMRC '97., 1997, 2: 266-270.

[37] Winters J H, Salz J, Gitlin R D. The impact of antenna diversity on the capacity of wireless communication systems. IEEE Transactions on Communications, 1994, 42(234): 1740-1751.

[38] Widrow B. Adaptive filters I: Fundamentals. Technical Report 6784– 8, Stanford Electrical Engineering Laboratory, Stanford University, Stanford, CA, December 1966.

[39] Godara L C. Smart Antennas (Electrical Engineering & Applied Signal Processing Series). 1st ed., Boca Raton, Fla: CRC Press, 2004.

[40] Ahmad Z, Yaoliang S, Du Q. Adaptive wideband beamforming based on digital delay filter. Journal of Microwaves, Optoelectronics and Electromagnetic Applications, 2016, 15(3): 261-274.

[41] Zeng H, Ahmad Z, Zhou J, Wang Q, Wang Y. DOA estimation algorithm based on adaptive filtering in spatial domain. China Communications, 2016, 13(12): 49-58.

[42] Liu C, Zhao H. Efficient DOA Estimation Method Using Bias-compensated Adaptive Filtering. IEEE Transactions on Vehicular Technology, 2020. doi: 10.1109/TVT.2020.3020946.

[43] Hestenes M R, Stiefel E. Method of conjugate gradients for solving linear systems. Journal of Research of the National Bureau of Standards, 1952, 49 (6): 409–436.

[44] Daniel J W. The Conjugate Gradient Method for Linear and Nonlinear Operator Equations. SIAM Journal on Numerical Analysis, 1967, 4(1): 10–26.

[45] Sarkar T, Siarkiewicz K, Stratton R. Survey of numerical methods for solution of large systems of linear equations for electromagnetic field problems. IEEE Transactions on Antennas and Propagation, 1981, 29(6): 847-856.

[46] Choi S, Sarkar T K. Adaptive antenna array utilizing the conjugate gradient method for multipath mobile communication. Signal Processing, 1992, 29(3): 319-333.

[47] Godara L C. Application of antenna arrays to mobile communications. II. Beam-forming and direction-of-arrival considerations. Proceedings of the IEEE, 1997, 85(8): 1195-1245.

[48] Choi S. Application of the Conjugate Gradient Method for Optimum Array Processing. Progress in Electromagnetics Research, 1991, 5: 589– 624.

[49] Diniz P S R. Adaptive Filtering: Algorithms and Practical Implementation. New York, USA: Springer US, 2012, pp.209-247

[50] Petroit R G, Ucci D R. A LMS and RLS look-ahead adaptive array for spread spectrum signals. Antennas and Propagation Society International Symposium. AP-S. Merging Technologies for the 90's. Digest., Dallas, TX, USA, 1990, 4: 1442-1445.

[51] Tuttlebee W H W (Ed.). Cordless Telecommunications in Europe. London, U.K.: Springer-Verlag London, 1990.

[52] Ochsner H. The digital European cordless telecommunications specification, DECT. In: Tuttlebee W H W (Ed.), Cordless Telecommunications in Europe: The Evolution of Personal Communications. New York, USA: Springer-Verlag, 1991, pp. 273-285.

[53] Petrus P, Reed J H, Rappaport T S. Effects of directional antennas at the base station on the Doppler spectrum. IEEE Communications Letters, 1997, 1 (2): 40-42.

[54] Kailath T. Linear Systems. Englewood Cliffs, New Jersey, USA: Prentice-Hall, 1980.

#### **Chapter 2**

## Reconfigurable Filter Design

*Tae-Hak Lee, Sang-Gyu Lee, Jean-Jacques Laurin and Ke Wu*

#### **Abstract**

This chapter discusses recent developments in reconfigurable filters. The term reconfigurable means that a circuit is designed to exhibit a range of electrical characteristics, in contrast to one with a static response. For filter design, these characteristics include the ability to tune the operating frequency and bandwidth, and/or to support multiple operational modes, that is, bandstop or bandpass modes. Recently, filters that exhibit improved impedance matching over their stopband have also been reported. This gives filter designers more options for realizing reconfigurable filters with reflective and/or absorptive frequency response types that satisfy a given requirement. In this chapter, recently devised filter designs are covered, and the essential frequency tuning elements needed to realize the reconfigurable characteristic are introduced as well.

**Keywords:** resonant frequency, operational modes, reflective, absorptive, frequency tuning elements

#### **1. Introduction**

Microwave filters play an important role in the radio frequency (RF) front-end chain, passing the required signals and blocking undesired ones. Most filter designs depend on the electrical length at the operating frequency or on the field configuration of the resonant modes inside a cavity, so reconfigurable characteristics such as the capability to tune the operating frequency, bandwidth, and operational modes can be obtained by controlling these design parameters. Recently, some researchers have embarked on the development of filters with better matching performance in the stopband region, which avoids the need for isolators. The improved impedance matching is achieved by absorbing the input signal inside the filter structure rather than reflecting it back to the input port; circuits with this improved impedance performance are often called absorptive or reflectionless filters.

In this chapter, we will explore recent reconfigurable filter designs that can change their operating frequency, bandwidth, and operational modes by means of tuning components. In addition, a filter designed to have both reflective and absorptive characteristics will be shown. Frequency tunable substrate integrated waveguide (SIW) resonators are used to change the operating frequency of the reconfigurable filter and, to achieve frequency agility, tuning elements based on a piezoelectric disk or an electromagnet are also given. To verify the tuning method using the electromagnet, a frequency tunable filtering balun is fabricated and tested using four electromagnets. In the following section, we first start from the theoretical modeling of the frequency responses of the reconfigurable filter using coupling coefficients and equivalent circuits, and then the circuit model with simulation and measurement results is given to support the theoretical modeling.

#### **2. Filter designs**

The coupling coefficients and their arrangement in the form of a matrix containing the inverter values are widely used to explain or represent the operation mechanism of a filter structure. The fundamental theory and detailed concepts for establishing the coupling coefficients and matrix can be found in well-known textbooks [1, 2]. In this section, we briefly cover the definition of the two kinds of coupling coefficients and the experimental process for assessing the coefficients of the physical external or inter-resonator coupling structures. Both couplings must be defined to obtain the theoretical frequency responses, and the coefficient associated with each coupling structure should then be realized physically for the filter to satisfy the requirements.

#### **2.1 Coupling coefficients modeling**

In this subsection, the two kinds of coupling coefficients, external and internal, are explained, and a straightforward way to obtain the two coefficients from a simulation or measurement process is given analytically.

**Figure 1** shows an *n*th order filter circuit with serially connected LC resonators. The couplings between the resonators and between a resonator and the input/output ports are represented with K-inverters, whose values are given by *K*<sub>i,j</sub> and *K*<sub>0,1</sub>, respectively. Note that the given *n*th order circuit consists of serially connected resonators and impedance inverters, but an identical *n*th order frequency response can also be realized using parallel-connected LC resonators and admittance inverters. In other words, in this chapter, we derive the theoretical frequency responses using series LC resonators with K-inverters, but the same results can be obtained with parallel resonators and J-inverters due to the duality theorem.

The inverter values for the required bandpass responses can be defined as follows [3],

$$K_{0,1} = \sqrt{\frac{Z_0 L_1 \Delta\omega}{g_0 g_1}}, \quad K_{i,j} = \sqrt{\frac{L_i L_j}{g_i g_j}}\,\Delta\omega, \quad K_{n,n+1} = \sqrt{\frac{Z_0 L_n \Delta\omega}{g_n g_{n+1}}} \tag{1}$$

where Δ*ω* represents the bandwidth of the filter and the lowpass prototype elements are given by *g*<sub>n</sub>. When the series LC resonators are replaced with generalized resonators characterized by the reactance slope parameter *x*<sub>i</sub>, the inverter values given in Eq. (1) can be rewritten as Eq. (2):

**Figure 1.** *nth order filter structure composed of series LC components.*

*Reconfigurable Filter Design DOI: http://dx.doi.org/10.5772/intechopen.97446*

$$K_{0,1} = \sqrt{\frac{Z_0 x_1}{g_0 g_1}\,\mathrm{FBW}}, \quad K_{i,j} = \sqrt{\frac{x_i x_j}{g_i g_j}}\,\mathrm{FBW}, \quad K_{n,n+1} = \sqrt{\frac{Z_0 x_n}{g_n g_{n+1}}\,\mathrm{FBW}}.\tag{2}$$

where FBW stands for the fractional bandwidth. In Eq. (2), the port impedance *Z*<sub>0</sub> can be normalized, and the lowpass prototype elements, the g-parameters, can be replaced using the normalized coupling coefficients, *M*<sub>n,n+1</sub>. As a result, the K-inverter values of the *n*th order bandpass filter for the external and inter-resonator coupling structures are given in Eq. (3).

$$K_{0,1} = \sqrt{x_1\,\mathrm{FBW}}\;M_{0,1}, \quad K_{i,j} = \sqrt{x_i x_j}\;\mathrm{FBW}\;M_{i,j}, \quad K_{n,n+1} = \sqrt{x_n\,\mathrm{FBW}}\;M_{n,n+1}. \tag{3}$$

In a similar way, J-inverter values can be obtained with the susceptance slope parameters of parallel resonators and those are given in Eq. (4).

$$J_{0,1} = \sqrt{b_1\,\mathrm{FBW}}\;M_{0,1}, \quad J_{i,j} = \sqrt{b_i b_j}\;\mathrm{FBW}\;M_{i,j}, \quad J_{n,n+1} = \sqrt{b_n\,\mathrm{FBW}}\;M_{n,n+1}. \tag{4}$$
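As a concrete illustration of Eqs. (2)–(4), the sketch below maps prototype g-parameters and slope parameters to K-inverter values (Python; the function name and the second-order Butterworth values are our own illustrative choices, not from the chapter):

```python
import math

def k_inverters(g, x, fbw, z0=1.0):
    """K-inverter values of an nth-order bandpass filter, per Eq. (2).

    g   : lowpass prototype elements [g0, g1, ..., g_{n+1}]
    x   : reactance slope parameters [x1, ..., xn]
    fbw : fractional bandwidth
    """
    n = len(x)
    k01 = math.sqrt(z0 * x[0] * fbw / (g[0] * g[1]))
    kij = [fbw * math.sqrt(x[i - 1] * x[i] / (g[i] * g[i + 1]))
           for i in range(1, n)]
    kn1 = math.sqrt(z0 * x[-1] * fbw / (g[n] * g[n + 1]))
    return k01, kij, kn1

# second-order Butterworth prototype with the text's FBW = 0.023
g = [1.0, math.sqrt(2), math.sqrt(2), 1.0]
x = [1.0, 1.0]                      # normalized slope parameters
k01, kij, kn1 = k_inverters(g, x, 0.023)
```

By duality, the J-inverter values of Eq. (4) follow from the same function with the susceptance slope parameters *b*<sub>i</sub> in place of *x*<sub>i</sub>.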

The theoretical frequency responses based on the normalized coupling coefficients, *M*<sub>n,n+1</sub>, can be generated from the lowpass prototype g-parameters, *g*<sub>n</sub>. **Figure 2** shows the theoretical frequency responses of second-order filtering structures. The coupling coefficient matrices associated with the structures are given in the insets of the figures. Note that the filter can produce either a bandpass or a bandstop frequency response according to the coupling scheme. The theoretical responses are generated under the assumption that the filters are realized with resonators having a quality factor of 350. The fractional bandwidth of the filter is set to 0.023. Note that both frequency responses are designed to have their 3-dB bandwidth at the normalized frequency *ω* = 1.
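Theoretical responses of this kind are commonly generated with the standard (n+2)×(n+2) normalized coupling-matrix analysis, in which S<sub>11</sub> and S<sub>21</sub> follow from the inverse of A(λ) = λU − jR + M. A minimal lossless sketch (Python with NumPy; the Butterworth coupling values are illustrative assumptions, and the function name is ours, not the chapter's):

```python
import numpy as np

def s_parameters(M, lam, qu_fbw=np.inf):
    """S11 and S21 of an (n+2)x(n+2) normalized coupling matrix M at the
    normalized lowpass frequency lam = (1/FBW)*(w/w0 - w0/w).
    A finite qu_fbw = Qu*FBW adds resonator loss."""
    N = M.shape[0]
    U = np.eye(N)
    U[0, 0] = U[-1, -1] = 0.0      # port rows carry no frequency term
    R = np.zeros((N, N))
    R[0, 0] = R[-1, -1] = 1.0      # unit source/load terminations
    A = (lam - 1j / qu_fbw) * U - 1j * R + M
    Ainv = np.linalg.inv(A)
    return 1 + 2j * Ainv[0, 0], -2j * Ainv[-1, 0]

# second-order bandpass coupling matrix from Butterworth g-parameters
g = [1.0, np.sqrt(2), np.sqrt(2), 1.0]
m01 = 1 / np.sqrt(g[0] * g[1])
m12 = 1 / np.sqrt(g[1] * g[2])
M = np.array([[0.0, m01, 0.0, 0.0],
              [m01, 0.0, m12, 0.0],
              [0.0, m12, 0.0, m01],
              [0.0, 0.0, m01, 0.0]])
s11, s21 = s_parameters(M, 0.0)    # evaluate at band center
```

Sweeping `lam` over a range and converting back to *ω* reproduces bandpass curves of the kind shown in **Figure 2**; changing the coupling scheme in `M` (e.g., adding a direct source-load entry) changes the response type.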

In order to meet the requirement on the frequency response of the reconfigurable filter, each coupling structure should be designed through a simulation or measurement process that finds a suitable value for it. With the given values of the normalized coupling coefficients and the fractional bandwidth, the inverter values for both the external and internal coupling structures need to be determined to realize the required filtering responses.

**Figure 3** shows a simulation or measurement setup for the external coupling and its reflection coefficient result. The input impedance seen from the source and the reflection coefficient, Γ, can be given in Eq. (5).

**Figure 2.** *Theoretical frequency responses.*

**Figure 3.** *Simulation setup for an external coupling coefficient and its phase response of the reflection coefficient.*

$$Z_{in} = \frac{K_{0,1}^2}{j\omega_0 L \left(\dfrac{\omega}{\omega_0} - \dfrac{\omega_0}{\omega}\right)}, \qquad \Gamma = \frac{Z_{in} - R_a}{Z_{in} + R_a}.\tag{5}$$

The reflection coefficient can be expressed through the input impedance, *Z*<sub>in</sub>, and the frequency points at which the phase of the reflection coefficient crosses ±90° can then be found. In other words, solving for *ω* when the phase of Γ equals +90° or −90° yields the two positive solutions given in Eq. (6).

$$\omega_{+90} = \frac{-K_{0,1}^2 + \sqrt{K_{0,1}^4 + 4(R_a \omega_0 L)^2}}{2 R_a L}, \quad \omega_{-90} = \frac{K_{0,1}^2 + \sqrt{K_{0,1}^4 + 4(R_a \omega_0 L)^2}}{2 R_a L}. \tag{6}$$

So, the difference between the two frequency points represents the inverter value and, using the form of Eq. (3), it can be summarized as Eq. (7) when the source impedance, *R*<sub>a</sub>, is the same as the port impedance, *Z*<sub>0</sub>.

$$f\_{-90} - f\_{+90} = \Delta f \cdot M\_{0,1}^2. \tag{7}$$

where Δ*f* is the bandwidth in Hz.

This means that the physical external coupling structure can be tuned during the simulation or measurement process to meet the required design value. The design goal can be calculated from the given values, such as the normalized coupling coefficient and the bandwidth. As a result, the dimensions of the external coupling structure can be determined or fine-tuned to achieve the prescribed frequency responses.
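A quick numerical check of the extraction procedure of Eqs. (6) and (7) can be sketched as follows (Python; all component values are illustrative assumptions, not taken from the chapter):

```python
import numpy as np

# illustrative values: a 2.2 GHz resonator, 50-ohm source
f0, fbw = 2.2e9, 0.023
L, Ra = 10e-9, 50.0
w0 = 2 * np.pi * f0
M01 = 1.2                                # target normalized ext. coupling
K01 = np.sqrt(Ra * w0 * L * fbw) * M01   # Eq. (3) with x1 = w0*L, Z0 = Ra

# closed-form +/-90 degree phase frequencies, Eq. (6)
disc = np.sqrt(K01**4 + 4 * (Ra * w0 * L) ** 2)
w_p90 = (-K01**2 + disc) / (2 * Ra * L)
w_m90 = (K01**2 + disc) / (2 * Ra * L)

# recover the coupling coefficient from the spacing, Eq. (7)
M01_extracted = np.sqrt((w_m90 - w_p90) / (w0 * fbw))
```

The extracted value returns the assumed M<sub>0,1</sub>, and the two ±90° frequencies are geometrically centered on *ω*<sub>0</sub>, which is a convenient sanity check when reading them off a simulated or measured phase curve.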

In addition to the external coupling structure, the reconfigurable filter also possesses internal coupling structures, and the inter-resonator coupling coefficients represent the coupling strength between resonators. **Figure 4** shows the equivalent circuit for the simulation or measurement setup for the inter-resonator coupling and its transmission coefficient result. As shown in the figure, the two resonators are coupled to each other through a coupling structure modeled with an inverter whose value is *K*<sub>i,j</sub>, and both resonators are fed from the source or load through loose coupling represented by the inverters *K*′ and *K*″. In other words, to minimize any effects of the input and output ports (given with *Z*<sub>0</sub> in **Figure 4**) on the inter-resonator coupling coefficients, the setup is designed to have small *K*′ and *K*″ values. The input impedance seen from the loosely coupled external ports is given in Eq. (8).

$$Z_{in} = j\omega_0 L_i \left(\frac{\omega}{\omega_0} - \frac{\omega_0}{\omega}\right) + \frac{K_{i,j}^2}{j\omega_0 L_j \left(\dfrac{\omega}{\omega_0} - \dfrac{\omega_0}{\omega}\right)}.\tag{8}$$

**Figure 4.** *Simulation setup for an inter-resonator coupling coefficient and its magnitude response of the transmission coefficient.*

Since the two resonant peaks in the transmission response given in **Figure 4** coincide with the short-circuited frequencies, we can obtain the frequency points by calculating the zeros of Eq. (8) with respect to *ω*. The positive solution of each of the two resulting equations is given as *f*<sub>1</sub> and *f*<sub>2</sub>.

$$f_1 = \frac{-K_{i,j} + \sqrt{K_{i,j}^2 + 4\omega_0^2 L_i L_j}}{4\pi\sqrt{L_i L_j}}, \quad f_2 = \frac{K_{i,j} + \sqrt{K_{i,j}^2 + 4\omega_0^2 L_i L_j}}{4\pi\sqrt{L_i L_j}}.\tag{9}$$

Similar to the case of the external coupling coefficient, the inter-resonator coupling coefficient can be estimated from the distance between the two frequency points, and it can be related to the theoretical value of *K*<sub>i,j</sub> through Eq. (10).

$$f_2 - f_1 = \Delta f \cdot M_{1,2}.\tag{10}$$

Based on Eqs. (7) and (10), one can estimate the coupling structures for both external and internal couplings and precisely optimize the dimensions of structures to realize the required filter responses.
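The inter-resonator extraction of Eqs. (9) and (10) can be checked the same way (Python; the component values are again illustrative assumptions, not from the chapter):

```python
import numpy as np

# illustrative values: two identical 2.2 GHz resonators
f0, fbw = 2.2e9, 0.023
Li = Lj = 10e-9
w0 = 2 * np.pi * f0
M12 = 0.9                                  # target normalized coupling
Kij = w0 * np.sqrt(Li * Lj) * fbw * M12    # Eq. (3) with xi = w0*Li

# split resonant frequencies of the coupled pair, Eq. (9)
root = np.sqrt(Kij**2 + 4 * w0**2 * Li * Lj)
f1 = (-Kij + root) / (4 * np.pi * np.sqrt(Li * Lj))
f2 = (Kij + root) / (4 * np.pi * np.sqrt(Li * Lj))

# recover the coupling coefficient from the peak spacing, Eq. (10)
M12_extracted = (f2 - f1) / (f0 * fbw)
```

The peak spacing returns the assumed M<sub>1,2</sub>, and the two peaks are geometrically centered on *f*<sub>0</sub>, mirroring the external-coupling check.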

Up to this point, we have designed coupling structures for the external and inter-resonator couplings using both the theoretical responses and simulation or measurement processes. The reconfigurable characteristic can be obtained by applying electronic components, such as varactor or pin diodes, to these static coupling structures. For example, shunt-connected varactor diodes can be embedded in the inter-resonator coupling structure, resulting in different coupling coefficients compared with the structure without varactor diodes. The external coupling coefficients can also be tuned by loading electronic components to provide the proper impedance matching performance. In the following subsection, the equivalent circuits of coupling structures with tuning elements are presented in more detail.

#### **2.2 Equivalent circuit modeling**

In the previous subsection, the theoretical coupling values comprising a filter structure were given, and a simulation or measurement setup to extract the coupling coefficients was also provided. In order to realize the required filter responses with the fabricated filter, we establish equivalent circuits based on the theoretical values and also perform full-wave simulations with commercial tools such as Keysight Advanced Design System and Ansys High Frequency Structure Simulator. In this subsection, two equivalent circuits for different frequency responses, a frequency tunable bandpass and an absorptive bandstop, are given to describe the operation mechanism of a reconfigurable filter [4]. Both equivalent circuits consist of the coupling structures covered in the previous section, and they contain the electronic components needed to achieve the required reconfigurable characteristic. The simulations reveal how the loaded electronic components change the coupling coefficients; detailed simulation results are given in [4].

**Figure 5** shows an equivalent circuit for a frequency tunable two-pole bandpass response. It contains two LC resonators coupled to each other through an iris, which is modeled using an inductor, *L*<sub>c</sub>. A short-circuited microstrip line with shunt-connected capacitors also contributes to the inter-resonator coupling. The input and output lines are likewise coupled to the resonators. As mentioned in the previous section, the reconfigurable characteristics can be realized by tuning the coupling structures and, in this equivalent circuit, the short-circuited capacitors, *C*<sub>i</sub> and *C*<sub>e</sub>, are placed to optimize the internal and external coupling coefficients, respectively. The operating frequency agility is achieved by changing the value of *C*<sub>r</sub>, and the passband bandwidth can mainly be tuned by *C*<sub>i</sub>. A capacitor shunt-connected to the input and output ports through a short length of microstrip line (*l*<sub>m</sub>) controls the matching performance. As a result, all features of the bandpass frequency response can be controlled with three different capacitors.

**Figure 6** shows the simulation results for the operating frequency and bandwidth tuning performance of the bandpass equivalent circuit.

**Figure 5.** *Equivalent circuit for bandpass frequency response of the reconfigurable filter.*

**Figure 6.** *Simulation results for the operating frequency and bandwidth tuning performance of the bandpass equivalent circuit.*

The detailed circuit parameters can be initially decided when the coupling structures for the external and inter-resonator couplings are investigated using the full-wave simulation process given in the previous section. The initial circuit parameters, apart from the three capacitance values, are *N*<sub>1</sub> = 1.05, *N*<sub>2</sub> = 2.2, *l*<sub>m</sub> = 0.357 in, *l*<sub>i</sub> = 0.315 in, *L*<sub>r</sub> = 10.467 nH and *L*<sub>c</sub> = 6.978 nH. The resonator capacitance *C*<sub>r</sub> is varied from 22 pF to 70 pF, and the other two capacitance values are chosen from the reasonable range offered by commercial varactor diodes. A frequency tuning ratio larger than 1.7:1 is expected based on the simulation results. Different capacitance loading *C*<sub>i</sub> changes the resonant frequency of a resonant mode, which provides the bandwidth control capability. Note that the simulation for the bandwidth tuning is performed with a fixed *C*<sub>r</sub> value, and similar tuning performance can be obtained over the frequency tuning range of interest. The simulation results show that one can tune the center frequency and bandwidth of the filter while maintaining the impedance matching performance.
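Assuming the resonant frequency scales as 1/√(*L*<sub>r</sub>*C*<sub>r</sub>), the stated *C*<sub>r</sub> range alone predicts the achievable tuning ratio. A one-line sketch (Python, illustrative only; it ignores the transformers and coupling inductor of the full circuit, so absolute frequencies are not reproduced):

```python
import math

# Cr range stated in the text
c_min, c_max = 22e-12, 70e-12

# f0 ~ 1/sqrt(Lr*Cr), so the ideal tuning ratio
# depends only on the capacitance span
tuning_ratio = math.sqrt(c_max / c_min)   # about 1.78
```

The result is consistent with the simulated tuning ratio of larger than 1.7:1 quoted above.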

**Figure 7** shows the equivalent circuit for the absorptive bandstop frequency response. Similar to the circuit for the bandpass frequency response, it can change the operating frequency by varying the capacitance that forms part of the resonators. In order to realize the unity coupling value between source and load, as given in the inset of the theoretical frequency response, a microstrip line designed to have an electrical length of 270° at 2.2 GHz is added between the two ports. Except for the microstrip line providing *M*<sub>sl</sub>, the rest of the circuit parameters are the same as those of the bandpass mode equivalent circuit. The absorptive frequency responses given in **Figure 8** are characterized by reduced reflection coefficients in the stopband of the bandstop filtering response. The equivalent circuit shows the best absorptive characteristic at the predetermined center frequency of 2.2 GHz, but the reflection coefficient increases as the operating frequency is tuned away from it, since the electrical length of the microstrip line then deviates from 270°. In other words, the frequency tuning range of the absorptive bandstop mode is mainly limited by the electrical length of the filter structures; in this case, it is about 500 MHz with a reflection coefficient below −10 dB.

The two equivalent circuits shown in **Figures 5** and **7** are realized using frequency tunable substrate integrated waveguide (SIW) resonators, varactor diodes, and single pole double throw (SPDT) switches. The two frequency tunable resonators and the microstrip line structure are placed on different substrates and are coupled through coupling slots. Commercial varactor diodes are embedded in the microstrip line and are used to tune the coupling strength with different input voltages. In order to realize both bandpass and absorptive bandstop responses within one filter structure, two SPDT switches are also embedded in the microstrip line designed to provide the *M*<sub>sl</sub> coupling between source and load.

**Figure 7.** *Equivalent circuit for absorptive bandstop frequency response of the reconfigurable filter.*

**Figure 9.** *Reconfigurable filter configuration and a photograph of the filter with frequency tuning elements.*

#### **2.3 SIW resonator-based reconfigurable filter**

**Figure 9** shows the filter configuration and a photograph of the fabricated filter with two frequency tuning elements [4]. On the top of the fabricated circuit, the microstrip line for the RF input signal is etched together with direct current (DC) bias lines for the electronic components. The coupling slots in the ground plane of the microstrip line are used to control the external or internal coupling, and their size should be optimized to satisfy both the bandpass and absorptive bandstop frequency responses. By virtue of the shunt-connected varactor diodes embedded in the microstrip, the coupling coefficient requirement can be achieved over the frequency tuning range of interest. The SIW resonator consists of conductive posts at the center of the resonator and, with a copper membrane attached at the bottom of the resonator, a high capacitance is loaded between the copper membrane and the circular-shaped ground plane of the post. Frequency agility is achieved by changing this loaded capacitance, which means that the thickness of the air gap determines the resonant frequency of the frequency tunable SIW resonator. The frequency tuning elements attached at the bottom of the resonator are designed to change the air-gap thickness through the movement of a shaft connected to a piezoelectric disk actuator. More about the tuning element is given in the next section.

**Figure 10** shows the measurement results of the bandpass mode of the reconfigurable filter. The operating frequency is tuned from 1.86 to 3.3 GHz. The resonant frequency is tuned using two piezo disk-based tuning elements, and a reflection coefficient below −10 dB is maintained over the frequency tuning range thanks to the varactor diodes used for external coupling optimization. As shown in the frequency tuning measurement results, the filter exhibits different passband bandwidths as its operating frequency is tuned. Since the coupling strength generated by the coupling slots is unavoidably frequency dependent, the coupling coefficient of a slot with fixed dimensions differs as the operating frequency of the filter is tuned. As mentioned earlier, the passband bandwidth is mainly controlled with the varactor diodes of capacitance *C*<sub>i</sub> in **Figure 5**. The fabricated filter can also maintain its passband bandwidth while the operating frequency is changed: in the right graph of **Figure 10**, the capacitance values are set to realize a constant 80 MHz bandwidth over an approximately 640 MHz frequency tuning range. **Figure 11** shows the measurement results of the absorptive bandstop mode of the fabricated filter. As expected from the simulation results given in **Figure 8**, the absorptive characteristic is maintained over the frequency tuning range of interest, and the reflection coefficient and attenuation performance degrade as the operating frequency moves away from the frequency at which the line *l*<sub>2</sub> given in **Figure 7** satisfies the electrical length of 270°. The fabricated filter provides more than 500 MHz of frequency tuning range when it operates in the absorptive bandstop mode.

In the following section, the frequency tuning elements that can be applied to the aforementioned frequency tunable SIW resonators will be presented. First, the piezoelectric disk-based tiny ultra-linear actuator will be introduced along with its detailed operating mechanism, and then the recently proposed tuning method based on magnetically actuated tuning elements will be described. Finally, a filtering balun structure is fabricated and its frequency tunable characteristic is tested using four electromagnets to verify the proposed magnetically actuated tuning method.

**Figure 10.** *Measured bandpass mode frequency responses.*

**Figure 11.** *Measured absorptive bandstop mode frequency responses.*

#### **3. Frequency tuning elements**

In order to achieve frequency agility, one can exploit tuning elements in the filter structures. These can be varactor or pin diodes when the filters are designed with lumped or distributed elements such as microstrip lines, since such components can change or perturb the electrical length or the electromagnetic fields in the structures [5]. In this section, two different kinds of frequency tuning elements and their application to the frequency tunable substrate integrated waveguide resonator will be covered.

#### **3.1 Piezoelectric disk-based elements**

The frequency tunable SIW resonator shown in the previous section utilizes a piezoelectric disk-based actuator to change the thickness of an air gap in which the electric field is strong [6]. The piezoelectric actuator can be applied to the SIW resonator in two different ways.

Firstly, as shown in [6], the piezo disks can be directly attached to the copper membrane and react to the applied DC voltage. The thickness of the disk varies with the applied voltage, so the resulting change in air-gap thickness tunes the resonant frequency. However, directly attaching the piezo disk to the resonators has two drawbacks. One is the size of the disk itself: it should cover the whole copper membrane and supporting substrate to function properly, which can also limit the size of the resonator. This means the piezo disk may not be large enough to cover a frequency tunable resonator designed to operate at lower frequency bands. The second drawback is the large input voltage range combined with hysteresis effects. The required input voltage depends on the piezo disks, but it could range from −200 to +200 V to satisfy the frequency tuning range requirements. In addition, the amount of change in the thickness is not identical when the applied voltage moves from low to high or from high to low, due to the hysteresis effect.

#### *Reconfigurable Filter Design DOI: http://dx.doi.org/10.5772/intechopen.97446*

Secondly, the frequency tunable substrate integrated waveguide resonator in [7] utilizes tiny ultra-linear actuators, named TULA, to tune the resonant frequency. The devised element shown in **Figure 12** is comprised of a small piezo disk with a post attached to it, and the input voltage is applied using a small driver circuit that can be controlled with commercial software. The applied voltage pulse has a sawtooth form, and the post attached to the piezo disk moves upward or downward. The reported linear actuator has an advantage over the disk-type piezoelectric actuator in that it provides a predictable amount of movement per pulse amplitude in spite of the hysteresis of the piezo disk. However, these tuning elements can hinder assembly of the filter with neighboring circuits because of the size of the fixture with its shaft. In addition, one end of the post must be glued to the copper membrane to control the air-gap thickness, so during the fabrication process, especially when attaching the post to the filter, there is a risk of serious damage to the copper membrane. Consequently, this is not a practical way to realize the frequency tunable characteristic when low-cost, compact designs are needed.

#### **3.2 Magnetically actuated tuning elements**

In this subsection, a recently reported frequency tuning method that can be applied to frequency tunable substrate integrated waveguide resonators will be covered [8]. An electromagnet with a high permeability foil is utilized to tune the resonant frequency without any contact between the resonators and the tuning elements. A foil sheet is glued on the copper membrane during the fabrication process, so the thickness of the air gap can be tuned with the magnetic flux applied by the electromagnet. Based on this method, the resonant frequency of the filter can be precisely tuned, and the copper membrane can maintain its as-fabricated state since the frequency tuning element, the electromagnet, does not contact the filter circuit, unlike the piezoelectric actuator. A detailed explanation along with the simulation and measurement results is given in the following.

**Figure 13** shows simplified 3D and side views of the circular-shaped substrate integrated waveguide resonator. A copper plate is circularly etched at the bottom of the substrate and is electrically connected to the top plate through conductive via-holes. A large amount of capacitance is generated at the air gap between the copper

**Figure 13.**

*Simplified view of a frequency tunable circular SIW resonator and its electrical characteristics.*

plate and the copper membrane, so the resonant frequency tuning can mainly be done by changing the thickness of the air gap. The relationship between the thickness and the resonant frequency as well as the quality factor is simulated, and the results are also given in **Figure 13**.
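The trend in **Figure 13** can be approximated with a first-order lumped model: the air gap acts as a parallel-plate capacitor loading a fixed effective inductance, so widening the gap lowers the capacitance and raises the resonant frequency. A minimal sketch, assuming an illustrative post radius and inductance rather than the dimensions of the fabricated resonator:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def resonant_freq(gap_m, post_radius_m=1.5e-3, l_eff=2.0e-9):
    """Parallel-plate estimate of the gap capacitance and the resulting
    LC resonant frequency. post_radius_m and l_eff are placeholder
    values, not the fabricated resonator's dimensions."""
    area = math.pi * post_radius_m ** 2
    c_gap = EPS0 * area / gap_m          # dominant loading capacitance
    return 1.0 / (2.0 * math.pi * math.sqrt(l_eff * c_gap))

# widening the air gap lowers C and raises the resonant frequency
for gap_um in (20, 40, 80):
    f0 = resonant_freq(gap_um * 1e-6)
    print(f"gap = {gap_um:3d} um -> f0 = {f0 / 1e9:.2f} GHz")
```

With these placeholder values the model predicts roughly a √2 increase in frequency for every doubling of the gap, mirroring the monotonic trend of the eigenmode simulation.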

Based on the eigenmode simulation results, a single-frequency tunable resonator is designed to support the electromagnet-based tuning method. As shown in the upper left part of **Figure 14**, two conductor-backed coplanar waveguide lines designed to have a 50 Ω characteristic impedance are used as input and output lines. Two radial-shaped slots control the external couplings between the 50 Ω lines and the resonator, so a tighter coupling can be achieved with a larger coupling slot. The side view is also given for the fabrication process, and it is noted that a high permeability foil is glued at the bottom of the resonator with the same adhesive used for the lamination of the two substrates. A photograph of a fabricated resonator with a high permeability foil is also shown in **Figure 14**. The high permeability sheet glued to the frequency tunable SIW resonator for the magnetically actuated tuning method is from Metglas, Inc.

Prior to applying the proposed frequency tuning method to the fabricated filter structure, the magnetic flux density and the current consumption of the electromagnet need to be investigated, and the measured results are given in **Figure 15**. The magnetic flux density is measured using a Tesla meter and a probe as

**Figure 14.**

*Simulation model of frequency tunable SIW resonator, a side view drawing of the fabricated circuit, and photographs of SIW resonator.*

**Figure 15.** *Measured magnetic flux density and current consumption from an electromagnet.*

shown in the inset of **Figure 15**. The electromagnet used for the measurement is a readily available commercial part with a rated input voltage of 12 V, so applying an input voltage larger than the rated voltage for a long period of time can damage the electromagnet.

Electromagnets are placed on both the upper and lower sides of the fabricated one-pole frequency tunable SIW resonator to tune the resonant frequency, as shown in **Figure 16**. Since an electromagnet only generates an attraction force from the input voltage, two electromagnets are used to move the high permeability foil in opposite directions. In order to maximize the frequency tuning range, the input voltage is applied to one electromagnet at a time. The resonant frequency is tuned from 1.3 to 3.7 GHz, corresponding to a frequency tuning ratio larger than 2.8:1.
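The quoted tuning ratio follows directly from the measured band edges:

```python
# band edges of the measured tuning range (Figure 16)
f_low, f_high = 1.3e9, 3.7e9
ratio = f_high / f_low
print(f"tuning ratio = {ratio:.2f}:1")  # 2.85:1, i.e. larger than 2.8:1
```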

**Figure 16.**

*Photograph of the fabricated one-pole frequency tunable SIW resonator with electromagnets and its measured frequency responses.*

In future research on magnetically actuated tuning elements, smaller electromagnets that can generate two-way magnetic forces with lower input voltages are expected, so as to satisfy the requirements of specific applications.

#### **3.3 Frequency tunable filtering balun with magnetically actuated tuning method**

In this subsection, a filtering balun design is provided, and its resonant frequency is tuned based on the aforementioned magnetically actuated tuning method. The fundamental design theory for the frequency tunable filtering balun follows that reported in [9], except for the order of the circuit structure and the required frequency tuning range.

**Figure 17** shows the exploded view of the frequency tunable filtering balun. Similar to the reconfigurable filter given in [8], the filtering balun consists of two different substrates, and each substrate contains the microstrip lines or the SIW resonators, respectively. The couplings between the two substrates are achieved through coupling slots placed in the ground plane of the microstrip line. To meet the balun requirement, one port is designed in the form of a single-ended microstrip input line and the other two ports form differential output lines. The short-circuited microstrip line feeds the SIW resonator, and the two ports connected to the other microstrip line receive output signals having equal magnitude and a phase difference of 180<sup>∘</sup>. This is achieved by introducing a coupling slot at the center of the U-shaped microstrip [9]. As shown in the photograph of the fabricated filtering balun, two circular-shaped high permeability foils are glued at the bottom of the devised circuit. Two electromagnets are placed at both sides of the circuit, one for each frequency tunable SIW resonator. To satisfy the required frequency tuning range, the electromagnets are placed as close as possible by optimizing the height of the plastic support. The electromagnets are the same as those used in **Figure 16**, so the input voltage can likewise be swept from 0 V to 12 V.

**Figure 18** presents the simulation results of the filtering balun. The required frequency tuning is about 17% at the center frequency of 2.9 GHz with proper performance of the differential output lines, such as the amplitude and phase imbalance. The mixed-mode S-parameters (S*ds*21, S*cs*21, and S*dd*22) are calculated and

**Figure 17.** *Simulation model of the frequency tunable filtering balun and its detailed view with high permeability foils.*


**Figure 18.** *Simulation results of the frequency tunable filtering balun.*

**Figure 19.** *Measurement results of the frequency tunable filtering balun.*

also given in **Figures 18** and **19**. The operating frequency of the fabricated filtering balun is tuned from 2.65 to 3.15 GHz, which satisfies the requirement, and both the amplitude and phase imbalance performance at the passbands are given. Some discrepancies between the simulated and measured results stem from unexpected factors in fabrication or assembly that can impact the electrical characteristics of the differential output signal. In this section, the frequency tuning method utilizing electromagnets with a high-permeability foil has been tested, and the measurement results show that it provides performance comparable to that of the piezoelectric disks.
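For reference, the mixed-mode transmission terms follow from the single-ended S-parameters by the standard transformation. A brief sketch, where the port numbering (port 1 as the single-ended input, ports 2 and 3 as the differential outputs) is an assumption for illustration:

```python
import cmath
import math

def single_to_mixed(s21, s31):
    """Differential (Sds21) and common-mode (Scs21) transmission from a
    single-ended input (port 1) to a differential pair (ports 2, 3)."""
    sds21 = (s21 - s31) / math.sqrt(2)   # wanted differential response
    scs21 = (s21 + s31) / math.sqrt(2)   # unwanted common-mode leakage
    return sds21, scs21

def imbalance(s21, s31):
    """Amplitude imbalance (dB) and phase imbalance (deg away from 180)."""
    amp_db = 20.0 * math.log10(abs(s21) / abs(s31))
    phase_deg = math.degrees(cmath.phase(s21 / s31))
    return amp_db, abs(abs(phase_deg) - 180.0)

# an ideal balun: equal magnitudes, 180 deg out of phase -> zero imbalance
sds21, scs21 = single_to_mixed(0.70 + 0j, -0.70 + 0j)
```

With ideal outputs the common-mode term vanishes and both imbalance figures are zero; measured data deviates from this, which is what the amplitude and phase imbalance curves in the passband quantify.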

#### **4. Conclusions**

In this chapter, we explored the design process of a reconfigurable filter that can exhibit both frequency tunable bandpass and absorptive bandstop frequency responses. The coupling structures that satisfy the predetermined requirements are designed from the theoretical normalized coupling coefficients, and their simulation and measurement models are also given. To realize the frequency tunable characteristic, two different tuning elements, based on piezoelectric disks or electromagnets, are presented together with their operating mechanisms, and their application to a fabricated filtering balun is implemented using the electromagnets.

#### **Acknowledgements**

The authors want to thank Mr. Traian Antonescu and Mr. Steve Dubé from École Polytechnique, PolyGrames Research Center for their professional fabrication.

#### **Author details**

Tae-Hak Lee<sup>1</sup> \*, Sang-Gyu Lee<sup>2</sup> , Jean-Jacques Laurin<sup>3</sup> and Ke Wu<sup>3</sup>

1 Yuhan University, Department of Electronic Engineering, Bucheon, Republic of Korea

2 Korea Aerospace Research Institute, Satellite Payload Research and Development Division, Daejeon, Republic of Korea

3 PolyGrames Research Center, École Polytechnique (Montréal University), Montréal, QC, Canada

\*Address all correspondence to: taehaklee@gmail.com

© 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


#### **References**

[1] Cameron RJ, Kudsia CM, Mansour RR. Microwave Filters for Communication Systems: Fundamentals, Design, and Application. 2nd ed. Hoboken: Wiley; 2018. DOI: 10.1002/9781119292371

[2] Hong JS. Microstrip Filters for RF/ Microwave Applications. 2nd ed. New York: Wiley; 2011. DOI: 10.1002/ 9780470937297

[3] Matthaei G, Jones EMT, Young L. Microwave Filters, Impedance Matching Networks, and Coupling Structures. Norwood: Artech House; 1980

[4] Lee T-H, Laurin J-J, Wu K. Reconfigurable filter for bandpass-toabsorptive bandstop responses. IEEE Access. 2020;**8**:6484-6495. DOI: 10.1109/ACCESS.2019.2963710

[5] Yang T, Rebeiz GM. Bandpass-to-bandstop reconfigurable tunable filters with frequency and bandwidth controls. IEEE Transactions on Microwave Theory and Techniques. 2017;**65**:2288-2297. DOI: 10.1109/TMTT.2017.2679182

[6] Moon S, Sigmarsson HH, Joshi H, Chappell WJ. Substrate integrated evanescent mode cavity filter with a 3.5 to 1 tuning ratio. IEEE Microwave and Wireless Components Letters. 2010;**20**:450-452. DOI: 10.1109/LMWC.2010.2050680

[7] Lee B, Nam S, Lee T-H, Ahn C-S, Lee J. Single-filter structure with tunable operating frequency in noncontiguous bands. IEEE Transactions on Components, Packaging and Manufacturing Technology. 2017;**7**:98-105. DOI: 10.1109/TCPMT.2016.2623804

[8] Lee T-H, Laurin J-J, Wu K. A wideband frequency-tuning method using magnetically actuated mechanical tuning of a SIW resonator. In: Proceedings of the IEEE International Microwave Conference (IMS 2019); 1-7 June 2019; Boston, MA: IEEE; 2019. pp. 1-4

[9] Hickle MD, Peroulis D. A widely-tunable substrate-integrated balun filter. In: Proceedings of the IEEE International Microwave Conference (IMS 2017); 4-9 June 2017; Honolulu, HI: IEEE; 2017. pp. 1-4

