where

$$
\xi_m(i) = \tilde{x}_m(i) - \hat{\mathbf{p}}_m^H(k)\,\tilde{\mathbf{y}}_m(i) - \left(\Pi_l^{\perp}(k)\,\hat{\mathbf{p}}_l(k)\right)^H \tilde{\mathbf{y}}_l(i) - \left(\Pi_{l+1}^{\perp}(k)\,\hat{\mathbf{p}}_{l+1}(k)\right)^H \tilde{\mathbf{y}}_{l+1}(i) - \ldots - \left(\Pi_L^{\perp}(k)\,\hat{\mathbf{p}}_L(k)\right)^H \tilde{\mathbf{y}}_L(i)\,, \quad \text{for } m \neq l\,,\ L \le M-1\,, \tag{7}
$$

where the parameter *x̃m*(*k*) is the *k*th transmitted DMT-symbol on tone *m*, the vector **p̂***m*(*k*) is the complex-valued T-tap PTEQ for tone *m*, and the vector **ỹ***m*(*k*) is the DFT output for tone *m* at symbol *k*.

The orthogonal projection matrix Π<sup>⊥</sup>*l*(*k*), which is the matrix difference determined by the tap-weight estimated vector **p̂***l*(*k*), is given as [14]

$$
\Pi_l^{\perp}(k) = \mathbf{I} - \hat{\mathbf{p}}_l(k)\left[\hat{\mathbf{p}}_l^H(k)\,\hat{\mathbf{p}}_l(k)\right]^{-1}\hat{\mathbf{p}}_l^H(k) = \mathbf{I} - \hat{\Pi}_l(k)\,, \tag{8}
$$

where **I** denotes the identity matrix and Π̂*l*(*k*) is the projection matrix onto the space spanned by the tap-weight vector **p̂***l*(*k*). We note that the orthogonal projection matrix Π<sup>⊥</sup>*l*(*k*) is determined by the vector **p̂***l*(*k*) for *l* ≠ *m*, so that (7) can be written compactly as

$$
\xi_m(i) = \tilde{x}_m(i) - \hat{\mathbf{p}}_m^H(k)\,\tilde{\mathbf{y}}_m(i) - \sum_{l=1}^{L}\left(\Pi_l^{\perp}(k)\,\hat{\mathbf{p}}_l(k)\right)^H \tilde{\mathbf{y}}_l(i)\,, \quad \text{for } m \neq l\,,\ L \le M-1. \tag{9}
$$

With the definition of this cost function, the *m*th term on the right-hand side of (9) represents the estimated mixed-tone error of symbol *k* due to the *m*th tone of the equaliser **p̂***m*(*k*) for *m* ∈ *M*, as depicted in Fig. 3.

**5. Adaptive step-size normalised orthogonal gradient adaptive algorithms**

Based on the filtered gradient adaptive algorithm, adaptive algorithms employing orthogonal gradient filtering allow the development of simple and robust filters across a wide range of input environments. This section is therefore concerned with the development of simple and robust adaptive frequency-domain equalisation by defining a normalised orthogonal gradient adaptive algorithm.

In this section, we describe the orthogonal gradient adaptive (OGA) algorithm, a class of the filtered gradient adaptive (FGA) algorithm using an orthogonal constraint. It employs the mixed-tone criterion described in Section 4 in order to improve the convergence speed, as presented in Section 5.1.

The idea of low complexity adaptive step-size algorithms with the mixed-tone cost function is described in Section 5.2. For a large prediction error, the algorithm increases the step-size to track changes of the system, whereas a small error results in a decreased step-size [15], [16].

**5.1. A Mixed-Tone Normalised Orthogonal Gradient Adaptive (MT-NOGA) algorithm**

The orthogonal gradient adaptive (OGA) algorithm is formulated from the FGA algorithm [10] by introducing an orthogonal constraint between the present and previous direction vectors [17]. The OGA algorithm optimises the forgetting-factor on a sample-by-sample basis, so that the direction vector is orthogonal to the previous direction vector.

We now derive the mixed-tone normalised orthogonal gradient adaptive (MT-NOGA) algorithm for PTEQ in DMT-based systems. With the mixed-tone criterion of Section 4, the tap-weight estimate vector **p̂***m*(*k*) at symbol *k* for *m* ∈ *M* is updated adaptively as

$$
\hat{\mathbf{p}}_m(k) = \hat{\mathbf{p}}_m(k-1) + \mu_m(k)\,\mathbf{d}_m(k)\,, \tag{10}
$$

where *μm*(*k*) is the step-size parameter and **d***m*(*k*) is the T × 1 direction vector.

The direction vector **d***m*(*k*) can be obtained recursively as

$$\begin{split} \mathbf{d}_{m}(k) &= \lambda_{m}(k) \, \mathbf{d}_{m}(k-1) + \mathbf{g}_{m}(k) \\ &= \lambda_{m}(k) \, \mathbf{d}_{m}(k-1) - \nabla_{\hat{\mathbf{p}}_{m}(k)} J(k)\,, \end{split} \tag{11}$$

where **g***m*(*k*) is the negative gradient of cost function *J*(*k*) in (6) and *λm*(*k*) is the forgetting-factor at symbol *k*.

By differentiating *J*(*k*) in (6) with respect to **p**ˆ *<sup>m</sup>*(*k*), we then get the gradient vector **g***m*(*k*) as

$$\begin{split} \mathbf{g}_{m}(k) &= -\nabla_{\hat{\mathbf{p}}_{m}(k)} J(k) \\ &= -\xi_{m}^{*}(k) \, \frac{\partial\, \xi_{m}(k)}{\partial\, \hat{\mathbf{p}}_{m}^{*}(k)} = \tilde{\mathbf{y}}_{m}(k) \, \xi_{m}^{*}(k)\,, \end{split} \tag{12}$$

where *ξm*(*k*) is the *a priori* mixed-tone weight-estimated error at symbol *k* for *m* ∈ *M* as

$$\xi_{m}(k) = \tilde{x}_{m}(k) - \hat{\mathbf{p}}_{m}^{H}(k-1)\,\tilde{\mathbf{y}}_{m}(k) - \sum_{l=1}^{L} \left(\Pi_{l}^{\perp}(k)\,\hat{\mathbf{p}}_{l}(k)\right)^{H}\tilde{\mathbf{y}}_{l}(k)\,, \quad \text{for } m \neq l\,,\ L \le M-1. \tag{13}$$
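To make the structure of (13) concrete, the following minimal Python sketch (the function and variable names are ours, not the authors') computes the *a priori* mixed-tone error for one tone; the projected tap vectors Π<sup>⊥</sup>*l*(*k*)**p̂***l*(*k*) of the interfering tones are taken as given inputs, and the projector itself follows (8).

```python
import numpy as np

def orth_projection(p_l):
    # (8): Pi_l_perp = I - p_l (p_l^H p_l)^{-1} p_l^H, the projector onto
    # the orthogonal complement of the space spanned by p_l.
    return np.eye(len(p_l)) - np.outer(p_l, p_l.conj()) / np.vdot(p_l, p_l).real

def mixed_tone_error(x_m, p_m_prev, y_m, proj_taps, y_others):
    # (13): the per-tone a priori error minus the projected contributions
    # of the other tones; proj_taps[l] stands for Pi_l_perp(k) @ p_l(k).
    xi = x_m - np.vdot(p_m_prev, y_m)          # x_m(k) - p_m^H(k-1) y_m(k)
    for q_l, y_l in zip(proj_taps, y_others):
        xi -= np.vdot(q_l, y_l)                # (Pi_l_perp p_l)^H y_l(k)
    return xi
```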

We introduce the updating gradient vector **g***m*(*k*) by

$$\mathbf{g}_m(k) = \lambda_m(k)\,\mathbf{g}_m(k-1) + \tilde{\mathbf{y}}_m(k)\,\xi_m^{*}(k)\,, \tag{14}$$

where *ξ*<sup>∗</sup> *<sup>m</sup>*(*k*) is the complex conjugate of the mixed-tone estimated error at symbol *k* for *m* ∈ *M* as given in (13).

A procedure of an orthogonal gradient adaptive (OGA) algorithm to determine *λm*(*k*) has been described in [17] by projecting the gradient vector **g***m*(*k*) onto the previous direction vector **d***m*(*k* − 1). This leads us to obtain the direction vector **d***m*(*k*).

Determining the direction vector **d***m*(*k*) through an orthogonal projection of the gradient vector **g***m*(*k*) onto the previous direction vector **d***m*(*k* − 1), we arrive at

$$\mathbf{d}_{m}(k) = \mathbf{g}_{m}(k) - \mathbf{d}_{m}(k-1)\,\frac{\mathbf{d}_{m}^{H}(k-1)\,\mathbf{g}_{m}(k)}{\mathbf{d}_{m}^{H}(k-1)\,\mathbf{d}_{m}(k-1)}\,. \tag{15}$$
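As a quick sanity check on (15), the short sketch below (random placeholder data, names ours) verifies numerically that the resulting direction vector is orthogonal to **d***m*(*k* − 1).

```python
import numpy as np

rng = np.random.default_rng(0)
T = 8
g = rng.standard_normal(T) + 1j * rng.standard_normal(T)       # g_m(k)
d_prev = rng.standard_normal(T) + 1j * rng.standard_normal(T)  # d_m(k-1)

# (15): remove from g_m(k) its projection onto d_m(k-1).
d = g - d_prev * (np.vdot(d_prev, g) / np.vdot(d_prev, d_prev))

print(abs(np.vdot(d_prev, d)))  # ~1e-16: d_m(k) is orthogonal to d_m(k-1)
```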


Thus, **d***m*(*k*) is orthogonal to the previous direction vector **d***m*(*k* − 1) weighted by the forgetting-factor *λm*(*k*). We can optimise the value of *λm*(*k*) on a sample-by-sample basis by taking the inner product of (11) with the previous direction vector **d***m*(*k* − 1) and setting it to zero as

$$\begin{split} \mathbf{d}\_{m}^{H}(k)\mathbf{d}\_{m}(k-1) &= \lambda\_{m}(k)\mathbf{d}\_{m}^{H}(k-1)\mathbf{d}\_{m}(k-1) + \mathbf{g}\_{m}^{H}(k)\mathbf{d}\_{m}(k-1) \\ &= 0 \end{split} \tag{16}$$

Meanwhile, the gradient vector **g***m*(*k*) itself becomes the direction vector **d***m*(*k*) when it is orthogonal to the previous direction vector **d***m*(*k* − 1), i.e., when **g**<sup>*H*</sup>*m*(*k*)**d***m*(*k* − 1) = 0. The forgetting-factor parameter *λm*(*k*) can be calculated for each tone *m* at symbol *k* as

$$\lambda\_m(k) = \left| \frac{\mathbf{g}\_m^H(k) \, \mathbf{d}\_m(k-1)}{\mathbf{d}\_m^H(k-1) \, \mathbf{d}\_m(k-1)} \right| \, . \tag{17}$$

According to [10], the results of the FGA and OGA algorithms are similar to those obtained by the normalised version of OGA (the NOGA algorithm), while the convergence rate of NOGA is shown to be better than that of both FGA and OGA.

Therefore, we introduce the mixed-tone normalised orthogonal gradient adaptive (MT-NOGA) algorithm, whose normalised gradient vector is updated recursively as

$$\tilde{\mathbf{g}}\_m(k) = \tilde{\lambda}\_m(k)\,\tilde{\mathbf{g}}\_m(k-1) + \frac{\tilde{\mathbf{y}}\_m(k)\,\tilde{\xi}\_m^\*(k)}{||\tilde{\mathbf{y}}\_m(k)||^2},\tag{18}$$

$$
\tilde{\lambda}_m(k) = \left| \frac{\tilde{\mathbf{g}}_m^H(k) \, \mathbf{d}_m(k-1)}{\mathbf{d}_m^H(k-1) \, \mathbf{d}_m(k-1)} \right|, \tag{19}
$$

where **g̃***m*(*k*) replaces the gradient vector **g***m*(*k*) of (14) and (17) in this normalised version, and *ξ*<sup>∗</sup>*m*(*k*) is the complex conjugate of the mixed-tone estimated error at symbol *k* for *m* ∈ *M* as given in (13).
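Collecting (10), (11), (18) and (19), a single-tone MT-NOGA update can be sketched as below. The recursions (18)–(19) are mutually dependent, so, as one consistent reading, we evaluate the forgetting-factor with the instantaneous normalised gradient term; the function and parameter names are ours, and the step-size *μm*(*k*) is simply passed in (fixed here, adaptive in Section 5.2).

```python
import numpy as np

def mt_noga_update(p, d, g, y, xi, mu, eps=1e-12):
    # p, d, g: p_m(k-1), d_m(k-1), g_m(k-1); y: DFT output y_m(k);
    # xi: a priori mixed-tone error xi_m(k) from (13); mu: step-size.
    # Instantaneous normalised gradient term of (18), cf. (12).
    g_inst = y * np.conj(xi) / (np.vdot(y, y).real + eps)
    # Forgetting-factor (19), evaluated here with the instantaneous term --
    # one consistent resolution of the mutual recursion (18)-(19).
    lam = abs(np.vdot(g_inst, d)) / (np.vdot(d, d).real + eps)
    g_new = lam * g + g_inst          # (18): normalised gradient recursion
    d_new = lam * d + g_new           # (11): direction vector recursion
    p_new = p + mu * d_new            # (10): tap-weight update
    return p_new, d_new, g_new
```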

#### **5.2. Adaptive step-size algorithms**

This section describes the proposed low complexity adaptive step-size algorithms with the method of the mixed-tone criterion as described in Section 4 as follows.

#### *5.2.1. Modified Adaptive Step-size algorithm (MAS)*


Following [18] and [19], the step-size parameter is controlled by the squared prediction mixed-tone error: a large error increases the step-size for fast tracking, while a small error results in a decreased step-size that yields smaller misadjustment. This algorithm can be expressed as

$$
\mu_m(k+1) = \gamma \,\mu_m(k) + \beta\,|\xi_m(k)|^2\,, \tag{20}
$$

where 0 ≤ *γ* < 1, *β* > 0 and *ξm*(*k*) is the *a priori* mixed-tone estimated error at symbol *k* for *m* ∈ *M* as given in (13).

We note that the instantaneous mixed-tone cost function controls the step-size parameter. The idea is that a large prediction error causes the step-size to increase and provides faster tracking, while a small prediction error results in a decreased step-size and yields smaller misadjustment. The step-size parameter *μm*(*k*) at symbol *k* for *m* ∈ *M* is always positive and is controlled by the size of the prediction error and the parameters *γ* and *β*. A summary of the proposed MAS-MTNOGA algorithm is presented in Table 1.
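As a sketch, the MAS recursion (20) is a one-line update per tone; the *γ* and *β* values below are illustrative assumptions, not values from the text.

```python
def mas_step_size(mu_prev, xi, gamma=0.97, beta=1e-3):
    # (20): a large error |xi_m(k)| inflates the step-size for tracking,
    # a small error shrinks it for low misadjustment.
    return gamma * mu_prev + beta * abs(xi) ** 2
```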

#### *5.2.2. Adaptive Averaging Step-size algorithm (AAS)*

The objective is to ensure a large step-size parameter when the algorithm is far from the optimum point, with the step-size parameter decreasing as we approach the optimum [15]. This algorithm achieves the objective by using an estimate of the autocorrelation between *ξm*(*k*) and *ξm*(*k* − 1) to control the step-size update *μ̃m*(*k* + 1). An averaged estimate of *ξ*<sup>∗</sup>*m*(*k*) · *ξm*(*k* − 1) is introduced as

$$
\tilde{\mu}_{m}(k+1) = \gamma \, \tilde{\mu}_{m}(k) + \beta \, |\hat{\zeta}_{m}(k)|^2\,, \tag{21}
$$

$$
\hat{\zeta}_{m}(k) = \alpha\, \hat{\zeta}_{m}(k-1) + (1-\alpha)\,|\xi_{m}^{*}(k) \cdot \xi_{m}(k-1)|\,, \tag{22}
$$

where 0 ≤ *γ* < 1 and *β* is an independent variable for scaling the prediction error. The exponential weighting parameter *α* should be close to 1. The parameter *ξ*<sup>∗</sup>*m*(*k*) is the complex conjugate of the mixed-tone estimated error at symbol *k* for *m* ∈ *M* as given in (13). The use of *ζ̂m*(*k*) serves two objectives, as presented in [15]. First, the error autocorrelation is generally a good measure of proximity to the optimum. Second, it rejects the effect of an uncorrelated noise sequence on the step-size update. A summary of the proposed AAS-MTNOGA algorithm is presented in Table 2.
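A minimal sketch of the AAS recursion (21)–(22) follows; the parameter values are illustrative assumptions. Because *ζ̂m*(*k*) averages the error autocorrelation, a run of correlated errors raises the step-size while zero-mean noise largely cancels out.

```python
def aas_step_size(mu_prev, zeta_prev, xi, xi_prev,
                  gamma=0.97, beta=1e-3, alpha=0.99):
    # (22): running average of |xi_m*(k) xi_m(k-1)|, an estimate of the
    # error autocorrelation that suppresses uncorrelated noise.
    zeta = alpha * zeta_prev + (1 - alpha) * abs(xi.conjugate() * xi_prev)
    # (21): step-size driven by the squared autocorrelation estimate.
    mu = gamma * mu_prev + beta * zeta ** 2
    return mu, zeta
```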

#### **6. Computational complexity**

In this section, we investigate the additional computational complexity of the proposed low complexity MAS and AAS algorithms. A multiplication of two complex numbers is counted as four real multiplications and two real additions, and a multiplication of a real number with a complex number as two real multiplications.
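For instance, under this convention a length-T complex inner product **p̂**<sup>*H*</sup>**ỹ** costs 4T real multiplications and 2T + 2(T − 1) real additions. A small counting helper (ours, for illustration only) is sketched below.

```python
def complex_mul_ops(n):
    # n complex-times-complex multiplications.
    return {"mul": 4 * n, "add": 2 * n}

def real_complex_mul_ops(n):
    # n real-times-complex multiplications.
    return {"mul": 2 * n, "add": 0}

def inner_product_ops(T):
    # p^H y over T complex taps: T complex multiplications plus
    # T - 1 complex additions (2 real additions each).
    return {"mul": 4 * T, "add": 2 * T + 2 * (T - 1)}
```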


• Starting with soft-constrained initialisation:

$$
\hat{\mathbf{p}}_m(0) = \mathbf{0}\,;\quad \Pi_m^{\perp}(0) = \mathbf{I}\,;\quad \tilde{\mathbf{d}}_m(0) = \tilde{\mathbf{g}}_m(0) = [1\ 0\ \cdots\ 0]^T.
$$

• Do for *n* ∈ *Nd*, *n* = 1, 2, . . . , compute:

for *m* = 1, 2, . . . , *M*; for *k* = 1, 2, . . . , *K*:

1. Compute **p̂***m*(*k*) as:

$$
\begin{aligned}
\hat{\mathbf{p}}_m(k) &= \hat{\mathbf{p}}_m(k-1) + \mu_m(k)\,\tilde{\mathbf{d}}_m(k)\,,\qquad
\tilde{\mathbf{d}}_m(k) = \tilde{\lambda}_m(k)\,\tilde{\mathbf{d}}_m(k-1) + \tilde{\mathbf{g}}_m(k)\,,\\
\tilde{\mathbf{g}}_m(k) &= \tilde{\lambda}_m(k)\,\tilde{\mathbf{g}}_m(k-1) + \frac{\tilde{\mathbf{y}}_m(k)\,\xi_m^{*}(k)}{\|\tilde{\mathbf{y}}_m(k)\|^2}\,,\\
\text{where}\quad \tilde{\lambda}_m(k) &= \left|\frac{\tilde{\mathbf{g}}_m^H(k)\,\tilde{\mathbf{d}}_m(k-1)}{\tilde{\mathbf{d}}_m^H(k-1)\,\tilde{\mathbf{d}}_m(k-1)}\right|.
\end{aligned}
$$

2. Compute *μm*(*k*) as:

$$
\begin{aligned}
\mu_m(k) &= \gamma\,\mu_m(k-1) + \beta\,|\xi_m(k-1)|^2\,,\\
\text{where}\quad \xi_m(k) &= \tilde{x}_m(k) - \hat{\mathbf{p}}_m^H(k-1)\,\tilde{\mathbf{y}}_m(k) - \sum_{l=1}^{L}\left(\Pi_l^{\perp}(k)\,\hat{\mathbf{p}}_l(k)\right)^H \tilde{\mathbf{y}}_l(k)\,,\quad \text{for } m \neq l\,,\ L \le M-1\,,\\
\Pi_m^{\perp}(k) &= \mathbf{I} - \hat{\mathbf{p}}_m(k)\left[\hat{\mathbf{p}}_m^H(k)\,\hat{\mathbf{p}}_m(k)\right]^{-1}\hat{\mathbf{p}}_m^H(k)\,.
\end{aligned}
$$

end end end

**Table 1.** Summary of the proposed modified adaptive step-size mixed-tone normalised orthogonal gradient adaptive (MAS-MTNOGA) PTEQs.

• Starting with soft-constrained initialisation:

$$
\hat{\mathbf{p}}_m(0) = \mathbf{0}\,;\quad \Pi_m^{\perp}(0) = \mathbf{I}\,;\quad \tilde{\mathbf{d}}_m(0) = \tilde{\mathbf{g}}_m(0) = [1\ 0\ \cdots\ 0]^T.
$$

• Do for *n* ∈ *Nd*, *n* = 1, 2, . . . , compute:

for *m* = 1, 2, . . . , *M*; for *k* = 1, 2, . . . , *K*:

1. Compute **p̂***m*(*k*) as:

$$
\begin{aligned}
\hat{\mathbf{p}}_m(k) &= \hat{\mathbf{p}}_m(k-1) + \tilde{\mu}_m(k)\,\tilde{\mathbf{d}}_m(k)\,,\qquad
\tilde{\mathbf{d}}_m(k) = \tilde{\lambda}_m(k)\,\tilde{\mathbf{d}}_m(k-1) + \tilde{\mathbf{g}}_m(k)\,,\\
\tilde{\mathbf{g}}_m(k) &= \tilde{\lambda}_m(k)\,\tilde{\mathbf{g}}_m(k-1) + \frac{\tilde{\mathbf{y}}_m(k)\,\xi_m^{*}(k)}{\|\tilde{\mathbf{y}}_m(k)\|^2}\,,\\
\text{where}\quad \tilde{\lambda}_m(k) &= \left|\frac{\tilde{\mathbf{g}}_m^H(k)\,\tilde{\mathbf{d}}_m(k-1)}{\tilde{\mathbf{d}}_m^H(k-1)\,\tilde{\mathbf{d}}_m(k-1)}\right|.
\end{aligned}
$$

2. Compute *μ̃m*(*k*) as:

$$
\begin{aligned}
\tilde{\mu}_m(k) &= \gamma\,\tilde{\mu}_m(k-1) + \beta\,|\hat{\zeta}_m(k-1)|^2\,,\qquad
\hat{\zeta}_m(k) = \alpha\,\hat{\zeta}_m(k-1) + (1-\alpha)\,|\xi_m^{*}(k)\cdot\xi_m(k-1)|\,,\\
\text{where}\quad \xi_m(k) &= \tilde{x}_m(k) - \hat{\mathbf{p}}_m^H(k-1)\,\tilde{\mathbf{y}}_m(k) - \sum_{l=1}^{L}\left(\Pi_l^{\perp}(k)\,\hat{\mathbf{p}}_l(k)\right)^H \tilde{\mathbf{y}}_l(k)\,,\quad \text{for } m \neq l\,,\ L \le M-1\,,\\
\Pi_m^{\perp}(k) &= \mathbf{I} - \hat{\mathbf{p}}_m(k)\left[\hat{\mathbf{p}}_m^H(k)\,\hat{\mathbf{p}}_m(k)\right]^{-1}\hat{\mathbf{p}}_m^H(k)\,.
\end{aligned}
$$

end end end

**Table 2.** Summary of the proposed adaptive averaging step-size mixed-tone normalised orthogonal gradient adaptive (AAS-MTNOGA) PTEQs.

The proposed AAS mechanism involves two additional updates, (21) and (22), while the proposed MAS approach employs only one additional update, (20), compared with the MT-NOGA algorithm in [11].

The computational complexities of the proposed MAS-MTNOGA and AAS-MTNOGA algorithms and of the MTNOGA algorithm [11] are therefore listed in Table 3, where T is the number of taps of the PTEQ. The proposed algorithms require only a few additional operations per symbol.

| **Algorithm** | **Multiplications** | **Additions** | **Divisions** |
| --- | --- | --- | --- |
| MAS-MTNOGA | 8T + 5 | 8T + 5 | 1 |
| AAS-MTNOGA | 8T + 8 | 8T + 6 | 1 |
| MTNOGA [11] | 8T + 2 | 8T + 4 | 1 |

**Table 3.** The computational complexity (number of operations per symbol) [21].