**7. Performance analysis**

The convergence behaviour and stability of the proposed MAS and AAS mechanisms are investigated based on the mixed-tone weight-estimated error. The convergence analysis of both the MAS and AAS mechanisms is carried out, and the steady-state and mean-square expressions of the step-size parameter are related to the mean convergence factor, as presented in [21].

In the following analysis, we study the steady-state performance of the proposed MAS and AAS algorithms. We assume that these algorithms have converged.

#### **7.1. Convergence analysis of the proposed MAS mechanism**

Taking expectations on both sides of (20), the mean behaviour of the step-size $\mu\_m(k)$ is given by

$$E\{\mu\_m(k+1)\} = \gamma \, E\{\mu\_m(k)\} + \beta \, E\{|\xi\_m(k)|^2\}\,. \tag{23}$$


http://dx.doi.org/10.5772/52158



Adaptive Step-Size Orthogonal Gradient-Based Per-Tone Equalisation in Discrete Multitone Systems


To facilitate the analysis of the proposed MAS mechanism, we make a few assumptions.

**Assumption (i).** *We consider the steady-state value of E*{*µm*(*k* + 1)} *given by*

$$\begin{aligned} \lim\_{k \to \infty} E\{\mu\_m(k+1)\} &= \lim\_{k \to \infty} E\{\mu\_m(k)\} = E\{\mu\_m(\infty)\}\,, \\ \lim\_{k \to \infty} E\{|\xi\_m(k)|^2\} &= \xi\_m^{\min} + \xi\_m^{\text{ex}}(\infty)\,, \end{aligned}$$

*where $\xi\_m^{\min}$ is the minimum mean square error (MMSE) and $\xi\_m^{\text{ex}}(\infty)$ is the excess mean square error (EMSE) associated with the optimisation criterion in the steady-state condition.*

Applying assumption (i) to (23), we obtain

$$E\{\mu\_{m}(\infty)\} = \gamma \, E\{\mu\_{m}(\infty)\} + \beta \left(\xi\_{m}^{\min} + \xi\_{m}^{\text{ex}}(\infty)\right)$$

$$\begin{split} \left(1 - \gamma\right) E\{\mu\_{m}(\infty)\} &= \beta \left(\xi\_{m}^{\min} + \xi\_{m}^{\text{ex}}(\infty)\right) \\ E\{\mu\_{m}(\infty)\} &= \frac{\beta \left(\xi\_{m}^{\min} + \xi\_{m}^{\text{ex}}(\infty)\right)}{(1 - \gamma)}\,. \end{split} \tag{24}$$

To simplify these expressions, let us consider another assumption.

**Assumption (ii).** *Let us consider that for (24),*

$$\xi\_m^{\min} + \xi\_m^{\text{ex}}(\infty) \approx \xi\_m^{\min}$$

*and*

$$\left(\xi\_{m}^{\min} + \xi\_{m}^{\text{ex}}(\infty)\right)^{2} \approx \left(\xi\_{m}^{\min}\right)^{2}.$$

*We then assume that $\xi\_m^{\text{ex}}(\infty) \ll \xi\_m^{\min}$ when the algorithm is close to the optimum.* Employing assumption (ii) in (24), the steady-state step-size for the proposed MAS algorithm becomes

$$E\{\mu\_m(\infty)\} \approx \frac{\beta\,\xi\_m^{\min}}{(1-\gamma)}\,. \tag{25}$$

Note that the steady-state step-size of the proposed MAS mechanism has been derived in (25), which can be used to predict its behaviour in the steady-state condition.
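As a numerical sanity check on (23)–(25), the step-size recursion can be simulated directly. In the sketch below the error power $|\xi\_m(k)|^2$ is drawn as an exponential variate whose mean plays the role of $\xi\_m^{\min}$ (the steady-state value under assumption (ii)); the parameter values $\gamma$, $\beta$ and $\xi\_m^{\min}$ are illustrative assumptions, not values taken from this chapter.

```python
import random

def simulate_mas_step_size(gamma=0.9, beta=1e-3, xi_min=0.01,
                           n_iters=20000, seed=0):
    """Iterate mu(k+1) = gamma*mu(k) + beta*|xi(k)|^2 and return the
    time average of mu over the second half of the run."""
    rng = random.Random(seed)
    mu = 0.1
    tail = []
    for k in range(n_iters):
        # |xi|^2 of a zero-mean complex Gaussian error is exponentially
        # distributed; its mean stands in for xi_min in steady state
        xi_sq = rng.expovariate(1.0 / xi_min)
        mu = gamma * mu + beta * xi_sq
        if k >= n_iters // 2:
            tail.append(mu)
    return sum(tail) / len(tail)

# prediction from (25): E{mu(inf)} ~ beta * xi_min / (1 - gamma) = 1e-4
predicted = 1e-3 * 0.01 / (1 - 0.9)
simulated = simulate_mas_step_size()
```

For these assumed values the simulated time average settles close to the predicted $10^{-4}$, matching (25).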

#### **7.2. Convergence analysis of the proposed AAS mechanism**

Following [20] and [22], the average estimate $\hat{\zeta}\_m(k)$ in (22) can be rewritten as

$$\hat{\zeta}\_{m}(k) = (1 - \alpha) \sum\_{i=0}^{k-1} \alpha^{i}\, \xi\_{m}^{*}(k - i) \cdot \xi\_{m}(k - i - 1) \,. \tag{26}$$

and

12 Advances in Discrete Time Systems

$$|\hat{\zeta}\_{m}(k)|^2 = (1-\alpha)^2 \sum\_{i=0}^{k-1} \sum\_{j=0}^{k-1} \alpha^i \alpha^j \, \xi\_{m}^{*}(k-i) \cdot \xi\_{m}(k-i-1) \cdot \xi\_{m}^{*}(k-j) \cdot \xi\_{m}(k-j-1) \, . \tag{27}$$

We assume that the proposed algorithm has converged to the steady-state condition. The expectation of (27) can then be expressed as

$$E\{|\hat{\zeta}\_{m}(k)|^{2}\} = (1-\alpha)^{2} \sum\_{i=0}^{k-1} \alpha^{2i}\, E\{|\xi\_{m}(k-i)|^{2}\} \cdot E\{|\xi\_{m}(k-i-1)|^{2}\}\,, \tag{28}$$

where *α* is an exponential weighting parameter.
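In practice the weighted sum (26) is not evaluated term by term; it is exactly the one-pole recursion $\hat{\zeta}\_m(k) = \alpha\,\hat{\zeta}\_m(k-1) + (1-\alpha)\,\xi\_m^{*}(k)\,\xi\_m(k-1)$ initialised at zero. A small self-contained check of that equivalence, where the synthetic error sequence and the value of $\alpha$ are assumptions for illustration:

```python
import random

def zeta_sum(xi, alpha, k):
    """Direct evaluation of the weighted sum (26)."""
    return (1 - alpha) * sum(
        (alpha ** i) * xi[k - i].conjugate() * xi[k - i - 1]
        for i in range(k)
    )

def zeta_recursive(xi, alpha, k):
    """Equivalent one-pole recursion started from zero."""
    z = 0j
    for n in range(1, k + 1):
        z = alpha * z + (1 - alpha) * xi[n].conjugate() * xi[n - 1]
    return z

rng = random.Random(1)
xi = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(51)]
a = zeta_sum(xi, 0.9, 50)
b = zeta_recursive(xi, 0.9, 50)
# a and b agree to machine precision
```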

Using assumption (i) into (28), we have

$$E\{|\hat{\zeta}\_m(k)|^2\} = (1 - \alpha)^2 \left(1 + \alpha^2 + \alpha^4 + \cdots + \alpha^{2k}\right) \cdot \left(\xi\_m^{\min} + \xi\_m^{\text{ex}}(\infty)\right)^2 \,. \tag{29}$$

For convenience of computation, let

$$E\{|\hat{\zeta}\_m(k)|^2\} = (1 - \alpha)^2 \mathcal{A} \,, \tag{30}$$

where

$$\mathcal{A} = (1 + \alpha^2 + \alpha^4 + \cdots + \alpha^{2k}) \cdot (\xi\_m^{\min} + \xi\_m^{\text{ex}}(\infty))^2 \,. \tag{31}$$

Multiplying both sides of $\mathcal{A}$ in (31) by $\alpha^2$, with $k \to \infty$ and $0 < \alpha < 1$, we get

$$\begin{split} a^2 \mathcal{A} &= a^2 \cdot (1 + a^2 + a^4 + \dots + a^{2(k-1)} + a^{2k}) \cdot (\xi\_m^{\min} + \xi\_m^{\text{ex}}(\infty))^2 \\ &= (a^2 + a^4 + a^6 + \dots + a^{2(k-1)} + a^{2k}) \cdot (\xi\_m^{\min} + \xi\_m^{\text{ex}}(\infty))^2 \\ &= \mathcal{A} - (\xi\_m^{\min} + \xi\_m^{\text{ex}}(\infty))^2 . \end{split} \tag{32}$$

Rearranging (32) to get A, we arrive at

$$(1 - \alpha^2) \cdot \mathcal{A} = (\xi\_m^{\min} + \xi\_m^{\text{ex}}(\infty))^2$$

$$\mathcal{A} = \frac{(\xi\_m^{\min} + \xi\_m^{\text{ex}}(\infty))^2}{(1 - \alpha^2)}\,. \tag{33}$$
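The passage from (31) to (33) is the geometric-series limit $\sum\_i \alpha^{2i} \to 1/(1-\alpha^2)$. A quick numerical confirmation, with illustrative values standing in for $\alpha$ and $(\xi\_m^{\min}+\xi\_m^{\text{ex}}(\infty))^2$:

```python
# partial sum of A in (31) versus the closed form (33)
alpha = 0.95
c = 0.3 ** 2                 # stands in for (xi_min + xi_ex(inf))^2
k = 2000                     # large enough that alpha**(2k) is negligible
A_partial = sum(alpha ** (2 * i) for i in range(k + 1)) * c
A_closed = c / (1 - alpha ** 2)
```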



Substituting (33) into (30), we get

$$\begin{split} E\{|\hat{\zeta}\_{m}(k)|^{2}\} &= \frac{(1-\alpha)^{2} \cdot (\xi\_{m}^{\min} + \xi\_{m}^{\text{ex}}(\infty))^{2}}{(1-\alpha^{2})} \\ &= \frac{(1-\alpha) \cdot (1-\alpha) \cdot (\xi\_{m}^{\min} + \xi\_{m}^{\text{ex}}(\infty))^{2}}{(1+\alpha) \cdot (1-\alpha)} \\ &= \frac{(1-\alpha) \cdot (\xi\_{m}^{\min} + \xi\_{m}^{\text{ex}}(\infty))^{2}}{(1+\alpha)}\,. \end{split} \tag{34}$$

Taking the expectation on both sides of (21), the mean behaviour of step-size *µ*˜*m*(*k*) is given as

$$E\{\tilde{\mu}\_{m}(k+1)\} = \gamma \, E\{\tilde{\mu}\_{m}(k)\} + \beta \, E\{|\hat{\zeta}\_{m}(k)|^2\}\,. \tag{35}$$

Using assumption (i) and (34) in (35), we get

$$E\{\tilde{\mu}\_{m}(\infty)\} = \gamma \, E\{\tilde{\mu}\_{m}(\infty)\} + \frac{\beta \left(1 - \alpha\right) \cdot \left(\xi\_{m}^{\min} + \xi\_{m}^{\text{ex}}(\infty)\right)^{2}}{\left(1 + \alpha\right)}$$

$$\left(1 - \gamma\right) E\{\tilde{\mu}\_{m}(\infty)\} = \frac{\beta \left(1 - \alpha\right) \cdot \left(\xi\_{m}^{\min} + \xi\_{m}^{\text{ex}}(\infty)\right)^{2}}{\left(1 + \alpha\right)}$$

$$E\{\tilde{\mu}\_{m}(\infty)\} = \frac{\beta \left(1 - \alpha\right) \cdot \left(\xi\_{m}^{\min} + \xi\_{m}^{\text{ex}}(\infty)\right)^{2}}{\left(1 - \gamma\right) \cdot \left(1 + \alpha\right)}\,. \tag{36}$$

where $\xi\_m^{\min}$ is the steady-state minimum value and $\xi\_m^{\text{ex}}(\infty)$ is the steady-state excess error of the mixed-tone cost function.

By using assumption (ii), the steady-state value of $E\{\tilde{\mu}\_m(\infty)\}$ in (36) is approximately

$$E\{\tilde{\mu}\_m(\infty)\} \approx \frac{\beta \left(1 - \alpha\right) \cdot \left(\xi\_m^{\min}\right)^2}{(1 - \gamma) \cdot (1 + \alpha)}\,. \tag{37}$$

We note that (37) predicts the steady-state performance of the proposed AAS algorithm.
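The prediction (37) can likewise be checked by simulating the recursions directly: draw i.i.d. zero-mean complex Gaussian errors with power $\xi\_m^{\min}$ (so the EMSE is neglected, as in assumption (ii)), form $\hat{\zeta}\_m(k)$ recursively, and iterate the step-size update. All numeric values below are illustrative assumptions:

```python
import random

def simulate_aas_step_size(gamma=0.9, beta=1e-3, alpha=0.9, xi_min=0.01,
                           n_iters=100000, seed=0):
    """Iterate zeta(k) = alpha*zeta(k-1) + (1-alpha)*conj(xi(k))*xi(k-1)
    and mu(k+1) = gamma*mu(k) + beta*|zeta(k)|^2; return the tail average."""
    rng = random.Random(seed)
    s = (xi_min / 2.0) ** 0.5      # per-component std so that E{|xi|^2} = xi_min
    xi_prev = complex(rng.gauss(0, s), rng.gauss(0, s))
    zeta = 0j
    mu = 0.0
    total, start = 0.0, n_iters // 2
    for k in range(n_iters):
        xi = complex(rng.gauss(0, s), rng.gauss(0, s))
        zeta = alpha * zeta + (1 - alpha) * xi.conjugate() * xi_prev
        mu = gamma * mu + beta * abs(zeta) ** 2
        xi_prev = xi
        if k >= start:
            total += mu
    return total / (n_iters - start)

# prediction from (37): beta*(1-alpha)*xi_min^2 / ((1-gamma)*(1+alpha))
predicted = 1e-3 * (1 - 0.9) * 0.01 ** 2 / ((1 - 0.9) * (1 + 0.9))
simulated = simulate_aas_step_size()
```

For independent errors, $E\{|\hat{\zeta}\_m|^2\} \to \frac{1-\alpha}{1+\alpha}(\xi\_m^{\min})^2$, which is exactly the factor that makes the AAS steady-state step-size in (37) smaller than the MAS value in (25).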

#### **7.3. Stability and performance analysis**



We now present the stability and performance analysis of the proposed algorithm, based on the mean-square value of the mixed-tone estimated error $\xi\_m(k)$.

Following [23] and [24], let us denote the weight-error vector $\boldsymbol{\varepsilon}\_m(k)$ at symbol $k$ for each tone $m$ by

$$\boldsymbol{\varepsilon}\_{m}(k) = \mathbf{p}\_{\text{opt},m} - \hat{\mathbf{p}}\_{m}(k) \, , \tag{38}$$

where **p**opt,*m* denotes the optimum Wiener solution for the tap-weight vector.

The estimated tap-weight PTEQ vector $\hat{\mathbf{p}}\_m(k)$ can be introduced as

$$\hat{\mathbf{p}}\_m(k) = \hat{\mathbf{p}}\_m(k-1) + \mu\_m(k) \sum\_{i=1}^{k} \lambda^{k-i}\, \frac{\tilde{\mathbf{y}}\_m(i)\,\xi\_m^{*}(i)}{\|\tilde{\mathbf{y}}\_m^H(i)\,\tilde{\mathbf{y}}\_m(i)\|}\,, \tag{39}$$

where *ξm*(*k*) is the *a priori* mixed-tone estimated error at symbol *k* for tone *m* as

$$\xi\_{m}(k) = \tilde{x}\_{m}(k) - \hat{\mathbf{p}}\_{m}^{H}(k-1)\,\tilde{\mathbf{y}}\_{m}(k) - \sum\_{l=1}^{L} \left(\Pi\_{l}^{\perp}(k)\,\hat{\mathbf{p}}\_{l}(k)\right)^{H}\tilde{\mathbf{y}}\_{l}(k)\,, \quad \text{for } m \neq l\,,\; L \le M-1 \tag{40}$$
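To make the structure of the update (39) with the error (40) concrete, here is a heavily simplified single-tone sketch: the cross-tone projection term $\sum\_l (\Pi\_l^{\perp}\hat{\mathbf{p}}\_l)^H\tilde{\mathbf{y}}\_l$ is omitted, the data are synthetic i.i.d. complex Gaussian, and the tap count, $\lambda$ and $\mu$ are illustrative assumptions — a toy demonstration of the exponentially weighted, norm-scaled correction rather than the chapter's full PTEQ algorithm.

```python
import random

rng = random.Random(3)
T = 4                                   # assumed number of PTEQ taps
lam, mu = 0.95, 0.002                   # illustrative forgetting factor / step size

def cvec(n):
    return [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(n)]

def cdotH(a, b):
    """a^H b for equal-length complex lists."""
    return sum(x.conjugate() * y for x, y in zip(a, b))

p_opt = cvec(T)                         # stands in for the Wiener solution
p_hat = [0j] * T
hist = []                               # stored (y(i), xi(i)) pairs
err0 = errN = None
for k in range(600):
    y = cvec(T)
    x = cdotH(p_opt, y)                 # noiseless desired sample for tone m
    xi = x - cdotH(p_hat, y)            # a priori error, cross-tone term omitted
    hist.append((y, xi))
    # update (39): exponentially weighted, norm-scaled correction
    upd = [0j] * T
    for i, (yi, ei) in enumerate(hist):
        w = lam ** (len(hist) - 1 - i) / abs(cdotH(yi, yi))
        for t in range(T):
            upd[t] += w * yi[t] * ei.conjugate()
    p_hat = [p + mu * u for p, u in zip(p_hat, upd)]
    res = sum(abs(a - b) ** 2 for a, b in zip(p_opt, p_hat)) ** 0.5
    if k == 0:
        err0 = res
    errN = res
# the weight error ||p_opt - p_hat|| shrinks steadily toward zero
```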

Subtracting both sides of (39) from **p**opt,*m* and using (40) to eliminate $\hat{\mathbf{p}}\_m(k)$, we may rewrite it as

$$\begin{split} \mathbf{p}\_{\text{opt},m} - \hat{\mathbf{p}}\_{m}(k) &= \mathbf{p}\_{\text{opt},m} - \hat{\mathbf{p}}\_{m}(k-1) + \mu\_{m}(k) \sum\_{i=1}^{k} \lambda^{k-i} \frac{\tilde{\mathbf{y}}\_{m}(i)}{\|\tilde{\mathbf{y}}\_{m}^{H}(i)\tilde{\mathbf{y}}\_{m}(i)\|} \Big( \tilde{x}\_{m}(i) - \hat{\mathbf{p}}\_{m}^{H}(k-1)\tilde{\mathbf{y}}\_{m}(i) \\ &\quad - \sum\_{l=1}^{L} (\Pi\_{l}^{\perp}(i)\hat{\mathbf{p}}\_{l}(k))^{H} \tilde{\mathbf{y}}\_{l}(i) \Big)^{*} + \mu\_{m}(k) \sum\_{i=1}^{k} \lambda^{k-i} \frac{\tilde{\mathbf{y}}\_{m}(i)}{\|\tilde{\mathbf{y}}\_{m}^{H}(i)\tilde{\mathbf{y}}\_{m}(i)\|} \Big( \mathbf{p}\_{\text{opt},m}^{H} \tilde{\mathbf{y}}\_{m}(i) \Big)^{*} \\ &\quad - \mu\_{m}(k) \sum\_{i=1}^{k} \lambda^{k-i} \frac{\tilde{\mathbf{y}}\_{m}(i)}{\|\tilde{\mathbf{y}}\_{m}^{H}(i)\tilde{\mathbf{y}}\_{m}(i)\|} \Big( \mathbf{p}\_{\text{opt},m}^{H} \tilde{\mathbf{y}}\_{m}(i) \Big)^{*} . \end{split} \tag{41}$$

Substituting (38) in (41), we get

$$\begin{split} \boldsymbol{\varepsilon}\_{m}(k) &= \boldsymbol{\varepsilon}\_{m}(k-1) - \mu\_{m}(k) \sum\_{i=1}^{k} \lambda^{k-i} \frac{\tilde{\mathbf{y}}\_{m}(i) \tilde{\mathbf{y}}\_{m}^{H}(i)\, \boldsymbol{\varepsilon}\_{m}(k-1)}{\|\tilde{\mathbf{y}}\_{m}^{H}(i) \tilde{\mathbf{y}}\_{m}(i)\|} \\ &\quad + \mu\_{m}(k) \sum\_{i=1}^{k} \lambda^{k-i} \frac{\tilde{\mathbf{y}}\_{m}(i)}{\|\tilde{\mathbf{y}}\_{m}^{H}(i) \tilde{\mathbf{y}}\_{m}(i)\|} \Big\{ \tilde{x}\_{m}(i) - \mathbf{p}\_{\text{opt},m}^{H} \tilde{\mathbf{y}}\_{m}(i) - \sum\_{l=1}^{L} (\Pi\_{l}^{\perp}(i) \hat{\mathbf{p}}\_{l}(k))^{H} \tilde{\mathbf{y}}\_{l}(i) \Big\}^{*}. \end{split} \tag{42}$$

Then, the weight-error vector ε*m*(*k*) can be expressed as

$$\boldsymbol{\varepsilon}\_{m}(k) = \left[\mathbf{I} - \mu\_{m}(k)\sum\_{i=1}^{k}\lambda^{k-i}\frac{\tilde{\mathbf{y}}\_{m}(i)\tilde{\mathbf{y}}\_{m}^{H}(i)}{\|\tilde{\mathbf{y}}\_{m}^{H}(i)\tilde{\mathbf{y}}\_{m}(i)\|}\right]\boldsymbol{\varepsilon}\_{m}(k-1) + \mu\_{m}(k)\sum\_{i=1}^{k}\lambda^{k-i}\frac{\tilde{\mathbf{y}}\_{m}(i)\, \xi\_{\text{opt},m}^{*}}{\|\tilde{\mathbf{y}}\_{m}^{H}(i)\tilde{\mathbf{y}}\_{m}(i)\|}\,, \tag{43}$$

where $\xi\_{\text{opt},m}^{*}$ is the complex conjugate of the mixed-tone estimation error produced by the optimum Wiener solution, given as

$$\xi\_{\text{opt},m} = \tilde{x}\_{m}(i) - \mathbf{p}\_{\text{opt},m}^{H}\tilde{\mathbf{y}}\_{m}(i) - \sum\_{l=1}^{L} (\Pi\_{l}^{\perp}(i)\hat{\mathbf{p}}\_{l}(k))^{H}\tilde{\mathbf{y}}\_{l}(i)\,, \quad \text{for } m \neq l\,,\; L \leq M - 1 \tag{44}$$


**Assumption (iii).** *We consider the condition necessary for convergence in the mean, that is*

$$E\{\,\|\boldsymbol{\varepsilon}\_m(k)\|\,\} \to 0 \quad \text{as } k \to \infty$$

*or equivalently,*

$$E\{\hat{\mathbf{p}}\_m(k)\} \to \mathbf{p}\_{\text{opt},m} \quad \text{as } k \to \infty\,,$$

*where* $\|\boldsymbol{\varepsilon}\_m(k)\|$ *is the Euclidean norm of the weight-error vector* $\boldsymbol{\varepsilon}\_m(k)$*.*

We denote the mixed-tone estimated error for tone *m* at symbol *k* as

$$\xi\_{m}(k) = \tilde{x}\_{m}(k) - \hat{\mathbf{p}}\_{m}^{H}(k)\,\tilde{\mathbf{y}}\_{m}(k) - \sum\_{l=1}^{L} (\Pi\_{l}^{\perp}(k)\hat{\mathbf{p}}\_{l}(k))^{H}\tilde{\mathbf{y}}\_{l}(k)\,, \quad \text{for } m \neq l\,,\; L \le M - 1 \tag{45}$$

Using (38) in (45), the mixed-tone estimation error $\xi\_m(k)$ at symbol $k$ for each tone $m$ is given as in (46), where $\xi\_{\text{opt},m}$ is the mixed-tone estimation error of the optimum Wiener solution shown in (44).



$$\begin{split} \xi\_{m}(k) &= \tilde{x}\_{m}(k) - \hat{\mathbf{p}}\_{m}^{H}(k)\,\tilde{\mathbf{y}}\_{m}(k) - \sum\_{l=1}^{L} (\Pi\_{l}^{\perp}(k)\hat{\mathbf{p}}\_{l}(k))^{H}\tilde{\mathbf{y}}\_{l}(k) \\ &= \tilde{x}\_{m}(k) - (\mathbf{p}\_{\text{opt},m} - \boldsymbol{\varepsilon}\_{m}(k))^{H}\tilde{\mathbf{y}}\_{m}(k) - \sum\_{l=1}^{L} (\Pi\_{l}^{\perp}(k)\hat{\mathbf{p}}\_{l}(k))^{H}\tilde{\mathbf{y}}\_{l}(k) \\ &= \tilde{x}\_{m}(k) - \mathbf{p}\_{\text{opt},m}^{H}\tilde{\mathbf{y}}\_{m}(k) - \sum\_{l=1}^{L} (\Pi\_{l}^{\perp}(k)\hat{\mathbf{p}}\_{l}(k))^{H}\tilde{\mathbf{y}}\_{l}(k) + \boldsymbol{\varepsilon}\_{m}^{H}(k)\,\tilde{\mathbf{y}}\_{m}(k) \\ &= \xi\_{\text{opt},m} + \boldsymbol{\varepsilon}\_{m}^{H}(k)\,\tilde{\mathbf{y}}\_{m}(k) \,. \end{split} \tag{46}$$
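Equation (46) is a purely algebraic identity, $\xi\_m(k) = \xi\_{\text{opt},m} + \boldsymbol{\varepsilon}\_m^H(k)\,\tilde{\mathbf{y}}\_m(k)$, so it can be verified directly on random data. The dimension and the scalar standing in for the cross-tone sum below are illustrative assumptions:

```python
import random

rng = random.Random(7)

def cvec(n):
    return [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(n)]

def cdotH(a, b):
    """a^H b for equal-length complex lists."""
    return sum(x.conjugate() * y for x, y in zip(a, b))

T = 4                                   # assumed PTEQ length (illustrative)
p_opt, p_hat, y = cvec(T), cvec(T), cvec(T)
x = complex(rng.gauss(0, 1), rng.gauss(0, 1))
interference = complex(0.05, -0.02)     # stands in for the sum over tones l != m

xi        = x - cdotH(p_hat, y) - interference      # (45)
xi_opt    = x - cdotH(p_opt, y) - interference      # (44)
eps       = [a - b for a, b in zip(p_opt, p_hat)]   # (38)
xi_via_46 = xi_opt + cdotH(eps, y)                  # (46)
# xi and xi_via_46 agree to machine precision
```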

Let $\hat{J}\_m(k)$ denote the mean square mixed-tone error at tone $m$, for $m \in M$:

$$\begin{split} \hat{J}\_{m}(k) &= E\{ |\xi\_{m}(k)|^{2} \} \\ &= E\{ \left(\xi\_{\text{opt},m} + \boldsymbol{\varepsilon}\_{m}^{H}(k)\tilde{\mathbf{y}}\_{m}(k)\right)^{*} \left(\xi\_{\text{opt},m} + \boldsymbol{\varepsilon}\_{m}^{H}(k)\tilde{\mathbf{y}}\_{m}(k)\right) \} \\ &= E\{ |\xi\_{\text{opt},m}|^{2} \} + E\{ \tilde{\mathbf{y}}\_{m}^{H}(k)\boldsymbol{\varepsilon}\_{m}(k)\xi\_{\text{opt},m} \} + E\{ \boldsymbol{\varepsilon}\_{m}^{H}(k)\tilde{\mathbf{y}}\_{m}(k)\xi\_{\text{opt},m}^{*} \} \\ &\quad + E\{ \boldsymbol{\varepsilon}\_{m}^{H}(k)\boldsymbol{\varepsilon}\_{m}(k)\tilde{\mathbf{y}}\_{m}^{H}(k)\tilde{\mathbf{y}}\_{m}(k) \} . \end{split} \tag{47}$$

By using assumption (iii), we assume that

$$\hat{J}\_m(k) = J\_m^{\min} + J\_m^{\text{ex}}(k) \,, \tag{48}$$

where $J\_m^{\min}$ is the minimum mean square mixed-tone error produced by the optimum Wiener filter for tone $m$, given as

$$J\_{m}^{\min}(k) = E\{ |\xi\_{\text{opt},m}|^2 \} + E\{ \boldsymbol{\varepsilon}\_{m}^{H}(k)\tilde{\mathbf{y}}\_{m}(k)\xi\_{\text{opt},m}^{*} \} + E\{ \tilde{\mathbf{y}}\_{m}^{H}(k)\boldsymbol{\varepsilon}\_{m}(k)\xi\_{\text{opt},m} \} \,, \tag{49}$$

and $J\_m^{\text{ex}}(k)$ is the excess mean square mixed-tone error (EMSE) at symbol $k$ for tone $m$, given as

$$J\_m^{\text{ex}}(k) = E\left\{ \boldsymbol{\varepsilon}\_m^H(k)\boldsymbol{\varepsilon}\_m(k)\tilde{\mathbf{y}}\_m^H(k)\tilde{\mathbf{y}}\_m(k) \right\}. \tag{50}$$

Since

$$\mathbf{R}\_{\tilde{\mathbf{y}}\tilde{\mathbf{y}}} = E\{\tilde{\mathbf{y}}\_m(k)\,\tilde{\mathbf{y}}\_m^H(k)\}\,, \tag{51}$$

and by the orthogonality principle

$$E\{\xi\_{\text{opt},m}\,\tilde{\mathbf{y}}\_m(k)\} \approx \mathbf{0} \,, \tag{52}$$

the excess in mean square mixed-tone error is given by

$$J\_m^{\text{ex}}(k) = E\{\boldsymbol{\varepsilon}\_m^H(k)\, \mathbf{R}\_{\tilde{\mathbf{y}}\tilde{\mathbf{y}}}\, \boldsymbol{\varepsilon}\_m(k)\,\}\,. \tag{53}$$
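The step from (50)–(52) to (53) rests on $|\boldsymbol{\varepsilon}^H\tilde{\mathbf{y}}|^2 = \boldsymbol{\varepsilon}^H\tilde{\mathbf{y}}\tilde{\mathbf{y}}^H\boldsymbol{\varepsilon}$, so averaging over realisations of $\tilde{\mathbf{y}}\_m(k)$ yields the quadratic form in the correlation matrix. A small sketch with a sample correlation matrix; the dimensions and statistics are assumed for illustration:

```python
import random

rng = random.Random(42)
T = 3                                    # assumed number of taps

def cvec(n):
    return [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(n)]

def cdotH(a, b):
    """a^H b for equal-length complex lists."""
    return sum(x.conjugate() * y for x, y in zip(a, b))

eps = cvec(T)                            # a fixed weight-error vector
N = 2000
avg_sq = 0.0
R = [[0j] * T for _ in range(T)]         # accumulates y(k) y^H(k)
for _ in range(N):
    y = cvec(T)
    avg_sq += abs(cdotH(eps, y)) ** 2    # |eps^H y|^2 = eps^H (y y^H) eps
    for i in range(T):
        for j in range(T):
            R[i][j] += y[i] * y[j].conjugate()
avg_sq /= N
# quadratic form eps^H R_hat eps with the sample correlation matrix
quad = sum(eps[i].conjugate() * R[i][j] * eps[j] / N
           for i in range(T) for j in range(T))
# avg_sq equals quad (up to rounding), mirroring J_ex = E{eps^H R eps}
```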


**Figure 4.** Learning curves of the sum of squared mixed-tone errors $\sum |\xi\_m(k)|^2$ versus the number of DMT symbols for the proposed MAS-MTNOGA and AAS-MTNOGA algorithms (both with $\mu(0) = 1.0 \times 10^{-1}$) and the MT-NOGA [11] algorithm (with $\mu = 1.525 \times 10^{-2}$ and $\mu = 5.125 \times 10^{-3}$), for the sample of active tone $m = 200$. The other fixed parameters of the proposed AAS-MTNOGA algorithm are $\gamma = 0.985$, $\beta = 1.25 \times 10^{-2}$, and $\alpha = 0.995$.

**Figure 5.** Learning curves of the sum of squared mixed-tone errors $\sum |\xi\_m(k)|^2$ versus the number of DMT symbols for the same algorithms and parameters as in Figure 4, for the sample of active tone $m = 250$.

where $\boldsymbol{\varepsilon}\_m(k)$ denotes the weight-error vector at symbol $k$ for each tone $m$, as shown in (38).
