**8. Simulation results**

In this section, we present transmission simulations for the ADSL downstream in the presence of additive white Gaussian noise (AWGN) and near-end crosstalk (NEXT), detailed as follows. The active tones 38 to 255 were used for downstream transmission, while the unused tones, including tones 8 to 32 reserved for upstream transmission, were set to zero. Samples of a reference carrier serving area (CSA) loop, comprising 512 coefficients of the channel impulse response, were used as the test channel. The ADSL downstream simulations with CSA loop #4 are representative of the simulations with all eight CSA loops detailed in [25]. The CSA loop #4 consists of 26-gauge bridged taps of length 400 ft. at 550 ft. and of length 800 ft. at 6800 ft., and a 26-gauge loop section of length 800 ft. at 7600 ft. The other parameters were a sampling rate of *fs* = 2.208 MHz and an FFT size of *N* = 512. The length of the CP (*ν*) was 32, and the synchronisation delay was 45. An SNR gap of 9.8 dB, a coding gain of 4.2 dB, a noise margin of 6 dB, and an input signal power of −40 dBm/Hz were used for all active tones [1]. AWGN with a power of −140 dBm/Hz and NEXT from 24 ADSL disturbers were included in the test channel. The bit allocation calculation requires an estimate of the signal-to-noise ratio (SNR) on each tone *n* ∈ *Nd*, where the noise energy is estimated after per-tone equalisation.
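To illustrate how these margins combine in the bit allocation, the sketch below uses the standard DMT gap-approximation rule *b<sub>n</sub>* = ⌊log<sub>2</sub>(1 + SNR*<sub>n</sub>*/Γ)⌋, where the effective gap Γ (in dB) is the SNR gap plus the noise margin minus the coding gain. The function names are illustrative, and this textbook rule stands in for, rather than reproduces, the chapter's exact loading procedure.

```python
import math

def effective_snr_gap_db(gap_db=9.8, margin_db=6.0, coding_gain_db=4.2):
    """Effective gap in dB: SNR gap + noise margin - coding gain
    (9.8 + 6.0 - 4.2 = 11.6 dB with the values used above)."""
    return gap_db + margin_db - coding_gain_db

def bits_per_tone(snr_db, gamma_db):
    """Gap-approximation bit loading: b_n = floor(log2(1 + SNR_n / Gamma))."""
    snr = 10.0 ** (snr_db / 10.0)
    gamma = 10.0 ** (gamma_db / 10.0)
    return math.floor(math.log2(1.0 + snr / gamma))

gamma_db = effective_snr_gap_db()
b = bits_per_tone(40.0, gamma_db)  # a tone with 40 dB estimated SNR carries 9 bits
```

The per-tone SNR estimates themselves come from the noise energy measured after per-tone equalisation, as noted above.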

We compare the proposed MAS-MTNOGA and AAS-MTNOGA PTEQs with variable step-size parameters against the fixed step-size MT-NOGA PTEQ [11]. The proposed algorithms were initialised with T = 32, **p**ˆ*<sub>m</sub>*(0) = [ 0 0 0 ... 0 ]<sup>*T*</sup>, **d**˜*<sub>m</sub>*(0) = **g**˜*<sub>m</sub>*(0) = [ 1 0 0 ... 0 ]<sup>*T*</sup> and Π<sup>⊥</sup>*<sub>m</sub>*(0) = **I**, with *λ<sub>m</sub>*(0) = 0.95 and *ζ*ˆ*<sub>m</sub>*(0) = *σ*<sup>2</sup><sub>*η*</sub>, where **I** is the identity matrix and *σ*<sup>2</sup><sub>*η*</sub> is the variance of the AWGN and NEXT. We used the combined estimates of three adjacent tones (*M* = 3). All of the following results were obtained by averaging over 50 Monte Carlo trials.
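The initialisation above can be sketched as follows. The tap-vector length *T* + 1 and the placeholder noise variance are assumptions for illustration (the PTEQ dimensions are fixed earlier in the chapter), and the container names are ours.

```python
def init_ptq_state(T=32, lam0=0.95, noise_var=1e-6):
    """State of one per-tone equaliser at symbol k = 0.
    The vector length T + 1 and noise_var (sigma_eta^2) are assumed values."""
    n = T + 1
    return {
        "p_hat": [0.0] * n,                        # p_hat_m(0) = [0 0 ... 0]^T
        "d": [1.0] + [0.0] * (n - 1),              # d_m(0) = [1 0 ... 0]^T
        "g": [1.0] + [0.0] * (n - 1),              # g_m(0) = [1 0 ... 0]^T
        "proj": [[float(i == j) for j in range(n)]
                 for i in range(n)],               # Pi_perp_m(0) = I (identity)
        "lam": lam0,                               # forgetting factor lambda_m(0)
        "zeta": noise_var,                         # zeta_hat_m(0) = sigma_eta^2
    }
```

One such state would be kept per active tone *m*, and each Monte Carlo trial restarts from it.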

Fig. 4 and Fig. 5 show the learning curves of the sum of squared mixed-tone errors of the proposed AAS-MTNOGA, MAS-MTNOGA and MT-NOGA PTEQs with different values of the fixed step-size parameter, for the samples of the active tones *m* = 200 and *m* = 250, respectively. It is observed that the proposed AAS-MTNOGA algorithm converges to the steady state more rapidly than MT-NOGA with a fixed step size. Fig. 6 and Fig. 7 depict the learning curves of the excess mean square mixed-tone error (EMSE) *J*<sup>ex</sup><sub>*m*</sub>(*k*) of the proposed AAS-MTNOGA, MAS-MTNOGA and MT-NOGA PTEQs with different values of the fixed step-size parameter, again for the active tones *m* = 200 and *m* = 250, respectively. Fig. 8 and Fig. 9 depict the trajectories of the step-size parameter *µ<sub>m</sub>*(*k*) of the proposed MAS-MTNOGA and AAS-MTNOGA algorithms for different initial step-size settings, with the sample of the active tone *m* = 250. In both cases, the step size converges to its own equilibrium despite large variations in the initial step-size parameter.

18 Advances in Discrete Time Systems

The excess mean square mixed-tone error is given by

*J*<sup>ex</sup><sub>*m*</sub>(*k*) = *E*{ ε<sup>*H*</sup><sub>*m*</sub>(*k*) **R**<sub>**y**˜**y**˜</sub> ε*<sub>m</sub>*(*k*) } , (53)

where ε*<sub>m</sub>*(*k*) denotes the weight-error vector at symbol *k* for each tone *m*, as defined in (38).
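The EMSE in (53) can be estimated numerically by averaging the quadratic form ε<sup>*H*</sup>**R**ε over Monte Carlo trials, as in the 50-trial averaging used for the figures below. A minimal pure-Python sketch with illustrative function names (for a Hermitian **R**<sub>**y**˜**y**˜</sub> the quadratic form is real):

```python
def quadratic_form(eps, R):
    """eps^H R eps for a complex vector eps and Hermitian matrix R
    (the result is real for Hermitian R, so the imaginary part is dropped)."""
    n = len(eps)
    return sum(eps[i].conjugate() * R[i][j] * eps[j]
               for i in range(n) for j in range(n)).real

def emse_estimate(error_vectors, R):
    """Monte Carlo estimate of J_ex = E{eps^H R eps}: average the quadratic
    form of the per-trial weight-error vectors over all trials."""
    return sum(quadratic_form(e, R) for e in error_vectors) / len(error_vectors)
```

In the simulations, one weight-error vector per trial at symbol *k* would be collected and passed to `emse_estimate` together with the correlation matrix of the received data.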

**Figure 4.** Learning curves of the sum of squared mixed-tone errors of the proposed MAS-MTNOGA, AAS-MTNOGA and MT-NOGA [11] algorithms with the sample of active tone *m* = 200. The other fixed parameters of the proposed AAS-MTNOGA algorithm are *γ* = 0.985, *β* = 1.25 × 10<sup>−2</sup>, and *α* = 0.995.

**Figure 5.** Learning curves of the sum of squared mixed-tone errors of the proposed MAS-MTNOGA, AAS-MTNOGA and MT-NOGA [11] algorithms with the sample of active tone *m* = 250. The other fixed parameters of the proposed AAS-MTNOGA algorithm are *γ* = 0.985, *β* = 1.25 × 10<sup>−2</sup>, and *α* = 0.995.

http://dx.doi.org/10.5772/52158


Adaptive Step-Size Orthogonal Gradient-Based Per-Tone Equalisation in Discrete Multitone Systems


**Figure 6.** Learning curves of the EMSE *J*<sup>ex</sup><sub>*m*</sub>(*k*) of the proposed MAS-MTNOGA, AAS-MTNOGA and MT-NOGA [11] algorithms with the sample of active tone *m* = 200. The other fixed parameters of the proposed AAS-MTNOGA algorithm are *γ* = 0.985, *β* = 1.25 × 10<sup>−2</sup>, and *α* = 0.995.

**Figure 7.** Learning curves of the EMSE *J*<sup>ex</sup><sub>*m*</sub>(*k*) of the proposed MAS-MTNOGA, AAS-MTNOGA and MT-NOGA [11] algorithms with the sample of active tone *m* = 250. The other fixed parameters of the proposed AAS-MTNOGA algorithm are *γ* = 0.985, *β* = 1.25 × 10<sup>−2</sup>, and *α* = 0.995.


In Fig. 6 and Fig. 7, the MT-NOGA curves use the fixed step sizes *µ* = 5.25 × 10<sup>−3</sup> and *µ* = 1.525 × 10<sup>−2</sup>, while the MAS-MTNOGA and AAS-MTNOGA curves use the initial step size *µ*(0) = 1.0 × 10<sup>−1</sup>; the EMSE is plotted against the number of DMT symbols.

**Figure 8.** Trajectories of the adaptive step size *µm*(*k*) of the proposed MAS-MTNOGA algorithm using different settings of *µ*(0) = 1 × 10<sup>−1</sup>, 5 × 10<sup>−2</sup> and 1 × 10<sup>−4</sup>, with the sample of active tone *m* = 250.

**Figure 9.** Trajectories of the adaptive step size *µm*(*k*) of the proposed AAS-MTNOGA algorithm using different settings of *µ*(0) = 1 × 10<sup>−1</sup>, 1 × 10<sup>−2</sup> and 1 × 10<sup>−4</sup>, with the sample of active tone *m* = 250.
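The convergence of the step-size trajectories to a common equilibrium can be illustrated with an Aboulnasr–Mayyas-style variable step-size recursion, which matches the *α*, *β*, *γ* parameters quoted in the captions above. The exact AAS-MTNOGA update is defined earlier in the chapter, so the particular form below is an assumption used only for illustration. Since *α* < 1, the contribution of *µ*(0) decays geometrically, which is why trajectories started from very different initial values meet:

```python
def aas_step_update(mu, p, e_k, e_km1,
                    alpha=0.995, beta=1.25e-2, gamma=0.985,
                    mu_min=1e-6, mu_max=1e-1):
    """One variable step-size update (assumed Aboulnasr-Mayyas-style form):
       p(k)    = beta * p(k-1) + (1 - beta) * e(k) * e(k-1)
       mu(k+1) = alpha * mu(k) + gamma * p(k)**2, clipped to [mu_min, mu_max]."""
    p = beta * p + (1.0 - beta) * e_k * e_km1
    mu = alpha * mu + gamma * p * p
    return min(max(mu, mu_min), mu_max), p

# Two trajectories from very different mu(0), driven by the same error sequence:
mu_a, p_a = 1e-1, 0.0
mu_b, p_b = 1e-4, 0.0
for _ in range(5000):
    mu_a, p_a = aas_step_update(mu_a, p_a, 0.05, 0.05)
    mu_b, p_b = aas_step_update(mu_b, p_b, 0.05, 0.05)
# For a constant error e, both approach the same equilibrium
# mu* = gamma * e**4 / (1 - alpha), independently of mu(0).
```

This mirrors the behaviour seen in Fig. 8 and Fig. 9, where trajectories starting three orders of magnitude apart collapse onto one equilibrium curve.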
