The third possible solution for the phase network (PhaseLrnNN44) has 10 neurons with a *logsig* transfer function on the hidden layer and uses an *sse* performance evaluation function. In this case the average evaluation error obtained for the testing data set is 1.18%, with a maximum evaluation error of 3.58%. Figure 17 presents the evaluation error obtained for both training and testing data sets.
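
The error figures quoted above are simple relative-error statistics. As a hedged illustration (the chapter does not spell out its exact error formula, so the percentage relative error used below is an assumption, and the sample values are placeholders), they can be computed as:

```python
import numpy as np

def evaluation_errors(predicted, reference):
    """Relative evaluation error (%) of NN outputs against reference values."""
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    # Percentage error of each predicted MVP sample with respect to its reference value.
    rel_err = 100.0 * np.abs(predicted - reference) / np.abs(reference)
    return rel_err.mean(), rel_err.max()

# Hypothetical example: predicted vs. reference MVP phase values for a testing set.
avg_err, max_err = evaluation_errors([0.98, 1.51, 2.03], [1.00, 1.50, 2.10])
print(f"average error: {avg_err:.2f}%  maximum error: {max_err:.2f}%")
```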

Analysing the results shown in figures 15, 16 and 17, the authors concluded that the optimal layer recurrent NN architecture for evaluating the phase of the magnetic vector potential is the PhaseLrnNN19 network structure.

Fig. 16. Absolute evaluation error for PhaseLrnNN19 network.

Fig. 17. Absolute evaluation error for PhaseLrnNN43 network.

**2.6 Discussions** 

Based on the maximum and average evaluation errors obtained for the implemented NN architectures, the authors concluded that NN structures with either a *tansig* or a *logsig* transfer function on the hidden layer should be used. Generally, a *tansig* transfer function provides much better training results than a *logsig* function, but it may give less accurate results for completely new input values. It was also observed that a higher number of neurons does not necessarily provide more accurate results: instead of predicting the MVP values for new problem geometries, the network tends to identify the closest training input/output pair.
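
For readers who want to reproduce this kind of comparison, the sketch below shows the two hidden-layer transfer functions and a toy fit with each, using scikit-learn's `tanh` and `logistic` activations in place of MATLAB's *tansig* and *logsig*. The data, network size and library choice are illustrative assumptions, not the networks trained in this chapter.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# MATLAB-style hidden-layer transfer functions referred to in the text.
def tansig(x):            # hyperbolic tangent sigmoid, output in (-1, 1)
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0   # identical to np.tanh(x)

def logsig(x):            # log-sigmoid, output in (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

# Toy comparison of the two activations on synthetic data (geometry -> target value).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 3))            # hypothetical geometry parameters
y = np.sin(2 * np.pi * X[:, 0]) + 0.5 * X[:, 1]     # hypothetical target quantity

for activation in ("tanh", "logistic"):             # tanh ~ tansig, logistic ~ logsig
    model = MLPRegressor(hidden_layer_sizes=(10,), activation=activation,
                         max_iter=5000, random_state=0).fit(X, y)
    print(activation, "training R^2:", round(model.score(X, y), 3))
```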

Studying the results provided by the identified optimal layer recurrent NN architectures (AmpLrnNN8 for MVP amplitude evaluation and PhaseLrnNN19 for MVP phase evaluation, respectively) and comparing them with the MVP solutions provided by the optimal feed-forward architectures (AmpFfNN2 and PhaseFfNN38, respectively), it can be observed that even though the studied problem does not necessarily require recurrent neural networks, more accurate solutions can be obtained by using them.
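
The structural difference between the two families can be sketched as follows: a feed-forward hidden layer sees only the current input, while a layer recurrent (Elman-type) hidden layer also receives its own previous state through feedback weights. The weights and inputs below are random placeholders, not the trained AmpLrnNN8 or PhaseLrnNN19 networks.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden = 3, 8          # 8 hidden neurons, mirroring AmpLrnNN8 (illustrative only)

# Shared input-to-hidden and hidden-to-output weights (random placeholders).
W_ih = rng.standard_normal((n_hidden, n_in))
W_ho = rng.standard_normal((1, n_hidden))
# Extra hidden-to-hidden feedback weights that only the layer recurrent network has.
W_hh = rng.standard_normal((n_hidden, n_hidden))

def feedforward(x):
    h = np.tanh(W_ih @ x)                  # hidden state depends on the current input only
    return (W_ho @ h).item()

def layer_recurrent(xs):
    h = np.zeros(n_hidden)
    outputs = []
    for x in xs:                           # hidden state is fed back through W_hh
        h = np.tanh(W_ih @ x + W_hh @ h)
        outputs.append((W_ho @ h).item())
    return outputs

xs = rng.uniform(size=(4, n_in))           # a short sequence of hypothetical geometry inputs
print([feedforward(x) for x in xs])        # each output independent of the previous inputs
print(layer_recurrent(xs))                 # outputs influenced by earlier inputs via feedback
```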

Comparing the results obtained with the implemented neural networks (figures 7 to 10 and 12 to 15) with those provided by the fuzzy logic block presented in (Satsios et al. 1999a, 1999b) (figure 5), an accuracy increase of 50% or more in determining the MVP amplitude and phase can be observed, depending on the implemented neural network architecture.

Once the magnetic vector potential is evaluated, the self and mutual impedance matrix, which describes the inductive coupling between the electrical power line and the underground pipeline, can be computed using the relationships presented in (Christoforidis et al. 2003, 2005). After that, the equivalent electrical circuit of the studied EPL-MP problem can be solved to obtain the induced AC voltage.
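
Purely as an illustration of that last step (the actual impedance relationships are those of Christoforidis et al. 2003, 2005), the sketch below solves a generic ladder-network nodal model of a pipeline excited by the EMFs induced in each section. The model granularity and all numerical values are assumptions made for the example.

```python
import numpy as np

def pipeline_node_voltages(z_series, y_shunt, emf):
    """Nodal solution of a simple ladder-network model of the pipeline.

    z_series : complex series impedance of each pipeline section (ohm)
    y_shunt  : complex coating/earthing admittance at each node (S)
    emf      : longitudinally induced EMF of each section (V), e.g. Zm * I_line * length
    Returns the complex node voltages along the pipeline.
    """
    n_nodes = len(z_series) + 1
    Y = np.zeros((n_nodes, n_nodes), dtype=complex)
    I = np.zeros(n_nodes, dtype=complex)
    for k, (z, e) in enumerate(zip(z_series, emf)):
        y = 1.0 / z
        # Norton equivalent of the EMF source in series with the section impedance.
        Y[k, k] += y
        Y[k + 1, k + 1] += y
        Y[k, k + 1] -= y
        Y[k + 1, k] -= y
        I[k] -= e * y
        I[k + 1] += e * y
    Y += np.diag(y_shunt)                      # shunt admittances to remote earth
    return np.linalg.solve(Y, I)

# Hypothetical 5-section example: uniform pipeline, 100 A inducing current,
# mutual impedance 0.05 + 0.15j ohm/km, 1 km sections.
Zm, I_line, length = 0.05 + 0.15j, 100.0, 1.0
emf = [Zm * I_line * length] * 5
V = pipeline_node_voltages([0.1 + 0.3j] * 5, [0.02 + 0.01j] * 6, emf)
print(np.round(np.abs(V), 2))                  # induced AC voltage magnitude per node (V)
```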
