**2.4 Results obtained with feed-forward neural networks**

To determine precisely the magnetic vector potential at each point of the studied domain, both the amplitude and the phase of the MVP have to be evaluated. Considering their different variation ranges, 10<sup>-6</sup>÷10<sup>-4</sup> [Wb/m] for the amplitude and -180°÷180° for the phase, the authors chose to implement two different neural networks, one for the amplitude and one for the phase, instead of a single NN that would provide both. The output values of the NN which returns the MVP amplitude were also scaled from 10<sup>-6</sup>÷10<sup>-4</sup> to 0.1÷100, so the final output values have to be multiplied by 10<sup>-5</sup> to obtain the actual MVP amplitude in [Wb/m].

The two networks also include a pre-processing of the four input parameters: *d*, the separation distance between the EPL and the MP (which varies between 70 m and 2000 m); *ρ*, the soil resistivity (which varies between 30 Ωm and 1000 Ωm); and *(x,y)*, the coordinates of the point where the MVP is to be evaluated (*x* between 0 m and 2100 m, respectively *y* between 0 m and -30 m). The pre-processing scales all of them into the range [-1,+1].

To train the different NN architectures, the Levenberg-Marquardt training method and the gradient descent with momentum weight learning rule were implemented. As training database, a set of MVP values evaluated with FEM and presented in (Satsios et al. 1999a, 1999b) was used. These MVP values were calculated at different points for 15 different problem geometries (soil resistivity, separation distance), obtaining a set of 37 input/output pairs used to train the proposed NN. Table 1 presents some of the training data sets.

| No | *d* [m] | *x* [m] | *y* [m] | *ρ* [Ω·m] | MVP Amp. 10<sup>-5</sup> [Wb/m] | MVP Phase [º] |
|----|---------|---------|---------|-----------|---------------------------------|---------------|
| 1  | 70   | 70      | -15    | 30   | 36.1 | -22.8  |
| 5  | 800  | 818.25  | -13.5  | 30   | 3.88 | -82.61 |
| 9  | 400  | 384.81  | -7.82  | 70   | 17.2 | -44.46 |
| 14 | 70   | 40      | 0      | 100  | 55.9 | -18.53 |
| 18 | 1000 | 1022.5  | 0      | 100  | 7.23 | -67.27 |
| 23 | 300  | 290.26  | -15.8  | 500  | 35.5 | -26.74 |
| 28 | 700  | 670     | -22.5  | 700  | 26   | -33.74 |
| 30 | 150  | 150.55  | -16.99 | 900  | 53   | -19.7  |
| 33 | 1500 | 1499.1  | -17.48 | 900  | 15.6 | -46.35 |
| 37 | 2000 | 2030    | -5     | 1000 | 12.2 | -52.73 |

Table 1. Training data sets.

The training is performed by applying the following *MatLab* function:

net = train(net, P, T) (9)

where:
net – is the neural network that has to be trained;
P – is a RxQ1 matrix of Q1 representative R-element input vectors;
T – is a SNxQ2 matrix of Q2 representative SN-element target vectors.

Once the NN are trained, they can automatically provide the corresponding output values for any combination of input parameters by applying the following *MatLab* function:

sim(net, X) (10)

where *net* is the implemented neural network and *X* is a set of input values.
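The scaling steps described above can be sketched as a short Python routine (the original implementation relies on MatLab's Neural Network Toolbox pre-processing; the helper names below are illustrative, not the authors' code):

```python
import numpy as np

def scale_inputs(v, lo, hi):
    """Min-max scale a value (or array) from [lo, hi] to [-1, +1]."""
    v = np.asarray(v, dtype=float)
    return 2.0 * (v - lo) / (hi - lo) - 1.0

def rescale_amplitude(nn_output):
    """The amplitude NN is trained on targets scaled from 1e-6..1e-4 Wb/m
    to 0.1..100, so its raw output must be multiplied by 1e-5."""
    return np.asarray(nn_output, dtype=float) * 1e-5

# Example: separation distance d varies between 70 m and 2000 m.
print(scale_inputs(70, 70, 2000))    # lower bound maps to -1.0
print(scale_inputs(2000, 70, 2000))  # upper bound maps to +1.0
print(rescale_amplitude(36.1))       # network output 36.1 -> amplitude in Wb/m
```

The same `scale_inputs` call applies to *ρ*, *x* and *y* with their respective bounds.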

Initially, feed-forward neural networks with one output layer and one hidden layer were tested (figure 6). Some of the obtained results were already presented in (Micu et al. 2009) and (Czumbil et al. 2009). In the following a more detailed study is presented.

Fig. 6. Implemented feed-forward network architecture.

The number of neurons in the hidden layer was varied from 5 to 30 with a step of 5 neurons. The transfer function of the output layer was set to *purelin* (the linear transfer function) and the transfer function on the hidden layer was varied between *tansig* (the hyperbolic tangent sigmoid function), *logsig* (the logarithmic sigmoid function) and *purelin*. Also, the performance evaluation function was varied between *mse* (mean square error), *msereg* (mean square error with regularization performance) and *sse* (sum squared error).
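The three hidden-layer transfer functions and the one-hidden-layer forward pass of figure 6 can be sketched in Python/NumPy as follows (the weights below are random placeholders, not the trained network parameters):

```python
import numpy as np

# NumPy equivalents of the MatLab transfer functions named in the text.
def tansig(n):   # hyperbolic tangent sigmoid
    return np.tanh(n)

def logsig(n):   # logarithmic sigmoid
    return 1.0 / (1.0 + np.exp(-n))

def purelin(n):  # linear
    return n

def feed_forward(x, W1, b1, W2, b2, hidden_fn=tansig):
    """One hidden layer (hidden_fn) followed by a linear output layer."""
    return purelin(W2 @ hidden_fn(W1 @ x + b1) + b2)

# Illustrative 4-input network with 10 hidden neurons.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(10, 4)), rng.normal(size=10)
W2, b2 = rng.normal(size=(1, 10)), rng.normal(size=1)
x = np.array([-0.5, 0.2, 0.8, -1.0])  # pre-scaled (d, rho, x, y)
print(feed_forward(x, W1, b1, W2, b2))
```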

After implementation and training, the proposed feed-forward networks were submitted to a testing process: the error between the output values generated by the NN and the magnetic vector potential evaluated with FEM for the training data sets was determined. Analysing the obtained errors, it was concluded that none of the tested architectures with the *purelin* transfer function on the hidden layer provided acceptable results: the average evaluation error was around 10% and the maximum error exceeded 25%. For all the other NN architectures, the evaluation error on the training data sets was negligible.
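The average and maximum evaluation errors quoted in this section can be computed with a few lines of Python, under the assumption (the chapter does not spell out the exact definition) that they are relative percentage errors against the FEM reference:

```python
import numpy as np

def evaluation_errors(nn_values, fem_values):
    """Average and maximum relative error [%] of NN outputs against
    the FEM reference values (assumed error definition)."""
    nn = np.asarray(nn_values, dtype=float)
    fem = np.asarray(fem_values, dtype=float)
    rel = np.abs(nn - fem) / np.abs(fem) * 100.0
    return rel.mean(), rel.max()

# Illustrative values only, not the authors' actual NN outputs.
avg, mx = evaluation_errors([36.0, 3.9, 17.0], [36.1, 3.88, 17.2])
print(f"average: {avg:.2f}%  maximum: {mx:.2f}%")
```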


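Varying the hidden-layer size (5 to 30 neurons), the hidden transfer function (*tansig*, *logsig*, *purelin*) and the performance function (*mse*, *msereg*, *sse*) amounts to a grid search over 6 × 3 × 3 = 54 candidate architectures. A minimal sketch of enumerating and ranking them, with a hypothetical `train_and_score` standing in for the actual MatLab training runs:

```python
from itertools import product

neurons = [5, 10, 15, 20, 25, 30]
transfer = ["tansig", "logsig", "purelin"]
perf_fn = ["mse", "msereg", "sse"]

def train_and_score(n, tf, pf):
    # Hypothetical stand-in: in the original study each configuration is
    # trained in MatLab and scored by its testing-set error [%].
    # A dummy score keeps the sketch self-contained and runnable.
    return (abs(n - 10) * 0.1
            + (0.0 if tf == "tansig" else 0.5)
            + (0.0 if pf == "mse" else 0.2))

grid = list(product(neurons, transfer, perf_fn))
best = min(grid, key=lambda cfg: train_and_score(*cfg))
print(len(grid), best)  # 54 configurations; best under the dummy score
```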
Since the main goal was the implementation of a suitable NN that would provide accurate solutions for any new problem geometry, a second testing database was used to select the optimal NN architecture. This second database is a totally new set of data, which was not applied to the NN during the training process (table 2).

Table 2. Testing data sets.

Analyzing the average and maximum evaluation errors obtained for the testing data sets, in the case of the neural network that evaluates the amplitude of the magnetic vector potential, two possible NN architectures were determined (AmpFfNN2 and AmpFfNN7). The first one (AmpFfNN2) has 10 neurons with the *tansig* transfer function on the hidden layer and uses the *mse* performance evaluation function. In this case the obtained average evaluation error for the testing data set is 0.71%, with a maximum achieved evaluation error of 1.72%. Figure 7 presents the evaluation error obtained for both training and testing data sets.

Fig. 7. Absolute evaluation error for AmpFfNN2 network.

The second possible solution (AmpFfNN7) for the amplitude network has 5 neurons with the *logsig* transfer function on the hidden layer and uses the *mse* performance evaluation function. In this case the obtained average evaluation error for the testing data set is 0.77%, with a maximum achieved evaluation error of 2.50%. Figure 8 presents the evaluation error obtained for both training and testing data sets.

Fig. 8. Absolute evaluation error for AmpFfNN7 network.

Comparing the results for both possible amplitude NN architectures, it can be observed that AmpFfNN2 provides better results for both training and testing data sets.

Based on the obtained maximum and average evaluation errors for the neural networks implemented for MVP phase evaluation, two possible optimal NN architectures were determined (PhaseFfNN38 and PhaseFfNN43). The first one (PhaseFfNN38) has 10 neurons with the *tansig* transfer function on the hidden layer and uses the *sse* performance evaluation function. In this case the obtained average evaluation error for the testing data set is 1.55%, with a maximum achieved evaluation error of 4.02%. Figure 9 presents the evaluation error obtained for both training and testing data sets.

Fig. 9. Absolute evaluation error for PhaseFfNN38 network.

The second possible solution (PhaseFfNN43) for the phase network has 5 neurons with the *logsig* transfer function on the hidden layer and uses the *sse* performance evaluation function. In this case the obtained average evaluation error for the testing data set is 1.19%, with a maximum achieved evaluation error of 5.47%. Figure 10 presents the evaluation error obtained for both training and testing data sets.

Fig. 10. Absolute evaluation error for PhaseFfNN43 network.

Comparing the results for both possible phase NN architectures, it can be observed that PhaseFfNN43 generally provides better results for the testing data sets. However, considering that for the training data sets PhaseFfNN43 provides evaluation errors in the range of 0.25 degrees while PhaseFfNN38 provides almost non-existent evaluation errors, PhaseFfNN38 could be considered the optimal NN solution.

**2.5 Results obtained with recurrent neural networks**

To find the best NN solution that would provide the most accurate results, different layer recurrent architectures were also tested. Some of the results were presented in (Micu et al. 2011), but a more detailed study is given in the following.

A layer recurrent neural network with one output layer and one hidden layer is considered (figure 11). The number of neurons in the hidden layer was varied from 5 to 30 with a step of 5 neurons. The transfer function of the output layer was set to *purelin* (the linear transfer function) and the transfer function on the hidden layer was varied between *tansig* (the hyperbolic tangent sigmoid function), *logsig* (the logarithmic sigmoid function) and *purelin*.
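A layer recurrent network differs from the feed-forward one in that the hidden layer also receives its own previous output as feedback (as in MatLab's *layrecnet*). A minimal NumPy sketch of one forward step, with illustrative random weights rather than trained parameters:

```python
import numpy as np

def layer_recurrent_step(x, h_prev, Wx, Wh, b1, W2, b2):
    """One step of a layer recurrent network: the hidden layer receives
    the current input x plus its own previous output h_prev (Elman-style
    feedback); tansig hidden transfer, purelin output layer."""
    h = np.tanh(Wx @ x + Wh @ h_prev + b1)
    y = W2 @ h + b2           # linear (purelin) output layer
    return y, h

rng = np.random.default_rng(1)
n_in, n_hid = 4, 5            # 4 inputs (d, rho, x, y); 5 hidden neurons
Wx = rng.normal(size=(n_hid, n_in))
Wh = rng.normal(size=(n_hid, n_hid))  # feedback weights
b1 = rng.normal(size=n_hid)
W2 = rng.normal(size=(1, n_hid))
b2 = rng.normal(size=1)

h = np.zeros(n_hid)                   # initial feedback state
x = np.array([0.3, -0.7, 0.1, 0.9])   # pre-scaled inputs
y, h = layer_recurrent_step(x, h, Wx, Wh, b1, W2, b2)
print(y.shape, h.shape)
```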
