**3.3 Tested feed-forward architectures**

To identify the optimal solution for each of the proposed neural networks, different feed-forward architectures with one output layer and two hidden layers were implemented (figure 19). Based on the experience gained while implementing the neural network for the MVP calculation, the transfer function was chosen to be *purelin* (linear function) for the output layer and *tansig* (hyperbolic tangent sigmoid function) for the hidden layers. The number of neurons was varied from 5 to 30 in the first hidden layer and from 5 to 20 in the second hidden layer. The performance evaluation function was set to *mse* (mean square error), the gradient descent with momentum weight learning rule was selected, and the neural networks were trained using the Levenberg-Marquardt method.

Fig. 19. Implemented feed-forward network architecture.
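The *tansig*/*tansig*/*purelin* topology described above can be sketched in plain numpy. This is an illustrative sketch only: the chapter's networks were trained with the Levenberg-Marquardt method in a neural network toolbox, whereas here, for brevity, training uses gradient descent with momentum (the learning rule mentioned above); the layer sizes, data, and target function are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def init(n_in, n_h1, n_h2, n_out=1):
    # Two hidden layers + one linear output layer, small random weights
    return {
        "W1": rng.normal(0, 0.5, (n_h1, n_in)), "b1": np.zeros((n_h1, 1)),
        "W2": rng.normal(0, 0.5, (n_h2, n_h1)), "b2": np.zeros((n_h2, 1)),
        "W3": rng.normal(0, 0.5, (n_out, n_h2)), "b3": np.zeros((n_out, 1)),
    }

def forward(p, X):
    # tansig (tanh) hidden layers, purelin (identity) output layer
    a1 = np.tanh(p["W1"] @ X + p["b1"])
    a2 = np.tanh(p["W2"] @ a1 + p["b2"])
    return p["W3"] @ a2 + p["b3"], (a1, a2)

def train(p, X, T, lr=0.01, mom=0.9, epochs=5000):
    # Backpropagation with the gradient-descent-with-momentum update rule
    vel = {k: np.zeros_like(v) for k, v in p.items()}
    n = X.shape[1]
    for _ in range(epochs):
        Y, (a1, a2) = forward(p, X)
        g3 = (Y - T) / n                       # mse gradient seed
        gW3, gb3 = g3 @ a2.T, g3.sum(1, keepdims=True)
        d2 = (p["W3"].T @ g3) * (1 - a2**2)    # tanh' = 1 - tanh^2
        gW2, gb2 = d2 @ a1.T, d2.sum(1, keepdims=True)
        d1 = (p["W2"].T @ d2) * (1 - a1**2)
        gW1, gb1 = d1 @ X.T, d1.sum(1, keepdims=True)
        grads = {"W1": gW1, "b1": gb1, "W2": gW2,
                 "b2": gb2, "W3": gW3, "b3": gb3}
        for k in p:                            # momentum update
            vel[k] = mom * vel[k] - lr * grads[k]
            p[k] += vel[k]
    return p

# Toy regression target standing in for one impedance component
X = rng.uniform(-1, 1, (3, 200))               # 3 geometry inputs, 200 samples
T = np.tanh(X[0:1] + 0.5 * X[1:2])             # smooth synthetic target
p = train(init(3, 10, 5), X, T)
mse = np.mean((forward(p, X)[0] - T) ** 2)
```

Sweeping `n_h1` over 5–30 and `n_h2` over 5–20 and retaining the lowest-error network reproduces the architecture search described above.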

Artificial Intelligence Techniques Applied to Electromagnetic Interference Problems Between Power Lines and Metal Pipelines

The training process took around 1 to 5 minutes for each implemented neural network. Once the proposed neural network architectures were trained, the error between the output values provided by each NN and the finite element solutions was calculated and compared in order to identify the optimal solution.
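The selection criterion just described, average and maximum percentage error of the NN outputs against the finite element reference solutions, can be computed as follows (an illustrative sketch; the function and variable names are ours, not from the chapter):

```python
import numpy as np

def percentage_errors(nn_out, fem_ref):
    """Average and maximum percentage error of NN predictions
    relative to finite-element reference solutions."""
    nn_out = np.asarray(nn_out, dtype=float)
    fem_ref = np.asarray(fem_ref, dtype=float)
    err = 100.0 * np.abs(nn_out - fem_ref) / np.abs(fem_ref)
    return err.mean(), err.max()

# Hypothetical NN outputs vs. FEM values, each off by 1%
avg, peak = percentage_errors([1.01, 2.02, 2.97], [1.0, 2.0, 3.0])
```

Computing this pair for the training and testing data sets of every candidate architecture gives the figures reported in the paragraphs below.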

In the case of the NN used to evaluate the real part of the self-impedances, the optimal NN1 architecture is a feed-forward NN with two hidden layers: 30 neurons in the first hidden layer and 10 neurons in the second hidden layer. The average error is 0.048% for the training data set and 0.064% for the testing data set, with a maximum achieved error of 0.8% and 0.3% for the training and testing data sets, respectively. Figure 20 presents the error distribution for the training and testing problem geometry data sets.

Fig. 20. Percentage error distribution for Real Part NN1.

For the NN used to evaluate the imaginary part of the self-impedances, the optimal NN1 architecture is a feed-forward NN with two hidden layers: 20 neurons in the first hidden layer and 20 neurons in the second hidden layer. The average percentage error is 0.14% for the training data set and 0.129% for the testing data set, with a maximum achieved percentage error of 5.24% and 0.85% for the training and testing data sets, respectively. Figure 21 presents the percentage error distribution for the training and testing problem geometry data sets.

Fig. 21. Error distribution for Imaginary Part NN1.

In the case of the NN used to evaluate the real part of the mutual impedances between EPL conductors (phase wires and sky wires), the optimal NN3 architecture is a feed-forward NN with two hidden layers: 20 neurons in the first hidden layer and 20 neurons in the second hidden layer. The average error is 0.014% for the training data set and 0.034% for the testing data set, with a maximum achieved error of 3.12% and 0.14% for the training and testing data sets, respectively. Figure 22 presents the error distribution for the training and testing problem geometry data sets.

Fig. 22. Error distribution for Real Part NN3.

270 Recurrent Neural Networks and Soft Computing


For the NN used to evaluate the imaginary part of the mutual impedances between EPL conductors (phase wires and sky wires), the optimal NN3 architecture is a feed-forward NN with two hidden layers: 30 neurons in the first hidden layer and 20 neurons in the second hidden layer. The average error is 0.073% for the training data set and 0.097% for the testing data set, with a maximum achieved error of 2.98% and 0.46% for the training and testing data sets, respectively. Figure 23 presents the error distribution for the training and testing problem geometry data sets.

Unfortunately, in the case of the NN used to evaluate the real part of the mutual impedances between the MP and the other conductors, none of the tested NN architectures provided acceptable results. This was caused by the fact that the real part of the mutual impedances between the MP and the other conductors varies over a very wide range, from 1E-11 to 1E-6.

Fig. 23. Error distribution for Imaginary Part NN3.

After a complete analysis of the real and imaginary parts of the mutual impedance between the MP and the other conductors, another approach was used: two NNs were implemented to evaluate the amplitude and argument of the mutual impedances instead of their real and imaginary parts.
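The change of representation is a straightforward polar decomposition: the two networks target |Z| and arg(Z), and their outputs are later recombined into real and imaginary parts. A minimal sketch (values are illustrative):

```python
import cmath

def to_polar(z):
    # Training targets for the two NN2 networks: amplitude and argument
    return abs(z), cmath.phase(z)

def from_polar(amp, arg):
    # Recombine the two NN outputs into a complex mutual impedance
    return cmath.rect(amp, arg)

# A mutual impedance value in the troublesome magnitude range (illustrative)
z = complex(3e-9, 4e-9)
amp, arg = to_polar(z)
z_back = from_polar(amp, arg)
```

In practice the amplitude could additionally be log-scaled before training to tame the 1E-11 to 1E-6 spread; that normalization choice is our assumption and is not stated in the chapter.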

In the case of the NN used to evaluate the amplitude of the mutual impedances between the MP and the other conductors, the optimal NN2 architecture was identified as a feed-forward NN with two hidden layers: 30 neurons in the first hidden layer and 25 neurons in the second hidden layer. The average error is 0.066% for the training data set and 0.087% for the testing data set, with a maximum achieved error of 17.95% and 0.43% for the training and testing data sets, respectively. Figure 24 presents the error distribution for the training and testing problem geometry data sets.

Fig. 24. Error distribution for Amplitude NN2.

For the NN used to evaluate the argument of the mutual impedances between the MP and the other conductors, the optimal NN2 architecture is a feed-forward NN with two hidden layers: 20 neurons in the first hidden layer and 20 neurons in the second hidden layer. The average error is 0.249% for the training data set and 0.313% for the testing data set, with a maximum achieved error of 6.71% and 1.43% for the training and testing data sets, respectively. Figure 25 presents the error distribution for the training and testing problem geometry data sets.

Fig. 25. Error distribution for Argument NN2.

After identifying the optimal architectures, these two NNs were unified in a virtual black box in order to evaluate the real and imaginary parts of the mutual impedances. This unification procedure has an unwanted side effect: it changes the final evaluation error of the complex mutual impedance. The average error thus becomes 0.665% for the training data set and 0.407% for the testing data set, with a maximum achieved error of 18.728% and 1.107% for the training and testing data sets, respectively. Figure 26 presents the global evaluation error distribution of the complex mutual impedance for both the training and testing problem geometry data sets.

Fig. 26. Global error distribution for NN2 optimal networks.

The authors also implemented and tested some layer-recurrent neural network architectures. Unfortunately, because of the large training database with very different output values, the training process proved to be very time consuming (more than one hour), and the obtained results had the same accuracy as the previous feed-forward networks.

**4. Conclusions**

In this chapter the authors implement neural network based artificial intelligence techniques in the study of electromagnetic interference problems (right-of-way EPL-MP), focusing on finding an easier method to identify the optimal solution.

To solve the differential equation which describes the couplings between an EPL and a nearby MP, a finite element method is usually used. This FEM calculation requires excessive computational time, especially if different problem geometries of the same interference problem have to be studied.

In order to reduce the computation time, the authors proposed two neural network based artificial intelligence techniques to scale the results from a set of known problem geometries to any new problem geometry. The proposed neural networks were implemented for specific EPL-MP interference problems. The first one evaluates the magnetic vector potential in the case of a phase-to-earth EPL fault, and the second one determines the self and mutual impedance matrix in the case of a three vertical layer earth.

The results obtained with the proposed neural network solutions proved to be very accurate. Thus, as shown, a neural network based solution for studying EPL-MP interference problems can be a very effective one, especially if we take into account the fact that the solutions are obtained instantaneously once the networks are properly trained.

It has also been shown that, even if there is a special requirement to use recurrent neural networks, these NN architectures could not provide more accurate solutions than the basic feed-forward structures.

**5. Acknowledgment**

The work was supported by the TE\_253/2010\_CNCSIS project – "Modelling, Prediction and Design Solutions, with Maximum Effectiveness, for Reducing the Impact of Stray Currents on Underground Metallic Gas Pipelines", No. 34/2010.
