**2.3 MATLAB implementation of the proposed neural networks**

To identify the optimal neural network solution, different feed-forward and layer-recurrent architectures were evaluated. These architectures were implemented with the *Neural Network Toolbox* of the *MATLAB* software application. This software was chosen because it enables the creation of almost all types of NN, from perceptrons (single-layer networks used for classification) to more complex feed-forward or recurrent architectures. To create a feed-forward neural network in *MATLAB*, the following function can be called from the command line:

$$\text{net} = \text{newff}(\text{P}, \text{T}, \text{S}, \text{TF}, \text{BTF}, \text{BLF}, \text{PF}) \tag{7}$$

where:

- *P* – matrix of input (training) vectors;
- *T* – matrix of target (output) vectors;
- *S* – vector containing the sizes (number of neurons) of the hidden layers;
- *TF* – cell array with the transfer function of each layer (e.g. *tansig*, *logsig*, *purelin*);
- *BTF* – backpropagation network training function (e.g. *trainlm*);
- *BLF* – backpropagation weight/bias learning function (e.g. *learngdm*);
- *PF* – performance evaluation function (e.g. *mse*).
A similar function, with the same parameters, can be called to create a layer-recurrent neural network:

$$\text{net} = \text{newlrn}(\text{P}, \text{T}, \text{S}, \text{TF}, \text{BTF}, \text{BLF}, \text{PF}) \tag{8}$$

Once a neural network is created, it can be trained with the following *MATLAB* function:

$$\text{train}(\text{net}, \text{P}, \text{T})\tag{9}$$
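By way of illustration, the calls in (7)–(9) might be combined as follows. This is only a sketch: the matrix sizes, the random placeholder data and the choice of 10 hidden neurons are assumptions, not values from the study.

```matlab
% Placeholder training data; in the study these come from FEM evaluations.
P = rand(4, 37);   % 4 inputs per case (geometry and soil parameters), 37 cases
T = rand(1, 37);   % 1 output per case (scaled MVP amplitude)

% Feed-forward net: 10 hidden neurons (assumed), tansig hidden layer,
% purelin output layer, Levenberg-Marquardt training, gradient descent
% with momentum learning rule, mean square error performance function.
net = newff(P, T, 10, {'tansig', 'purelin'}, 'trainlm', 'learngdm', 'mse');

% The layer-recurrent variant of (8) takes the same arguments:
% net = newlrn(P, T, 10, {'tansig', 'purelin'}, 'trainlm', 'learngdm', 'mse');

net = train(net, P, T);   % train on the input/target pairs, as in (9)
```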

*Artificial Intelligence Techniques Applied to Electromagnetic Interference Problems Between Power Lines and Metal Pipelines*

Two separate NN were implemented, one providing the MVP amplitude and one the MVP phase, instead of implementing a single NN that would provide both amplitude and phase. Also, the output values for the NN which returns the MVP amplitude were scaled from 10<sup>-6</sup>÷10<sup>-4</sup> to 0.1÷100, so the final output values have to be multiplied by 10<sup>-5</sup> to obtain the actual MVP amplitude value in [Wb/m].

Initially, feed-forward neural networks with one output layer and one hidden layer were tested (figure 6). Some of the obtained results were already presented in (Micu et al. 2009) and (Czumbil et al. 2009). In the following, a more detailed study is presented.

Fig. 6. Implemented feed-forward network architecture.

The number of neurons in the hidden layer was varied from 5 to 30 with a step of 5 neurons. The transfer function of the output layer was set to *purelin* (the linear transfer function), while the transfer function of the hidden layer was varied between *tansig* (the hyperbolic tangent sigmoid function), *logsig* (the logarithmic sigmoid function) and *purelin*. Also, the performance evaluation function was varied between *mse* (mean square error), *msereg* (mean square error with regularization) and *sse* (sum squared error).

After implementation and training, the proposed feed-forward networks were submitted to a testing process. The error between the output values generated by the NN and the magnetic vector potential evaluated with FEM for the training data sets was determined. Analysing the obtained errors, it was concluded that none of the tested architectures with the *purelin* transfer function on the hidden layer provided acceptable results: the average evaluation error was around 10% and the maximum error was greater than 25%. For all the other NN architectures, the evaluation error for the training data sets was negligible.

| No | *d* [m] | *x* [m] | *y* [m] | *ρ* [Ω·m] | MVP Amp. ×10<sup>-5</sup> [Wb/m] | MVP Phase [°] |
|----|---------|---------|---------|-----------|----------------------------------|---------------|
| 1  | 70      | 40      | -15     | 100       | 53.8                             | -19.34        |
| 2  | 70      | 81.66   | -27.03  | 30        | 32.90                            | -25.57        |
| 3  | 400     | 392.25  | -25.56  | 70        | 16.7                             | -46.05        |
| 4  | 300     | 281.66  | -27.03  | 500       | 37.5                             | -25.93        |
| 5  | 700     | 690.36  | -15.80  | 700       | 25.6                             | -34.07        |
| 6  | 1000    | 1007.50 | 0       | 70        | 5.68                             | -72.98        |
| 7  | 1000    | 1015    | -30     | 100       | 7.16                             | -69.22        |
| 8  | 1500    | 1524.77 | -6.93   | 900       | 15.40                            | -46.56        |

Table 2. Testing data sets.
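The average and maximum evaluation errors quoted above can be computed along these lines, using the toolbox's *sim* function; every variable name here is a placeholder.

```matlab
% Placeholder data: P_test holds the test input columns, A_fem the FEM
% reference amplitudes in [Wb/m] evaluated at the same points.
A_nn    = 1e-5 * sim(net, P_test);        % NN amplitude, rescaled to [Wb/m]
rel_err = abs(A_nn - A_fem) ./ abs(A_fem);
avg_err = 100 * mean(rel_err);            % average evaluation error [%]
max_err = 100 * max(rel_err);             % maximum evaluation error [%]
```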

These two functions also provide a pre-processing of the four input parameters: *d*, the separation distance between EPL and MP (which varies between 70 m and 2000 m); *ρ*, the soil resistivity (which varies between 30 Ωm and 1000 Ωm); and *(x,y)*, the coordinates of the point where the MVP is to be evaluated (*x* varies between 0 m and 2100 m, and *y* between 0 m and -30 m). All four are scaled into the range [-1,+1].
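The [-1,+1] scaling described above amounts to a min-max normalization over the stated parameter ranges. A minimal sketch, in which the ordering of the four inputs (*d*, *ρ*, *x*, *y*) is an assumption:

```matlab
% Lower and upper bounds taken from the text: d, rho, x, y (ordering assumed).
lo = [  70;   30;    0; -30];
hi = [2000; 1000; 2100;   0];
p  = [400; 500; 392.25; -25.56];          % one example input vector
p_scaled = 2 * (p - lo) ./ (hi - lo) - 1; % each component now in [-1, +1]
```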

To train the different NN architectures, the Levenberg-Marquardt training method and the gradient descent with momentum weight learning rule were used. As training database, a set of MVP values evaluated with FEM and presented in (Satsios et al. 1999a, 1999b) was used. These MVP values were calculated at different points for up to 15 different problem geometries (soil resistivity, separation distance), obtaining a set of 37 input/output pairs used to train the proposed NN. Table 1 presents some of the training data sets.


Table 1. Training data sets.

Once the NN are trained, they can automatically provide the corresponding output values for any combination of input parameters by applying the following *MATLAB* function:

$$\text{sim}(\text{net}, \text{X})\tag{10}$$

where *net* is the implemented neural network and *X* is a set of input values.
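For instance, a single evaluation with (10) might look like the following, where the input ordering (*d*, *ρ*, *x*, *y*) is an assumption and *net* is a trained amplitude network:

```matlab
X = [400; 500; 392.25; -25.56];   % d = 400 m, rho = 500 Ohm*m, point (x, y)
A = 1e-5 * sim(net, X);           % MVP amplitude rescaled to [Wb/m]
```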
