*Control Theory in Engineering*

**3.5 Feedforward structure**

The feedforward structure was the first released structure for neural networks and is also the simplest form. It begins with X input nodes, each of which receives one value. The number of input nodes is always equal to the size of one individual element in the data set. For example, when using an MLP for image processing, if the image is 20 × 20 pixels, there would need to be 400 input neurons to account for each pixel. This helps to determine the size of the network, and because of this all data being fed into the network must be of the same size.

As these values go through the network, an input pattern from the data set is passed through the input layer, and each value propagates through the layers of the network, with the weights being applied to it, until the output values are produced. Typically, the output values are a set of probabilities for each output option being true.

While this process occurs during the iterations through the training set, the set of output values is compared to the expected output set, and a set of error values is calculated describing how much each node contributes to the overall error. This error calculation occurs across all layers in the network, because inevitably all layers and their respective nodes contribute a small amount to the overall error. The calculation therefore proceeds through each layer and each input value to determine how much each connection has contributed to the overall error. This is then used in the backpropagation function to alter the weights accordingly.

**3.6 Training the model**

Training the model is the most crucial part of the process. First, the weighting values are all initialised with random numbers between −1 and +1. Then the training set, or "training data", is imported.

Next begins the iterative process, which each time presents one piece of training data to the network; this propagates through the network, and an output is produced. The output is then compared to the labelled output of the training data. Following this comparison, the training algorithm propagates backwards through the network, a step known as "backpropagation", and alters the weighting values accordingly. This is repeated a finite number of times, specified by the person running the training model, until it is completed and the values are stable.

*3.6.1 Training algorithm*

The backpropagation algorithm is a supervised learning technique applied to train neural networks. It works by propagating the error backwards through the network and internally altering the weight values between each node to improve the quality of the output by minimising the error in subsequent runs. This is done in the "training" phase, where the network is shown a "training set" consisting of a series of data items, each with its correct output attached. These are fed through the network, and backpropagation alters the weights to make it work for all data types and their corresponding outputs.

Backpropagation works by receiving the error value for each node, as calculated in the feedforward propagation, and utilises these error values while propagating through the network to adjust the node weightings positively or negatively in accordance with the error values per node.

This is explained in more detail in [11]: "Once the error signal for each node has been determined, the errors are then used by the nodes to update the values for each connection weight until the network converges to a state that allows all the training patterns to be encoded. The Backpropagation algorithm looks for the minimum value of the error function in weight space using a technique called gradient descent."
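The input-layer sizing rule from Section 3.5 (one input neuron per pixel, so a 20 × 20 image needs exactly 400 input values) can be sketched in a few lines of Python; the variable names here are illustrative, not taken from the chapter:

```python
# Illustrative only: a 20 x 20 "image" must be flattened into exactly
# 400 input values, one per input neuron, before entering the network.
width, height = 20, 20
image = [[0.0] * width for _ in range(height)]  # placeholder pixel grid

# Flatten the 2-D grid into a single input vector for the input layer.
input_vector = [pixel for row in image for pixel in row]

# The network size is fixed by this length, which is why every data
# item fed into the network must be of the same size.
n_input_neurons = len(input_vector)
```

Any image of a different size would produce a vector of a different length and could not be presented to the same input layer.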


The standard pseudocode for training a neural network with the backpropagation method, which can be adapted to any language, is as follows (**Figure 4**):
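As a runnable complement to that pseudocode, the following minimal Python sketch implements the steps described in Sections 3.5–3.6: weights initialised randomly between −1 and +1, a feedforward pass, per-node error signals, and backward weight updates. The network shape, learning rate, sigmoid activation, and the AND-gate training set are illustrative assumptions, not the chapter's own code:

```python
import math
import random

random.seed(42)  # fixed seed so the illustrative run is repeatable

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class MLP:
    """Minimal one-hidden-layer perceptron trained by backpropagation."""

    def __init__(self, n_in, n_hidden, n_out):
        # Weights initialised with random numbers between -1 and +1,
        # as described in Section 3.6 (the extra column is a bias weight).
        self.w1 = [[random.uniform(-1, 1) for _ in range(n_in + 1)]
                   for _ in range(n_hidden)]
        self.w2 = [[random.uniform(-1, 1) for _ in range(n_hidden + 1)]
                   for _ in range(n_out)]

    def forward(self, x):
        # Feedforward pass: weights applied layer by layer (Section 3.5).
        self.h = [sigmoid(sum(w * v for w, v in zip(ws, x + [1.0])))
                  for ws in self.w1]
        self.o = [sigmoid(sum(w * v for w, v in zip(ws, self.h + [1.0])))
                  for ws in self.w2]
        return self.o

    def train_step(self, x, target, lr=0.5):
        out = self.forward(x)
        # Error signal for each output node (squared error through sigmoid).
        d_out = [(t - o) * o * (1 - o) for o, t in zip(out, target)]
        # Propagate the error signal backwards to the hidden layer.
        d_hid = [h * (1 - h) * sum(d * ws[j] for d, ws in zip(d_out, self.w2))
                 for j, h in enumerate(self.h)]
        # Adjust each weight in accordance with its node's error value.
        for i, d in enumerate(d_out):
            for j, h in enumerate(self.h + [1.0]):
                self.w2[i][j] += lr * d * h
        for i, d in enumerate(d_hid):
            for j, v in enumerate(x + [1.0]):
                self.w1[i][j] += lr * d * v
        return sum((t - o) ** 2 for o, t in zip(out, target))

# Usage: a deliberately simple labelled training set (the AND function).
data = [([0.0, 0.0], [0.0]), ([0.0, 1.0], [0.0]),
        ([1.0, 0.0], [0.0]), ([1.0, 1.0], [1.0])]
net = MLP(2, 2, 1)
initial_error = sum(net.train_step(x, t) for x, t in data)
for _ in range(5000):            # a finite number of repetitions
    for x, t in data:            # one piece of training data at a time
        net.train_step(x, t)
final_error = sum(net.train_step(x, t) for x, t in data)
```

After training, the total error over the training set should have fallen substantially, illustrating the convergence toward stable weight values described above.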
