These projects both used very large-scale NNs to do the computation efficiently; thus, both were required to use an external server to do the computation. This was a problem as the device could only be used indoors with a suitable connection to the server. Despite the large-scale network, it could only be implemented on a small-scale track to ensure there was always an adequate connection to the server.

**3. Computational foundations**

### **3.1 Machine learning**

Machine learning is one of the main forms of artificial intelligence; it is an area concerned with how machines can learn to recognise patterns in data and thus enable them to predict future outcomes based on previous patterns. It is a branch of artificial intelligence which often interlaces with a wide variety of mathematical functions and pattern recognition techniques [10]. Some methods of machine learning include decision trees, K-nearest neighbours (KNN) and, most famously, ANNs.

### **3.2 Artificial neural networks**

An artificial neural network is a system designed to replicate the synaptic connections within the human brain and their associated learning processes. The human brain consists of many neurons (cells), each of which can process information independently. These cells have three main parts: the cell body, which contains the nucleus; the "dendrites", which receive information; and a long axon connected to other cells' dendrites for outputting information. Information passed between neurons in the brain through the dendrites takes a form similar to that of an electrical signal. When the signal level reaches a required value (which is set within the individual nucleus), the neuron activates, and information is sent along its axon to other neurons [11]. This process can be replicated in a piece of written software for modelling and problem-solving purposes.

Another similarity that can be achieved through an ANN is processing information via interconnected nodes using simple signals. Each link between nodes has its own numerical "weight" when coded, and this is the primary means of learning, which is achieved by altering these weight values towards an optimum value to gain a connection with a high success rate.

ANNs are code-based representations of a biological neural network; they are incredibly adaptable and have a variety of different structures and training techniques which can be used to adapt the network to the most efficient design for the problem at hand. NN applications are mainly used for data mining purposes; however, they are also adaptable to many other computational problems. One example of this is using a particular model of NN, known as the multilayer perceptron (MLP), for image recognition to recognise the difference between things such as handwriting styles, animal types, faces and many others.

### **3.3 Learning process in ANN**

There are two types of learning strategies available for the training of neural networks:

• Supervised learning: within this, the concept of a "teacher" is present, and its role is to compare the network output with the correct output and make "adjustments" if necessary. One of the most well-known examples is the "backpropagation" algorithm, which is a gradient-based approach minimising the error level between the output and the desired results [11]. This is highly effective when your training data is of a high quality and well labelled. However, it carries performance risks should there be any anomalies within your training set that you are unaware of.

• Unsupervised learning: within this, there is no concept of a "teacher", and thus the network learns by examining the input data over multiple iterations until any inherent properties are discovered [12]. This is much more effective over a longer period of time and can be very efficient when done correctly.

The learning process of ANNs is also known as "training", where a set of test values or a "training set" is passed through the ANN, its output values are compared to those which are expected, and the weights are then altered accordingly for the next iterative trial. To compare this with the biological NN, the ANN's nodes are trained to fire in response to a particular input pattern of bits. Each input has its own value and an associated weight, and its contribution is calculated by multiplying the value by its assigned weight. If the sum of all these weighted inputs exceeds the activation threshold, the neuron activates; this is the same in ANNs. Once these weights are accurate for many cases, then whenever the same pattern of input is presented to the ANN, its associated output will be given. This is a typical process of machine learning. When not in training mode, the ANN processes the inputs as normal, and if training was sufficient, the network should still produce the correct output.
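To make the weighted-sum-and-threshold behaviour just described concrete, the following is a minimal sketch of a single artificial node in Python. It is not taken from the project code; the input values, weights and threshold are made-up illustrative numbers:

```python
# Minimal sketch of a single artificial neuron (illustrative only).
# The inputs, weights and threshold below are made-up example values.

def neuron_fires(inputs, weights, threshold):
    """Return True if the weighted sum of the inputs reaches the activation threshold."""
    weighted_sum = sum(value * weight for value, weight in zip(inputs, weights))
    return weighted_sum >= threshold

# Example: three inputs, each multiplied by its assigned weight.
inputs = [1.0, 0.5, 0.2]
weights = [0.4, -0.3, 0.9]   # values that training would adjust
threshold = 0.3              # activation threshold of this node

print(neuron_fires(inputs, weights, threshold))  # True: 0.4 - 0.15 + 0.18 = 0.43 >= 0.3
```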

For this project, supervised learning was used as we had an abundance of very high-quality training data to work with, all of which was already labelled.

### **3.4 Multilayer perceptrons**

Multilayer perceptrons (also known as backpropagation neural networks) are a type of neural network. As seen in **Figure 3**, they consist of a feedforward network structure including one or more hidden layers and a non-linear activation function (e.g. sigmoid) and utilise a supervised learning approach to training using a method called "backpropagation". MLPs are the most commonly used form of ANNs since they have a very flexible form, meaning they are adaptable to many different circumstances and problems [13].

**Figure 3.** *Backpropagation neural network with one hidden layer [14].*

#### **3.5 Feedforward structure**

The feedforward structure was the earliest and simplest form of neural network structure. It begins with a number of input nodes, each of which receives one value. The number of input nodes is always equal to the size of one individual element in the data set. For example, when using an MLP for image processing, if the image is 20 × 20 pixels, there would need to be 400 input neurons to account for each pixel. This helps to determine the size of the network, and because of this all data being fed into the network must be of the same size.

An input pattern from the data set is presented to the input layer, and each value then propagates through the layers of the network, with the connection weights applied along the way, until the output values are produced. Typically, the output values are a set of probabilities, one for each possible output, indicating how likely that output is to be true.
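To illustrate this forward propagation, here is a small Python sketch of a single pass through an MLP with one hidden layer. It is not the project's implementation; the layer sizes (a 20 × 20 image flattened to 400 inputs, 30 hidden nodes, 10 outputs), the random example weights and the sigmoid activation are assumptions chosen to match the example above:

```python
import math
import random

def sigmoid(x):
    """Non-linear activation squashing a value into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(inputs, weights):
    """Weighted sum of the inputs for every node in the next layer, then sigmoid."""
    return [sigmoid(sum(i * w for i, w in zip(inputs, node_weights)))
            for node_weights in weights]

# Assumed sizes: a 20 x 20 image -> 400 inputs, 30 hidden nodes, 10 outputs.
n_inputs, n_hidden, n_outputs = 400, 30, 10

# Random example weights; in a real network these would come from training.
hidden_weights = [[random.uniform(-1, 1) for _ in range(n_inputs)] for _ in range(n_hidden)]
output_weights = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_outputs)]

image = [random.random() for _ in range(n_inputs)]   # a flattened 20 x 20 image

hidden = layer_forward(image, hidden_weights)        # input layer -> hidden layer
outputs = layer_forward(hidden, output_weights)      # hidden layer -> output layer
print(outputs)  # one score per output option, each between 0 and 1
```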

As this process occurs during the iterations through the training set, the set of output values is compared to the expected output set, and a set of error values is calculated indicating how much each node contributes to the overall error. This error calculation occurs across all layers in the network, because inevitably all layers and their respective nodes contribute a small amount to the overall error. To account for this, the error calculation proceeds through each layer and each input value to determine how much each connection has contributed to the overall error. This is then used in the backpropagation function to alter the weights accordingly.

#### **3.6 Training the model**

Training the model is the most crucial part of the process. Firstly, the weighting values are all initialised with random numbers between −1 and +1. Then the training set or "training data" is imported.

Next begins the iterative process which, on each pass, presents one piece of training data to the network; this propagates through the network, and an output is produced. This output is then compared to the labelled output of the training data. Following this comparison, the training algorithm propagates backwards through the network, known as "backpropagation", and alters the weighting values accordingly. This is repeated a set number of times specified by the person running the training model, until it is completed and the values are stable.
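A minimal skeleton of this loop is sketched below. It assumes two hypothetical helper routines, `forward()` and `backpropagate()`, standing in for the propagation and weight-update steps described in the surrounding text; the weight range and repetition follow the description above:

```python
import random

def init_weights(n_from, n_to):
    """Initialise every connection weight with a random number between -1 and +1."""
    return [[random.uniform(-1.0, 1.0) for _ in range(n_from)] for _ in range(n_to)]

def train(training_data, weights, epochs, learning_rate, forward, backpropagate):
    """training_data is a list of (inputs, labelled_output) pairs."""
    for _ in range(epochs):                        # repeated a set number of times
        for inputs, expected in training_data:     # one piece of training data at a time
            outputs = forward(inputs, weights)                      # forward pass
            errors = [e - o for e, o in zip(expected, outputs)]     # compare to the label
            backpropagate(weights, inputs, errors, learning_rate)   # adjust the weights
    return weights
```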

The backpropagation algorithm is a supervised learning technique which is applied to train neural networks. It works by propagating the error backwards through the network and internally altering the weight values between each node to try to improve the quality of the output by minimising the error in subsequent runs. This is done in the "training" phase, where the network is shown a "training set" consisting of a series of data items, each with its correct output attached. This is then fed through the network, and backpropagation alters the weights so that the network works for all the data items and their corresponding outputs.

Backpropagation works by taking the error value for each node, calculated by comparing the forward-propagated output with the expected output, and using these error values while propagating backwards through the network to adjust the node weightings positively or negatively in accordance with each node's share of the error.
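The chapter does not spell the update out as equations, but in the standard textbook form of gradient descent with the delta rule (an addition here, with η the learning rate, t the target output, o the actual output and δ the per-node error signal), the adjustment just described is:

```latex
% Squared-error function over the output nodes
E = \tfrac{1}{2} \sum_{k} (t_k - o_k)^2

% Gradient-descent (delta rule) update for the weight from node i to node j
\Delta w_{ij} = -\eta \, \frac{\partial E}{\partial w_{ij}} = \eta \, \delta_j \, o_i

% Error signals when the activation is a sigmoid:
\delta_j = (t_j - o_j)\, o_j (1 - o_j) \quad \text{(output node)}
\qquad
\delta_j = o_j (1 - o_j) \sum_{k} \delta_k w_{jk} \quad \text{(hidden node)}
```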

This is explained in more detail in [11]: "Once the error signal for each node has been determined, the errors are then used by the nodes to update the values for each connection weight until the network converges to a state that allows all the training patterns to be encoded. The Backpropagation algorithm looks for the minimum value of the error function in weight space using a technique called the delta rule or gradient descent. The weights that minimise the error function are then considered to be a solution to the learning problem" [11].

*3.6.1 Training algorithm*

The standard pseudocode for training a neural network using the backpropagation method, which can be adapted to any language, is as follows (**Figure 4**):

**Figure 4.** *Training a neural network using a backpropagation algorithm [15].*
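Figure 4 survives here only as its caption, so the following is one plausible sketch, in Python rather than pseudocode, of the kind of routine it depicts: a one-hidden-layer MLP trained by backpropagation with sigmoid activations and the gradient-descent update given above. All names and values are illustrative assumptions, not the chapter's own code:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_mlp(training_data, n_in, n_hidden, n_out, epochs=1000, lr=0.5):
    """Train a one-hidden-layer MLP with backpropagation (illustrative sketch)."""
    # Weights and biases initialised randomly between -1 and +1.
    w_h = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
    b_h = [random.uniform(-1, 1) for _ in range(n_hidden)]
    w_o = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]
    b_o = [random.uniform(-1, 1) for _ in range(n_out)]

    for _ in range(epochs):
        for x, target in training_data:
            # Forward pass.
            h = [sigmoid(sum(w * xi for w, xi in zip(w_h[j], x)) + b_h[j])
                 for j in range(n_hidden)]
            o = [sigmoid(sum(w * hj for w, hj in zip(w_o[k], h)) + b_o[k])
                 for k in range(n_out)]

            # Error signals (delta rule; the sigmoid derivative is o * (1 - o)).
            d_o = [(t - ok) * ok * (1 - ok) for t, ok in zip(target, o)]
            d_h = [h[j] * (1 - h[j]) * sum(d_o[k] * w_o[k][j] for k in range(n_out))
                   for j in range(n_hidden)]

            # Update weights and biases, working backwards through the network.
            for k in range(n_out):
                for j in range(n_hidden):
                    w_o[k][j] += lr * d_o[k] * h[j]
                b_o[k] += lr * d_o[k]
            for j in range(n_hidden):
                for i in range(n_in):
                    w_h[j][i] += lr * d_h[j] * x[i]
                b_h[j] += lr * d_h[j]

    return w_h, b_h, w_o, b_o

# Tiny usage example on made-up data (XOR), purely for illustration.
data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
train_mlp(data, n_in=2, n_hidden=4, n_out=1, epochs=2000, lr=0.5)
```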

*3.6.2 Over-training and under-training*

While training the network to understand data, it is very important not to over-train or under-train the network, as both have problems associated with them.

Over-training can be caused by a network that is overly complex, meaning that it follows the major pattern exactly and, when confronted with other data, may not produce results that are within the average range of the training data. Networks with too many hidden nodes will tend to over-fit the data pattern [16].

This can be combatted by designing an adequate network structure prior to training and ensuring the rate/iterations of backpropagation are not too low or too high, thus creating a network that produces good solutions to new data problems.

Under-training can occur when the network does not have enough hidden layers/nodes within these layers to represent the complexities of the problem accurately. The result of this is that the network is not of a sufficient size to recognise patterns effectively and will consequently under-fit the data pattern.
