**2.4 General architecture of an autonomous vehicle**

**Figure 2** is a generalised diagram of how these types of devices usually run.

Machine learning algorithms are often used in self-driving cars because they adapt well to image-processing tasks. The most popular form of machine learning algorithm is the artificial neural network, which has the advantage that a NN can be trained methodically for nearly any given circumstance, is highly adaptable in terms of the size and computing power required, and can be designed in whatever way suits the user's needs. To achieve this, the number of nodes in each layer must be calculated and specified so that the network accurately produces the required outputs (**Figure 2**).
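As a minimal sketch of what such a specification looks like (the layer sizes below are hypothetical, not the chapter's actual design), a small fully connected network can be defined simply by the node count of each layer:

```python
import numpy as np

# Hypothetical layer sizes: a flattened 10x10 input image,
# one hidden layer of 32 nodes, and 4 movement outputs.
LAYER_SIZES = [100, 32, 4]

rng = np.random.default_rng(0)

# One weight matrix and one bias vector per pair of adjacent layers.
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(LAYER_SIZES[:-1], LAYER_SIZES[1:])]
biases = [np.zeros(n) for n in LAYER_SIZES[1:]]

def forward(x):
    """Pass an input vector through every layer of the network."""
    for w, b in zip(weights, biases):
        x = np.tanh(x @ w + b)  # weighted sum, then non-linearity
    return x

out = forward(rng.standard_normal(100))
print(out.shape)  # one value per output node
```

Training then consists of adjusting the entries of `weights` and `biases` until `forward` produces the required outputs.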

In a NN, the connection between each pair of nodes has a "weighting" value which is applied to the data that passes through it. These values are determined through training on "training data" and then stored in a final version called the "model". The model can then be used on unseen data in real time to produce an accurate output; evaluating it this way on "test data" is called "testing". Adjusting the weighting values until they are correct for all input circumstances is the aforementioned "training"; this can be done through a variety of methods, one of which is backpropagation. When training is complete, the weighting values can be stored, so any further use of the NN only requires loading these stored values for it to run correctly (provided no changes have been made to the design of the NN) [7]. This means that once the network is trained, predicting the movement becomes very fast, which is especially beneficial for real-time computation projects.
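This train-store-reload cycle can be illustrated with a deliberately minimal sketch, here a single-layer logistic model (the simplest special case of gradient-based training); the toy data and the `model.npz` file name are made-up assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training data: the target is 1 when the mean of x is positive.
X = rng.standard_normal((200, 5))
y = (X.mean(axis=1) > 0).astype(float)

w = np.zeros(5)
b = 0.0

def predict(x, w, b):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))  # sigmoid output

# "Training": gradient descent on the logistic loss.
for _ in range(500):
    p = predict(X, w, b)
    err = p - y
    w -= 0.1 * X.T @ err / len(X)
    b -= 0.1 * err.mean()

# Store the trained weighting values (the "model") ...
np.savez("model.npz", w=w, b=b)

# ... and later simply reload them: no retraining is needed,
# so prediction on unseen data is fast.
m = np.load("model.npz")
print(float(predict(np.ones(5), m["w"], m["b"])))
```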



**Figure 2.**

*Autonomous car system overview.*


*Computational Efficiency: Can Something as Small as a Raspberry Pi Complete…*


*DOI: http://dx.doi.org/10.5772/intechopen.88342*


*Control Theory in Engineering*

The following results are presented in **Figure 1**.

It appeared to work quite well, with adjustable options for the strength of the effect; it was decided that an automatic level should be applied for ease of use. This approach is easily adaptable to the problem of recognising the two lines either side of the vehicle and would (assuming a plain background) solve it very efficiently. The reduction in image noise would also help with real-time processing and prevent the anomalous results that could cause the vehicle to potentially go off-track.
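The "automatic level" for edge detection can be approximated with a widely used median-based heuristic for choosing Canny's two hysteresis thresholds; this is a sketch of that common rule, not necessarily the one the software described here applied:

```python
import numpy as np

def auto_canny_thresholds(gray, sigma=0.33):
    """Pick Canny hysteresis thresholds automatically from the
    image's median intensity (a common heuristic)."""
    v = float(np.median(gray))
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    return lower, upper

# Example: a synthetic grey frame with median intensity 120.
frame = np.full((100, 100), 120, dtype=np.uint8)
lo, hi = auto_canny_thresholds(frame)
print(lo, hi)  # the two thresholds bracket the median
# With OpenCV these would feed straight into:
#   edges = cv2.Canny(frame, lo, hi)
```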


**Figure 1.**

*An overview of canny edge detections [6].*


By fitting a Raspberry Pi, a camera and ultrasonic sensors, the aim was to give the Raspberry Pi the capability of driving the car through a piece of software written in C/C++. The intention was to create within this a form of artificial intelligence called a neural network. The neural network acts as the "brain" of the device and, after extensive training, can be used for image processing/classification of real-time camera data to aid the computation of real-time decisions on which movement instruction the car should follow during the movement step. This allows the device to drive autonomously and thus adapt to different situations. It does this by recognising the two white lines either side of the vehicle, known as the "track", between which the car should stay.

The middle (hidden) layers of the neural network are used to recognise patterns, such as, in this case, the edges of the track, which are derived from the input data. The output layer interprets these patterns to generate a probability of each output being true. These probabilities are then interpreted to determine whether to turn left/right or stay straight and go forwards or backwards.
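That decision step can be sketched as follows; the four class labels and the pick-the-highest rule are illustrative assumptions rather than the chapter's documented scheme:

```python
# Map the output layer's probabilities to a movement instruction.
# The class names and the argmax rule are illustrative assumptions.
CLASSES = ["left", "right", "forward", "backward"]

def choose_move(probs):
    """Pick the movement whose output node has the highest probability."""
    best = max(range(len(CLASSES)), key=lambda i: probs[i])
    return CLASSES[best]

print(choose_move([0.1, 0.2, 0.6, 0.1]))  # -> forward
```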

**2.5 Related works**

What is considered the first "self-driving remote-controlled (RC) car" was designed in 2015 by "Team Pegasus", a team of graduates from the University of Gothenburg. They used an Arduino and a Raspberry Pi to build a robot car, integrated with remote-control software running on an Android device [8]. Team Pegasus was the first to create an Android-powered RC car using similar computation methods; however, they used an external server connected to the mobile device to perform the computing.

The only other similar project found was by Zheng Wang, who used Python and a NN to process the images. Again, an external server connected to the Pi's network did the computing. This was a very small-scale device but worked very well. Wang also used OpenCV, an open-source computer vision library with a wide variety of functionality; Wang's device was able to follow lines and recognise objects and stop signs [9]. Its only real weakness was its inability to do on-board computation.

Both of these projects used very large NNs to do the computation efficiently, and both were therefore required to use an external server. This was a problem, as each device could only be used indoors with a suitable connection to the server. Despite the large-scale network, it could only be deployed on a small-scale track to ensure there was always an adequate connection to the server.
