*Control Theory in Engineering*

integrated with relevant image processing techniques. The study includes how the processed data outputted by the ANN models is translated into defined movements of the car, and whether it is suitably efficient to follow a track.

This project is a proof-of-concept study demonstrating how existing state-of-the-art ANN knowledge and related theory can be used to develop an autonomous toy vehicle that operates without the need for external computation. It was adopted as a research problem because of a lack of existing evidence on the subject, and thus this project is an attempt to show that such a concept is not only theoretically feasible but practically achievable.

The overall objective was to allow the vehicle to drive autonomously around an unknown track with little to no error, utilising collision avoidance.

Neural networks are often used for image processing. Once the network is trained and its structure and weight values are stored, all that is required on future runs of the program is to load these stored values for the trained network to run correctly. This means that once the network is trained, real-time image classification, and thus prediction of movement, becomes very efficient. This is a beneficial approach for a project of this sort, as the car drives in real time and its computations must therefore also be made in real time.

This report follows a structure in which I outline the research behind my theory, the design process of the vehicle itself including the neural network, the implementation, and finally the testing processes involved in proving this as a concept.

**2. Background and literature survey**

**2.1 Hardware**

The system comprises three subsystems: an input unit (sensors, camera), a processing unit (Raspberry Pi) and an output unit (L293D motor controller connected to the Raspberry Pi).

The input unit consists of two front-mounted ultrasonic sensors, angled 15° to the left and right of the vehicle's direct line of sight, and a Pi Camera used to stream video data. The sensors are mainly used to ensure that no objects come within a certain distance of the front of the vehicle (preventing collisions), while the camera data is used in the processing subsystem by the neural network to calculate the direction and to recognise the track and stop signs. Both peripherals constantly send data to the Raspberry Pi.

The processing unit handles multiple tasks: dealing with the input data, scaling down and applying the necessary imaging filters to the data received from the camera, calculating distances from the ultrasonic sensor readings for object detection, controlling the motors via the L293D chip, and utilising a neural network to process the camera data to recognise the track ahead and on-track stop signs and assign a corresponding movement instruction based on this data.

The output unit consists mainly of the hardware wired to the Pi, such as the motor controller and the motors themselves, both of which are controlled by the Pi.

The HC-SR04 sensors, the Pi Camera (raspiCam) and the L293D chip were the obvious hardware choices, as they are among the most commonly used [1, 2], best supported and best documented hardware for the Raspberry Pi.

**2.2 Hardware libraries**

A variety of libraries/application programming interfaces (APIs) were required to implement the hardware in C++:

WiringPi: "WiringPi is a PIN based GPIO access library written in C for the BCM2835 used in the Raspberry Pi. It is released under the GNU LGPLv3 licence and is usable from C, C++ and RTB (BASIC) as well as many other languages with suitable wrappers" [3]. WiringPi allows the user to apply and change power levels on the installed peripheral devices [4]. This gives a much smoother code interface that avoids system calls to use the hardware; in particular, the "softPwm" function provides easy and convenient power on/off control, which is necessary for sending "pulses" of power to the ultrasonic sensors.

libv4l (Video 4Linux): "libv4l is a collection of libraries which adds a thin abstraction layer on top of video 4linux2 devices. The purpose of this (thin) layer is to make it easy for application writers to support a wide variety of devices without having to write separate code for different devices in the same class." [5]. The libv4l library allows the Pi Camera to be recognised as an external camera by the Pi, which enables extra image-processing functionality and avoids the need for a system call from the C++ code to use it.

PIGPIO: "pigpio is a library for the Raspberry which allows control of the General-Purpose Input Outputs (GPIO). pigpio works on all versions of the Pi." [3].

These three libraries would allow me to use the camera through the Pi, control the motors' directions and send the relevant pulses to the ultrasonic sensors to calculate distances for object detection.
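Underlying that last point is simple pulse-timing arithmetic: the HC-SR04 holds its echo pin high for as long as the sound pulse is in flight, so halving the round-trip time and multiplying by the speed of sound gives the range. A minimal sketch of that conversion (the pulse timing itself would go through WiringPi calls such as `digitalWrite` and `micros`, omitted here):

```cpp
// Convert an HC-SR04 echo pulse width (in microseconds) to a distance in cm.
// Sound travels at roughly 343 m/s, i.e. 0.0343 cm/us, and the pulse covers
// the distance twice (out and back), so the one-way distance is half of it.
double echoToDistanceCm(double echoMicros) {
    const double kSpeedOfSoundCmPerUs = 0.0343;
    return (echoMicros * kSpeedOfSoundCmPerUs) / 2.0;
}
// Example: an echo of ~583 us corresponds to an obstacle roughly 10 cm away.
```

The vehicle's collision check then reduces to comparing this value against a chosen safety threshold.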

**2.3 Image processing**

The image stream from the camera is a 60 FPS video stream at 1920 × 1080 resolution. This is far too large to feed into any kind of machine learning algorithm and would require a huge amount of processing which, given the relatively low computational power of a Raspberry Pi, would not be efficient enough for the car to work in real time.
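To put numbers on that claim, counting pixels alone (ignoring colour channels and encoding) already shows the scale of the problem and of the reduction a small input buys:

```cpp
#include <cstdint>

// Pixel throughput of the raw camera stream versus a downscaled input.
constexpr std::uint64_t kFullFramePixels = 1920ULL * 1080ULL;    // 2,073,600 pixels/frame
constexpr std::uint64_t kPixelsPerSecond = kFullFramePixels * 60ULL; // 124,416,000 pixels/s
constexpr std::uint64_t kSmallFramePixels = 10ULL * 10ULL;       // 100 pixels/frame
constexpr std::uint64_t kReductionFactor =
    kFullFramePixels / kSmallFramePixels;                        // 20,736x fewer per frame
```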

To combat this, a variety of image processing techniques can be applied. For example, the image needs to be scaled down to 10 × 10 pixels, which is still a high enough resolution to recognise lines but is far cheaper to process.
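One simple way to perform that scaling, sketched here under the assumption of a single grayscale channel, is box averaging: each output pixel is the mean of the block of input pixels it covers (1920 × 1080 divides evenly into a 10 × 10 grid of 192 × 108 blocks):

```cpp
#include <cstdint>
#include <vector>

// Downscale a grayscale image to outW x outH by box averaging: each output
// pixel is the mean of the input block it covers. Assumes inW % outW == 0
// and inH % outH == 0 (true for 1920x1080 -> 10x10).
std::vector<std::uint8_t> boxDownscale(const std::vector<std::uint8_t>& in,
                                       int inW, int inH, int outW, int outH) {
    const int bw = inW / outW, bh = inH / outH;
    std::vector<std::uint8_t> out(outW * outH);
    for (int oy = 0; oy < outH; ++oy) {
        for (int ox = 0; ox < outW; ++ox) {
            std::uint64_t sum = 0;
            for (int y = oy * bh; y < (oy + 1) * bh; ++y)
                for (int x = ox * bw; x < (ox + 1) * bw; ++x)
                    sum += in[y * inW + x];
            out[oy * outW + ox] =
                static_cast<std::uint8_t>(sum / static_cast<std::uint64_t>(bw * bh));
        }
    }
    return out;
}
```

Averaging rather than simply sampling every 192nd pixel also gives some smoothing for free, which helps suppress single-pixel noise before the data reaches the network.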

An article on image processing by Rosebrock described a technique that would be very beneficial for this type of project: Canny edge detection.


The following results are presented in **Figure 1**.

It appeared to work quite well, with adjustable options for the strength of the effect; it was decided that an automatic level should be applied for ease of use. This technique is easily adaptable to the problem of recognising the two lines on either side of the vehicle and would, assuming a plain background, solve it very efficiently.
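The automatic level mentioned above can be chosen with the median-based heuristic Rosebrock describes: both Canny thresholds are derived from the image's median intensity, so no per-image tuning is needed. A sketch of just that threshold selection (the edge detection itself would then run with these bounds, for instance via OpenCV's `cv::Canny`):

```cpp
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

// Pick Canny thresholds automatically from the median pixel intensity,
// following Rosebrock's auto_canny heuristic:
//   lower = (1 - sigma) * median,  upper = (1 + sigma) * median,
// with sigma = 0.33 by default. Pixels are passed by value so the
// partial sort done by nth_element does not disturb the caller's image.
std::pair<int, int> autoCannyThresholds(std::vector<std::uint8_t> pixels,
                                        double sigma = 0.33) {
    auto mid = pixels.begin() + pixels.size() / 2;
    std::nth_element(pixels.begin(), mid, pixels.end());
    const double median = static_cast<double>(*mid);
    const int lower = std::max(0,   static_cast<int>((1.0 - sigma) * median));
    const int upper = std::min(255, static_cast<int>((1.0 + sigma) * median));
    return {lower, upper};
}
```

For a typical mid-brightness frame this yields thresholds around two-thirds and four-thirds of the median, which is what makes the effect strength self-adjusting from image to image.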

This reduction in image noise would also help with real-time processing and prevent the anomalous results that could cause the vehicle to go off-track.
