**2.1 Hardware**

The system comprises three subsystems: an input unit (ultrasonic sensors and camera), a processing unit (the Raspberry Pi) and an output unit (an L293D motor controller connected to the Raspberry Pi).

The input unit consists of two front-mounted ultrasonic sensors, angled 15° to the left and right of the vehicle's direct line of sight, and a Pi Camera that streams video data. The sensors are mainly used to ensure that no objects come within a certain distance of the front of the vehicle, preventing collisions, while the camera data is used in the processing subsystem by the neural network to calculate the direction of travel and to recognise the track and stop signs. Both peripherals send data to the Raspberry Pi continuously.

The HC-SR04 sensors, the RaspiCam and the L293D chip were the obvious hardware choices because they are among the most commonly used, best supported and best documented peripherals for the Raspberry Pi [1, 2].

The processing unit handles multiple tasks:

• dealing with the incoming input data;
• scaling down and applying the necessary image filters to the data received from the camera;
• calculating distances, based on the ultrasonic sensor data, to be used for object detection;
• controlling the motors via the L293D chip;
• utilising a neural network to process the camera data, recognising the track ahead and any on-track stop signs; and
• assigning a corresponding movement instruction based on this data.
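As a rough illustration of how these tasks fit together, the sketch below arranges them into a single control loop. Everything in it (the function names, the clearance threshold, the bounded loop) is a hypothetical stand-in for the components described in this report, not the project's actual code:

```cpp
#include <iostream>

// Hypothetical stubs standing in for the real subsystems described above.
enum class Command { Forward, Left, Right, Stop };

double readUltrasonicCm(int /*sensor*/) { return 100.0; }            // HC-SR04 distance
Command predictDirection()              { return Command::Forward; } // camera + neural network
void driveMotors(Command c) { std::cout << static_cast<int>(c) << '\n'; } // L293D output

int main() {
    const double kMinClearanceCm = 20.0;    // assumed safety threshold
    for (int tick = 0; tick < 3; ++tick) {  // bounded here; the real loop runs continuously
        // Collision avoidance first: both angled sensors must report clearance.
        if (readUltrasonicCm(0) < kMinClearanceCm ||
            readUltrasonicCm(1) < kMinClearanceCm) {
            driveMotors(Command::Stop);
            continue;
        }
        // Otherwise steer by the network's prediction on the current frame.
        driveMotors(predictDirection());
    }
    return 0;
}
```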

The output unit consists mainly of the hardware wired to the Pi: the motor controller and the motors themselves, both of which are controlled by the Pi.

This report follows a structure in which I outline the research behind my theory, the design process of the vehicle itself including the neural network, the implementation and finally the testing processes involved in proving this as a concept.


**2.2 Hardware libraries**

A variety of libraries/application programming interfaces (APIs) were required to implement the hardware in C++:

WiringPi: "WiringPi is a PIN based GPIO access library written in C for the BCM2835 used in the Raspberry Pi. It is released under the GNU LGPLv3 licence and is usable from C, C++ and RTB (BASIC) as well as many other languages with suitable wrappers" [3]. WiringPi allows the user to apply and change power levels to the peripheral devices which are installed [4]. This results in a lot smoother code interface which avoids system calls to use the hardware, in particular, the use of the "softPwm" function which allows easy and convenient power on/off capabilities which will be necessary for sending "pulses" of power to the ultrasonic sensors. libv4l (Video 4Linux): "libv4l is a collection of libraries which adds a thin abstraction layer on top of video 4linux2 devices. The purpose of this (thin) layer is to make it easy for application writers to support a wide variety of devices without having to write separate code for different devices in the same class." [5]. The libv4l library will allow the Pi Camera to be recognised as an external camera through the Pi, which allows extra functionalities on image processing and avoids the need of a

PIGPIO: "pigpio is a library for the Raspberry which allows control of the General-Purpose Input Outputs (GPIO). pigpio works on all versions of the Pi." [3]. These three libraries would allow me to use the camera through the Pi, control the motors' directions and send the relevant pulses to the ultrasonic sensors to

**2.3 Image processing**

The camera supplies a 60 FPS video stream at 1920 × 1080 resolution. This is far too large to be fed into any kind of machine learning algorithm and would require a huge amount of processing which, given the relatively low computational power of a Raspberry Pi, would not be efficient enough to give the car the ability to work in real time.

To combat this, a variety of image processing techniques can be applied. For example, the image would need to be scaled down to 10 × 10, which is still a high enough resolution to recognise lines but is far more computable.
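As an illustration of this step, the snippet below converts a frame to greyscale and scales it down to 10 × 10 using OpenCV; OpenCV itself is an assumption here, since the text specifies only the target size, not the library:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    // A saved frame stands in for the live camera stream in this sketch.
    cv::Mat frame = cv::imread("frame.png");
    if (frame.empty()) return 1;

    cv::Mat grey, small;
    cv::cvtColor(frame, grey, cv::COLOR_BGR2GRAY);  // drop the colour channels
    cv::resize(grey, small, cv::Size(10, 10), 0, 0,
               cv::INTER_AREA);                     // e.g. 1920x1080 -> 10x10
    // 'small' now holds just 100 intensities: a feasible network input.
    cv::imwrite("small.png", small);
    return 0;
}
```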

An interesting article on image processing by Rosebrock described a process that is very well suited to this type of project, Canny edge detection:

• "Step 1: Smooth the image using a Gaussian filter to remove high frequency

• Step 3: Apply non-maximum suppression to remove "false" responses to edge

• Step 4: Apply thresholding using a lower and upper boundary on the gradient

• Step 5: Track edges using hysteresis by suppressing weak edges that are not

• Step 2: Compute the gradient intensity representations of the image.
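A minimal sketch of these five steps is shown below, again assuming OpenCV: cv::GaussianBlur performs step 1, while cv::Canny carries out steps 2 to 5 internally. The 50/150 thresholds are illustrative values, not tuned ones:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat grey = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);
    if (grey.empty()) return 1;

    cv::Mat blurred, edges;
    cv::GaussianBlur(grey, blurred, cv::Size(5, 5), 0);  // step 1: remove high-frequency noise
    cv::Canny(blurred, edges, 50, 150);  // steps 2-5: gradients, non-maximum suppression,
                                         // double thresholding and edge tracking by hysteresis
    cv::imwrite("edges.png", edges);
    return 0;
}
```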



