




**3. Sensors, computational system, and human-machine interface**

The system is planned to be used as a research and development testbed once the conversion is complete. For this reason, several sensors and computational boards with different characteristics are mounted on the chair; these devices will be used for performance comparisons in the future. **Tables 1** and **2** list the perception sensors and the computational hardware added to the autonomous wheelchair testbed. More information about these components is given in the next subsections.

**Figure 2.**

*Motor driver, encoder, on-off, and emergency buttons mounted.*

**3.1 Perception system**

In this study, we use three main perception sensors: an RGB-Depth camera (Intel RealSense D435i) that provides point cloud data with an 87° × 58° field of view (95° diagonal) and a 10 m range; an industrial, relatively expensive 2D Lidar (SICK LMS151) with a 270° field of view, a maximum range of 50 m, and a scan rate of up to 50 Hz; and a low-cost 2D Lidar (RPLIDAR-A2M6) with a 360° field of view, a maximum range of 18 m, and a scan rate of up to 15 Hz.

The aim is to analyze how these sensors, with their different characteristics, perform in global mapping, local mapping, tracking, and localization. Depending on their perception performance, some of these sensors may later be removed from the system or used together. The angle of the RGB-D camera can be adjusted manually. **Figure 3** shows these sensors mounted on the chair using appropriate mechanical parts designed specifically for the chair.

**Figure 3.**

*Perception sensors on wheelchair.*

| Device | Summary |
| --- | --- |
| Intel RealSense D435i | RGB-Depth camera with 87° × 58° field of view (95° diagonal) and 10 m range |
| SICK LMS151 | 2D Lidar with 270° field of view and maximum 50 m range; scanning frequency is 25–50 Hz |
| RPLIDAR-A2M6 | 2D Lidar with 360° field of view and maximum 18 m range; scanning frequency is 5–15 Hz |

**Table 1.**

*Perception sensors used in the testbed.*
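As a first step toward this comparison, the two Lidars can be monitored side by side from ROS. The following is a minimal sketch for illustration only, not part of the testbed's software: it assumes the driver nodes publish `sensor_msgs/LaserScan` messages on `/sick/scan` and `/rplidar/scan` (hypothetical topic names) and logs the valid-return ratio and angular resolution of each scan.

```cpp
// Minimal ROS (roscpp) sketch for side-by-side Lidar inspection.
// The topic names below are assumptions; substitute whatever the
// SICK and RPLIDAR driver nodes publish in the actual setup.
#include <ros/ros.h>
#include <sensor_msgs/LaserScan.h>
#include <cmath>

// Log how many returns in a scan are finite and within the sensor's
// valid range, plus the scan's angular resolution in degrees.
static void report(const char* name, const sensor_msgs::LaserScan::ConstPtr& scan)
{
  std::size_t valid = 0;
  for (float r : scan->ranges)
    if (std::isfinite(r) && r >= scan->range_min && r <= scan->range_max)
      ++valid;
  ROS_INFO("%s: %zu/%zu valid returns, angular resolution %.3f deg",
           name, valid, scan->ranges.size(),
           scan->angle_increment * 180.0 / M_PI);
}

static void sickCallback(const sensor_msgs::LaserScan::ConstPtr& m)    { report("LMS151", m); }
static void rplidarCallback(const sensor_msgs::LaserScan::ConstPtr& m) { report("A2M6", m); }

int main(int argc, char** argv)
{
  ros::init(argc, argv, "scan_compare");
  ros::NodeHandle nh;
  ros::Subscriber s1 = nh.subscribe("/sick/scan", 1, sickCallback);       // hypothetical topic
  ros::Subscriber s2 = nh.subscribe("/rplidar/scan", 1, rplidarCallback); // hypothetical topic
  ros::spin();
  return 0;
}
```

Logged at matching timestamps, even these simple statistics expose the resolution and coverage differences between the two devices before any mapping stack is involved.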

**3.2 Computational system**

In our design, three computational subsystems are added to the system, two of which are intended to run the autonomous driving algorithms on the ROS platform. The first option is an NVIDIA Jetson TX2 embedded computer. This low-power board is built around an NVIDIA Pascal™-family graphics processing unit (GPU), which makes it well suited to the matrix operations used in deep learning. The second option is the AAEON UP Board, a small, low-power embedded computer with a 64-bit Intel® Atom™ x5-Z8350 (Cherry Trail) processor running at up to 1.92 GHz. The final board will be used as a real-time low-level controller if the velocity control performance of the motor drivers turns out to be insufficient. This embedded board is the ST B-F446E-96B01A, which is based on the STM32F446 microcontroller and includes a 9-axis accelerometer/gyroscope/magnetometer.
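To make the role of that low-level controller concrete, the sketch below shows a discrete PI velocity loop of the kind such a board typically runs. It is an illustration only, not the testbed's firmware: the gains, the 1 kHz loop rate, and the toy plant standing in for the motor and encoder are all assumed values.

```cpp
// Illustrative discrete PI wheel-velocity loop (not the actual firmware
// of the B-F446E-96B01A). Gains, loop rate, and the toy plant below are
// assumptions made for this sketch.
#include <cstdio>

struct PiController {
  float kp;       // proportional gain
  float ki;       // integral gain
  float integral; // accumulated error
  float outMin;   // actuator command limits (e.g., normalized PWM)
  float outMax;

  // One control step of period dt: returns the actuator command.
  float update(float target, float measured, float dt) {
    float error = target - measured;
    integral += error * dt;
    float u = kp * error + ki * integral;
    // Clamp with simple anti-windup: undo the integration when saturated.
    if (u > outMax) { u = outMax; integral -= error * dt; }
    if (u < outMin) { u = outMin; integral -= error * dt; }
    return u;
  }
};

int main() {
  const float dt = 0.001f;                      // assumed 1 kHz control loop
  PiController wheel{0.8f, 4.0f, 0.0f, -1.0f, 1.0f};
  float speed = 0.0f;                           // measured wheel speed, m/s

  for (int k = 0; k < 8; ++k) {
    float cmd = wheel.update(0.5f, speed, dt);  // track a 0.5 m/s setpoint
    speed += 0.2f * cmd;                        // toy stand-in for motor response
    std::printf("step %d: cmd=%.3f speed=%.3f\n", k, cmd, speed);
  }
  return 0;
}
```

On the real board, the measured speed would come from encoder counts accumulated over each period and the command would drive the motor interface, with the loop clocked by a hardware timer interrupt.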

| Device | Summary |
| --- | --- |
| NVIDIA Jetson TX2 | GPU: NVIDIA Pascal™ with 256 CUDA cores; CPU: dual-core NVIDIA Denver 2 (64-bit) plus quad-core ARM® Cortex®-A57 MPCore |
| AAEON UP Board | 64-bit Intel® Atom™ x5-Z8350 (Cherry Trail) processor, up to 1.92 GHz |
| ST B-F446E-96B01A | Based on the STM32F446 microcontroller; includes a 9-axis accelerometer/gyroscope/magnetometer |

**Table 2.**

*Computational hardware used in the testbed.*

Similar to the aim of using different perception sensors, finding the optimal combination of computational hardware for the autonomous algorithms is another objective of the project. Since the computational power requirements cannot be known before the autonomous driving algorithms are decided, we will analyze the performance of each board. In the end, some of these boards may be removed from the system or kept together, depending on their performance. **Figure 4** illustrates the computational hardware mounted in a drawer designed specifically for the chair; the gray protective boxes for the Jetson TX2 and UP Board, produced on a 3D printer, can also be seen in **Figure 4**.
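As a starting point for that analysis, a portable micro-benchmark can give a first-order feel for the CPU throughput of each candidate board before the full ROS stack is profiled on it. The snippet below is one such sketch, included only for illustration: it times a naive single-threaded matrix multiplication, with the matrix size chosen arbitrarily.

```cpp
// Crude, portable CPU micro-benchmark for comparing the candidate boards.
// A naive single-threaded matrix multiply; the size and loop order are
// arbitrary illustration choices, not a substitute for profiling the
// actual autonomous driving workload.
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
  const int n = 256;  // small enough to run comfortably on any of the boards
  std::vector<float> a(n * n, 1.0f), b(n * n, 2.0f), c(n * n, 0.0f);

  const auto t0 = std::chrono::steady_clock::now();
  for (int i = 0; i < n; ++i)
    for (int k = 0; k < n; ++k) {
      const float aik = a[i * n + k];  // i-k-j loop order improves cache reuse
      for (int j = 0; j < n; ++j)
        c[i * n + j] += aik * b[k * n + j];
    }
  const auto t1 = std::chrono::steady_clock::now();

  const double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
  const double gflops = 2.0 * n * n * n / (ms * 1e6);
  std::printf("%dx%d matmul: %.2f ms (%.2f GFLOP/s)\n", n, n, ms, gflops);
  return 0;
}
```

Running the same binary on the Jetson TX2's ARM cores and on the UP Board's Atom gives a like-for-like CPU baseline; GPU-bound deep learning workloads on the TX2 would of course require a separate CUDA-side measurement.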
