5. Wearables for navigation and safety systems

Automatic navigation in an unknown environment raises various challenges, as many orientation cues are difficult to perceive without vision. Assistive aids such as the global positioning system (GPS), a satellite-based radio-navigation system, help in route finding, but they still fail to fulfill safety requirements. This section proposes a framework that provides accurate guidance along with information on the route traversal and the topography of the road ahead. The framework is composed of technologies such as lumigrids, drones, GPS, mobile applications and cloud storage, which are used to map the road surface and generate proper navigation guidance for the end user. This is done in three stages: (1) off-line mapping of the road surface and storing this information in the cloud; (2) wearable technology used for obtaining real-time surface information and comparing it to the data on the cloud, facilitating accurate and safer navigation; and (3) updating the cloud information with information collected by the pedestrian.

There are many technological navigation aids, but none of them focus on pedestrian paths. Banovic et al. [19] claim that travelers require detailed information about the terrain and its challenges: size, curves, hurdles, fences and changes in elevation. This section therefore proposes a three-phase safe navigation system that provides surface information of the pedestrian paths and uses this information while suggesting routes to the visually impaired in real time.

Most applications use location-sensing technology, such as GPS combined with a map, to locate and guide pedestrians. Sendero [20] uses the smartphone's location-sensing capability. Trekker Breeze [21] supports orientation using a commercial GPS receiver. In another work, [22] combined crowdsourcing with computer-vision techniques to provide additional information about traffic intersections, sidewalks or arbitrary images. A few open-source software systems [23] provide similar navigation instructions on points of interest, such as restaurants and buildings, using speech or Braille output. Studies show that pedestrians are positive about using technological assistive aids for navigation [24].

## 5.1 The process phases

The proposed navigation system consists of the following three phases: (1) the terrain mapping phase, (2) the pedestrian guidance phase and (3) re-mapping of the terrain based on a comparative walk-thru and the terrain database. In the terrain mapping phase, an unmanned aerial vehicle is made to fly over the pedestrian path. This vehicle records the GPS coordinates of the mapped region and accurately identifies the actual terrain of the underlying pedestrian path. These data are versioned and stored in a cloud. This referential database is centrally shared for the visually impaired. The terrain mapping phase is essential to initially map all the pedestrian paths and populate the cloud with data. The pedestrian guidance phase is the phase where the terrain-related information stored on the cloud is combined with regular GPS-based route finding and used, in real time, to guide a pedestrian in navigation. A shirt-mounted device assists the visually impaired in achieving this. During the walk-thru, the device mounted on the visually impaired person obtains the real-time terrain information of the path ahead and compares it to the existing information on the cloud, to alert the pedestrian of new challenges or hazards that may have cropped up.


DOI: http://dx.doi.org/10.5772/intechopen.86066


Wearable Devices and their Implementation in Various Domains

## 5.2 The navigation system

The terrain mapping phase uses the following components: a quadcopter (an unmanned aerial vehicle) and a Raspberry Pi, a microcomputer that runs the required image-processing algorithms and saves the information to the cloud. Figure 2 depicts the components and their interconnection used in the terrain mapping phase.

Lumigrids—an LED projector projecting light in the shape of grids, as presented in Figure 3.

The lumigrids projector is mounted on the quadcopter and faces the ground. These light grids can accurately extract the terrain information of the pedestrian path, as the regular arrangement of the light grid gets distorted by the terrain. [24] shows how lumigrids can help cyclists understand the terrain ahead at night and keep them safe. Camera—placed facing the part of the ground where the lumigrids are projected; it constantly takes images of the patterns formed by the grids and sends them for image processing. GPS sensor—used to obtain the GPS location of the quadcopter drone. Raspberry Pi—serves as the central computing unit for all the attached sensors; it processes the captured images of the light grids formed on the ground and obtains the required terrain information.


Figure 2. Components used in terrain mapping phase.


Figure 3. Light grids projected on ground by lumigrids projector.
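The chapter does not name the exact image-processing algorithm the Raspberry Pi runs; as a rough illustrative sketch (the threshold value here is an assumption, not a value from the text), separating the bright projected-grid pixels from the darker background could look like this:

```python
import numpy as np

def grid_mask(gray_image, thresh=200):
    """Binary mask of pixels bright enough to belong to the projected
    light grid; everything darker is treated as background."""
    return gray_image >= thresh

# Tiny synthetic 3x3 grayscale "frame": one bright grid pixel in the centre.
frame = np.array([[10, 10, 10],
                  [10, 255, 10],
                  [10, 10, 10]], dtype=np.uint8)
mask = grid_mask(frame)  # only the centre pixel is flagged
```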


Wearable Devices - The Big Wave of Innovation


An interesting approach can also be used to obtain the terrain-related information: using the accelerometer data from the smartphones of sighted pedestrians who use these pedestrian paths. The accelerometer of their mobile devices detects the vibration along the X, Y and Z axes. The magnitude m of the acceleration is calculated as m = √(X² + Y² + Z²). This is used to predict the terrain information of the pedestrian paths.
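The magnitude computation above, together with a simple roughness test, can be sketched as follows (the gravity baseline and the tolerance are illustrative assumptions, not values from the chapter):

```python
import math

def acceleration_magnitude(x, y, z):
    """Magnitude m of the acceleration vector reported by the accelerometer."""
    return math.sqrt(x * x + y * y + z * z)

def rough_terrain(samples, g=9.81, tolerance=2.0):
    """Flag a path segment as rough when any sample's magnitude deviates
    noticeably from gravity (the phone vibrates on an uneven surface).
    `g` and `tolerance` are assumed values for illustration."""
    return any(abs(acceleration_magnitude(x, y, z) - g) > tolerance
               for x, y, z in samples)
```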

## 5.3 Capturing of the terrain topography in two phases

The terrain mapping system consists of a lumigrids projector and a GPS sensor mounted on a quadcopter, which flies along the pedestrian path at height "h" above the ground, as depicted in Figure 4. The captured data are associated with their exact GPS location, which allows comparison between images taken from the same location. As mentioned, the process is divided into two phases. In the first phase, the terrain image and data are captured and stored in the cloud storage. In the second phase, to ensure accurate terrain data while the pedestrian walks, we recapture the same image from the same location.

Figure 5 describes the process of obtaining the terrain topography using the lumigrids projection. The first picture, on the top-left, is the image of the sidewalk we refer to in this section. The picture on the top-right presents the projection of the lumigrids projector on the sidewalk. A completely flat terrain would produce a perfect grid picture. However, due to some bumps in the sidewalk, as presented in the bottom-right image, some of the projected squares are distorted, representing the bump locations. The resulting grid is sent to the cloud application for analysis and is stored in the cloud storage.

Figure 6 depicts the data collection and processing from the impaired person's guidance perspective. It assumes the use of a smartphone application, which continuously transmits the person's current location and orientation to the cloud and obtains the data about the terrain of the path ahead. The top-left image presents the shirt with a mounted unit, which the pedestrian wears. The unit consists of a lumigrids projector, a camera and a communication unit. The projector flashes the light grids on the ground ahead.

Figure 4. The basic set-up of capturing the terrain image and its data in phase 1.

The camera captures the grid image formed on the ground and continuously transmits it to the smartphone application, which then transmits it to the cloud application. The cloud application compares the received image to the already stored image and generates the most accurate image representing the terrain situation at that moment. Accordingly, the application generates the proper instruction set and sends it back to the smartphone, which guides the pedestrian. In parallel, the discrepancy between the data stored in the cloud and the data received from the pedestrian is analyzed, and if the cloud data needs to be updated, the cloud application does so.

Figure 5. The terrain mapping phase and its transmission to the cloud storage.

Figure 6. The process of the pedestrian guidance phase.

The steps of the terrain mapping phase:

1. The entire pedestrian path is divided into squares of equal area, called sub-squares: let "k" be the area of each sub-square with side "x". The sub-squares are named (1, 1), (1, 2) and so on.

2. The height "h" is adjusted to generate lumigrids of area "k", just enough to cover each sub-square.

3. The midpoint M of the sub-square is calculated as M = (lat1 + x/2, long1 + x/2).

4. The quadcopter flying at height "h" above the ground flies to the calculated M, from where it flashes the lumigrids of area "k", equal to the area of the sub-square, on the ground. The lumigrids projector creates light grids of dimension n × n on the ground below.

5. The image formed on the ground shows an undistorted lumigrid of area "k" when it is formed on an ideally flat surface perpendicular to the quadcopter flying at a height "h" above the ground.

6. This image is captured by the mounted camera, and thresholding of the input image separates the lumigrids image data from the rest of the image, as explained in step 10. Camera coordinates can be mapped to real-world coordinates by the transformation

   $$\begin{pmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{pmatrix} = T_{cm} \begin{pmatrix} X_a \\ Y_a \\ Z_a \\ 1 \end{pmatrix}$$

   where (Xc, Yc, Zc) are the coordinates of the object in the camera frame, (Xa, Ya, Za) are the coordinates of the same object in the real world, and Tcm is the transformation matrix, which can be calibrated for a camera.

7. The dimensions and inclinations of each line segment of the n × n segmented sub-square are the parameters used to represent an ideally flat terrain: Length = Breadth of each side = x/n, and Inclination of each side = 90°.

8. Shortening of the length of any line segment (even a skewed one) of the formed lumigrids square mesh to less than x/n indicates that the terrain beneath the formed lumigrids is not flat; it is either concave or convex in nature along the Z axis.

9. If the angle between the line segments (the tangents of the line segments at the point of intersection, if they are skewed) is not a right angle, there is an inclination in the XY plane of the terrain beneath the formed lumigrids, towards either the first or the fourth quadrant. Let a′ be the inclination of the line segments of the lumigrids; the corresponding inclination "a" on the ground is given by a = ±d1 × a′, where "d1" is the ratio of the inclination on the ground to the corresponding inclination caused by the lumigrids. Here + indicates that the inclination is towards the first quadrant and − indicates that it is towards the fourth quadrant.

10. After applying the image thresholding algorithm to the obtained image, the lumigrids are clearly visible, as in Figure 7.
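The camera-to-world mapping through the calibrated matrix Tcm can be sketched as below; the identity matrix is only a stand-in, since a real Tcm comes from camera calibration:

```python
import numpy as np

def world_to_camera(Tcm, world_point):
    """Map real-world coordinates (Xa, Ya, Za) to camera coordinates
    (Xc, Yc, Zc) using a 4x4 homogeneous transformation matrix Tcm."""
    xa, ya, za = world_point
    xc, yc, zc, w = Tcm @ np.array([xa, ya, za, 1.0])
    return (xc / w, yc / w, zc / w)

# Stand-in calibration: the identity maps world coordinates to themselves.
Tcm = np.eye(4)
```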


Figure 7. Lumigrids formed over a pit.

In the above image, the required lengths between the skewed line segments are calculated.

11. Let a line segment of the generated lumigrids, of ideal expected size x/n, be shortened by y% due to skewed terrain. Let "d2" be the ratio of the absolute value of the vertical height on the ground indicated by the corresponding lumigrids to the length of the corresponding line segment generated by the lumigrids. Then the absolute height "h" with reference to an ideally flat ground surface is given by h = ±d2 × (x/n) × (100 − y)/100. Axiom 5 decides whether h is positive or negative: h is positive for concave terrain and negative for convex terrain. If y = 100%, theoretically there could be a narrow pit or hill in the ground, as indicated by the non-visibility of the lumigrids.

12. To identify exactly whether the terrain at a given position is concave or convex in nature, we observe the inter-line-segment distance i of the terrain: i = x/n → flat surface; i > x/n → concave surface; i < x/n → convex surface.

13. After calculating the terrain information of the given sub-square, the process is repeated for all the sub-squares so that the entire pedestrian path is scanned for its terrain details and mapped. The data thus obtained are pushed to the cloud.

The cloud now has precise information of the terrain. The pedestrian guidance phase consists of the following steps:

1. When the pedestrian wishes to navigate, the pedestrian's smartphone requests a route from source to destination. A GIS map is consulted to obtain various routes from the source to the destination. The data from the cloud provide precise information about the terrain of each of the pedestrian paths present in these routes. An optimum route is selected; the variations in the terrain along the route, the pedestrian traffic density in the route, the availability of help along the route in case of danger or need, and various other parameters which govern the safety of the pedestrian are considered.

2. The smartphone guides the pedestrian along this route on the pedestrian path. The pedestrian is alerted to all major terrain variations in the pedestrian path.

3. The shirt-mounted unit on the pedestrian flashes the lumigrids on the path ahead, and the camera embedded in the unit captures the image of the lumigrids formed and transmits this image to the pedestrian's smartphone (Figure 8).

4. The terrain information obtained from the lumigrids is cross-checked in real time with the terrain information available in the cloud, to recognize and handle temporary terrain changes, like a dog sitting on the pedestrian's path or a random stone in the way, or sudden permanent terrain changes, like a road block.

5. If considerable discrepancies are found in the terrain, the person is alerted to find a possible alternate route, for example "Stop and move 3 feet to your right", and a match for the known pattern in the cloud is checked for. If a match is found, the pedestrian is guided along that path.

6. If some permanent blocks are identified by the shirt-mounted device, the cloud is notified, so that it can flag the terrain data of that pedestrian path as obsolete and schedule a re-mapping of the terrain phase. An alternate route is found for the pedestrian and the pedestrian is guided accordingly.

Figure 8. Lumigrids formed by the shirt mounted unit of a pedestrian in the guidance phase.

The re-mapping of the terrain based on comparative walk-thru and terrain database phase consists of re-mapping a pedestrian path either when its current data is flagged as obsolete by the pedestrian guidance phase, as part of a scheduled re-mapping process, or on a need basis.

The data on the cloud contain the terrain information of the pedestrian path and are capable of generating a terrain grid along with its GPS coordinates. The visualization of the data, represented as a terrain grid available on the cloud for a pedestrian path, looks like Figure 9.

Figure 9. Visualization of the terrain grid of a pedestrian path formed by the data.
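The height recovery (step 11) and the concave/convex test (step 12) can be sketched as follows; the sign handling of "Axiom 5" is left out, so only the absolute height is returned:

```python
def segment_height(d2, x, n, y_percent):
    """Absolute terrain height |h| under a lumigrid line segment of ideal
    length x/n that appears shortened by y_percent:
    |h| = d2 * (x/n) * (100 - y)/100 (step 11)."""
    return d2 * (x / n) * (100.0 - y_percent) / 100.0

def classify_surface(i, x, n, eps=1e-9):
    """Classify terrain from the inter-line-segment distance i (step 12):
    i == x/n -> flat, i > x/n -> concave, i < x/n -> convex."""
    ideal = x / n
    if abs(i - ideal) <= eps:
        return "flat"
    return "concave" if i > ideal else "convex"
```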


A sample of the data from the cloud is as follows:

| GPS | Ver. | h | a | Dirty bit |
|---|---|---|---|---|
| (20, 30) | 1 | +20 | 3 | 1 |
| (20, 31) | 1 | +20 | 7 | 1 |
| (20, 32) | 1 | +20 | 10 | 1 |
| (20, 33) | 1 | +20 | 10 | 1 |

GPS, the coordinates of the GPS location; Ver., the version number of the data; h, the height of the terrain; a, the inclination of the terrain; dirty bit, specifies whether the data is obsolete.

Accordingly, the cloud decides whether it needs to schedule a re-mapping phase for that terrain or to accept the information shared by the pedestrian's shirt. After the re-map, the data in the cloud is as follows:

| GPS | Ver. | h | a | Dirty bit |
|---|---|---|---|---|
| (20, 30) | 2 | +0 | 0 | 0 |
| (20, 31) | 2 | +2 | 0 | 0 |
| (20, 32) | 2 | +0 | 0 | 0 |
| (20, 33) | 2 | +2 | 0 | 0 |

## 5.4 Summary

This section proposed a conceptual framework which fills the major gaps existing in the design of technological navigation aids, and described the software architecture, the hardware and wearable-device requirements and the theoretical models necessary for building an infrastructure that seamlessly gathers the terrain-related information of the pedestrian path and uses this information to guide pedestrians to navigate properly.

## 6. Conclusions

In this chapter, we outlined various aspects of wearable technology and its implementation in a wide range of applications, starting with healthcare, continuing with other domains and concluding with the integration of wearables into navigation and safety systems. Wearable technology is still in its development and growth stage. We expect wearables to continue their fast growth and to be implemented in many more domains, transforming our lives to be much more convenient, safe and automated.