68 Magnetic Sensors – Principles and Applications

LPS can find the location of people equipped with an LPS tag, such as businessmen traveling in urban areas, workers on a construction site, and fire fighters in operation, so that the pertinent activities can be actively monitored and coordinated for better effectiveness, efficiency and safety. An important application of LPS in public services is people rescue, such as urban search and rescue (USAR). When disasters such as earthquakes, cyclones, tornadoes and floods occur, specialized organizations, governments and private companies dispatch task forces immediately. A task force will first locate and extricate entrapped victims, then provide first aid care. Those victims may be entrapped in confined spaces such as collapsed structures, trenches, mines and transportation accidents. Obviously, the positioning of the victims is critical for an efficient rescue action.

A typical application of LPS in transportation is the so-called ITS. This new technology is still being developed to reduce traffic congestion and improve road safety. ITS also reduces vehicle wear, travel time, and fuel consumption by managing competing factors such as vehicle type, load, and routes. Other than conventional services such as electronic toll collection and emergency vehicle notification, sophisticated services are also provided to control urban traffic and to guide drivers along automatic routes for optimal performance. LPS, integrated with GPS, makes these services possible.

In the long run, both the technology and the social environment will change greatly. Integrated circuits (ICs) will be faster, smaller and cheaper according to Moore's law. Positioning will be more accurate and robust with advanced algorithms and new sensors. Building automation and indoor three-dimensional (3D) dynamic mapping will illustrate and control the whole building environment in much greater detail, so that both the quality of the indoor environment and the operational cost-effectiveness will be significantly improved. The virtual world and the real world will be merged to create new sports, games, entertainment and arts. People will have more wealth, enjoy more social networking, and receive more care. As a result, LPS will surely have more and more applications. Personal positioning devices will be popular among working people and school students. Value-added services will be provided through LBS (location-based services) and ITS. GIS (geographic information systems) and LPS will be combined to provide quick emergency response and automatic rescue. Much smarter supply chain management, traffic management, and work force management will be available to businesses. LPS may induce commercial exploitation in the future.

**2. Strap-down inertial positioning**

Some LPS systems rely on certain infrastructures and are difficult to set up and maintain. They are not suitable for applications that require quick setup, such as fire-fighting operations and emergency rescue services. We recommend self-contained LPS to locate objects residing or moving within a covered area. These objects are equipped with intelligent sensor modules, normally in the form of microelectromechanical system (MEMS) components, that can sense both the direction and speed of movement and compute the position of the objects within a given coordinate system. In this section we introduce a self-contained pedestrian tracking technique for the establishment of a completely infrastructure-free positioning system. The technique is the synergism of two existing technologies, dead reckoning and inertial positioning. In the implementation of such a technology, the magnetic sensors play an important role in achieving acceptable performance.

### **2.1 Dead reckoning**

Examples of existing self-contained LPS systems include dead reckoning systems for marine navigation (McBrewster, 2009), wheel sensing systems for automobiles (Thiessen & Dales, 1983), and pedometers for human beings (Chiang, 2007). Dead reckoning is a method of estimating an object's present position by projecting its courses steered and speeds over ground from the last known position. The dead reckoning position is only a rough approximation because it does not consider the effect of leeway, currents, helmsman error, or gyro error.

All dead reckoning systems compute the position of an object based on the last known position and the relative movement from that position. The relative movement is calculated according to the direction and speed of motion. This principle is employed in the aforementioned three existing self-contained LPS. Despite the advantage of independence from complicated infrastructure, these systems normally suffer from low accuracy.
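As a minimal illustration of this principle, the following sketch (not from the chapter; the 2-D frame convention and function name are assumptions) advances a last known position using a measured course and speed:

```python
import math

def dead_reckon(pos, heading_deg, speed, dt):
    """Advance a 2-D position by one dead-reckoning step.

    pos         -- (x, y) last known position in metres (x east, y north)
    heading_deg -- course over ground, in degrees clockwise from north
    speed       -- speed over ground, in m/s
    dt          -- elapsed time, in s
    """
    h = math.radians(heading_deg)
    dx = speed * dt * math.sin(h)   # east displacement
    dy = speed * dt * math.cos(h)   # north displacement
    return (pos[0] + dx, pos[1] + dy)

# Walk north for 10 s at 1.5 m/s, then east for another 10 s.
p = dead_reckon((0.0, 0.0), 0.0, 1.5, 10.0)   # roughly (0.0, 15.0)
p = dead_reckon(p, 90.0, 1.5, 10.0)           # roughly (15.0, 15.0)
```

Any error in the heading or speed estimate accumulates over successive steps, which is exactly the low-accuracy problem noted above.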

### **2.2 Inertial positioning**

An inertial positioning system (IPS) indirectly obtains the speed and direction, or the displacement, of an object by integrating its measured accelerations and angular velocities over time. Accelerometers and gyroscopes are used to continuously measure and record translational and rotational motions. Inertial positioning and dead reckoning thus differ in the data acquired (acceleration vs. speed, angular velocity vs. course direction) and in the devices used to measure these data (e.g. accelerometer vs. speedometer). For a dead reckoning system, the distance can also be acquired through direct estimation. For example, in a pedometer, the step length is simply estimated, not measured or calculated; the distance is estimated by multiplying the estimated step length by the number of steps counted. Generally speaking, inertial positioning, or inertial navigation, is a modern technology superseding dead reckoning, a relatively older technology.

A traditional inertial positioning system is gimbaled (Sarton and George, 1959). A mechanical device called a gimbal-stabilized platform is used to establish a reference system in vehicles such as submarines, surface vehicles, aircraft and spacecraft. The sensors, namely the gyroscopes and accelerometers, or pick-ups, are mounted on the stabilized platform to sense specific forces. The advantage of gimbaled systems is that the computation of position, velocity, orientation and angular velocity is less complex.

As shown in Fig. 1, with the initial local velocity **v**<sup>l</sup>(0) and the initial local displacement **s**<sup>l</sup>(0) of an object known, we are able to track the position of the object by subtracting the acceleration due to gravity, **g**, and then integrating the remaining acceleration **a**<sup>l</sup>(*t*)-**g**, once over a time period *t* to obtain velocity, and twice to obtain displacement in the local geographic frame (l-frame):

$$\mathbf{v}^{l}(t) = \mathbf{v}^{l}(0) + \int_{0}^{t} \left(\mathbf{a}^{l}(\tau) - \mathbf{g}\right) \, \mathrm{d}\tau \tag{1}$$

$$\mathbf{s}^{l}(t) = \mathbf{s}^{l}(0) + \int_{0}^{t} \mathbf{v}^{l}(\tau) \, \mathrm{d}\tau \tag{2}$$

The Application of Magnetic Sensors in Self-Contained Local Positioning 71


Fig. 1. Traditional inertial positioning algorithm

Using the rectangular rule, the above integration can be implemented in discrete-time form, denoting *δt* as the integration interval:

$$\mathbf{v}^{l}(t+\delta t) = \mathbf{v}^{l}(t) + \left(\mathbf{a}^{l}(t+\delta t) - \mathbf{g}\right) \cdot \delta t \tag{3}$$

$$\mathbf{s}^{l}(t+\delta t) = \mathbf{s}^{l}(t) + \mathbf{v}^{l}(t+\delta t) \cdot \delta t \tag{4}$$
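Equations (3) and (4) amount to two cumulative sums per axis. The sketch below is illustrative only (the 9.81 m/s² gravity value and the z-down l-frame are assumptions, and a real system must first rotate the body-frame measurements into the l-frame, as described later):

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, 9.81])  # assumed g, l-frame with z pointing down

def integrate_step(v, s, a_l, dt):
    """One discrete-time update of equations (3) and (4).

    v, s -- current velocity (m/s) and displacement (m) in the l-frame
    a_l  -- measured acceleration in the l-frame, gravity included (m/s^2)
    dt   -- sampling interval (s)
    """
    v_new = v + (a_l - GRAVITY) * dt   # equation (3)
    s_new = s + v_new * dt             # equation (4)
    return v_new, s_new

# A stationary sensor measures gravity only, so velocity and position stay put.
v, s = np.zeros(3), np.zeros(3)
for _ in range(100):                   # 1 s of data at 100 Hz
    v, s = integrate_step(v, s, GRAVITY, 0.01)
```

Because the position is a double integral of the acceleration, any constant bias left in `a_l - GRAVITY` grows quadratically in time, a point the chapter returns to when discussing tilt errors.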

Unfortunately, there are several problems with gimbaled systems. Besides friction from the bearings and the dead zones of the motors, extra power is needed to align the platform with the navigational frame. Moreover, gimbaled systems need high-quality electromechanical parts, including motors, slip rings and bearings; recalibration is difficult, and regular maintenance must be performed by certified personnel in a clean room through a lengthy recertification process. Consequently, traditional positioning systems are mostly used for airplanes, vessels, and intercontinental ballistic missiles.

### **2.3 Strap-down inertial positioning**

The strap-down IPS replaces the traditional gimbaled system in low-cost applications. The accelerometers and rate gyroscopes are rigidly mounted in the body of the tagged object, so there is no relative movement between them. This is a major hardware simplification compared with the stabilized platform of the traditional system. The most significant advantage of the strap-down IPS in comparison with the gimbaled IPS is the considerably reduced size and weight, which normally results in lower cost, power consumption, and hardware complexity.

However, the processing of inertial navigation is more sophisticated. Fundamentally, for IPS, a number of Cartesian coordinate reference frames, such as the body frame (b-frame), positioning frame and local geographic frame (l-frame) have to be rigorously defined and precisely related. We define the b-frame as an orthogonal axis set, which is aligned with the roll, pitch and yaw axes of the body to be positioned. We also define the positioning frame as an l-frame which has its origin at the original position for positioning and axes aligned with the directions of north, east and the local vertical (down). In a strap-down IPS, the computational complexity is increased because output data are measured in the b-frame rather than the l-frame. An algorithm has to be applied to keep track of the orientation of the sensor module (and the object) and rotate the measurements from the b-frame to the l-frame. Fig. 2 shows this procedure.

Fig. 2. Strap-down inertial navigation algorithm

### **2.3.1 Reference frames and rotations**


Let axes *x*<sup>l</sup>, *y*<sup>l</sup> and *z*<sup>l</sup> represent north, east and down respectively in the l-frame fixed to the surface of the earth; the orientation of the device is estimated in this coordinate frame. Axes *x*<sup>b</sup>, *y*<sup>b</sup> and *z*<sup>b</sup> represent the orthogonal axes in the b-frame.

In order to describe the components of a vector of arbitrary orientation in the l-frame with respect to the original b-frame (or, generally, in a new frame with respect to an old frame after a certain rotation), a rotation representation must be applied. We use the direction cosine representation to develop algorithms for tracking the orientations of the sensors. To do this, a direction cosine matrix (DCM) is established through three sequential rotations from the original axes in the l-frame to the new body axes: a *ψ* rotation about *z*; a *θ* rotation about the *y* axis resulting from the first rotation; and, finally, a *φ* rotation about the *x* axis resulting from the second rotation. The resulting DCM is as follows:

$$\mathbf{R} = \begin{pmatrix} \cos\theta\cos\psi & -\cos\phi\sin\psi + \sin\phi\sin\theta\cos\psi & \sin\phi\sin\psi + \cos\phi\sin\theta\cos\psi\\ \cos\theta\sin\psi & \cos\phi\cos\psi + \sin\phi\sin\theta\sin\psi & -\sin\phi\cos\psi + \cos\phi\sin\theta\sin\psi\\ -\sin\theta & \sin\phi\cos\theta & \cos\phi\cos\theta \end{pmatrix} \tag{5}$$
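The DCM of equation (5) is easy to write as a small helper function, and checking that the result is orthogonal (**R**·**R**<sup>T</sup> = **I**) is a useful sanity test. This is an illustrative sketch, not code from the chapter:

```python
import numpy as np

def dcm_from_euler(phi, theta, psi):
    """Direction cosine matrix of equation (5), built from roll phi,
    pitch theta and yaw psi in radians, i.e. R = Rz(psi) Ry(theta) Rx(phi)."""
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cps, sps = np.cos(psi), np.sin(psi)
    return np.array([
        [cth * cps, -cph * sps + sph * sth * cps,  sph * sps + cph * sth * cps],
        [cth * sps,  cph * cps + sph * sth * sps, -sph * cps + cph * sth * sps],
        [-sth,       sph * cth,                    cph * cth],
    ])

R = dcm_from_euler(0.0, 0.0, np.pi / 2)  # pure yaw of 90 degrees
```

For a pure 90° yaw the matrix reduces to a rotation about *z*, and **R**·**R**<sup>T</sup> is the identity to machine precision.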

### **2.3.2 Tracking the orientation of a moving object**

The orientation of the moving object is tracked by integrating the angular velocity signal obtained from the rate gyroscopes during the swing phase of walking.

At a time instant *t*, let **ω** = [*ω*<sub>x</sub> *ω*<sub>y</sub> *ω*<sub>z</sub>]<sup>T</sup> be the corresponding angular velocity sample, denote the norm of the angular velocity as *ξ*, i.e., *ξ* = ∥**ω**∥, and define the skew-symmetric matrix

$$\mathbf{\Omega} = \begin{pmatrix} 0 & -\omega_{z} & \omega_{y} \\ \omega_{z} & 0 & -\omega_{x} \\ -\omega_{y} & \omega_{x} & 0 \end{pmatrix} \tag{6}$$

With a short sampling interval *δt*, the DCM at time *t*, **R**(*t*), can be calculated from the DCM at time *t*-*δt*, **R**(*t*-*δt*), according to the following equation:

$$\mathbf{R}(t) = \mathbf{R}(t - \delta t) \cdot \left(\mathbf{I} + \frac{\sin(\xi \delta t)}{\xi} \mathbf{\Omega} + \frac{1 - \cos(\xi \delta t)}{\xi^2} \mathbf{\Omega}^2\right) \tag{7}$$
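Equations (6) and (7) together give a simple per-sample orientation update. The sketch below is illustrative only (the near-zero guard on *ξ* is an added assumption to avoid division by zero); it propagates the DCM and verifies that a constant spin about *z* accumulates the expected yaw:

```python
import numpy as np

def update_dcm(R_prev, omega, dt):
    """Propagate the DCM one sample using equation (7).

    R_prev -- DCM at time t - dt
    omega  -- angular velocity sample [wx, wy, wz] in rad/s (b-frame)
    dt     -- sampling interval in seconds
    """
    xi = np.linalg.norm(omega)
    if xi < 1e-12:                        # negligible rotation: R unchanged
        return R_prev.copy()
    wx, wy, wz = omega
    Omega = np.array([[0.0, -wz,  wy],    # skew-symmetric matrix of eq. (6)
                      [ wz, 0.0, -wx],
                      [-wy,  wx, 0.0]])
    A = (np.eye(3)
         + np.sin(xi * dt) / xi * Omega
         + (1.0 - np.cos(xi * dt)) / xi ** 2 * Omega @ Omega)
    return R_prev @ A

# Spinning about z at pi/2 rad/s for 1 s (100 samples) yields a 90-degree yaw.
R = np.eye(3)
for _ in range(100):
    R = update_dcm(R, np.array([0.0, 0.0, np.pi / 2]), 0.01)
```

For a constant angular velocity the update is exact (it is the Rodrigues form of the matrix exponential), so the accumulated **R** matches a 90° rotation about *z* to floating-point precision.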


With the DCM updated, it becomes possible to project the acceleration signal **a**<sup>b</sup>(*t*) from the accelerometers in the b-frame into acceleration **a**<sup>l</sup>(*t*) in the l-frame:

$$\mathbf{a}^{l}(t) = \mathbf{R}(t) \cdot \mathbf{a}^{b}(t) \tag{8}$$

After the local acceleration is obtained, the velocity and the displacement of the object can be determined according to (1) and (2) respectively to locate the object.

In the strap-down navigation algorithm, the acceleration due to gravity, **g**, is deducted from the globally vertical acceleration signal before integration. When there are angular errors, there are tilt errors; here a tilt error refers to the angle between the estimated vertical direction and the true vertical direction. When the orientation estimate contains angular errors, gravity cannot be deducted completely, and the residual acceleration originating from gravity becomes a "bias" on the true acceleration due to the movement of the object. A tilt error *e* in radians will cause the projection of gravity onto the horizontal axes, resulting in a component of the acceleration due to gravity with magnitude *e*<sub>a</sub><sup>h</sup> = g×sin*e*. This component can be treated as a residual bias due to gravity, remaining in the globally horizontal acceleration signals. In the meantime, in the globally vertical axis, there is a residual bias of magnitude *e*<sub>a</sub><sup>v</sup> = g×(1-cos*e*). Fortunately this problem is much less severe because for small *e* we have *e*<sub>a</sub><sup>v</sup> ≈ 0. Therefore, a small tilt error will mainly cause positioning error in the globally horizontal plane.

In some cases, such as human walking, the mean absolute acceleration measured is much smaller than the magnitude of gravity. By contrast, a tilt error of 0.05° can cause a component of the acceleration due to gravity with magnitude near g/1000. This residual bias can cause a positioning error of 15.4 meters after only a minute of integration, or an error of 0.49 meter after only 10 seconds (as demonstrated by the red dashed line in Fig. 3). Therefore, gyroscope errors, which propagate in the positioning algorithm, are critical errors affecting the accuracy of pedestrian tracking. Before the development of the algorithm presented in this chapter, it was believed that positioning with data from inertial sensors was not possible due to the quadratic growth of errors caused by sensor drift during double integration.

### **4. Integration of PDR with inertial positioning**

Precise tracking of people, especially of first responders, in a harsh environment remains an open research area. For a practical person tracking system, the measuring device should be portable or wearable by the person being tracked. Inertial positioning can, in theory, be applied to people tracking. However, the integral drift of accelerometers and gyroscopes, as well as the tilt errors of gyroscopes, makes pure inertial positioning impractical. As a result, as mentioned earlier, the characteristics of the movement of the person should be incorporated to improve positioning accuracy. If pedestrian dead reckoning (PDR) is integrated with IPS, the performance will be significantly improved, as shown in Fig. 3.

### **4.1 Pedestrian dead reckoning**

PDR is the application of dead reckoning to pedestrian tracking. Due to the difficulty of measuring the speed of a walking person directly, PDR estimates a pedestrian's present position based on the estimated step length and heading from the last known position. Using probability models, Mezentsev et al (2005) confirm that the main sources of error in a PDR system are related to the estimation of the step length and the heading. Different random noise models can be applied to heading estimation and step length estimation (Mezentsev et al, 2005). Suppose the modeled mean step length *s* is a constant, and the *i*th length error *ω<sub>i</sub>* is normally distributed with a standard deviation σ; then the true *i*th step length satisfies *s<sub>i</sub>* = *s* + *ω<sub>i</sub>*. Assuming the step errors are uncorrelated during a walk, the distance error variance after *N* steps is *N*σ<sup>2</sup>.
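The claim that uncorrelated per-step length errors accumulate with variance *N*σ<sup>2</sup> (so the distance error standard deviation grows only as √N, far more slowly than the quadratic drift of pure inertial double integration) can be checked with a short Monte-Carlo simulation. The numbers below are illustrative choices, not values from the chapter:

```python
import random
import statistics

random.seed(0)
N = 1000          # steps per simulated walk
sigma = 0.05      # per-step length error std dev, metres (illustrative)
mean_step = 0.7   # modeled mean step length, metres (illustrative)

errors = []
for _ in range(2000):                                   # simulated walks
    dist = sum(mean_step + random.gauss(0.0, sigma) for _ in range(N))
    errors.append(dist - N * mean_step)                 # distance error

empirical = statistics.pstdev(errors)
predicted = sigma * N ** 0.5        # sqrt(N * sigma^2), about 1.58 m here
```

The empirical standard deviation of the distance error agrees with the √N prediction to within a few percent, while the heading error (not modeled here) remains the other dominant error source.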
