**Part 2**

**GNSS Navigation and Applications** 

104 Global Navigation Satellite Systems – Signal, Theory and Applications



## **Estimation of Satellite-User Ranges Through GNSS Code Phase Measurements**

Marco Pini¹, Gianluca Falco¹ and Letizia Lo Presti²
*¹Istituto Superiore Mario Boella, ²Politecnico di Torino, Italy*

## **1. Introduction**

A Global Navigation Satellite System (GNSS) receiver is able to compute the user position through a trilateration procedure, which includes measuring the distances between the receiver and a set of satellites. Two different approaches are typically used and implemented in commercial receivers. The former relies on code tracking, the latter leverages carrier phase measurements performed during carrier tracking.

This chapter focuses on the first approach and discusses the procedures that GNSS receivers perform to finely estimate satellite-user ranges. First, in section 2 we introduce the concept of pseudorange, and in section 3 we give some fundamentals on the primary signal processing blocks of every GNSS receiver: signal acquisition, tracking and data demodulation. In section 4, two common methods used to estimate the user-satellite range on the basis of code phase measurements are presented. Finally, section 5 completes the chapter, providing an example of combined Position, Velocity and Time (PVT) computation for a GPS/Galileo receiver.

## **2. Theory and methods**

Let us start with a simple example to introduce the concepts we will describe in the next sections.

John usually bikes to school following a straight path, keeping a constant speed. John wants to measure the distance between his house and the school and decides to compute such a distance by measuring the time it takes to go to school. He uses the following formula:

$$
x = v \cdot t \tag{1}
$$

where:

• *x* is the distance estimated by John;
• *v* is the average speed, read on the bike speedometer;
• *t* is the difference between the time instant when John arrives at school and the time instant when he leaves home. In both cases, John reads the time on his digital watch.
The following day, John repeats the experiment, but he measures *t* as the difference between the arrival time read on the school clock and the leaving time, read on his watch. John realizes that the estimated distance is significantly different from that estimated the previous day. Most likely, his watch and the clock at school are not synchronized. In this case, the measured time interval can be written as follows:

$$
\tilde{t} = t + \delta t \tag{2}
$$

Equation (2) takes into account the bias δ*t* between John's watch and the school clock. Considering this term, John understands that it translates into an error δ*x* on the estimated distance.

$$
\tilde{x} = v \cdot \tilde{t} = v \cdot (t + \delta t) = x + \delta x \tag{3}
$$

At this point, John wants to compare his result with that of one of his friends. He asks Alice to perform the same measurement from her house, since John knows that his house is exactly *500 m* away from hers. Before the measurements, Alice and John synchronize their watches. Denoting the measurements taken by John and Alice with the subscripts *J* and *A*, equation (3) becomes:

$$\begin{cases} x_J = \tilde{x}_J - \delta x_J = v_J(\tilde{t}_J - \delta t) \\ x_A = \tilde{x}_A - \delta x_A = v_A(\tilde{t}_A - \delta t) \end{cases} \tag{4}$$

where:

• $\tilde{x}_J$ and $\tilde{x}_A$ are the distances estimated by John and Alice, respectively;
• $x_J$ and $x_A$ are the unknown distances John and Alice want to measure;
• $\tilde{t}_J$ and $\tilde{t}_A$ are the time intervals measured by John and Alice;
• $v_J$ and $v_A$ are the average speeds of John and Alice, read on their speedometers;
• $\delta t$ is the unknown bias between Alice and John's watches and the school clock.
Recalling that Alice's house is *500 m* away from John's, the previous system of equations can be rewritten as:

$$\begin{cases} x_J = v_J(\tilde{t}_J - \delta t) \\ x_J + 500 = v_A(\tilde{t}_A - \delta t) \end{cases} \tag{5}$$

This new system has two equations and two unknowns: $x_J$ and $\delta t$. In a few steps, John can finally compute the distance between his house and the school, realizing that he obtains the same result as in the first experiment. The conclusion of this simple example is that, in one dimension, if the clocks used to measure time intervals are not synchronized, we need an additional equation to solve the problem.
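System (5) can be checked numerically. In the sketch below all numbers are invented for illustration (true distance of 2000 m, a 30 s clock bias, and two different speeds; the solution requires $v_A \neq v_J$):

```python
# All numbers are invented for illustration: true distance 2000 m, school
# clock 30 s ahead of the (synchronized) watches, different average speeds.
v_J, v_A = 5.0, 4.0              # speeds read on the speedometers [m/s]
x_true, dt = 2000.0, 30.0        # unknown distance [m] and clock bias [s]

# Biased travel times, eq. (2): measured time = true time + dt
t_J = x_true / v_J + dt
t_A = (x_true + 500.0) / v_A + dt

# Solve system (5) for x_J and dt (requires v_A != v_J):
#   x_J       = v_J * (t_J - dt)
#   x_J + 500 = v_A * (t_A - dt)
# Subtracting the equations isolates dt:
dt_est = (v_A * t_A - v_J * t_J - 500.0) / (v_A - v_J)
x_est = v_J * (t_J - dt_est)
```

With exact (noise-free) measurements, the recovered `x_est` and `dt_est` match the true values, as in John's experiment.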

Bringing the concept to a three-dimensional space, it is easy to understand that we need four equations to solve the problem and determine the unknown user position with respect to a reference system. This is the case of Global Navigation Satellite System (GNSS) receivers.

Referring to the geometry sketched in Fig. 1, there are satellites in view broadcasting ranging signals, while a user on the Earth wants to estimate his unknown coordinates ($x_u, y_u, z_u$). The satellites continuously transmit their positions (i.e. ($x_k, y_k, z_k$) for the *k*-th satellite), keeping their clocks synchronized to a common time scale. The user estimates the distances $\rho_k$ to a set of satellites, measuring the travel time from the satellite to the receiving antenna.

Fig. 1. Example of trilateration in the case of a clock-biased receiver

The user needs at least 4 equations to be able to compute $(x_u, y_u, z_u)$, because of the bias $\delta t$ between his clock and the satellite time scale. Due to the presence of a common bias that affects all the measured distances between the user and the satellites, we have to refer to such a distance as a **pseudorange** $\rho_k$ instead of a range. From this moment on, the reader has to keep this distinction in mind.

$$\begin{cases} \rho_1 = \sqrt{(x_1 - x_u)^2 + (y_1 - y_u)^2 + (z_1 - z_u)^2} + \delta t \cdot c \\ \rho_2 = \sqrt{(x_2 - x_u)^2 + (y_2 - y_u)^2 + (z_2 - z_u)^2} + \delta t \cdot c \\ \rho_3 = \sqrt{(x_3 - x_u)^2 + (y_3 - y_u)^2 + (z_3 - z_u)^2} + \delta t \cdot c \\ \rho_4 = \sqrt{(x_4 - x_u)^2 + (y_4 - y_u)^2 + (z_4 - z_u)^2} + \delta t \cdot c \end{cases} \tag{6}$$

The system (6) is the set of equations that every GNSS receiver has to solve. With the problem stated above and the task of the receiver in mind, this chapter explains the operations performed to measure the user-satellite ranges. The focus will mainly be on measurements taken on the received spreading codes, while for carrier-phase measurements interested readers can find comprehensive theory in (Misra & Enge, 2001; Jonge & Teunissen, 1996).
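To make the role of system (6) concrete, the following sketch solves it by iterative least squares (a Gauss-Newton scheme, one common choice for this problem). The satellite positions, user position and clock bias are invented toy values, not data from the text:

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def solve_pvt(sat_pos, pseudoranges, iters=20):
    """Iterative least-squares (Gauss-Newton) solution of system (6):
    from 4+ pseudoranges to user position (xu, yu, zu) and clock bias dt."""
    sat_pos = np.asarray(sat_pos, dtype=float)      # shape (K, 3)
    rho = np.asarray(pseudoranges, dtype=float)     # shape (K,)
    state = np.zeros(4)                             # [xu, yu, zu, dt*c], cold start
    for _ in range(iters):
        d = np.linalg.norm(sat_pos - state[:3], axis=1)   # geometric ranges
        pred = d + state[3]                               # predicted pseudoranges
        # Jacobian: unit vectors from satellites to user, plus the clock column
        H = np.hstack([(state[:3] - sat_pos) / d[:, None], np.ones((len(rho), 1))])
        state = state + np.linalg.lstsq(H, rho - pred, rcond=None)[0]
    return state[:3], state[3] / C

# Toy scenario (all values invented): 4 satellites, a user, a 1 ms clock bias
sats = [(26.0e6, 0.0, 0.0), (0.0, 26.0e6, 0.0),
        (0.0, 0.0, 26.0e6), (15.0e6, 15.0e6, 15.0e6)]
user = np.array([3.9e6, 3.3e6, 3.7e6])
bias = 1.0e-3
rho = [np.linalg.norm(np.array(s) - user) + bias * C for s in sats]

pos, dt = solve_pvt(sats, rho)
```

With noise-free pseudoranges the iteration converges to the true position and clock bias; real receivers apply the same linearized scheme to noisy measurements, as discussed in the references above.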


## **3. From the incoming signal to the pseudorange**

When the GPS signal arrives at the receiver, it is very weak and the received power, which decreases with the distance between the satellite and the user, is well below the noise floor. However, GPS receivers are able to compute their position with an accuracy that ranges from a couple of meters down to centimeters in the case of carrier-phase measurements. Such performance is possible thanks to the spread-spectrum nature of GNSS signals. It is useful to recall that each satellite utilizes Direct Sequence Spread Spectrum (DSSS) modulation (Kaplan & Hegarty, 2006), broadcasting the navigation message on pseudo random noise (PRN) spreading codes, over the same frequency. Taking as an example the GPS L1 C/A code, each satellite uses a Gold code, quasi-orthogonal with respect to those used by the other satellites. Applying signal processing algorithms based on correlations between the incoming signal and local replicas, the receiver can de-spread the incoming signal and retrieve the navigation message. Such algorithms are used to perform two fundamental processes, commonly known as *acquisition* and *tracking*. The first aims at roughly estimating the Doppler frequency and the code delay of the received signal. The tracking phase refines the parameters assessed by the acquisition, to finely measure the phase of each tracked GPS signal, keeping track of changes over time. The estimate of the code delay for all the tracked satellites is at the basis of the pseudorange computation.

#### **3.1 Signal acquisition**

The first task of a GNSS receiver is to detect the presence of the satellites in view. This is performed by the acquisition system, which also provides a coarse estimate of two parameters of the received Signal In Space (SIS): the Doppler shift and the delay of the received spreading code with respect to the local replica. In the next sections, we will see that the precise alignment between the received and the local spreading codes is fundamental for measuring the user-satellite ranges, which is necessary to fix the receiver position.

There are two mathematical disciplines that govern the operations performed by acquisition systems: *Estimation theory* and *Signal Detection theory*. These two extensive theories are described in various textbooks, whereas comprehensive analyses and applications can be found in many papers. For a complete mathematical background on the operations performed by GNSS signal acquisition, interested readers can refer to (Kay, 1993, 1998).

Keeping our description terse, real acquisition systems search for a satellite in view by correlating the received signal with a local replica of the spreading code and a local carrier. The search consists of finding the values of code delay and carrier frequency of the local signals that maximize the correlation. Exploiting the concepts and the methodology of *Estimation theory*, it is possible to show that the Maximum Likelihood (ML) estimate of the vector $\bar{p} = (\bar{\tau}, \bar{f}_d)$, whose elements are the two unknowns of the received signal $y_{IF}[n]$, can be obtained by maximizing the following function

$$\hat{p}_{ML} = \arg\max_{\bar{p}} \left| \frac{1}{L} \sum_{n=0}^{L-1} y_{IF}[n]\, \bar{r}_{IF}[n] \right|^2 = \arg\max_{\bar{p}} R(\bar{\tau}, \bar{f}_d) \tag{7}$$

where:

• $\bar{p} = (\bar{\tau}, \bar{f}_d)$ is a vector of test variables: $\bar{\tau}$ represents the code delay and $\bar{f}_d$ the Doppler shift. $\bar{p}$ is defined in a proper support $D_p$, containing all possible values that can be assumed by the elements of $\bar{p}$; $D_p$ is known as the search space;
• $y_{IF}[n]$ represents the incoming signal, as a stream of samples at the ADC output;
• $L$ is the number of samples used to process a portion of the incoming signal;
• $\bar{r}_{IF}[n]$ is the local signal, sampled at the same rate used by the ADC, and can be expressed as follows:

$$
\bar{r}_{IF}[n] = c(nT_s - \bar{\tau})\, e^{j2\pi \bar{f}_d n T_s} \tag{8}
$$

where $c(nT_s - \bar{\tau})$ is the local spreading code with delay $\bar{\tau}$, $e^{j2\pi \bar{f}_d n T_s}$ represents the local carrier (in-phase and quadrature), and $T_s$ is the sampling interval.
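As an illustration of equations (7) and (8), the following sketch evaluates cells of $R(\bar{\tau}, \bar{f}_d)$ over a coarse search space, for a toy ±1 code. The sampling rate, code length, search grid and the use of the complex conjugate of the local replica are illustrative assumptions, not parameters from the text:

```python
import numpy as np

def caf(y_if, code, f_dopp, tau_samples, fs):
    """One cell of the search space: |(1/L) * sum_n y[n] * conj(r[n])|^2,
    with r[n] = code(nTs - tau) * exp(j*2*pi*fd*n*Ts) as in eq. (8)."""
    L = len(y_if)
    n = np.arange(L)
    r = np.roll(code, tau_samples) * np.exp(1j * 2 * np.pi * f_dopp * n / fs)
    return np.abs(np.sum(y_if * np.conj(r)) / L) ** 2

# Toy signal: a +/-1 pseudo-random code (1000 samples, i.e. 1 ms at fs)
fs = 1.0e6
rng = np.random.default_rng(1)
code = rng.choice([-1.0, 1.0], size=1000)
true_tau, true_fd = 250, 2000.0                 # [samples], [Hz]
n = np.arange(len(code))
y = np.roll(code, true_tau) * np.exp(1j * 2 * np.pi * true_fd * n / fs)

# Serial search over a coarse grid D_p of (tau, fd) pairs
grid = [(t, f) for t in range(0, 1000, 50) for f in (-2000.0, 0.0, 2000.0)]
best = max(grid, key=lambda p: caf(y, code, p[1], p[0], fs))
```

At the correct cell the normalized correlation reaches its maximum, while wrong delay or Doppler values produce only the small cross-correlation floor visible in Fig. 2.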

Real acquisition systems find the values of $\bar{\tau}$ and $\bar{f}_d$ that maximize equation (7). As an example, Fig. 2 reports $R(\bar{\tau}, \bar{f}_d)$ over a predefined search space. A correlation peak corresponding to a defined pair of $\bar{\tau}$ and $\bar{f}_d$ clearly rises above the cross-correlation noise floor and indicates a first rough alignment between the incoming and the local signals.

Fig. 2. Two-dimensional function evaluated by GNSS signal acquisition (cross ambiguity function for GPS PRN 26)

Generally, the first estimate of $\bar{\tau}$ and $\bar{f}_d$ that maximizes equation (7) is followed by a decision process. The maximum of $R(\bar{\tau}, \bar{f}_d)$ is taken as the decision variable and compared against a threshold, often set according to the Neyman-Pearson (NP) theorem (Kay, 1998). If the maximum is higher than the threshold, the satellite is considered present, otherwise absent. Note that the performance of real acquisition algorithms is evaluated in terms of *Probability of Detection* and *Probability of False Alarm*.
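The decision step can be sketched as follows. The exponential model for the noise-only cells of the search space and the resulting closed-form threshold are illustrative assumptions used to emulate a Neyman-Pearson-style test for a target false-alarm probability:

```python
import numpy as np

# Assumption for illustration: under the noise-only hypothesis the cells of
# the search space are i.i.d. exponential with mean sigma2. The threshold
# giving a target per-cell false-alarm probability Pfa is then:
#   P(cell > th) = exp(-th / sigma2) = Pfa  ->  th = -sigma2 * ln(Pfa)
def np_threshold(sigma2, p_fa):
    return -sigma2 * np.log(p_fa)

rng = np.random.default_rng(0)
noise_cells = rng.exponential(scale=1.0, size=100_000)  # sigma2 = 1
th = np_threshold(1.0, 1.0e-3)

measured_pfa = float(np.mean(noise_cells > th))  # close to the 1e-3 target
# A strong correlation peak (invented value) is declared "satellite present"
satellite_present = (noise_cells.max() + 50.0) > th
```

Raising the threshold lowers the false-alarm probability at the cost of a lower detection probability, which is the trade-off the NP criterion formalizes.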

It is important to highlight that for civilian GNSS signals (e.g. GPS L1 C/A, Galileo E1-B), the spreading code contained in $y_{IF}[n]$ is a periodic sequence with period equal to the code period $T_p$ (i.e. 1 ms for the GPS L1 C/A code, 4 ms for the Galileo E1-B): therefore the delay $\bar{\tau}$ can be estimated only in the range $(0, T_p)$. In practice, only a portion of this infinite sequence enters into the summation in equation (7) (i.e. the samples of the portion of signal under test, for *n = 0, …, L – 1*). This means that, for a given value of $\bar{f}_d$, the correlation assumes the form of a circular correlation when the interval *(0, L – 1)* contains an integer number of code periods. This remark is quite important and helps to understand why real acquisition systems are based on Fast Fourier Transforms (FFTs). In fact, FFTs are used to implement fast circular correlations and scan the search space efficiently. The design of FFT-based signal acquisition systems is out of scope for this chapter. However, one can find many algorithms proposed in recent literature and can refer to (Borre et al., 2006) for didactical examples.
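The circular correlation noted above can be computed for all code delays at once with FFTs. The sketch below implements this parallel code-phase search for a single Doppler bin, on the same kind of toy ±1 code (all parameters are illustrative):

```python
import numpy as np

def fft_code_search(y_if, code, f_dopp, fs):
    """All code delays of one Doppler bin at once, via circular correlation:
    corr = IFFT( FFT(y after carrier wipe-off) * conj(FFT(code)) )."""
    n = np.arange(len(y_if))
    y_bb = y_if * np.exp(-1j * 2 * np.pi * f_dopp * n / fs)  # carrier wipe-off
    corr = np.fft.ifft(np.fft.fft(y_bb) * np.conj(np.fft.fft(code)))
    return np.abs(corr) ** 2 / len(y_if) ** 2

# Toy +/-1 code, one full code period in the integration window
fs = 1.0e6
rng = np.random.default_rng(2)
code = rng.choice([-1.0, 1.0], size=1000)
true_tau, true_fd = 700, 1000.0
n = np.arange(len(code))
y = np.roll(code, true_tau) * np.exp(1j * 2 * np.pi * true_fd * n / fs)

caf_row = fft_code_search(y, code, true_fd, fs)   # 1000 delay cells at once
tau_hat = int(np.argmax(caf_row))
```

One FFT, one product and one inverse FFT replace the per-delay serial correlations, which is why this structure dominates modern acquisition designs.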

#### **3.2 Code and carrier tracking**

Digital receivers sample the analog signal and split the stream of samples over different digital channels. As seen above, the first step in GNSS processing is signal acquisition: the satellites in view are detected and a first rough estimate of the Doppler shift and code delay is performed. Signal tracking follows signal acquisition. Most receivers use a Delay Lock Loop (DLL) to synchronize with the spreading code from each satellite (Parkinson & Spilker, 1996), while a Phase Lock Loop (PLL) is generally employed to track the phase of the incoming carrier. The theory behind digital tracking loops is reported in many books (Kaplan & Hegarty, 2006; Parkinson & Spilker, 1996). Here signal tracking is only introduced to give the fundamentals for the following sections.

Roughly speaking, the signal tracking relies on the properties of the signal correlation and is fundamental to demodulate the navigation message and estimate the range between the user and the satellites. A generic block diagram of the code and carrier tracking system for GNSS receivers is shown in Fig. 3.

Fig. 3. Block diagram of a generic code and carrier tracking system for GNSS receivers (RF front-end and ADC, input buffer, Early/Prompt/Late correlators, PLL/DLL discriminators, loop filters, and local code and carrier generators)
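The code-tracking branch of Fig. 3 can be sketched as follows: Early and Late correlations of the incoming code with shifted local replicas are combined into a normalized discriminator that estimates the residual code delay. The half-chip spacing and the normalization used here are common textbook choices, and all numbers (code, delays, oversampling) are invented for illustration:

```python
import numpy as np

# Toy noiseless setup: a +/-1 code oversampled at 10 samples per chip
rng = np.random.default_rng(3)
spc = 10
code = np.repeat(rng.choice([-1.0, 1.0], size=1000), spc)  # 10000 samples

true_tau = 203          # delay of the incoming code [samples]
tau_hat = 200           # current prompt-replica delay [samples]
y = np.roll(code, true_tau)          # incoming code, carrier already wiped off

d = spc // 2            # Early/Late spacing: 0.5 chip
E = np.abs(np.sum(y * np.roll(code, tau_hat - d)))   # Early correlation
P = np.abs(np.sum(y * np.roll(code, tau_hat)))       # Prompt correlation
L = np.abs(np.sum(y * np.roll(code, tau_hat + d)))   # Late correlation

# Normalized early-minus-late discriminator: residual delay in samples.
# In a real DLL this value is filtered and used to steer the code generator.
err_samples = (spc / 2) * (L - E) / (E + L)
tau_next = tau_hat + err_samples     # moves the prompt toward true_tau
```

Since the prompt replica here is early with respect to the incoming code, the Late correlation exceeds the Early one and the discriminator pushes the next prompt delay toward the true value.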


The stream of samples at the ADC output (i.e. *yIF*[*n*]) is correlated with the local code and with two carriers, one in phase and one in quadrature. At the end of each integration period, the values of correlation are used to generate feedback control signals, one for the DLL and one for the PLL. *Early minus Late* DLLs use additional replicas of the local code, shifted by 0.5 chips earlier and later than the reference one, which is referred to as the *Prompt code*. The Early and Late correlations are combined to generate the DLL feedback on the basis of a proper discrimination function. This feedback is filtered to smooth the noise effect and is used to steer the code generator, which prepares the local code for the next loop iteration. In this way the DLL continues to track the correlation peak in the time domain. The PLL works in a similar way. Generally, the in-phase and quadrature Prompt correlations are passed to a *Costas-PLL* (which is not sensitive to navigation bit transitions) (Kaplan & Hegarty, 2006; Misra & Enge, 2001) that generates the loop control signal. This is filtered and applied to the local carrier generator, which prepares the local carrier for the next iteration. This process repeats over time, making the receiver able to track the correlation peak in the frequency domain.
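A minimal sketch of the Early-minus-Late discrimination described above, assuming rectangular chips, an illustrative ±1 code and a normalized early-minus-late power discriminator (one of several discriminator functions in common use):

```python
import numpy as np

rng = np.random.default_rng(1)
samples_per_chip = 8
chips = rng.choice([-1.0, 1.0], size=1023)   # illustrative +/-1 code
code = np.repeat(chips, samples_per_chip)    # rectangular chip pulses

def el_discriminator(incoming, code, half_chip):
    """Normalized early-minus-late power discriminator (+/-0.5 chip spacing)."""
    E = float(np.dot(incoming, np.roll(code, -half_chip)))  # Early replica
    L = float(np.dot(incoming, np.roll(code, +half_chip)))  # Late replica
    return (E**2 - L**2) / (E**2 + L**2)

half = samples_per_chip // 2

d_aligned = el_discriminator(code, code, half)              # ~0 when locked
d_lagging = el_discriminator(np.roll(code, 2), code, half)  # incoming lags the Prompt
```

When the loop is locked the Early and Late correlations balance and the discriminator is near zero; when the incoming code lags the Prompt, the Late correlation dominates and the negative output steers the code generator to delay the local replica.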

When both the DLL and PLL are locked, the incoming signal is despread and converted to baseband. The navigation data bits appear at the output of the in-phase Prompt correlator and can be decoded. In addition, with the DLL locked, the local and the incoming codes are aligned. Referring to the local code, the receiver knows exactly when a new code period starts and is able to recognize navigation data bits and the boundaries of the navigation message. The receiver stays synchronized to the tracked satellites, continuously counting the number of received chips, full code periods, navigation bits and message frames. These counters are fundamental to measure the misalignment among the channels tracking different satellites, and are used to compute the pseudoranges. For the sake of completeness, note that real receivers generally use architectures more complex than that reported in Fig. 3. For example, a Frequency Lock Loop (FLL) is employed to refine the rough estimate performed by the signal acquisition and ease the PLL lock, reducing the transient time between the signal acquisition and the steady-state carrier/code tracking. Recently, new techniques based on digital signal processing have been developed to obtain higher precision and a reduced computational load, improving the robustness against noise and interference. In this section, we have recalled only some fundamentals of code and carrier tracking, with the goal of providing the necessary background for the following part of the chapter.

#### **3.3 Navigation message demodulation, frame and page synchronization**

Once the tracking loops are locked (i.e. the local code keeps the alignment with the incoming code and the local carrier is exactly a replica of the received one), the navigation data bits appear at the output of the Prompt correlator, on the in-phase branch of the tracking loops. Considering the GPS L1 C/A code, using an integration time equal to the code period, we obtain a value every millisecond. However, due to the low signal power, real receivers usually set the integration time to 20 ms, which is the inverse of the navigation data rate (i.e. 50 Hz). Fig. 4.a shows 1 second of normalized navigation data bits at the Prompt correlator output, using an integration time of both 1 ms (blue) and 20 ms (red). The same example can be repeated for the Galileo E1-B signal. In this case, a proper value of the integration time is 4 ms, which corresponds to both the code period and the inverse of the navigation data rate. An example of navigation data bits, recovered by processing the signal transmitted by a simulated Galileo satellite, is shown in Fig. 4.b.
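The effect of extending the integration time over a whole data bit can be sketched as follows (the bit amplitude and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative in-phase Prompt outputs over one 20 ms GPS data bit:
# twenty 1 ms correlation values of a bit of amplitude 1 buried in noise
bit = -1.0
prompt_1ms = bit * 1.0 + 0.5 * rng.standard_normal(20)

# Individual 1 ms values are noisy; accumulating over the whole 20 ms
# bit interval raises the post-correlation SNR before the sign decision
bit_estimate = float(np.sign(prompt_1ms.sum()))
```

Summing the twenty coherent 1 ms correlations multiplies the signal amplitude by 20 while the noise standard deviation grows only by √20, which is why the 20 ms (red) trace in Fig. 4.a is so much cleaner than the 1 ms (blue) one.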


Fig. 4. Navigation data bits at the output of the in-phase Prompt correlator for both GPS (a) and Galileo (b) signals

The stream of data bits must be decoded to recover the message broadcast by the satellite. The navigation data follow the scheme defined in the GPS Interface Control Document (Arinc Research Corporation, 1991) in case of GPS, while all the information regarding the navigation message of the Galileo Open Service can be found in (European Commission, 2010).

Since the navigation format is out of the scope of this chapter, we give just an introduction to the topic by showing the general structure of both the GPS and Galileo messages. In Fig. 5 the overall navigation data structure of the GPS L1 C/A code is depicted.


Fig. 5. Structure of the navigation message included in the GPS civil signal, transmitted on the L1 frequency

The rate of the navigation data bits is 50 bits per second. The whole message is 12.5 minutes long and is divided into 25 frames. Each frame lasts 30 seconds and is further divided into 5 subframes, six seconds long. Each subframe of the navigation message always starts with two special words, the Telemetry (TLM) and the Handover Word (HOW).

In case of the Galileo E1 signal, the complete navigation message is transmitted on the data channel (E1-B) as a sequence of frames. A frame is composed of several sub-frames, and a sub-frame, in turn, is composed of several pages. The page is the basic structure to build the navigation message. Fig. 6 shows the structure of the Galileo data and an example of page for the E1-B message.

Prior to the navigation data decoding, the receiver seeks the preamble, a defined sequence of *n* bits that marks the beginning of a subframe for the GPS L1 C/A, and of a page for the Galileo E1-B. A simple but efficient way to detect the preamble is to correlate the navigation data stream with a local binary sequence equal to the preamble. A maximum is detected when such a local sequence is aligned with the preamble. Naturally, the bit pattern used for the preamble can occur anywhere in the received data stream, thus an additional check must be carried out to authenticate the real preamble (e.g. in the case of GPS, the preamble is validated only when the correlation maximum is found exactly every 6 seconds). When the beginning of the subframe is identified, the content of the subframe can be decoded. The receiver retrieves all the orbital parameters (i.e. ephemeris) necessary to compute the satellite position

corresponding to the transmission of the subframe. Through the process used for navigation data decoding, the receiver is able to understand which subframe and word a certain bit belongs to. In this way, the receiver can have an exact, precise and real-time "comprehension" of each sample/bit broadcast by the satellite. This aspect will be fundamental in the computation of the pseudoranges, as will be explained in the next section.

Fig. 6. Galileo I/NAV navigation message structure (a) and I/NAV nominal page with bits allocation (b)
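The preamble correlation described in this section can be sketched as follows. The 8-bit GPS TLM preamble (10001011) is used, with an assumed 0 → −1, 1 → +1 bit mapping; the stream length and preamble position are illustrative:

```python
import numpy as np

# GPS TLM preamble (10001011), mapped here to +/-1 bit values
preamble = np.array([1, -1, -1, -1, 1, -1, 1, 1], dtype=float)

rng = np.random.default_rng(3)
bits = rng.choice([-1.0, 1.0], size=300)   # illustrative demodulated bit stream
start = 120
bits[start:start + 8] = preamble           # embed one true preamble occurrence

# Correlate the bit stream with the local preamble pattern:
# |corr| == 8 wherever all 8 bits match (or are all inverted)
corr = np.correlate(bits, preamble, mode="valid")
candidates = np.flatnonzero(np.abs(corr) == 8)
```

Note that `candidates` may also contain chance matches of the 8-bit pattern inside the random data, which is exactly why the receiver must confirm the detection with an additional check such as the 6-second subframe periodicity.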

## **4. Performing range measurements using GNSS signals**

In this section we focus on the measurements of the pseudorange, describing some methods commonly used to estimate the distance between the satellite and the user's receiver.

So far, we have explained how the detection of a preamble is an effective way to recognize the beginning of a subframe (a page in case of Galileo) and the starting point for decoding the navigation message. Here and in the following we want to introduce how GNSS receivers use the detection of a preamble to compute a valid pseudorange and estimate the user's position and velocity. According to (Borre et al., 2006), the pseudorange estimations can be divided into two sets of computations: the first is devoted to find the initial set of pseudoranges, the second keeps track of the pseudoranges after the first set is estimated.

#### **4.1 Computation of the first set of pseudoranges**

Before proceeding with the explanation of the pseudorange computation, it is useful to recall some hypotheses that will be taken as true from now on.

All the clocks on board the satellites are assumed perfectly synchronized to a reference GNSS time-scale. In other words, we assume that the first chip of a definite subframe/page leaves the satellites at the same instant $t_{tx}^{GNSS}$. In addition:

• all the satellites belonging to the same system (i.e. GPS, GLONASS, Galileo) are synchronized with each other, but satellites of different GNSSs are not;
• the receiver clock is not synchronized with the GNSS time-scale (as the school clock in the example of section 2 was not synchronized to Alice and John's watches). The actual time at the receiver can be written as $t^{R} = t^{GNSS} - \Delta b$, where $t^{GNSS}$ is the actual time on the GNSS time-scale and $\Delta b$ is the bias with respect to the clock on board of the satellite. For the sake of simplicity, we assume that $\Delta b$ remains constant over time. In the notation, the superscripts refer to the time-scale, while the subscripts identify definite time instants;
• all the examples and equations are given for the GPS satellites only, but the explanation can be considered valid and easily extended to other GNSSs too.
With these hypotheses in mind, once the preamble has been correctly detected, the navigation data of each satellite in view can be tagged with additional information, such as the corresponding subframe, the number of bits read from the beginning of that subframe, as well as the number of samples processed up to that time instant by the acquisition and tracking stages. In this way, it is easy to make comparisons among channels and calculate the time delay of the satellites. In fact, "*during the collection of the digitized data there is no absolute time reference and the only time reference is the sampling frequency. Moreover, the pseudorange can be measured only in a relative way because the clock bias of the receiver is an unknown quantity*" (Tsui, 2000). Therefore the pseudorange can be computed as the distance (or time) between two reference points. The way the reference points are chosen makes the main difference between the two methods commonly used in commercial receivers for the pseudorange computation, which we call "**common transmission time**" and "**common reception time**", respectively.
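One way to picture this per-channel bookkeeping is the following illustrative structure (field names and layout are assumptions for the example, not taken from the chapter):

```python
from dataclasses import dataclass

@dataclass
class ChannelCounters:
    """Illustrative per-channel bookkeeping kept while tracking a satellite."""
    subframes: int   # full subframes decoded since the detected preamble
    bits: int        # bits decoded within the current subframe
    ms: int          # code periods (1 ms each for GPS L1 C/A) within the current bit

    def time_since_preamble_s(self) -> float:
        # GPS L1 C/A: 6 s per subframe, 20 ms per bit, 1 ms per code period
        return self.subframes * 6.0 + self.bits * 0.020 + self.ms * 0.001

ch = ChannelCounters(subframes=2, bits=37, ms=5)
elapsed = ch.time_since_preamble_s()   # 12.745 s after the preamble
```

Comparing such counters across channels is what allows the receiver to measure the relative delays used in the two methods below.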

#### **4.1.1 Common transmission time**


According to this approach, since all the satellites are synchronized, they broadcast the same preamble at the same moment, which is received by the user at different instants, due to different propagation delays. This approach follows what pragmatically happens in a real scenario where the satellites have different distances with respect to the user.

The left side of Fig. 7 represents the same subframe transmitted by the satellites at $t_{tx}^{GPS}$. On the right, Fig. 7 shows the local code displacements at the receiver, assuming four tracked satellites. The blue rectangle is the TLM word of the subframe, which is received at different instants $t_{rx,i}^{GPS}$ because of the different traveling times $\tau_i$. These can be written as:

$$
\tau_i = t_{rx,i}^{GPS} - t_{tx}^{GPS} \tag{9}
$$

where $t_{rx,i}^{GPS}$ corresponds to the time instant $t_{rx,i}^{R} = t_{rx,i}^{GPS} - \Delta b$ on the receiver time-scale.

The receiver recovers $t_{tx}^{GPS}$ by decoding the HOW of the previous subframe, which includes a truncated version of the absolute GPS time. The receiver reads $t_{rx,i}^{R}$, but it is not able to compute $t_{rx,i}^{GPS}$, since $\Delta b$ is unknown. If the receiver were able to compute $\tau_i$, the distances between the receiver and the satellites would simply be obtained as:

$$
\rho_i = \tau_i \cdot c \tag{10}
$$

where *c* stands for the speed of light.

Fig. 7. Pseudorange computation based on "common transmission time", evaluating the beginning of a subframe for a GPS system

Referring to Fig. 7, the satellite tracked on channel 1 is taken as reference, since the subframe transmitted at $t_{tx}^{GPS}$ arrives first. In other words, the satellite tracked on channel 1 has the shortest distance with respect to the receiver. Since the same subframe from the other satellites is received at different times, the receiver has to count, for each tracked satellite, the time elapsed since the reception of the subframe on the reference channel. In this regard, it is important to stress that the measurement of the delay between the reference satellite and the others in view does not necessarily have to be performed on the beginning of a subframe, but it must be computed consistently (i.e. with respect to the same word or data bit belonging to the same subframe).

When the receiver is able to compute, for each tracked satellite, the relative time difference with respect to the reference channel, the relative pseudoranges can be evaluated. In formulas, this difference can be written as:

$$
\delta_i = t_{rx,i}^{R} - t_{rx,1}^{R} \quad \forall i = 1, \dots, 4 \tag{11}
$$

The $\delta_i$ are measured through time counters that are continuously updated by the tracking structures of each channel. With these time differences, the set of distances between the receiver and the satellites can be written as follows:

$$
\rho_i = \rho_1 + c \cdot \Delta b + c \cdot \delta_i \quad \forall i = 1, \dots, 4 \tag{12}
$$

where:

• $\delta_i$ are the time differences between the beginning of the subframe received on channel $i$ and the beginning of the subframe received on the reference channel, with $\delta_i > 0 \;\, \forall i \neq 1$ and $\delta_1 = 0$;
• $\rho_1$ is the reference pseudorange and corresponds to the satellite closest to the user. Even if the receiver does not know the distance between this satellite and the user, a realistic value of $\rho_1$ can be set as an approximation. In fact, considering that a typical travel time from the satellites to the Earth is on the order of 65–83 ms, an appropriate value could be $\tau_1 = 70$ ms (Borre et al., 2006), thus $\rho_1 = \tau_1 \cdot c = 20985.47$ km. It is important to note that such an approximated reference pseudorange does not affect the computation of the user's position, since the error due to this approximation falls into the term that takes into account the clock bias;
• $\Delta b$ has not been determined yet; it will be obtained by solving the set of equations (6).
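As a numerical sketch of equation (12), under the $\rho_1 \approx c \cdot 70$ ms approximation discussed above (the δ values are illustrative):

```python
c = 299_792_458.0                       # speed of light [m/s]

tau_1 = 70e-3                           # assumed travel time of the closest satellite [s]
rho_1 = c * tau_1                       # reference pseudorange (~20985.47 km)

# Illustrative delays of each channel w.r.t. the reference one [s]
delta = [0.0, 3.2e-3, 5.1e-3, 8.7e-3]   # delta_1 = 0 by definition

delta_b = 0.0                           # receiver clock bias: unknown, solved in the PVT
rho = [rho_1 + c * delta_b + c * d for d in delta]
```

Any error in the 70 ms guess is common to all four equations and is therefore absorbed by the clock-bias unknown when the PVT system is solved.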


#### **4.1.2 Common reception time**


The second approach performs the pseudorange estimation by setting a common reception time $t_u^R$ over all the channels. Also in this case, the reference channel is the one that receives the subframe transmitted at $t_{tx}^{GPS}$ first. For all the tracked satellites (including the reference one), the receiver counts the elapsed time between the reception of the subframe and $t_u^R$. This means that the receiver measures the delays as:

$$
\delta_i = t_u^R - t_{rx,i}^R \quad \forall i = 1, \dots, 4 \tag{13}
$$

Fig. 8 depicts the method of fixing a unique time of reception for four GPS satellites in view.

Fig. 8. Pseudorange computation based on "common reception time", evaluating the start of the subframe for a GPS system

Once the $\delta_i$ have been computed, the receiver is able to calculate the pseudoranges easily. This can be accomplished by evaluating the delta-difference $\Delta_i$ of the delay of each satellite with respect to that of the reference one. The relative difference $\Delta_i$ is stated as:

$$
\Delta_i = \delta_1 - \delta_i \quad \forall i = 1, \dots, 4 \tag{14}
$$
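A small sketch of the bookkeeping in equations (13)–(14). The reception times are illustrative; the convention $\delta_i = t_u^R - t_{rx,i}^R$ is used, so that the resulting $\Delta_i$ are non-negative, as in the common-transmission-time case:

```python
# Illustrative reception times of the same subframe on four channels,
# on the receiver time-scale [s]; channel 0 is the reference (received first)
t_rx = [0.0700, 0.0732, 0.0751, 0.0787]
t_u = 0.0800                           # common reception time chosen by the receiver

delta = [t_u - t for t in t_rx]        # elapsed time from each subframe start to t_u
Delta = [delta[0] - d for d in delta]  # relative delays w.r.t. the reference channel

# Delta[0] == 0 and Delta[i] >= 0: these coincide with the relative delays
# measured by the common-transmission-time method
```

Note that $t_u^R$ cancels out in the differences: the receiver is free to pick any convenient reception epoch, which is what makes this method attractive for real-time operation.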

Estimation of Satellite-User Ranges Through GNSS Code Phase Measurements 121

This section completes the chapter and deals with the estimation of the user's PVT that comes after the measurement of a set of pseudoranges, for at least four satellites in view.

In order to have an accurate estimate of the user's position, the receiver has to consider additional error sources that typically affect the measured pseudorange and that have to be compensated. These sources include atmospheric effects (e.g. ionosphere and troposhere, that generate a delay in the signal broadcast by the satellite) and other kinds of noise related

A valid PVT can be estimated after the receiver retrives the satellites' positions, (i.e.: *<sup>i</sup> x* , *<sup>i</sup> y* , *<sup>i</sup> z* as stated in equation (6)) from the navigation message. To compute the satellite position, the receiver needs the ephemeris and the time of transmission, which is usually referred to the beginning of the subframes. All the information the receiver needs is embedded in the navigation message. The time of transmission can be read every 6 seconds at the beginning of a subframe in a specific word that corresponds to the HOW. From the HOW the receiver retrieves a truncated version of the absolute GPS time (TOW). This number is referred to as Z-count. The Z-count is the number of seconds passed since the last GPS week rollover in units of 1.5s. The truncated Z-count in the HOW corresponds to the time of transmission of the next navigation data subframe. To get the time of transmission of the current subframe, the Z-count should be multiplied by 6 and 6s should be subtracted

Therefore, if we assume to perform the pseudorange estimation at the beginning of a new subframe, the time of transmission will be exactly equal to the value reported in the HOW of the previous subframe. Otherwise, if we implement the computation of the pseudorange in a different instant, we have to count the time elapsed between the beginning of the subframe and that instant. The way the time of transmission is computed represents the main difference between the two aforementioned methods (i.e. "common transmission time" and "common reception time"). According to the first method, all the satellites transmit the signal at the same time and, if we assume to calculate the pseudorange at the beginning of a subframe, this correspond to the TOW. On the contrary, if we consider the second approach, we have to keep in mind a different time of transmission for each satellite. Practically speaking, we have to sum up the TOW with the δ*i* delay that elapsed from the

respect to the Earth, it follows that the δ*i* delay will vary according to the satellite under

When four satellite have been correctly tracked, the full set of equations can be rewritten after having removed the satellite offset and atmospheric effects. According to the "common

2 2 21 2

ρ

ρ

ρ

ρ  Δ

Δ

Δ

Δ

 δ

(18)

 δ

 δ

*x x y y z z c bc x x y y z z c bc x x y y z z c bc*

⎪ − + − + − +⋅ +⋅

<sup>⎪</sup> <sup>−</sup> + − + − +⋅ +⋅ <sup>⎪</sup> <sup>⎪</sup> <sup>−</sup> + − + − +⋅ +⋅ <sup>⎩</sup>

3 3 31 3

4 4 41 4

2 22 1 1 11 2 22

<sup>⎧</sup> − + − + − +⋅ <sup>⎪</sup>

*x x y y z z cb*

( ) ( ) ( )= ( ) ( ) ( )= ( ) ( ) ( )= ( ) ( ) ( )=

*u uu u uu u uu u uu*

2 22

2 22

transmission time", the equations can be stated as in equation (6):

*ut* . Since every satellite has a different distance with

**5. Position, velocity and time (PVT) computation** 

to the presence of multipath and interference.

from the result (Borre et al. 2006).

starting point of the subframe and *<sup>R</sup>*

⎪

⎨

consideration.

and consequently, by modifying equation (12), the pseudoranges can be written as:

$$
\Delta \rho\_i = \rho\_1 + \mathbf{c} \cdot \Delta b + \mathbf{c} \cdot \Delta\_{\parallel} \quad \forall i = 1, \ldots, 4 \tag{15}
$$

where, as in the case of "common transmission time":


This second method is usually employed in commercial GPS receivers. The main reason behind this choice is the relative simplicity and suitability of that approach in real-time implementations, since it does not require to wait until all the channels have received the same data bit (e.g. the beginning of the same subframe) to compute the pseudoranges. This concept gets more clear if we consider that, during the data demodulation and the tracking process, the receiver continously counts the number of samples processed on that channels, as well as the number of frames, subframes and data bits decoded. As a consequence, through a system of counters, it becomes easy to compute the time difference Δ*<sup>i</sup>* among the channels at a certain *<sup>R</sup> ut* .

#### **4.2 Computation of the subsequent sets of pseudoranges**

Once the initial set ot pseudoranges has been computed, subsequent pseudoranges can be estimated. In this case, the computation of the reference pseudorange (i.e. ρ<sup>1</sup> ) can be refined with respect to the approximated value set during the first estimate (see section 4.1.1 for details). In fact, at this stage, the receiver has already computed the first estimate of its position and is able to accurately calculate the geometrical distance between the satellite and itself.

As far as the pseudoranges of the other satellites in view are concerned, let us suppose that the receiver performs a new PVT computation every second.

According to the method based on the common transmission time, the receiver has to measure a delay of 1 s on the reference channel. Considering that a GPS navigation data bit lasts 20 ms, after 1 second 50 bits have been decoded for the reference satellite, starting from the beginning of the subframe. In order to estimate the time difference, the receiver must wait until each channel has demodulated 50 bits after the beginning of the subframe. Then, the pseudoranges can be computed as stated in equation (6) and the process repeats over time.

On the contrary, if the receiver follows the "common reception time" approach, it moves the reception time *t*<sub>u</sub><sup>R</sup> ahead by 1 s before measuring the time difference among the channels. Again, it is important to stress that this reception time is fixed by the receiver and is independent of the number of navigation bits that have been read for each tracked satellite.

The receiver can compute the user's position at a rate much higher than 1 Hz. If we take as the reference time the beginning of a new C/A code period (i.e. every millisecond), the receiver can in principle update the PVT at rates up to 1 kHz.

## **5. Position, velocity and time (PVT) computation**


This section completes the chapter and deals with the estimation of the user's PVT, which follows the measurement of a set of pseudoranges for at least four satellites in view.

In order to have an accurate estimate of the user's position, the receiver has to consider additional error sources that typically affect the measured pseudoranges and that have to be compensated. These sources include atmospheric effects (e.g. the ionosphere and the troposphere, which delay the signal broadcast by the satellite) and other kinds of noise related to the presence of multipath and interference.

A valid PVT can be estimated after the receiver retrieves the satellites' positions (i.e. *x<sub>i</sub>*, *y<sub>i</sub>*, *z<sub>i</sub>*, as stated in equation (6)) from the navigation message. To compute a satellite position, the receiver needs the ephemeris and the time of transmission, which is usually referred to the beginning of the subframes. All the information the receiver needs is embedded in the navigation message. The time of transmission can be read every 6 seconds at the beginning of a subframe, in the specific word that corresponds to the HOW. From the HOW the receiver retrieves a truncated version of the absolute GPS time (TOW), a number referred to as Z-count. The Z-count counts the time passed since the last GPS week rollover in units of 1.5 s. The truncated Z-count in the HOW corresponds to the time of transmission of the next navigation data subframe. To get the time of transmission of the current subframe, the truncated Z-count should be multiplied by 6 and 6 s should be subtracted from the result (Borre et al. 2006).
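The HOW arithmetic described above can be captured in a one-line helper (a sketch following Borre et al. 2006; the function name and the example value are illustrative):

```python
def subframe_transmit_time(truncated_z_count):
    """Time of transmission (seconds of GPS week) of the *current* subframe.
    The truncated Z-count read in the HOW refers to the start of the *next*
    subframe, in units of 6 s, hence multiply by 6 and subtract one
    subframe duration (6 s)."""
    return truncated_z_count * 6 - 6

# Illustrative value: a truncated Z-count of 51957 places the start of the
# current subframe at second 311736 of the GPS week.
print(subframe_transmit_time(51957))
```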

Therefore, if we assume to perform the pseudorange estimation at the beginning of a new subframe, the time of transmission will be exactly equal to the value reported in the HOW of the previous subframe. Otherwise, if we compute the pseudorange at a different instant, we have to count the time elapsed between the beginning of the subframe and that instant. The way the time of transmission is computed represents the main difference between the two aforementioned methods (i.e. "common transmission time" and "common reception time"). According to the first method, all the satellites transmit the signal at the same time and, if we assume to calculate the pseudorange at the beginning of a subframe, this time corresponds to the TOW. On the contrary, if we consider the second approach, we have to keep in mind a different time of transmission for each satellite. Practically speaking, we have to add to the TOW the delay δ<sub>i</sub> that elapsed between the starting point of the subframe and *t*<sub>u</sub><sup>R</sup>. Since every satellite is at a different distance from the user, the delay δ<sub>i</sub> varies according to the satellite under consideration.

When four satellites have been correctly tracked, the full set of equations can be rewritten after having removed the satellite clock offset and the atmospheric effects. According to the "common transmission time" method, the equations can be stated as in equation (18):

$$\begin{cases} \sqrt{(x\_1 - x\_u)^2 + (y\_1 - y\_u)^2 + (z\_1 - z\_u)^2} = \rho\_1 + c \cdot \Delta b\\ \sqrt{(x\_2 - x\_u)^2 + (y\_2 - y\_u)^2 + (z\_2 - z\_u)^2} = \rho\_1 + c \cdot \Delta b + c \cdot \delta\_2\\ \sqrt{(x\_3 - x\_u)^2 + (y\_3 - y\_u)^2 + (z\_3 - z\_u)^2} = \rho\_1 + c \cdot \Delta b + c \cdot \delta\_3\\ \sqrt{(x\_4 - x\_u)^2 + (y\_4 - y\_u)^2 + (z\_4 - z\_u)^2} = \rho\_1 + c \cdot \Delta b + c \cdot \delta\_4 \end{cases} \tag{18}$$
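To make the solution of such a system concrete, the sketch below linearizes it and iterates least-squares corrections (the Gauss-Newton scheme used with the LS method of section 5), jointly estimating the user coordinates and the clock term c·Δb. The satellite coordinates, user position and bias are invented numbers for illustration, and the δ<sub>i</sub> terms are assumed already folded into the pseudoranges:

```python
import numpy as np

# Invented geometry: four satellite ECEF positions [m] and a true user state.
sats = np.array([
    [15600e3,  7540e3, 20140e3],
    [18760e3,  2750e3, 18610e3],
    [17610e3, 14630e3, 13480e3],
    [19170e3,  6100e3, 18390e3],
])
u_true = np.array([1917e3, 6029e3, 1100e3])   # user ECEF position [m]
b_true = 3000.0                               # c * clock bias [m]

# Simulated pseudoranges: geometric range plus the common clock term.
rho = np.linalg.norm(sats - u_true, axis=1) + b_true

# Iterative linearized least squares (Gauss-Newton), starting at the
# Earth's center with zero clock bias.
x = np.zeros(4)                               # [x_u, y_u, z_u, c*db]
for _ in range(10):
    r = np.linalg.norm(sats - x[:3], axis=1)          # predicted ranges
    H = np.hstack([-(sats - x[:3]) / r[:, None],      # -unit line-of-sight
                   np.ones((4, 1))])                  # clock-bias column
    dx = np.linalg.lstsq(H, rho - (r + x[3]), rcond=None)[0]
    x += dx

print(x[:3], x[3])   # converges to u_true and b_true
```

With more than four satellites the same loop applies unchanged; the `lstsq` call then returns the least-squares correction instead of an exact solution.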


Time of transmission [s] for GPS SV 30:

| Common transmission time | Common reception time |
|--------------------------|-----------------------|
| 311736 | 311736.277662467 |
| 311737 | 311737.277664239 |
| 311738 | 311738.27766595 |
| 311739 | 311739.277667661 |
| 311740 | 311740.277669371 |
| 311741 | 311741.277671082 |
| 311742 | 311742.277672854 |
| 311743 | 311743.277674503 |
| 311744 | 311744.277676275 |
| 311745 | 311745.277677986 |

Table 1. Different time of transmission according to the "common transmission" and "common reception" methods

where ρ<sub>1</sub> corresponds to the reference channel, relative to the satellite with the shortest path to the user. If we follow the second approach (i.e. "common reception time"), equation (18) remains the same, except that the time delay δ<sub>i</sub> has to be substituted by Δ<sub>i</sub>.

If two different GNSS systems are tracked and used for the PVT calculation, equation (18) has to be slightly modified. For example, let us assume to have 4 GPS and 2 Galileo satellites in view. Following the "common transmission time" method, we can rewrite equation (18) as:

$$\begin{cases} \sqrt{(x\_1^{GPS} - x\_u)^2 + (y\_1^{GPS} - y\_u)^2 + (z\_1^{GPS} - z\_u)^2} = \rho\_1 + c \cdot \Delta b\\ \sqrt{(x\_2^{GPS} - x\_u)^2 + (y\_2^{GPS} - y\_u)^2 + (z\_2^{GPS} - z\_u)^2} = \rho\_1 + c \cdot \Delta b + c \cdot \delta\_2\\ \sqrt{(x\_3^{GPS} - x\_u)^2 + (y\_3^{GPS} - y\_u)^2 + (z\_3^{GPS} - z\_u)^2} = \rho\_1 + c \cdot \Delta b + c \cdot \delta\_3\\ \sqrt{(x\_4^{GPS} - x\_u)^2 + (y\_4^{GPS} - y\_u)^2 + (z\_4^{GPS} - z\_u)^2} = \rho\_1 + c \cdot \Delta b + c \cdot \delta\_4\\ \sqrt{(x\_1^{Gal} - x\_u)^2 + (y\_1^{Gal} - y\_u)^2 + (z\_1^{Gal} - z\_u)^2} = \rho\_1^{Gal} + c \cdot \Delta b + c \cdot \Delta b\_{GPS/Gal}\\ \sqrt{(x\_2^{Gal} - x\_u)^2 + (y\_2^{Gal} - y\_u)^2 + (z\_2^{Gal} - z\_u)^2} = \rho\_1^{Gal} + c \cdot \Delta b + c \cdot \Delta b\_{GPS/Gal} + c \cdot \delta\_2^{Gal} \end{cases} \tag{19}$$

When we work with more than one GNSS, we have to keep in mind that different GNSSs are not synchronized with each other. This requires introducing additional unknowns that take into account the time bias between the GNSS systems. For example, if we consider a GPS/Galileo receiver as stated in equation (19), we need a variable that estimates the offset between the GPS and the Galileo time scales. Finally, the receiver is able to compute a valid position and velocity. One of the most commonly used algorithms for the position estimation is based on the least-squares (LS) method. The description of this technique is out of the scope of this chapter, and much material can be found in the scientific literature (Bjork, 1990; Borre et al., 2006; Kaplan & Hegarty, 2006).

Another noteworthy technique, used in most commercial receivers to improve the accuracy of the PVT computed with the LS approach, is the Kalman filter (Anderson & Moore, 1979; Brown & Hwang, 1997; Kalman, 1960). By combining a system model with the measurements, this algorithm is able to smooth the solution calculated by the LS, as well as to provide an estimate of the user's position even when fewer than four satellites are tracked (e.g. by relying on the modeled system only).
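As a toy illustration of the smoothing idea (not the filter actually implemented in receivers), the scalar sketch below blends a sequence of noisy LS position fixes for a static user; the model and the noise variances are assumptions:

```python
# Scalar Kalman filter smoothing noisy LS position fixes (one coordinate).
import random

random.seed(1)
truth = 100.0                                   # static user coordinate [m]
fixes = [truth + random.gauss(0, 5.0) for _ in range(50)]   # LS solutions

x, P = 0.0, 1e6     # state estimate and its variance (diffuse start)
Q, R = 0.01, 25.0   # process and measurement noise variances (assumed)
for z in fixes:
    P += Q                      # predict: static model, x unchanged
    K = P / (P + R)             # Kalman gain
    x += K * (z - x)            # update with the new LS fix
    P *= (1 - K)

print(f"filtered: {x:.2f} m")   # close to the 100 m truth
```

The filtered estimate fluctuates far less than the individual 5 m-noise fixes, which is exactly the smoothing effect mentioned above; a receiver-grade filter would instead propagate a position-velocity-clock state in three dimensions.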

#### **5.1 Examples using GPS and Galileo data**

This section provides an example of the evaluation of the user's position, presenting the results obtained with the LS algorithm. Most of these results have been taken from (Rao et al. 2011).

Taking as an example the GPS satellite with PRN 30, Fig. 9 shows the comparison between the pseudorange estimates obtained with the common transmission time and the common reception time methods. The blue marks represent the pseudoranges computed by considering the "common transmission time", while the red ones correspond to observables calculated by fixing a unique time of reception. Though these two methods are conceptually different, as expected no significant differences can be noticed: the pseudorange estimates are substantially similar, but shifted in time due to the different computation instants.


Fig. 9. Comparison between pseudoranges computed by using common reception time and common transmission time

If we suppose to start the PVT computation at the beginning of a subframe, updating it every second, then with the common transmission time method the first set of pseudoranges corresponds to a time of transmission equal to the TOW, and the subsequent sets to integer numbers of seconds after it. On the contrary, with the common reception time method, the pseudoranges are computed at a transmission time that is not the same as the TOW, but changes according to the reception time fixed in the receiver. An example of real values of transmission time, following the two different approaches, is reported in Table 1 for the GPS satellite with PRN 30.


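Behind both methods, a raw pseudorange is simply the signal travel time scaled by the speed of light, ρ = c·(*t*<sub>u</sub><sup>R</sup> − *t*<sub>tx</sub>). Using the first transmission time of Table 1 and an invented reception time (the chapter does not give one), the computation is:

```python
C = 299_792_458.0        # speed of light [m/s]

t_tx = 311736.277662467  # time of transmission from Table 1 [s of GPS week]
t_rx = 311736.354        # reception time fixed by the receiver [s]
                         # (illustrative value, not from the chapter)

rho = C * (t_rx - t_tx)  # raw pseudorange [m]
print(f"{rho:.0f} m")    # on the order of 2.3e7 m, as on Fig. 9's y-axis
```

A travel time of roughly 77 ms corresponds to the 20000-some kilometres separating a GPS satellite from a user on the Earth's surface, consistent with the magnitude of the raw pseudoranges plotted in Fig. 9.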


Anderson, O.D.B. & Moore, J.B. (1979). *Optimal Filtering*, Prentice Hall Inc., ISBN 0-486-43938-0, New Jersey, USA.

Arinc Research Corporation. (1991). *Interface control document. ICD-GPS-200*, Available from: <http://www.navcen.uscg.gov/pubs/gps/icd200/default.html>.

Bjork, A. (1990). Least Squares Methods, In: *Handbook of Numerical Analysis Vol. 1 - Finite Difference Methods, Part 1, Solution Equations in RN*, Elsevier, pp. 466-647, North-Holland, ISBN 0-4447-0366-7, Amsterdam, Holland.

Borre, K.; Akos, D.M.; Bertelsen, N.; Rinder, P. & Jensen, S.H. (2006). *A Software-defined GPS and Galileo Receiver: A Single-frequency Approach*, Birkhäuser, ISBN 0-8176-4390-7, Boston, USA.

Brown, R.G. & Hwang, P.Y.C. (1997). *Introduction to Random Signals and Applied Kalman Filtering*, John Wiley & Sons, Inc., ISBN 0-471-12839-2, New York, USA.

European Commission. (2010). *Open Service Signal-In-Space Interface Control Document. OS-SIS-GALILEO-ICD*, Available from: <http://ec.europa.eu/enterprise/policies/satnav/galileo/open-service/index_en.htm>.

Jonge de, P.J. & Teunissen, P.J.G. (1996). Computational aspects of the LAMBDA method for GPS ambiguity resolution. *Proceedings of ION GPS-96, 9th International Technical Meeting of the Satellite Division of the Institute of Navigation*, Kansas City, Missouri, Sept. 17-20.

Kalman, R.E. (1960). A new approach to linear filtering and prediction problems. *Transactions of the ASME, Journal of Basic Engineering*, Vol. 82, Series D, pp. 35-45.

Kaplan, E.D. & Hegarty, C.J. (2006). *Understanding GPS: Principles and Applications*, Artech House Inc., ISBN 1-58053-894-0, Boston, USA.

Kay, S.M. (1993). *Fundamentals of Statistical Signal Processing, Volume I: Estimation Theory*, Prentice Hall Inc., ISBN 0-13-345711-7, New Jersey, USA.

Kay, S.M. (1998). *Fundamentals of Statistical Signal Processing, Volume II: Detection Theory*, Prentice Hall Inc., ISBN 0-13-504135-X, New Jersey, USA.

Misra, P. & Enge, P. (2001). *Global Positioning System: Signals, Measurements, and Performance*, Ganga-Jamuna Press, ISBN 0-9709544-1-7, Massachusetts, USA.

Parkinson, W.B. & Spilker, J.J. (1996). *Global Positioning System: Theory and Applications, Volume I and II*, American Institute of Aeronautics and Astronautics, Inc., Washington, USA.

An example of position estimation using the LS method is reported in Fig. 10. The LS algorithm has been run on the data sets of pseudoranges computed according to the two techniques, using both real GPS and simulated Galileo satellite signals.

Fig. 10. PVT solution for a joint GPS/Galileo receiver, using "common reception time" and "common transmission time" for the pseudorange computation and an LS-based receiver

As expected, the trajectory of the user estimated by the two methods does not significantly differ, and the variance of the positioning error along the three axes X, Y, Z has the same magnitude in both cases. This confirms that using a unique time of reception, which is particularly convenient for real-time implementations, does not affect the position accuracy.

## **6. Conclusion**

In this chapter we have examined GPS code-phase measurements in order to compute precise satellite-user ranges and to estimate the receiver's position accurately. Since the clocks on board the satellites are not synchronized with the clock of the receiver, code phase measurements give pseudoranges instead of ranges. Limiting the discussion to the pseudoranges computed through code-phase estimation, two different methods have been presented. The former considers that all the satellites are synchronized and that each navigation message is received by the user at a different time instant; therefore, by measuring the time offset among all the channels and assigning a nominal travel time to the closest satellite, we are able to calculate the pseudoranges. The latter technique instead measures time delays by fixing a common reception time over all the receiver channels. The first method is the most intuitive and didactic, while the second is more suitable for real-time implementations and is often employed in commercial GPS receivers. In both approaches, a fundamental role is played by the tracking stage, whose aim is to continuously reduce the misalignment between the incoming signal and the local replica in order to perform the code de-spreading and retrieve the navigation data. On the basis of the local code evolution, GNSS receivers measure code phase delays by implementing a set of time-counters that accumulate the number of processed frames, subframes, data bits, code periods and samples for all the tracked satellites. The presented theory is completed by a real-data example of PVT computation, in the case of joint processing of GPS/Galileo signals, exploiting code-phase pseudorange measurements.

#### **7. References**



**6** 

## **GNSS in Practical Determination of Regional Heights**

Bihter Erol and Serdar Erol

*Istanbul Technical University, Civil Engineering Faculty Department of Geomatics Engineering Turkey* 

#### **1. Introduction**


Rao, M.; Falco, G. & Falletti, M. (2011). SDR Joint GPS and Galileo Receiver: from Theory to Practice. Submitted to *IET Radar, Sonar and Navigation*.

Tsui, J.B. (2000). *Fundamentals of Global Positioning System Receivers*, John Wiley & Sons, Inc., ISBN 0-471-38154-3, New York, USA.

Describing the position of a point in space basically relies on determining three coordinate components: the Cartesian coordinates (X, Y, Z) in a rectangular coordinate system, or latitude, longitude and ellipsoidal height (ϕ, λ and *h*) in an ellipsoidal coordinate system, referred to a given reference ellipsoid. Today, of course, global navigation satellite systems (GNSS) are the best and most popular method for determining ϕ, λ and *h* directly. The instantaneous determination of position and velocity on a continuous basis, and the precise coordination of time, are included among the objectives of GNSS; positioning with GNSS is based on ranging from the known positions of satellites in space to unknown positions on the earth or in space. Besides the geometrically described coordinates, however, the natural coordinates, namely the astrogeodetic latitude, longitude and orthometric height (Φ, Λ, H), which directly refer to the gravity field of the earth, are preferable for many special purposes. In particular, the orthometric heights above the geoid are required in many applications, not only in all earth sciences, but also in other disciplines such as cartography, oceanography, civil engineering, hydraulics, high-precision surveys, and last but not least geographical information systems. Traditionally, these heights are determined by combining geometric levelling and gravity observations, with millimetre precision in smaller regions. This technique, however, is very time consuming and expensive, and makes providing vertical control difficult, especially in mountainous areas which are hard to access. Another disadvantage is the loss of precision over longer distances, since each height system (regional vertical datum) usually refers to a benchmark point close to sea level, which is connected to a tide gauge station representing the mean sea level (Hofmann-Wellenhof & Moritz, 2006).

In order to counteract these drawbacks of levelling, GNSS has introduced a revolution also in the practical determination of heights in a regional vertical datum, based on the fundamental relation *H* = *h* - *N* among the heights. This equation relates the orthometric height *H* (above the geoid), the ellipsoidal height *h* (above the ellipsoid), and the geoidal undulation *N*; as such, when *h* is provided by GNSS and *N* is available from a reliable and precise digital geoid map, the orthometric height *H* can be obtained immediately. This alternative technique for the practical determination of *H* is called GNSS levelling. In recent decades the wide and increasing use of GNSS in all kinds of geodetic and surveying applications demands


were run using the reference benchmarks and tested at the independent test benchmarks of each network. The applied modelling techniques for the local geoids, ranging from simple to more complex methods, include multivariable polynomial regression and adaptive neuro-fuzzy inference systems (ANFIS). In the light of these conclusions, the roles of the topography of the area of interest, the distribution and density of the reference benchmarks, and the computation algorithm used in the precise determination of the geoid model, and therefore in the accuracy of regional heights from GNSS levelling, were investigated. In addition to the investigation and review of local geoids, a local improvement of the recent Turkish regional geoid using 31 reference benchmarks of the Çankırı GNSS/levelling network has also been included. The next section has been structured accordingly to report on these

The outline for this chapter is as follows; the first section provides background information regarding the geoid models, the height data used to conduct the research are also presented and explained. As special emphasis has been given to the error sources affecting the used heights (*h*, *H* ) and thus the accuracy of the geoid model (*N*), information relating to the global geoid models (EGM96, EGM08, EIGEN-51C, EIGEN-6C, EIGEN-6S, GGM03C, GGM03S and GOCO02S) and Turkish regional geoid models (TG03, TG09) has also been included in this section. In addition to an overview of the aforementioned models, a list of related references where they have been used in previous studies is provided for further reading purposes. Furthermore the numerical validations of the explained models are included within a sub-heading and validation results appropriately presented as graphics

The second section is focused on the methodology, and the theoretical background of the applied methods used in the calculation of the local geoid models. The local improvements of regional models are also summarized, and corresponding literature provided. The merits and limitations of each method are also referred to in this section. The outlined methodology was implemented using the three test network's data, in order to test the computation algorithms and demonstrate the role of the data and topographical patterns in geometrical modelling of the local geoids and local improvements of regional geoids. The

The last section summarises the main conclusions of this research and some practical considerations for modernizing vertical control, as parallel to GNSS development are presented. This section therefore essentially focuses on how to evaluate the achievable accuracy of GNSS levelling. A brief discussion outlines some of the key concepts for providing users of GNSS with the proper information to transform ellipsoidal heights to heights associated with a regional vertical datum. To conclude the chapter, recommendations

GNSS ellipsoidal heights are purely geometric definitions and do not refer to an equipotential surface of the earth's gravity field, as such they cannot be used in the same way as conventional heights derived from levelling in many applications. In order for GNSS derived ellipsoidal heights to have any physical meaning in application, they must be transformed to orthometric heights referring to mean sea level (geoid). This transformation

areas.

and tables.

findings are presented as graphics and tables.

for future work in this area are also provided.

**2. Global and regional geoid models: Methodology and data** 

modernization of vertical control systems of countries. The current position is that, the most developed countries are concentrating efforts on establishing a dynamic geoid based vertical datum accessible via GNSS positioning (see e.g. Rangelova et al., 2010). Besides enabling the accurate determination of most up-to-date geoidal heights under the effects of secular dynamic changes of the earth for GNSS levelling purposes, it is envisioned that this new datum concept, will also provide a compatible vertical datum with global height system, which is crucial for studies related to large scale geodynamics and geo-hazards processes.
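The GNSS levelling relation itself is a one-liner; the sketch below (with purely illustrative numbers) shows how *H* follows once *h* and *N* refer to consistent datums:

```python
def orthometric_height(h_ellipsoidal, n_geoid):
    """GNSS levelling: H = h - N, all three heights in metres and
    referred to consistent datums."""
    return h_ellipsoidal - n_geoid

# illustrative numbers only: a GNSS ellipsoidal height of 892.432 m and
# a geoid undulation of 36.845 m give H = 855.587 m
H = orthometric_height(892.432, 36.845)
```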

The accurate determination of orthometric heights via GNSS levelling requires centimetre-level accuracy of the geoid model. The achievable accuracy of the models varies depending on the computational methodology (the assumptions used) and the data available within the region of interest (Featherstone et al., 1998; Fotopoulos, 2003; Fotopoulos et al., 2001; Erol, 2007; Erol et al., 2008). Regional models provide better accuracies than global models. However, for many parts of the globe a high-precision regional geoid model is not accessible, usually due to lack of data. In these cases, depending on the required accuracy of the derived heights, one may resort to applying global geopotential model values. An alternative way of determining discrete geoid undulation values is the geometric approach. This approach, which works well in relatively small areas, utilises the relationship between the GNSS ellipsoidal and regional orthometric heights at known points to interpolate new values. In determining orthometric heights with GNSS levelling, apart from considering the error budgets of each height data set (*h*, *H*, *N* ), it is also necessary to take into account the systematic shifts and datum differences among these data sets, which also restrict the precision of the determination. Since the regional vertical datum does not necessarily coincide with the geoid surface, the discrepancies between the regional vertical datum and the geoid surface are preferably accounted for using a special technique allowing for an improved computation of the regional heights with GNSS coordinates (Fotopoulos, 2003; Erol, 2007).
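As a hedged illustration of the geometric approach, the sketch below interpolates *N* at a new point by inverse-distance weighting of the observed *N* = *h* − *H* values at nearby benchmarks. This is only one simple interpolation choice; the surface-fitting methods discussed later are the ones used in the study, and treating coordinate differences in degrees as planar distances is acceptable only over small areas:

```python
def interpolate_undulation(benchmarks, lat, lon, power=2.0):
    """Interpolate the geoid undulation N at (lat, lon) from reference
    benchmarks given as (lat, lon, h_gnss, H_levelled) tuples, using
    inverse-distance weighting of the observed N = h - H values."""
    num = den = 0.0
    for blat, blon, h, H in benchmarks:
        d2 = (blat - lat) ** 2 + (blon - lon) ** 2
        if d2 == 0.0:
            return h - H  # query point coincides with a benchmark
        w = 1.0 / d2 ** (power / 2.0)
        num += w * (h - H)
        den += w
    return num / den
```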

This chapter aims to review the geoid models available for GNSS levelling purposes in Turkey and to map the progress of the global and regional geoid models in Turkish territory. In this respect the study consists of two parts. The first part provides validation results for eight recently released global geopotential models from satellite gravity missions, namely EGM96, EGM08 (at full expansion and up to degree 360), EIGEN-51C, EIGEN-6C, EIGEN-6S, GGM03C, GGM03S and GOCO02S, as well as for the two Turkish regional geoid models TG03 and TG09, based on 28 homogeneously distributed reference benchmarks with known ITRF96 coordinates and regional orthometric heights. The validations consist of comparisons between the geoid undulations from the models and those derived from the observed height data (*h*, *H* ). It should be noted that the results of the validations were evaluated against the precisions of the models reported by the responsible associations.

The second part of this chapter focuses on the determination and testing of geoid models using the geometrical approach in small areas and their assessment in GNSS levelling. In the numerical evaluation, two geodetic networks were used, having 1205 and 109 reference benchmarks, respectively, with known ITRF96 positions and regional vertical heights, established in two neighbouring local areas. Since the topographical character, distribution and density of the reference benchmarks in each area were totally different, these networks provided an appropriate test bed for the local geoid evaluations. Within the second part, each network was evaluated independently. A group of modelling algorithms were run using the reference benchmarks and tested at the independent test benchmarks of each network. The applied modelling techniques for the local geoids (ranging from simple to more complex methods) included multivariable polynomial regression and adaptive neuro-fuzzy inference systems (ANFIS). In light of the results, the roles of the topography of the area of interest, the distribution and density of the reference benchmarks, and the computation algorithm in the precise determination of the geoid model, and therefore in the accuracy of regional heights from GNSS levelling, were investigated. In addition to the investigation and review of local geoids, a local improvement of the recent Turkish regional geoid using 31 reference benchmarks of the Çankırı GNSS/levelling network has also been included. The next section has been structured accordingly to report on these areas.
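A minimal sketch of the multivariable polynomial regression mentioned above, assuming a second-degree surface fitted by least squares with NumPy (the benchmark data and degree choice are illustrative, not the study's actual configuration):

```python
import numpy as np

def fit_geoid_surface(lat, lon, n_obs):
    """Fit a second-degree bivariate polynomial
        N = a0 + a1*x + a2*y + a3*x**2 + a4*x*y + a5*y**2
    to undulations observed at reference benchmarks (least squares);
    coordinates are centred to improve conditioning."""
    lat0, lon0 = float(np.mean(lat)), float(np.mean(lon))
    x = np.asarray(lat, float) - lat0
    y = np.asarray(lon, float) - lon0
    A = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(n_obs, float), rcond=None)

    def predict(plat, plon):
        px, py = plat - lat0, plon - lon0
        t = np.array([1.0, px, py, px**2, px*py, py**2])
        return float(t @ coeffs)

    return predict
```

In practice the fitted surface would be validated at independent test benchmarks, exactly as described above.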

The outline of this chapter is as follows. The first section provides background information regarding the geoid models; the height data used to conduct the research are also presented and explained. As special emphasis has been given to the error sources affecting the used heights (*h*, *H* ), and thus the accuracy of the geoid model (*N*), information relating to the global geoid models (EGM96, EGM08, EIGEN-51C, EIGEN-6C, EIGEN-6S, GGM03C, GGM03S and GOCO02S) and the Turkish regional geoid models (TG03, TG09) has also been included in this section. In addition to an overview of the aforementioned models, a list of references where they have been used in previous studies is provided for further reading. Furthermore, the numerical validations of the described models are included under a sub-heading, with the validation results presented as graphics and tables.

The second section focuses on the methodology and the theoretical background of the methods applied in the calculation of the local geoid models. The local improvements of regional models are also summarized, and the corresponding literature provided. The merits and limitations of each method are also discussed in this section. The outlined methodology was implemented using the three test networks' data, in order to test the computation algorithms and to demonstrate the role of the data and topographical patterns in the geometrical modelling of local geoids and in local improvements of regional geoids. The findings are presented as graphics and tables.

The last section summarises the main conclusions of this research, and some practical considerations for modernizing vertical control in parallel with GNSS development are presented. This section therefore essentially focuses on how to evaluate the achievable accuracy of GNSS levelling. A brief discussion outlines some of the key concepts for providing GNSS users with the proper information to transform ellipsoidal heights to heights associated with a regional vertical datum. To conclude the chapter, recommendations for future work in this area are also provided.

## **2. Global and regional geoid models: Methodology and data**

GNSS ellipsoidal heights are purely geometric definitions and do not refer to an equipotential surface of the earth's gravity field; as such, they cannot be used in the same way as conventional heights derived from levelling in many applications. In order for GNSS-derived ellipsoidal heights to have any physical meaning in application, they must be transformed to orthometric heights referring to mean sea level (the geoid). This transformation is applied using geoidal heights (*N*) from a geoid model that must be known with sufficient accuracy (Fotopoulos et al., 2001; Fotopoulos, 2005). The computation methods for geoid models are many (Schwarz et al., 1987; Featherstone, 1998; Featherstone, 2001; Hirt & Seeber, 2007; Erol et al., 2008; Erol et al., 2009). The most commonly used methods for geoid surface construction are described in textbooks such as Heiskanen & Moritz (1967), Vaníček & Krakiwsky (1986) and Torge (2001). The so-called remove-restore (R-R) procedure is one of these methods, in which a global geopotential model and residual topographic effects are subtracted (and later added back) (see Equations 1 and 2). The resulting smooth data set is then suitable for interpolation or extrapolation using, for example, least squares collocation with parameters (Sideris, 1994). According to the R-R method, the reduced gravity anomaly is:

$$
\Delta g = \Delta g_{FA} - \Delta g_{GM} - \Delta g_{H} \tag{1}
$$

and the computed geoid height is:

$$N = N_{GM} + N_{\Delta g} + N_{ind} \tag{2}$$

where Δ*gGM* is the effect of the global geopotential model on the gravity anomalies, Δ*gH* is the terrain effect on gravity, *N*Δ*g* is the residual geoid height, calculated using the Stokes integral (see Equation 3), *Nind* is the indirect effect of the terrain on the geoid heights and *NGM* is the contribution of the global geopotential model (expressed by Equation 4) (Heiskanen & Moritz, 1967; Sideris, 1994). The residual geoid height, computed from Stokes's equation, is:

$$N_{\Delta g} = \frac{R}{4\pi\gamma} \iint\limits_{\sigma} \Delta g \, S(\Psi) \, d\sigma \tag{3}$$

where σ denotes the unit sphere over the Earth's surface, γ is the mean normal gravity, Δ*g* is the reduced gravity anomaly (Equation 1) and *S*(Ψ) is the Stokes kernel function, with Ψ the spherical distance between the computation and running points (Haagmans et al., 1993; Sideris, 1994).
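For illustration, the Stokes kernel has the closed form S(Ψ) = 1/sin(Ψ/2) − 6 sin(Ψ/2) + 1 − 5 cos Ψ − 3 cos Ψ ln(sin(Ψ/2) + sin²(Ψ/2)) (Heiskanen & Moritz, 1967), and the Stokes integral can be approximated by a weighted sum over grid cells, including a division by mean normal gravity to convert gravity anomalies to metres. The sketch below is an assumption-laden toy: anomalies in m/s² with their cell solid angles in steradians, a constant γ, and no cell containing the computation point (where the kernel is singular):

```python
import math

R = 6371000.0  # mean Earth radius [m]
GAMMA = 9.81   # mean normal gravity [m/s^2] (assumed constant here)

def stokes_kernel(psi):
    """Stokes function S(psi) for spherical distance psi in radians."""
    s = math.sin(psi / 2.0)
    return (1.0 / s - 6.0 * s + 1.0 - 5.0 * math.cos(psi)
            - 3.0 * math.cos(psi) * math.log(s + s * s))

def stokes_n(comp, anomalies):
    """Discretised Stokes integral: N = R/(4*pi*gamma) * sum dg*S(psi)*dsigma.
    `comp` is (lat, lon) in degrees; `anomalies` holds grid cells as
    (lat, lon, dg [m/s^2], dsigma [sr]), none coinciding with `comp`."""
    lat0, lon0 = map(math.radians, comp)
    total = 0.0
    for lat, lon, dg, dsig in anomalies:
        la, lo = math.radians(lat), math.radians(lon)
        # spherical distance between computation and running points
        cospsi = (math.sin(lat0) * math.sin(la)
                  + math.cos(lat0) * math.cos(la) * math.cos(lo - lon0))
        psi = math.acos(max(-1.0, min(1.0, cospsi)))
        total += dg * stokes_kernel(psi) * dsig
    return R / (4.0 * math.pi * GAMMA) * total
```

Production implementations use refinements such as kernel modification and careful treatment of the innermost zone, which this sketch deliberately omits.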

The geoid height derived from a global geopotential model, using the fully normalized spherical harmonic coefficients $\overline{C}_{\ell m}$ and $\overline{S}_{\ell m}$, is:

$$N_{GM} \approx R \sum_{\ell=2}^{\ell_{max}} \sum_{m=0}^{\ell} \overline{P}_{\ell m}(\sin \theta) \Big[ \overline{C}_{\ell m} \cos m\lambda + \overline{S}_{\ell m} \sin m\lambda \Big] \tag{4}$$

where *R* is the mean radius of the Earth, (*θ*, *λ*) are the co-latitude and longitude of the computation point, $\overline{P}_{\ell m}$ are the fully normalized Legendre functions of degree *ℓ* and order *m*, and *ℓmax* is the maximum degree of the global geopotential model (Heiskanen & Moritz, 1967).
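To make the harmonic synthesis concrete, the sketch below evaluates the zonal (m = 0) part only, using the standard Legendre recursion. The coefficients are hypothetical residuals with respect to the normal field; a full implementation would need fully normalised associated Legendre functions for m > 0:

```python
import math

def legendre_bar_zonal(lmax, t):
    """Fully normalised zonal Legendre functions P_l0(t) via the recursion
    P_l = ((2l-1)*t*P_{l-1} - (l-1)*P_{l-2}) / l, with the zonal
    normalisation P_l0_bar = sqrt(2l+1) * P_l."""
    p = [1.0, t]
    for l in range(2, lmax + 1):
        p.append(((2 * l - 1) * t * p[l - 1] - (l - 1) * p[l - 2]) / l)
    return [math.sqrt(2 * l + 1) * p[l] for l in range(lmax + 1)]

def n_gm_zonal(coeffs, theta_deg, radius=6371000.0):
    """Zonal-only (m = 0) harmonic synthesis of the geoid height:
    N ~ R * sum_l P_l0_bar(sin(theta)) * C_l0_bar, with `coeffs`
    mapping degree l to a (hypothetical) residual coefficient C_l0_bar."""
    lmax = max(coeffs)
    t = math.sin(math.radians(theta_deg))
    pbar = legendre_bar_zonal(lmax, t)
    return radius * sum(c * pbar[l] for l, c in coeffs.items())
```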

Following Equation 2, it is obvious that the accuracy of the computed geoid heights depends on the accuracy of the three height components, namely *NGM*, *N*Δ*g* and *NH* (Fotopoulos, 2003). The global geopotential model not only contributes the long-wavelength geoid information but also introduces long-wavelength errors that originate from insufficient satellite tracking data, lack of terrestrial gravity data and systematic errors in satellite altimetry. The two main types of errors can be categorized as either omission or commission errors. Omission errors arise from the truncation of the spherical harmonic series expansion (Equation 4), which in practice is only available up to a finite degree (*ℓmax* < ∞). The other major contributing error type is due to the noise in the coefficients themselves, termed commission error. As the maximum degree *ℓmax* of the spherical harmonic expansion increases, so does the commission error, while the omission error decreases. Therefore, it is important to strike a balance between the various errors. In general, formal error models should include both omission and commission error types in order to provide a realistic measure of the accuracy of the geoid heights computed from the global geopotential model. In the following section, recently released global geopotential models using data from low earth orbiting missions such as CHAMP, GRACE and GOCE are exemplified and their performances in Turkish territory investigated. In parallel with the improvements in techniques, the new global geopotential models derived by incorporating the satellite data from these missions are quite promising (Tscherning et al., 2000; Fotopoulos, 2003).
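The order of magnitude of the omission error can be roughed out from the power decay of the field. The sketch below uses Kaula's rule of thumb (RMS of each fully normalised coefficient of degree ℓ taken as about 10⁻⁵/ℓ²), which is only a crude stand-in for the formal error models mentioned above:

```python
import math

R = 6371000.0  # mean Earth radius [m]

def kaula_degree_amplitude(l):
    """RMS geoid signal of degree l implied by Kaula's rule of thumb,
    which takes the RMS of each fully normalised coefficient of degree l
    as ~1e-5 / l**2; there are 2l+1 coefficients per degree."""
    return R * (1e-5 / l ** 2) * math.sqrt(2 * l + 1)

def omission_error(lmax, lcut=20000):
    """Rough geoid omission error of a model truncated at degree lmax:
    RMS over all neglected degrees lmax+1 .. lcut (the tail converges)."""
    return math.sqrt(sum(kaula_degree_amplitude(l) ** 2
                         for l in range(lmax + 1, lcut + 1)))
```

Under this rule, the omission error of a degree-360 model comes out at roughly the decimetre level and shrinks as *ℓmax* grows, in line with the trade-off described above.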

The other errors in the budget, contributing to the *N*Δ*g* component, stem from insufficient coverage, density and accuracy of the local gravity data. Obviously, higher accuracy is implied by accurate Δ*g* values distributed evenly over the entire area with sufficient spacing; however, there are also systematic errors, such as datum inconsistencies, which influence the quality of the gravity anomalies. Shorter-wavelength errors in the geoid heights are introduced through the spacing and quality of the digital elevation model used in the computation of *NH*. Improper modelling of the terrain is especially significant in mountainous regions, where terrain effects contribute significantly to the final geoid model. This is in addition to errors relating to the approximate values of the vertical gravity gradient (Forsberg, 1994). Improvements in geoid models, as far as the computation of *NH* is concerned, will come through the use of higher-resolution (and higher-accuracy) digital elevation data, especially in mountainous regions.

#### **2.1 Testing global geoid models**

The global geopotential model used as the reference in the R-R technique makes the most significant error contribution to the total error budget of the computed regional geoid model. Employing an appropriate global model in R-R computations is therefore of primary importance. Likewise, in areas where regional models exist, they should be used, as they are more accurate than global models. However, many parts of the globe do not have access to a regional geoid model, usually due to lack of data; in these cases, one may resort to applying global geopotential model values (Equation 4) that best fit the gravity field of the region. To determine the optimal global model, either as the base model in the R-R construction of a regional geoid or for estimating the geoid undulations in the region with relatively low accuracy, it is necessary to compare and validate the models against independent geoid and gravity information, such as GNSS/levelling heights and gravity anomalies (Gruber, 2004; Kiamehr & Sjöberg, 2005; Merry, 2007).

The global geopotential models are mainly divided into three groups based on the data used in their computation, namely satellite-only (derived from the tracking of artificial satellites), combined (derived from the combination of a satellite-only model with terrestrial and/or airborne gravimetry, satellite altimetry, topography/bathymetry) and tailored (derived by refining existing satellite-only or combined global geopotential models using regional gravity and topography data) models. Satellite-only models are typically weak at

GNSS in Practical Determination of Regional Heights 133

EGM08 in terms of root mean square errors of geoid heights, these models can be employed to obtain regional orthometric heights from GNSS heights for the applications that require a

**\*Model Degree Data Citation**  EGM96 360 Satellite, gravity, altimetry Lemoine et al., 1998 EGM08a 360 GRACE, gravity, altimetry Pavlis et al., 2008 EGM08b 2190 GRACE, gravity, altimetry Pavlis et al., 2008 EIGEN-6C 1420 GOCE, GRACE, LAGEOS, gravity, altimetry Förste et al., 2011 EIGEN-6S 240 GOCE, GRACE, LAGEOS Förste et al., 2011 EIGEN-51C 359 GRACE, CHAMP, gravity, altimetry Bruinsma et al., 2010 GGM03C 360 GRACE, gravity, altimetry Tapley et al., 2007 GGM03S 180 GRACE Tapley et al., 2007 GOCO02S 250 GOCE, GRACE Goiginger et al., 2011

\* Related to the global geopotential models that were used in the study: *i*-) The adopted reference system is GRS80, *ii*-) The applied models are in tide free system, *iii*-) Zero degree terms were included in

Some other conclusions drawn from the statistical inspection of the validation results that EGM08 provided improved results compared to its previous version EGM96 in the study region (compare the statistics of EGM96 and EGM08a in Table 2). Among the satellite only models EIGEN-6S fits best, and as such, can be recommended as a reference model for a

**Model** *ℓmax* **Type min. max. mean std. dev.**  EGM96 360 Combined -183.1 336.5 38.2 156.3 EGM08a 360 Combined -105.0 47.6 -18.1 36.4 EGM08b 2190 Combined -58.6 27.0 -4.5 17.3 EIGEN-6C 1420 Combined -41.9 28.3 -4.1 15.8 EIGEN-6S 240 Satellite only -77.5 85.0 -9.7 43.2 EIGEN-51C 359 Combined -126.2 50.5 -21.8 38.9 GGM03C 360 Combined -151.2 213.0 -2.4 76.3 GGM03S 180 Satellite only -394.3 331.4 -18.5 198.1 GOCO02S 250 Satellite only -87.2 90.9 -8.6 43.5 Table 2. Statistics of the geoid height differences between global models and observations

The geoid height differences of EIGEN-6C and EIGEN-6S global models from the observed geoid heights at the reference benchmarks are illustrated in Figures 2 and 3, respectively. These differences can be compared and interpreted considering the topographic map of

computations, *iv*-) The model coefficients are available from ICGEM (2011).

Table 1. Validated global geopotential models in the study

future regional geoid of Turkey with R-R technique.

(in centimetre)

Turkey in Figure 1.

decimetre level accuracy in heights.

coefficients of degrees higher than 60 or 70 due to several factors, such as the power-decay of the gravitational field with altitude, modelling of atmospheric drag, incomplete tracking of satellite orbits from the ground stations etc. (Rummel et al., 2002). Although the effects of some of these limitations on the models decreased after the dedicated satellite gravity missions CHAMP, GRACE and GOCE (GGM02, 2004; GFZ, 2006; GOCE, 2009), the new satellite-only models still have full power until a certain degree, however rapidly increasing errors make their coefficients unreliable at high degrees (see e.g. Tapley et al., 2005; ICGEM, 2005). Whilst, the application of combined models reduce some of the aforementioned limitations, the errors in the terrestrial data effectively remain the same.

Theoretically, the observations, used in computation of the global models, should be scattered to the entire earth homogenously, but it is almost impossible to realise this exactly. As such, accuracy of quantities computed via global geopotential models, such as geoid undulation (Equation 4), is directly connected to the quality and global distribution of gravity data as well as to the signal power of satellite mission. The distribution and the availability of quality gravity data therefore plays a major role in the global model-derived values in different parts of the Earth. It may however be argued that, the various models may not be as good as they are reported to be, otherwise the differences between them should not be so great as they are (Lambeck & Coleman, 1983). As such, validating the models in local scale with in situ data before using them with geodetic and geophysical purposes is highly important (Gruber 2004). In this manner, Roland & Denker (2003) evaluated the fit of some of the global models to the gravity field in Europe using external data such as GPS/levelling and gravity anomalies. Furthermore, Amos & Featherstone (2003) included astrogeodetic vertical deflections at the Earth surface in the external data for validating the EGMs at that date in Australia. Similar evaluations were also undertaken by Kiamehr & Sjöberg (2005), Abd-Elmotaal (2006), Rodriguez-Caderot et al (2006), Merry (2007) and Sadiq & Ahmad (2009) in Iran, Egypt, Southern Spain, Southern Africa, Pakistan, respectively. Satellite altimeter data and orbit parameters were also used by Klokočník et al (2002) and Förste et al (2009) in comparative assessments of the EGMs. Erol et al (2009), Ustun & Abbak (2010) and Ylmaz & Karaali (2010) provided some specific results on spectral evaluation of global models and on their local validations using terrestrial data in territory of Turkey. 
Motivated by the research conducted by Lambeck & Coleman (1983) and Gruber (2004), we tested some of the recent global geopotential models, having various degrees of spherical harmonic expansion, for the Turkish territory; the results are recorded later in this chapter. The global geopotential models listed in Table 1 were validated at 28 GNSS/levelling benchmarks homogeneously distributed over the country. The table provides the maximum degrees of the harmonic expansions, the data that contributed to developing the models and the principal references for further reading on these models. The reference data for the validations are included in Yılmaz & Karaali (2010), hence the results from the models evaluated in both studies are comparable (see Figure 1 for the distribution of the benchmarks).

In the evaluations, the geoid heights derived from the models (Equation 4) were compared with the observations at the benchmarks, and the statistics of the comparisons (see Table 2) were investigated. In the validation results, the superiority of the ultra-high-resolution models EIGEN-6C (*ℓmax* = 1420) and EGM08 (*ℓmax* = 2190) in representing the gravity field in the region is naturally expected, given that these models comprise information on the full content of the gravity field spectrum. Considering the ±16.3 cm and ±17.9 cm accuracies of EIGEN-6C and EGM08, respectively, in terms of root mean square errors of geoid heights, these models can be employed to obtain regional orthometric heights from GNSS heights in applications that require decimetre-level height accuracy.
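The practical use stated here is a per-point subtraction: the model supplies the undulation N, GNSS supplies the ellipsoidal height h, and the orthometric height follows as H = h − N. A minimal sketch with hypothetical values:

```python
def orthometric_height(h_ellipsoidal_m: float, N_geoid_m: float) -> float:
    """Orthometric height H from a GNSS ellipsoidal height h and a model geoid undulation N."""
    return h_ellipsoidal_m - N_geoid_m

# Hypothetical benchmark: h observed by GNSS, N interpolated from a global model such as EGM08
h = 152.436   # ellipsoidal height (m)
N = 36.712    # geoid undulation (m)
H = orthometric_height(h, N)
print(round(H, 3))   # 115.724
```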


\* Notes on the global geopotential models used in the study: *i*) the adopted reference system is GRS80; *ii*) the applied models are in the tide-free system; *iii*) zero-degree terms were included in the computations; *iv*) the model coefficients are available from ICGEM (2011).

Table 1. Validated global geopotential models in the study

Some other conclusions can be drawn from the statistical inspection of the validation results: EGM08 provided improved results compared to its previous version EGM96 in the study region (compare the statistics of EGM96 and EGM08a in Table 2). Among the satellite-only models, EIGEN-6S fits best and, as such, can be recommended as a reference model for a future regional geoid of Turkey computed with the R-R technique.


Table 2. Statistics of the geoid height differences between the global models and the observations (in centimetres)
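Statistics such as those in Table 2 are plain descriptive measures of the differences between model-derived and observed geoid heights at the benchmarks. A minimal sketch, with hypothetical values in metres:

```python
import numpy as np

def validation_stats(N_model, N_observed):
    """Min, max, mean and (sample) standard deviation of geoid height differences
    (model minus observation) at the benchmarks."""
    d = np.asarray(N_model, dtype=float) - np.asarray(N_observed, dtype=float)
    return {"min": d.min(), "max": d.max(), "mean": d.mean(), "std": d.std(ddof=1)}

# Hypothetical model-derived and observed geoid heights at four benchmarks (m)
N_model = [36.71, 35.42, 37.10, 36.05]
N_obs = [36.55, 35.50, 36.92, 36.00]
print(validation_stats(N_model, N_obs))
```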

The geoid height differences of the EIGEN-6C and EIGEN-6S global models from the observed geoid heights at the reference benchmarks are illustrated in Figures 2 and 3, respectively. These differences can be compared and interpreted in view of the topographic map of Turkey in Figure 1.

GNSS in Practical Determination of Regional Heights 135


Fig. 1. Topographic map of Turkey and the validation benchmarks (units in metres; GTOPO30 data (USGS, 1997))

Fig. 2. Geoid height differences between EIGEN-6C model and GNSS/levelling observations

Fig. 3. Geoid height differences between EIGEN-6S model and GNSS/levelling observations

#### **2.2 Regional geoid models in Turkey**


In Turkey, various regional geoid models have been computed with different methods since the 1970s (see e.g. Ayan, 1976; Ayhan, 1993; Ayhan et al., 2002; TNUGG, 2003; TNUGG, 2011). Along with the technological advances and the increasing use of GNSS techniques in the 1990s, modernization of the national geodetic infrastructure, including the vertical datum definition, was required. As a consequence of these developments, the geodetic control network was re-established in the ITRF96 datum by the Turkish Ministry of National Defence, General Command of Mapping, between 1997 and 2001, and a geoid model (TG99A), serving as a height transformation surface from GNSS to the regional vertical datum, was released in 2000 (Ayhan et al., 2002). The Turkish regional geoid model TG99A was determined gravimetrically and fitted to the regional vertical datum at GNSS/levelling benchmarks homogeneously distributed throughout the country. The absolute accuracy of the TG99A model is reported as between ±12 cm and ±25 cm; however, the performance of the model decreases from the central territories towards the coastline and the boundaries of the country (Ayhan et al., 2002). An updated version of TG99A (TG03) was released by the General Command of Mapping in 2003 (TNUGG, 2003). TG03 was computed with the R-R method and Least Squares Collocation using terrestrial gravity data at 3-5 km density over the country (in the Potsdam gravity datum), marine gravity data (acquired with shipborne and satellite altimetry), a terrain-based elevation model at 450 m x 450 m resolution and the reference global model EGM96, and was fitted to the regional vertical datum at 197 high-order GNSS/levelling benchmarks (TNUGG, 2003). The accuracy of TG03 is reported as ±8.8 cm by TNUGG (2003); this revealed a good improvement compared with the previous TG model.

The release of the Earth Gravitational Model 2008 (EGM08), the collection of new surface gravity observations (~266,000), advanced satellite altimetry-derived gravity over the sea (DNSC08), the availability of a high-resolution digital terrain model (90 m resolution) and an increased number of GNSS/levelling benchmarks (approximately 2700 benchmarks covering the entire country) enabled the computation of a new regional geoid model for Turkey in 2009; hence TG09 was released by the General Command of Mapping as the successor of TG03 (TNUGG, 2011). In the computations, a quasi-geoid model was first constructed using the R-R procedure based on EGM08 and an RTM reduction of the surface gravity data; since Helmert orthometric heights are used for vertical control in Turkey, the quasi-geoid model was then converted to a geoid model. Ultimately, the hybrid geoid model TG09 was derived by combining the gravimetric geoid model and GNSS/levelling heights, to be used in GNSS positioning applications. In the tests of TG09 with GNSS/levelling data, the accuracy of the model is reported as ±8.3 cm by TNUGG (2011). This result does not signify much improvement compared with TG03.

This section examines the published accuracies of the TG03 and TG09 models at the 28 GNSS/levelling benchmarks used in the validation of the global geopotential models in the previous section. With this purpose in mind, the derived geoid heights at the benchmarks were compared with the observations. In the results, the TG03 model revealed a ±10.5 cm standard deviation with minimum -10.1 cm, maximum 28.9 cm and mean 7.3 cm geoid height differences, whereas the TG09 model has a ±9.2 cm standard deviation with minimum -11.3 cm, maximum 36.7 cm and mean 10.5 cm geoid height differences. The distributions of the geoid height residuals versus the numbers of points are given as histograms in Figure 4. The geoid height differences for the TG03 and TG09 models are illustrated in Figures 5 and 6, respectively.



Fig. 4. Validation results of (a) TG03 and (b) TG09 models: geoid height differences (in cm) versus reference benchmark numbers

Fig. 5. Geoid height differences between TG03 and GNSS/levelling observations

Fig. 6. Geoid height differences between TG09 and GNSS/levelling observations (TG09 data from Yılmaz & Karaali (2010))

## **3. Local GNSS/levelling geoids**


Among the computation methods of geoid models (see e.g. Schwarz et al., 1987; Featherstone, 1998; Featherstone, 2001; Hirt and Seeber, 2007; Erol et al., 2008; Erol et al., 2009), the geometric approach, in which GNSS and orthometric heights (h and H, respectively) are used to estimate the position of the geoid at discrete points (so-called geoid reference benchmarks) through a simple relation between the heights (N ≈ h - H), provides a practical solution to the geoid problem in relatively small areas (typically a few kilometres) (Featherstone et al., 1998; Ayan et al., 2001, 2005; Erol and Çelik, 2006). This method addresses the geoid determination problem as "describing an interpolation surface depending on the reference benchmarks" (Featherstone et al., 1998; Erol et al., 2005; Erol & Çelik, 2006; Erol et al., 2008). The approximate equality in the equation arises from disregarding the deflection of the vertical, i.e. the departure of the plumbline from the ellipsoidal normal (Heiskanen and Moritz, 1967). However, the magnitude of the error stemming from this simplification is fairly small and therefore acceptable for height transformation purposes (Featherstone, 1998).
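The geometric approach just described reduces to two steps: form N ≈ h − H at the reference benchmarks, then interpolate these discrete undulations to the point of interest. The sketch below uses simple linear interpolation as a stand-in for the surface models discussed in the text; all coordinates and heights are hypothetical:

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical reference benchmarks: (longitude, latitude), ellipsoidal h and orthometric H (m)
lon = np.array([29.00, 29.10, 29.00, 29.10])
lat = np.array([41.00, 41.00, 41.10, 41.10])
h = np.array([120.45, 118.90, 121.30, 119.75])
H = np.array([83.50, 82.10, 84.20, 82.80])

N = h - H   # geoid undulations at the benchmarks (deflection of the vertical neglected)

# Interpolate the local geoid surface at a new GNSS point and recover its orthometric height
pt = (29.05, 41.05)
N_pt = float(griddata((lon, lat), N, pt, method="linear"))
h_pt = 120.00                  # ellipsoidal height observed by GNSS at the new point
H_pt = h_pt - N_pt
print(round(N_pt, 3), round(H_pt, 3))   # 36.95 83.05
```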

The data quality, density and distribution of the reference benchmarks have an important role in the accuracy of a local GNSS/levelling geoid model (Fotopoulos et al., 2001; Fotopoulos, 2005; Erol & Çelik, 2006; Erol, 2008, 2011). There are certain criteria on the qualities and locations of the geoid reference benchmarks, as described in the regulations and reference books (LSMSDPR, 2005; Deniz & Çelik, 2007), which will be mentioned in the text that follows. On the other hand, using an appropriate surface approximation method in geoid modelling with the geometrical approach is also critical for the accuracy of the model. The modelling methods are various, but those most commonly employed are polynomial equations (of various orders) (Ayan et al., 2001; Erol, 2008; Erol, 2011), least squares collocation (Erol and Çelik, 2004), geostatistical kriging (Erol and Çelik, 2006), finite elements (Çepni and Deniz, 2005), and multiquadric or weighted linear interpolation (Yanalak and Baykal, 2001). In addition to these classical methods, soft computing algorithms such as artificial neural networks (either by themselves, see e.g. Kavzaoğlu and Saka (2005), or as part of the classical statistical techniques, e.g. Stopar et al. (2006)), adaptive network-based fuzzy inference systems (ANFIS) (Yılmaz and Arslan, 2008) and wavelet neural networks (Erol, 2007) have also been evaluated by researchers in the most recent investigations on local geoid modelling.

#### **3.1 Case studies: Istanbul and Sakarya local geoids**

In this section, we discuss the handicaps and advantages of the geometric approach and local geoid models from the viewpoint of the transformation of GNSS ellipsoidal heights. This includes two case studies, the Istanbul and Sakarya local geoids, using polynomial equations and the ANFIS method.

#### **3.1.1 Data**

One of the case study areas, Istanbul, is located in the north-west of Turkey (between latitudes 40°30' N and 41°30' N, and longitudes 27°30' E and 30°00' E, see Figure 7). The region has a relatively plain topography and elevations vary between 0 and 600 m. The GNSS/levelling network (Istanbul GPS Triangulation Network 2005, IGNA2005) was established between 2005 and 2006 as a part of the IGNA2005 project (Ayan et al., 2006), and the measurement campaigns and data processing strategies adopted to compute the benchmark coordinates satisfy the criteria of LSMSDPR (2005) on the determination and use of local GNSS/levelling geoids. Accordingly, the geoid reference benchmarks must be common points of C1, C2 and C3 order GNSS benchmarks and high-order levelling network points. Thus the GNSS observations of the IGNA2005 project were carried out using dual-frequency GNSS receivers, with observation durations of at least 2 hours for the C1-type network points (for baselines 20 km in length), and between 45 and 60 minutes for the C2-type network points (for baselines 5 km in length). The recording interval was set to 15 seconds or less during the campaigns. The GNSS coordinates of the network benchmarks were determined in the ITRF96 datum, 2005.000 epoch, with root mean square errors of ±1.5 cm and ±2.3 cm in the two-dimensional coordinates and heights, respectively (Ayan et al., 2006). The levelling measurements were done simultaneously with the GNSS campaigns, and Helmert orthometric heights of the geoid reference benchmarks in the Turkey National Vertical Control Network 1999 (TUDKA99) datum (Ayhan and Demir, 1993) were derived. The total number of homogeneously distributed reference benchmarks in the network is 1205, with a density of 1 benchmark per 20 km² (see Figure 7).

Fig. 7. Geoid reference benchmarks in Istanbul (topographic data from SRTM3 (USGS, 2010))


The second case study on determining local GNSS/levelling geoids was carried out in the Sakarya region, situated to the east of the Marmara Sea and Izmit Gulf (between latitudes 40°30' N and 41°30' N, and longitudes 28°30' E and 31°00' E). The GNSS/levelling network was established during the Geodetic Infrastructure Project of the Marmara Earthquake Region Land Information System (MERLIS) in 2002 (Çelik et al., 2002), and overlaps with the IGNA2005 network. Compared to the Istanbul area, the topography in Sakarya is quite rough and the elevations are between 0 m and 2458 m. The GNSS and levelling observations and the data processing were executed according to the regulations of the project. After the adjustment of the GNSS network, accuracies of ±1.5 cm and ±3.0 cm were derived for the horizontal coordinates and ellipsoidal heights, respectively. During the GNSS campaign of the MERLIS project, precise levelling measurements were undertaken simultaneously, and in the adjustment results of the levelling observations the relative accuracy of the Helmert orthometric heights is reported as 0.2 ppm by Çelik et al. (2002). The GNSS coordinates are in the ITRF96 datum, while the orthometric heights are in the TUDKA99 datum.

The distribution of the 109 GNSS/levelling benchmarks is homogeneous but rather sparse, and given the rough topography of the region, the coverage of the benchmarks cannot characterize the topographic changes well. The reference point density is 1 benchmark per 165 km². Figure 8 shows the reference network benchmarks on the topographic map of the area.

Fig. 8. Geoid reference benchmarks in Sakarya (topographic data from SRTM3 (USGS, 2010))

#### **3.1.2 Methods**

Since the computation algorithms applied for local GNSS/levelling geoid determination in this study are not able to detect potential blunders in the data sets, the geoid heights derived from the observations at the benchmarks were statistically tested, and the outliers were removed before modelling the data (see Erol (2011) for a case study in screening reference data before geoid modelling). After removing the outliers from the data sets, 200 uniformly distributed points of the 1205 Istanbul reference benchmarks (approximately 16% of the entire data set) were selected to form the test data, and the remaining 1005 benchmarks were used in the computation of the geoid. Similarly, in Sakarya, 14 homogeneously distributed points of the 109 data points (nearly 13% of all data) were selected and used for external tests of the geoid model. The model and test points are distinguished with different marks in Figures 7 and 8. A theoretical review of the applied surface interpolation methods and comparisons of their performances by means of the test results are provided in the next section.
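The screening and hold-out procedure described above can be sketched as follows; a 3-sigma rule stands in for the statistical test actually used (see Erol (2011)), and the random hold-out stands in for the spatially uniform selection of test points:

```python
import numpy as np

rng = np.random.default_rng(42)

def screen_and_split(N, test_fraction=0.16, k=3.0, rng=rng):
    """Remove k-sigma outliers from benchmark geoid heights, then hold out a test subset."""
    N = np.asarray(N, dtype=float)
    d = N - N.mean()
    keep = np.abs(d) <= k * N.std(ddof=1)      # simple 3-sigma blunder screen
    idx = np.flatnonzero(keep)
    n_test = max(1, int(round(test_fraction * idx.size)))
    test_idx = rng.choice(idx, size=n_test, replace=False)
    model_idx = np.setdiff1d(idx, test_idx)
    return model_idx, test_idx

# Hypothetical benchmark undulations with one injected blunder
N = rng.normal(36.9, 0.5, size=100)
N[10] = 55.0                                   # gross error at benchmark 10
model_idx, test_idx = screen_and_split(N)
print(10 in model_idx or 10 in test_idx)       # False: the blunder was removed
```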

#### **3.1.2.1 Polynomials**

The polynomial equation for representing a local geoid surface based on the discrete reference benchmarks with known geoid heights in the closed form is:

GNSS in Practical Determination of Regional Heights 141


$$N\left(u,v\right) = \sum\_{m=0}^{l} \sum\_{n=0}^{l-m} a\_{mn} u^m v^n \tag{5}$$

where *amn* are the polynomial coefficients for *m*, *n* = 0 to *l*, which is the order of the polynomial. *u* and *v* represent the normalized coordinates, obtained by centring and scaling the geodetic coordinates ϕ and λ. In the numerical tests of this study, the normalized coordinates were obtained by *u* = *k*(ϕ − ϕ*o*) and *v* = *k*(λ − λ*o*), where ϕ*o* and λ*o* are the mean latitude and longitude of the local area, and the scaling factor is *k* = 100/*ρ*°.

In Equation 5, the unknown polynomial coefficients are determined by a least squares adjustment. Accordingly, the geoid height (*Ni*) and its correction (*Vi*) at a reference benchmark with normalized coordinates (*u*, *v*) are expressed as a function of the unknown polynomial coefficients:

$$\begin{aligned} N\_i + V\_i &= a\_{00} + a\_{10}u + a\_{11}v \\ &+ a\_{20}u^2 + a\_{21}uv + a\_{22}v^2 \\ &+ a\_{30}u^3 + a\_{31}u^2v + a\_{32}uv^2 + a\_{33}v^3 \\ &+ a\_{40}u^4 + a\_{41}u^3v + a\_{42}u^2v^2 + a\_{43}uv^3 + a\_{44}v^4 \\ &\cdots \end{aligned} \tag{6}$$

and the correction equations for all reference geoid benchmarks in matrix form are:

$$
\begin{bmatrix} N\_1 \\ N\_2 \\ \vdots \\ \cdot \\ \cdot \\ N\_i \end{bmatrix} + \begin{bmatrix} V\_1 \\ V\_2 \\ \cdot \\ \cdot \\ V\_i \end{bmatrix} = \begin{bmatrix} 1 & u\_1 & v\_1 & \dots \\ 1 & u\_2 & v\_2 & \dots \\ \cdot & \cdot & \cdot & \cdots \\ \cdot & \cdot & \cdot & \cdots \\ 1 & u\_i & v\_i & \dots \end{bmatrix} \begin{bmatrix} a\_{00} \\ a\_{10} \\ \cdot \\ \cdot \\ a\_{mn} \end{bmatrix} \tag{7a}
$$

$$N + V = AX \tag{7b}$$

and the unknown polynomial coefficients (*amn* elements of the *X* vector, see Equations 7a and 7b):

$$X = \left(A^T A\right)^{-1} A^T \ell \tag{8}$$

and the cofactor matrix of *X*

$$Q\_{XX} = \left(A^T A\right)^{-1} \tag{9}$$

are calculated. In these equations, *A* is the coefficient matrix and *ℓ* is the vector of observations, whose elements are the geoid heights (*NGNSS/levelling*).
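The least squares solution of Equations 5–9 can be sketched in a few lines of Python/NumPy. The function below is a hypothetical helper (names and arguments are illustrative, not from any GNSS library): it forms the normalized coordinates, builds the design matrix *A* with one column per term of Equation 6, solves Equation 8 for the coefficient vector *X*, and returns the cofactor matrix of Equation 9.

```python
import numpy as np

def fit_polynomial_geoid(lat, lon, n_obs, order=2):
    """Least squares fit of the polynomial geoid surface (Eqs. 5-9).

    lat, lon : geodetic coordinates of the benchmarks, in degrees
    n_obs    : observed GNSS/levelling geoid heights (the vector l)
    Returns the coefficient vector X, the cofactor matrix Q_XX and
    the design matrix A.
    """
    rho_deg = 180.0 / np.pi          # degrees per radian
    k = 100.0 / rho_deg              # scaling factor k = 100 / rho
    u = k * (lat - lat.mean())       # normalized coordinates, centred
    v = k * (lon - lon.mean())       # on the mean of the local area
    # One column of A per term u^(d-n) v^n, in the order of Eq. 6
    cols = [u ** (d - n) * v ** n
            for d in range(order + 1) for n in range(d + 1)]
    A = np.column_stack(cols)
    X, *_ = np.linalg.lstsq(A, n_obs, rcond=None)   # Eq. 8
    Q_XX = np.linalg.inv(A.T @ A)                   # Eq. 9
    return X, Q_XX, A
```

The residuals *V* at the benchmarks then feed the empirical order-selection and significance tests discussed next.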

One of the main issues of modelling with polynomials is deciding the optimum degree of the expansion, which is critical for the accuracy of the approximation; this decision is mostly based on trial and error (Erol, 2009). Whilst the use of a low-degree polynomial


usually results in an insufficient or rough approximation of the surface, the unnecessary use of a higher degree function may produce an over-fitted surface that reveals unrealistically optimistic values at the test points. Another critical phase of determining the polynomial surface is selecting the significant parameters, and hence ignoring the insignificant ones in the model; this decision is also based on statistical criteria. After calculating the polynomials with a least squares adjustment, the statistical significance of the model parameters can be analyzed using an F-test with the null hypothesis *Ho* : *Xi* = 0 and the alternative hypothesis *H1* : *Xi* ≠ 0 (Draper and Smith, 1998). The F-statistic is used to verify the null hypothesis and is computed as a function of the observations (Dermanis and Rossikopoulos, 1991):

$$F = \frac{X\_i^T Q\_{X\_i X\_i}^{-1} X\_i}{t \hat{\sigma}^2} \tag{10}$$

where σ̂<sup>2</sup> is the a-posteriori variance and *t* is the number of tested parameters. The null hypothesis is accepted if *F* ≤ *F<sub>t,r,α</sub>*, where *F<sub>t,r,α</sub>* is obtained from the standard statistical tables for a confidence level α and *r* degrees of freedom; in this case the tested parameters are insignificant and are deleted from the model. If, on the contrary, *F* > *F<sub>t,r,α</sub>* is fulfilled, the parameters remain in the model. After clarifying the optimal form of a polynomial model with significance tests of the parameters, the performance of the calculated model is tested empirically, considering the geoid residuals at the benchmarks of the network. The tests are repeated with polynomials of varying orders, and hence an appropriate order of polynomial is determined for the data, depending on the comparisons of the test results.
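As a sketch of this significance test, the function below evaluates Equation 10 for a subset of *t* estimated parameters and compares it with the critical value *F<sub>t,r,α</sub>*. It assumes SciPy is available for the F distribution; the function and argument names are illustrative only.

```python
import numpy as np
from scipy.stats import f as f_dist

def parameter_f_test(X_i, Q_ii, sigma2_hat, r, alpha=0.05):
    """F-test of H0: X_i = 0 (Eq. 10) for t tested parameters.

    X_i        : tested parameter estimate(s), shape (t,)
    Q_ii       : corresponding block of the cofactor matrix, shape (t, t)
    sigma2_hat : a-posteriori variance of the adjustment
    r          : degrees of freedom of the adjustment
    Returns (F, F_crit, keep); keep is True when F > F_crit,
    i.e. the tested parameters are significant and stay in the model.
    """
    X_i = np.atleast_1d(np.asarray(X_i, dtype=float))
    t = X_i.size
    # F = X_i^T Q_ii^{-1} X_i / (t * sigma^2), Eq. 10
    F = X_i @ np.linalg.solve(np.atleast_2d(Q_ii), X_i) / (t * sigma2_hat)
    F_crit = f_dist.ppf(1.0 - alpha, t, r)
    return F, F_crit, F > F_crit
```

For a single parameter (t = 1), this reduces to the squared t-statistic of the coefficient being compared with *F<sub>1,r,α</sub>*.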

#### **3.1.2.2 Adaptive network based fuzzy inference system**

ANFIS is an artificial intelligence inspired soft computing method, first proposed in the late 1960s, building on the fuzzy logic and fuzzy set theory introduced by Zadeh (1965). Since then, the method has been used in various disciplines for controlling systems and modelling non-stationary phenomena, and it has recently been applied in geoid determination as well (see e.g. Ayan et al., 2005; Yılmaz and Arslan, 2008). The computation algorithm of the method is mainly based on feed-forward adaptive networks and fuzzy inference systems. A fuzzy inference system is typically designed by defining linguistic input and output variables and an inference rule base. Initially, the resulting system is just an approximation of an adequate model. Hence, its premise and consequent parameters are tuned based on the given data in order to optimize the system performance, a process based on a supervised learning algorithm (Jang, 1993).

In computations with ANFIS, depending on the fuzzy rule structures, there are different neural-fuzzy systems, such as Mamdani, Tsukamoto and Takagi-Sugeno (Jang, 1993). Tung and Quek (2009) can be consulted for a review of the implementation of different neural-fuzzy systems. In Figure 9, a two-input, two-rule, one-output type 3 fuzzy model is illustrated. In this example Takagi-Sugeno fuzzy if-then rules are used; the output of each rule is a linear combination of the input variables plus a constant term, and the final output is a weighted average of each rule's output.

In the associated fuzzy reasoning in the figure and the corresponding equivalent ANFIS structure:

Rule 1: if *x* is *A1* and *y* is *B1*; then *f1*= *p1x* + *q1y* + *r1*


Rule 2: if *x* is *A2* and *y* is *B2*; then *f2* = *p2x* + *q2y* + *r2*

where the symbols *A* and *B* denote the fuzzy sets defined for the membership functions of *x* and *y* in the premise parts. The symbols *p*, *q* and *r* denote the consequent parameters of the output functions *f* (Takagi and Sugeno, 1985; Jang, 1993; Yılmaz, 2010). The Gaussian function is usually used as the input membership function μ*i*(*x*) (see Equation 11), with maximum value equal to 1 and minimum value equal to 0:

$$\mu\_i(x) = \exp\left[-\left(\frac{x - b\_i}{a\_i}\right)^2\right] \tag{11}$$

where *ai*, *bi* are the premise parameters that define the Gaussian shape according to their changing values. Yılmaz and Arslan (2008) apply various membership functions and investigate the effect of each function on the approximation accuracy of the data set.

In the associated ANFIS architecture of Figure 9, the functions of the layers can be explained as follows: in *Layer 1*, the inputs are divided into subspaces using the selected membership function; in *Layer 2*, the firing strength of a rule is calculated by multiplying the incoming signals; in *Layer 3*, the firing strengths are normalised; in *Layer 4*, the consequent parameters (*pi*, *qi*, *ri*) are determined; and finally, in *Layer 5*, the final output is obtained by summing all incoming signals.

Using the designed architecture, in the running steps of ANFIS, it basically takes the initial fuzzy system and tunes it by means of a hybrid technique combining gradient descent back-propagation and mean least-squares optimization algorithms (see Yılmaz and Arslan, 2008). At each epoch, an error measure, usually defined as the sum of the squared differences between the actual and desired outputs, is reduced. Training stops when either the predefined epoch number or the target error rate is reached. The gradient descent algorithm is mainly implemented to tune the non-linear premise parameters, while the basic function of the mean least-squares is to optimize or adjust the linear consequent parameters (Jang, 1993; Takagi and Sugeno, 1985).
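The five layers described above can be sketched as a plain forward pass of the two-rule Takagi-Sugeno system of Figure 9, using the Gaussian membership function of Equation 11. This is an illustrative toy implementation with made-up names, not the trained ANFIS used in the study.

```python
import numpy as np

def gaussmf(x, a, b):
    """Gaussian membership function of Eq. 11."""
    return np.exp(-((x - b) / a) ** 2)

def sugeno_forward(x, y, premise, consequent):
    """Forward pass of a two-rule Takagi-Sugeno system (Fig. 9).

    premise    : [(aA, bA, aB, bB), ...] Gaussian parameters per rule
    consequent : [(p, q, r), ...] linear output parameters per rule
    """
    w = []
    for (ax, bx, ay, by) in premise:
        # Layer 1: membership degrees; Layer 2: product firing strength
        w.append(gaussmf(x, ax, bx) * gaussmf(y, ay, by))
    w = np.array(w)
    w_bar = w / w.sum()                      # Layer 3: normalization
    # Layer 4: rule outputs f_i = p_i x + q_i y + r_i
    f = np.array([p * x + q * y + r for (p, q, r) in consequent])
    return float(np.dot(w_bar, f))           # Layer 5: weighted average
```

In training, the premise parameters (*a*, *b*) would be tuned by gradient descent and the consequent parameters (*p*, *q*, *r*) by least squares, as the text describes.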

After determination of the local geoid model using either of the methods, the success of the method can be assessed using various statistical measures, such as the coefficient of determination, R2, and the root mean square error, RMSE, of the geoid heights at the reference benchmarks:

$$R^2 = 1 - \frac{\sum\_{i=1}^{j} \left(\ell\_i - \hat{\ell}\_i\right)^2}{\sum\_{i=1}^{j} \left(\ell\_i - \overline{\ell}\right)^2} \tag{12}$$

$$RMSE = \sqrt{\frac{\sum\_{i=1}^{j} \left(\ell\_i - \hat{\ell}\_i\right)^2}{j}} \tag{13}$$



where $\hat{\ell}\_i$ is the geoid height computed with the polynomial or ANFIS model (*Nmodel*), $\overline{\ell}$ is the mean value of the observations, and *j* is the number of observations (Sen and Srivastava, 1990). The coefficient of determination indicates how closely the estimated values ($\hat{\ell}$) from an approximation model correspond to the actual data (*ℓ*); it takes values between 0 and 1 (or is expressed as a percentage), and the closer R2 is to 1, the smaller the residuals and hence the better the model fit.
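Equations 12 and 13 translate directly into code; below is a minimal sketch (the function name and argument names are illustrative).

```python
import numpy as np

def model_fit_stats(l_obs, l_est):
    """Coefficient of determination R^2 (Eq. 12) and RMSE (Eq. 13)
    of the geoid heights at the benchmarks."""
    l_obs = np.asarray(l_obs, dtype=float)   # observed N_GNSS/levelling
    l_est = np.asarray(l_est, dtype=float)   # model values N_model
    resid = l_obs - l_est
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((l_obs - l_obs.mean()) ** 2)
    rmse = np.sqrt(np.mean(resid ** 2))
    return r2, rmse
```

A perfect fit gives R2 = 1 and RMSE = 0; larger residuals drive R2 towards 0 and inflate the RMSE.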


Fig. 9. (a) type 3 fuzzy reasoning, (b) a simple two-input, two-rule and single-output ANFIS structure (Jang, 1993)

### **3.1.3 Test results**

In the results of the tests, repeated with varying polynomial orders from first to sixth, 5th and 4th order polynomial models (having 21 and 15 coefficients) were determined as optimal for the Istanbul and Sakarya data, respectively. The significance tests of the polynomial parameters revealed the final forms of the models. Evaluation of these polynomials at the reference and test benchmarks, separately, in the Istanbul and Sakarya


regions revealed the statistics in Tables 3 and 4. As seen from Table 3 for the Istanbul area, the accuracy of the fifth order polynomial, in terms of the RMSE of geoid heights at the test points, is ±4.4 cm, with a coefficient of determination of 0.992. The geoid height differences between the polynomial model and the observations at the benchmarks are mapped in Figure 10a. The test statistics of the polynomial model for the Sakarya local geoid are summarized in Table 4; the evaluation of the model at the independent test points revealed an absolute accuracy of ±20.4 cm in terms of the RMSE of the geoid heights. Although the qualities of the reference data employed in the computations of both local geoid models are comparable (see section 3.1.1), the polynomial surface model revealed much better results in the Istanbul territory than in Sakarya. The reasons for the low accuracy of the Sakarya local geoid model can be stated as the sparse and non-homogeneous distribution of the geoid reference benchmarks and the rough topographic character of the territory, which makes access for height measuring difficult. Hence the GNSS/levelling benchmarks, whose density and distribution are indeed very critical for precise modelling of the local geoid, do not sufficiently characterize the topographic changes and mass distribution in Sakarya (compare point distribution versus topography in Figure 8). Figure 10b shows the geoid height differences between the polynomial model and the observations at the benchmarks for Sakarya.


| | 5th order polynomial (Reference BMs) | 5th order polynomial (Test BMs) | ANFIS (Reference BMs) | ANFIS (Test BMs) | TG03 |
|---|---|---|---|---|---|
| **Minimum** | -11.2 | -11.5 | -10.5 | -9.7 | -32.5 |
| **Maximum** | 11.4 | 11.5 | 12.4 | 9.5 | 30.0 |
| **Mean** | 0.0 | 0.0 | 0.0 | 0.0 | -0.3 |
| **RMSE** | 4.2 | 4.4 | 3.6 | 3.5 | 10.8 |
| **R2** | 0.993 | 0.992 | 0.996 | 0.995 | 0.960 |

Table 3. Statistical comparison of applied approximation techniques in Istanbul local geoid (units in centimetre, R2 unitless)

| | 4th order polynomial (Reference BMs) | 4th order polynomial (Test BMs) | ANFIS (Reference BMs) | ANFIS (Test BMs) | TG03 |
|---|---|---|---|---|---|
| **Minimum** | -52.0 | -36.3 | -39.7 | -35.4 | -53.8 |
| **Maximum** | 82.7 | 24.1 | 42.1 | 19.0 | 64.3 |
| **Mean** | -0.3 | -7.5 | 0.0 | -11.0 | -4.4 |
| **RMSE** | 22.7 | 20.4 | 12.0 | 18.9 | 18.6 |
| **R2** | 0.923 | 0.905 | 0.978 | 0.913 | 0.945 |

Table 4. Statistical comparison of applied approximation techniques in Sakarya local geoid (units in centimetre, R2 unitless)

The nonlinear regression structure of ANFIS and its resulting system, based on tuning the model parameters according to the local properties of the data, may yield improved surface-fitting results. However, one must be careful when working with soft computing approaches and pay attention to choosing an appropriate architecture with optimal parameters, such as (e.g. in ANFIS) the number of inputs and rules, the type and number of membership functions, and an efficient training algorithm. Since the prediction capabilities of these algorithms vary depending on the adopted architecture, the use of unrealistic parameters may reveal optimistic results but, at the same time, produce an over-fitted surface model, which should be avoided in geoid modelling. When modelling with ANFIS, deciding an optimal architecture for the system is based on a trial and error procedure.


Fig. 10. Geoid height differences of polynomial models and observations in centimetre (Δ*N* = *NGNSS/lev.* − *Npoly.*): (a) Istanbul, (b) Sakarya


In modelling Istanbul and Sakarya local geoids using the ANFIS approach, training data (the geoid reference benchmarks) were used to estimate the ANFIS model parameters, whereas test data were employed to validate the estimated model. The input parameters are the geographic coordinates of the reference benchmarks, and the output membership functions are the first order polynomials of the input variables. As the number of the output membership functions depends on the number of fuzzy rules, in computations, the latitudes and longitudes were divided into 5 subsets to obtain 5 x 5 = 25 rules in Istanbul, and 4 subsets to obtain 4 x 4 = 16 rules in Sakarya. In both case studies, we adopted the Gaussian type membership function as suggested by Ylmaz (2010). After determining the ANFIS structure, the parameters of both the input and output membership functions were calculated according to a hybrid learning algorithm as a combination of least-squares estimation and gradient descent method (Takagi & Sugeno, 1985). Using the determined ANFIS model parameters for Istanbul and Sakarya data, separately, the geoid heights both at the reference and test benchmarks were calculated. In addition, the statistics of the geoid height differences between the model and observations were investigated in each local area.

In the test results for the Istanbul local geoid with ANFIS (Table 3), the geoid height residuals at the test benchmarks vary between -9.7 cm and 9.5 cm, with a standard deviation of ±3.5 cm. As the basic statistics in Table 3 provide a comparison between the performances of the two methods in Istanbul, ANFIS shows a 20% improvement in terms of the RMSE of geoid heights compared with the 5th order polynomial model. As the RMSE values of the computed geoid heights for the reference benchmarks and the test benchmarks are close, we can say that the composed ANFIS structure is appropriate for modelling the Istanbul data. The coefficient of determination (R2), as the performance measure of the ANFIS model, is 0.996.

However, in Sakarya, the ANFIS method did not reveal significantly superior results over the 4th order polynomial at the test points, with geoid height residuals between -35.4 cm and 19.0 cm and a root mean square error of ±18.9 cm. The improvement in model accuracy with the ANFIS method versus the polynomial is around 7%, considering the RMSE of geoid heights. On the other hand, ANFIS revealed much better test statistics at the reference benchmarks than the polynomial. The inconsistency observed between the evaluation results at the reference and test benchmarks for the ANFIS model may indicate that this model is inappropriate for the Sakarya data. Figure 11 maps the geoid height differences between the ANFIS model and the observations at the benchmarks in Istanbul and Sakarya.

In addition to the evaluation of the surface approximation methods in modelling the local GNSS/levelling geoids in the case study areas, the TG03 model was also evaluated at the reference geoid benchmarks. The statistics of the geoid height differences, with a 0.3 cm mean and ±10.8 cm standard deviation for Istanbul, confirm the accuracies of the model reported by TNUGG (2003) and Kılıçoğlu et al. (2005). Conversely, the validation of the TG03 model at the Sakarya GNSS/levelling benchmarks revealed geoid height differences with a -4.4 cm mean and ±18.6 cm standard deviation. Considering these validation results, although the performance of the TG03 model seems low in terms of the RMSE of geoid heights, it represents an improvement of approximately 44% compared with the performance of the previous Turkish regional geoid, TG99A, in the same region (see the results of the TG99A validations in the Sakarya region by Kılıçoğlu & Fırat (2003)).

To conclude this section, the Istanbul and Sakarya local GNSS/levelling geoid models obtained by the ANFIS approach can be observed in the maps depicted in Figures 12 and 13.

146 Global Navigation Satellite Systems – Signal, Theory and Applications

In modelling the Istanbul and Sakarya local geoids using the ANFIS approach, training data (the geoid reference benchmarks) were used to estimate the ANFIS model parameters, whereas test data were employed to validate the estimated model. The input parameters are the geographic coordinates of the reference benchmarks, and the output membership functions are first-order polynomials of the input variables. As the number of output membership functions depends on the number of fuzzy rules, in the computations the latitudes and longitudes were divided into 5 subsets to obtain 5 x 5 = 25 rules in Istanbul, and into 4 subsets to obtain 4 x 4 = 16 rules in Sakarya. In both case studies, we adopted the Gaussian type membership function, as suggested by Yılmaz (2010). After determining the ANFIS structure, the parameters of both the input and output membership functions were calculated with a hybrid learning algorithm combining least-squares estimation and the gradient descent method (Takagi & Sugeno, 1985). Using the ANFIS model parameters determined for the Istanbul and Sakarya data separately, the geoid heights at both the reference and test benchmarks were calculated. In addition, the statistics of the geoid height differences between the model and the observations were investigated in each local area. In the test results for the Istanbul local geoid with ANFIS (Table 3), the geoid height residuals at the test benchmarks vary between -9.7 cm and 9.5 cm with a standard deviation of ±3.5 cm. As the basic statistics in Table 3 provide a comparison between the performances of the two methods in Istanbul, ANFIS yields a 20% improvement in terms of the RMSE of geoid heights compared to the 5th order polynomial model. As the RMSEs of the computed geoid heights at the reference benchmarks and at the test benchmarks are close, the composed ANFIS structure can be considered appropriate for modelling the Istanbul data. The coefficient of determination (R2), as the performance measure of the ANFIS model, is 0.996.
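As an illustration, the evaluation statistics used throughout this section, the RMSE of the geoid height residuals and the coefficient of determination (cf. Equations 12 and 13), can be computed as in the following minimal Python sketch. All benchmark values here are synthetic, not the Istanbul or Sakarya data.

```python
import numpy as np

# Hypothetical evaluation of a geoid model at 50 test benchmarks:
# N_obs are GNSS/levelling geoid heights, N_mod are model estimates (metres).
rng = np.random.default_rng(1)
N_obs = rng.uniform(36.0, 38.0, 50)            # illustrative values only
N_mod = N_obs + rng.normal(0.0, 0.035, 50)     # ~3.5 cm model noise

res = N_obs - N_mod                            # geoid height residuals
rmse = np.sqrt(np.mean(res ** 2))              # RMSE of geoid heights

# Coefficient of determination R^2: share of the observed variance
# explained by the model.
r2 = 1.0 - np.sum(res ** 2) / np.sum((N_obs - N_obs.mean()) ** 2)

print(f"RMSE = {rmse * 100:.1f} cm, R^2 = {r2:.3f}")
```

Comparing the RMSE at the reference benchmarks with the RMSE at the independent test benchmarks, as done in the text, is what reveals whether a model generalises or merely fits its training data.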

However, in Sakarya, the ANFIS method did not reveal significantly superior results over the 4th order polynomial at the test points, with geoid height residuals between -35.4 cm and 19.0 cm and a root mean square error of ±18.9 cm. The improvement in model accuracy with the ANFIS method over the polynomial is around 7% in terms of the RMSE of geoid heights. On the other hand, ANFIS revealed much better statistics at the reference benchmarks than the polynomial. This inconsistency between the evaluation results at the reference and test benchmarks for the ANFIS model may indicate that the model is inappropriate for the Sakarya data. Figure 11 maps the geoid height differences between the ANFIS model and the observations at the benchmarks in Istanbul and Sakarya. In addition to the evaluation of surface approximation methods in modelling local GNSS/levelling geoids in the case study areas, the TG03 model was also evaluated at the reference geoid benchmarks. The statistics of the geoid height differences, with 0.3 cm mean and ±10.8 cm standard deviation for Istanbul, confirm the accuracies of the model reported by TNUGG (2003) and Kılıçoğlu et al. (2005). Conversely, the validation of the TG03 model at the Sakarya GNSS/levelling benchmarks revealed geoid height differences with -4.4 cm mean and ±18.6 cm standard deviation. Considering these validation results, although the performance of the TG03 model seems low in terms of the RMSE of geoid heights, it represents an improvement of approximately 44% over the performance of the previous Turkish regional geoid TG99A in the same region (see the results of the TG99A validations in the Sakarya region by Kılıçoğlu & Fırat (2003)).

To conclude this section, the Istanbul and Sakarya local GNSS/levelling geoid models produced with the ANFIS approach can be observed in the maps depicted in Figures 12 and 13.


Fig. 11. Geoid height differences of the ANFIS models and observations in centimetres (Δ*N* = *N*<sub>GNSS/lev.</sub> − *N*<sub>ANFIS</sub>): (a) Istanbul, (b) Sakarya

GNSS in Practical Determination of Regional Heights 149


Fig. 12. Istanbul local GNSS/levelling geoid with ANFIS model

Fig. 13. Sakarya local GNSS/levelling geoid with ANFIS model

### **3.2 Local improvement of regional geoids**

Besides the local GNSS/levelling geoid models, using a locally improved regional geoid model with local GNSS/levelling data also provides an applicable solution for the transformation of GNSS heights into the regional vertical datum. Theoretically, the fundamental relationship between the heterogeneous heights, *h*<sub>GNSS</sub> − *H*<sub>levelling</sub> − *N*<sub>model</sub> = 0, should be satisfied. However, because of physical realities and computational factors that cause discrepancies among the heights, this equation is never exactly realised in the real world, which naturally affects the precision of the transformation among the heights in practice. Dealing with these disturbing factors, especially those caused by systematic errors and datum inconsistencies in geoid modelling, reduces the discrepancies among the three heights and hence improves the transformation precision of GNSS ellipsoidal heights. In this chapter we therefore explain two methods, aimed at minimizing the systematic differences of the three heights through an optimal combination of the heights, for the improvement of regional geoid models with limited reference data in local areas. In the first approach, the height discrepancies are modelled with a parametric equation, the so-called corrector surface model, which absorbs inconsistencies of the height sets and allows a direct transformation of GNSS heights to the regional vertical datum. The second method consists of the least squares adjustment, on the baselines, of the orthometric height differences derived from the ellipsoidal heights and the regional geoid model; the orthometric heights of the new points are then derived using the adjusted orthometric height differences. Brief descriptions of these height combination approaches with formulations can be found in the sections below.

#### **3.2.1 Corrector surface model**

The corrector surface, determined from the combination of GNSS-derived heights, orthometric heights from the vertical datum and a gravimetric geoid model, provides an efficient and practical option for precise GNSS levelling in a local area (see e.g., Featherstone, 1998; Kotsakis & Sideris, 1999; Fotopoulos, 2003). The main idea of modelling the corrector surface is to make the regional model estimate of the geoid coincident with the valid vertical datum at the GNSS/levelling benchmarks, hence minimising the errors in the regional geoid model and in the observed heights at the benchmarks. This provides a practical solution for GNSS users to accomplish a direct transformation from GNSS-derived ellipsoidal heights to orthometric heights based on the local vertical datum.

Determining an optimal parametric model for the discrepancies of the three heights follows similar steps to those explained in section 3.1.2 for local GNSS/levelling geoid modelling. These steps basically include determining an appropriate type of model, selecting the optimum extent (form) of the model, and finally assessing the performance of the determined model. Although numerous models have been suggested in the literature for realizing corrector surfaces, the selection procedure for the parametric model is mostly arbitrary and based on comparing the results of statistical tests that measure the accuracy and numerical stability of the various models.

The general expression of the discrepancies between the GNSS/levelling-derived geoid heights and the geoid heights from the regional geoid model, as a function of geodetic position, is:

$$h\_{\rm GNSS} - H\_{\rm lev.} - N\_{\rm model} - F\left(\varphi,\lambda\right) = 0\tag{14}$$

where the *F*(ϕ, λ) function can be presented in various forms at different levels of complexity (e.g. having only a bias, a bias and a tilt, or higher-order polynomial terms). Multiple regression equations, generally low-order polynomials (similar to Equation 5, $F(\varphi\_l,\lambda\_l)=\sum\_{m=0}^{u}\sum\_{n=0}^{v}a\_{mn}\,\varphi\_l^{m}\lambda\_l^{n}$), and four- or five-parameter similarity transformation equations (see Equations 15 and 16, respectively) are generally used.

$$F(\varphi, \lambda) = a\_0 + a\_1 \cos \varphi \cos \lambda + a\_2 \cos \varphi \sin \lambda + a\_3 \sin \varphi \tag{15}$$

and five parameter similarity transformation as an extended version of Equation 15:

$$F(\varphi, \lambda) = a\_0 + a\_1 \cos \varphi \cos \lambda + a\_2 \cos \varphi \sin \lambda + a\_3 \sin \varphi + a\_4 \sin^2 \varphi \tag{16}$$

The coefficients of the parametric models are calculated using the least squares adjustment method as described in section 3.1.2.1 with Equations 6-9. The appropriateness of the models can be compared according to the results of empirical tests; the RMSE of the height differences and the coefficient of determination (see Equations 12 and 13) are two statistics that provide useful hints on the suitability of the parametric models as corrector surfaces. Hence the geoid height at a new point can be determined with better precision as the sum of the geoid height derived from the regional model and the residual δ*N*<sub>CS</sub> from the corrector surface model, as *N* = *N*<sub>TG03</sub> + δ*N*<sub>CS</sub>.
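To make the procedure concrete, the sketch below fits the four-parameter corrector surface of Equation 15 to synthetic discrepancies by least squares. The coordinates, noise level and "true" parameters are illustrative assumptions, not values from the text.

```python
import numpy as np

# Sketch: fit the four-parameter corrector surface of Equation 15,
# F(phi, lam) = a0 + a1*cos(phi)cos(lam) + a2*cos(phi)sin(lam) + a3*sin(phi),
# to discrepancies l = h_GNSS - H_lev - N_model at reference benchmarks.
rng = np.random.default_rng(7)
phi = np.deg2rad(rng.uniform(40.0, 41.5, 30))   # benchmark latitudes (rad)
lam = np.deg2rad(rng.uniform(32.5, 34.5, 30))   # benchmark longitudes (rad)
a_true = np.array([0.10, -0.04, 0.06, -0.02])   # assumed "true" parameters (m)

def design(phi, lam):
    """Design matrix of Equation 15 evaluated at the benchmarks."""
    return np.column_stack([np.ones_like(phi),
                            np.cos(phi) * np.cos(lam),
                            np.cos(phi) * np.sin(lam),
                            np.sin(phi)])

A = design(phi, lam)
l = A @ a_true + rng.normal(0.0, 0.01, phi.size)  # discrepancies + 1 cm noise

# Least squares solution, as in Equations 6-9.
a_hat, *_ = np.linalg.lstsq(A, l, rcond=None)

# Corrector surface value at a new point; the refined geoid height is then
# N = N_TG03 + dN_CS.
dN_CS = design(np.deg2rad(np.array([40.7])), np.deg2rad(np.array([33.5]))) @ a_hat
```

Note that over a small area the four base functions are nearly collinear, so the fitted *surface* is meaningful even when the individual coefficients are not well separated; this is one reason the text stresses empirical tests rather than the parameter values themselves.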

#### **3.2.2 Adjustment of the derived orthometric height differences on the baselines**

The second method combines the height differences derived from the GNSS ellipsoidal heights (Δ*h*) and the regional geoid model (Δ*N*<sub>model</sub>) in a least squares adjustment algorithm (Mikhail & Ackermann, 1976) and derives the adjusted orthometric height differences for the baselines between the reference GNSS/levelling benchmarks and the new points, according to the following formulation:

$$
\Delta H = \Delta h - \Delta N \tag{17}
$$

where Δ*H* is the orthometric height difference for the baseline between the reference GNSS/levelling benchmark and the new computation point, Δ*h* is the ellipsoidal height difference derived from the GNSS heights for the same baseline, and Δ*N* is the geoid height difference for the baseline derived from the regional geoid model. In the adjustment computations, the orthometric heights of the reference benchmarks are set as 'known' to constrain the system, and the Δ*H* values are the observations. According to the functional model of the adjustment:

$$
\Delta H + v = H - H^{*} \tag{18}
$$

where *H* and *H*\* are the approximate and precise orthometric heights of the new and reference benchmarks, respectively. The residual for the orthometric height difference of the baseline is:

$$
v = -H^{*} + H - \Delta H \tag{19}
$$

and the residuals for all reference benchmarks form the matrix system:

$$v = AX - \ell \tag{20}$$


where the observation vector is $\ell = \Delta h - \Delta N$, *A* is the coefficients matrix, and *X* consists of the unknown parameters. The a-priori root mean square error of Δ*H* for a baseline of *S* km is $m = m\_0\sqrt{S\_{km}}$, where *m*<sub>0</sub> is the a-priori RMSE of a unit observation. The unknown parameters from the solution of the matrix system in Equation 20 are calculated as

$$X = \left(A^T P A\right)^{-1} A^T P \ell \tag{21}$$

where *P* includes the weights of Δ*H* observations. Hence the adjusted orthometric height differences are:

$$
\Delta \hat{H} = \Delta H + v \tag{22}
$$

The success of the method can be assessed at test points where both GNSS and levelling observations exist; in the evaluations, the computed orthometric heights of the test points are compared with their observed orthometric heights.
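A minimal numerical sketch of the adjustment in Equations 17-22, for a single new point tied to three reference benchmarks, might look as follows; all heights, baseline lengths and the unit RMSE *m*<sub>0</sub> are invented for illustration.

```python
import numpy as np

# Three baselines from known reference benchmarks to one new point
# (all numbers below are synthetic, for illustration only).
H_ref = np.array([812.304, 795.118, 830.452])   # known orthometric heights (m)
dh    = np.array([12.513, 29.702, -5.611])      # GNSS ellipsoidal height diffs (m)
dN    = np.array([0.021, 0.034, -0.012])        # geoid height diffs from the model (m)
S_km  = np.array([9.0, 16.0, 25.0])             # baseline lengths (km)

dH = dh - dN                                    # Eq. 17: derived orthometric height diffs

# A-priori RMSE of each dH grows as m0*sqrt(S); weights are inverse variances.
m0 = 0.01                                       # assumed unit RMSE (m per sqrt(km))
P = np.diag(1.0 / (m0 ** 2 * S_km))

# Single unknown: the orthometric height H of the new point.
# Observation equations dH + v = H - H_ref  =>  l = dH + H_ref, A = column of ones.
A = np.ones((3, 1))
l = dH + H_ref
X = np.linalg.solve(A.T @ P @ A, A.T @ P @ l)   # Eq. 21: X = (A^T P A)^-1 A^T P l
H_new = X.item()

v = (A * H_new).ravel() - l                     # residuals, consistent with Eq. 20
dH_adj = dH + v                                 # Eq. 22: adjusted height differences
```

With these weights, the shortest baselines dominate the solution, which is exactly the behaviour the $m = m\_0\sqrt{S}$ error model is meant to encode.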

Furthermore, when combining the height sets using the method of least squares, proper weights for each set are essential to correctly estimate the unknown parameters; improper stochastic modelling can lead to systematic deviations in the results. Therefore, in order to estimate realistic and reliable variances of the data sets, and thus construct an appropriate a-priori covariance matrix of the observations, variance component estimation techniques can be included in the height combination algorithms. Numerous solution algorithms for variance component estimation problems can be found in the literature; Rao's Minimum Norm Quadratic Unbiased Estimation (MINQUE) is one of the most commonly used (Rao, 1971). Sjöberg (1984), Fotopoulos (2003) and Erol et al. (2008) can be referred to for further reading on, and practice with, variance component estimation techniques in the adjustment.

#### **3.2.3 Case study: Local Çankırı geoid**

The suggested data combination methods for the local improvement of regional geoids are exemplified and tested in a numerical case study in this section. These results are also reported by Erol et al. (2008), who provide a detailed investigation of the local performances of the various regional models and their improvement capabilities. The local area covers 154 km x 198 km, and the number of reference benchmarks used in the tests is 31. The GNSS positions of the benchmarks were determined with static measurements using dual frequency GNSS receivers. The accuracy of the latitudes and longitudes in the ITRF96 datum is ±1.5 cm, and the accuracy of the ellipsoidal heights is reported as ±3.0 cm (Erol et al., 2008). The adjustment of the levelling observations yielded the orthometric heights of the benchmarks with ±2.5 cm accuracy in the TUDKA99 datum. As can be seen in Figure 14, the benchmarks have quite poor density and a non-homogeneous distribution over the area, approximately 1 point per 900 km2. Given the poor density of the benchmarks and the rough topographic pattern of the area (the heights of the region range between 41 m and 2496 m), alongside the levelling technique the regional geoid model or its locally improved version can be applied to obtain regional orthometric heights from GNSS; the density and distribution of the reference benchmarks do not allow determination of a local GNSS/levelling geoid. According to the Large Scale Map and Spatial Data Production Regulation of Turkey, legalized in July 2005, the density of the geoid

reference benchmarks must be at least 1 benchmark per 15 km2 for the determination of a precise local geoid with the geometric approach (LSMSDPR, 2005; Deniz&Çelik, 2008); however, for the purpose of testing and local improvement of the regional geoid, the regulation foresees a density of reference GNSS/levelling benchmarks of at least 1 benchmark per 200 km2.

Fig. 14. Çankırı geoid reference benchmarks on topography (Erol et al., 2008)

In the case study carried out with the Çankırı local GNSS/levelling network by Erol et al. (2008), the Turkish regional geoid TG03 (TNUGG, 2003; Kılıçoğlu et al., 2005) was tested at 31 GNSS/levelling benchmarks and refined by combining the GNSS/levelling heights using least squares adjustment (LSA) of the height differences derived from the GNSS ellipsoidal heights and the TG03 geoid undulations on the baselines, and a simple corrector surface model (CS) with only a bias and a tilt. The performances of the refinement methods were compared in terms of the geoid height residuals at the 9 test points of the 31 benchmarks. In addition, the LSA of the geoid height differences on the baselines was applied with estimated variance components of each height set, using iterative Minimum Norm Quadratic Unbiased Estimation. The performances of TG03 and its refined versions were also compared with a local GNSS/levelling geoid model determined from the GNSS/levelling heights at 22 reference benchmarks using a 2nd order polynomial equation. According to the reported results, the accuracy of the TG03 model is ±26.2 cm in terms of the RMSE of geoid heights, and the mean of the geoid height differences at the benchmarks is 19.3 cm. When TG03 is refined with the LSA of orthometric height differences on the 58 baselines among the 22 reference and 9 test benchmarks, the accuracy of the refined TG03 model (version 1) is ±15 cm in terms of the RMSE of geoid heights at the 9 test points; hence the improvement of the model is approximately 42%. The refined version of TG03 (version 2) using CS fitting revealed ±19.2 cm RMSE of geoid heights at the test points. The third version of refined TG03 was computed using the LSA of geoid height differences on the baselines with variance information estimated by the iterative MINQUE algorithm, and an internal accuracy of the computed geoid heights of ±4.9 cm RMSE at the 31 points was obtained. As expected, the 2nd order polynomial type local GNSS/levelling geoid model revealed the worst results, with ±46.6 cm RMSE of geoid heights at the test benchmarks. All the results can be compared using the summary statistics in Table 5. For further reading on the applied methods for the TG03 local refinement in the Çankırı area and the associated case study, Erol et al. (2008) can be referred to.

| model | refining method | min. | max. | mean | RMSE |
|---|---|---|---|---|---|
| **TG03** | – | -10.8 | 60.4 | 19.3 | 26.2 |
| **refined TG03_ver.1** | LSA of ΔH on baselines | -28.7 | 23.1 | 0.0 | 15.0 |
| **refined TG03_ver.2** | CS fit., 1st order model | -25.4 | 43.3 | 0.0 | 17.6 |
| **refined TG03_ver.2** | polynomial test | -45.4 | 28.5 | 0.0 | 19.2 |
| **local geoid model** | 2nd order polynomial | -120.1 | 85.7 | 0.0 | 46.6 |

Table 5. Statistical comparison of TG03 and its refined versions in Çankırı (units in centimetres) (Erol et al., 2008)

## **4. Summary of results and remarks**

This chapter compares geoid models of various scales over Turkish territory and aims to provide a road map for GNSS users in practice with regard to how to choose, compute and use a geoid model as a tool for transforming GNSS ellipsoidal heights to the regional vertical datum. As the traditional levelling techniques for obtaining precise height information are left aside, the improved accuracy of the geoid allows a modern technique for vertical control, known as GNSS(-geoid) levelling, to be contemplated as an alternative for practical height applications. In the numerical evaluations presented in this chapter, the recently released global geoid models, which include data from the latest gravity field satellite missions CHAMP, GRACE and GOCE, were tested against terrestrial data. The results indicate that the absolute accuracies of the two ultra-high resolution combined global geopotential models, EGM08 (*ℓ*max = 2190) and EIGEN-6C (*ℓ*max = 1420), in Turkey are around ±17 cm, which means that these global models can be used directly for GNSS levelling in small scale map production and in applications that require regional orthometric heights with decimetre accuracy. A comparison of the validation results of the satellite-only global models puts EIGEN-6S and GOCO02S forward: these models, calculated using GOCE and GRACE mission data up to maximum degrees of expansion of 240 and 250, respectively, showed ±44.0 cm absolute accuracy at the test points. Compared with these models, the performance of GGM03S (*ℓ*max = 180), the GRACE-only model, remained coarse in representing the local gravity field in the region. Therefore, in modelling the regional hybrid geoid, EIGEN-6S and GOCO02S may provide better performances.

GNSS in Practical Determination of Regional Heights 155


## **5. Conclusion**


In the context of the numerical tests, besides the global models, the most recent regional geoids TG03 and TG09 were also validated against the GNSS/levelling heights at the test benchmarks. The validation results showed that although the TG09 model provided approximately 12% improvement compared to TG03 in terms of the accuracy of geoid heights, the absolute accuracy of the regional geoid models is not yet below 10 cm. This indicates that the regional geoid models remain insufficient for GNSS levelling in large scale map production and in applications that require centimetre-level accuracy in heights. Given the lack of a regional geoid with 5-centimetre or better precision in the country, local geoid models are determined and used as an alternative solution to the height transformation problem. This chapter presents examples of local geoid modelling using the geometric approach at two case study areas, Istanbul and Sakarya, situated in north-western Turkey, for which precise GNSS/levelling data are available. The test results of the computed local geoids also make it clear that the topographic character of the local area, the quality of the GNSS and levelling data, and the density and distribution of the geoid reference benchmarks are critical for the accuracy and reliability of a local geoid model. As such, the design of the geoid reference network and the data acquisition need to be planned in a specific manner. The methodology applied in modelling the local geoid is another critical parameter that affects the final accuracy. In the numerical tests, the Istanbul and Sakarya local geoids were computed using classical polynomial-type multiple regression equations and the ANFIS method. In Istanbul a fifth-order polynomial fitted the reference geoid data best, whereas in Sakarya a fourth-order polynomial was found to be optimal.
Evaluation of the polynomial models at the test benchmarks revealed ±4.4 cm and ±20.4 cm absolute accuracies in Istanbul and Sakarya, respectively. When the topographies and the benchmark densities of the two local areas are compared, the difference between the accuracies of the two polynomial representations can be understood (Figure 7 vs. Figure 8). The ANFIS approach, on the other hand, provided marked improvements, with ±3.5 cm and ±18.9 cm accuracies in Istanbul and Sakarya. The TG03 regional geoid model has ±10.8 cm and ±18.6 cm accuracies in Istanbul and Sakarya. Compared with the regional model, a local geoid model provides much better accuracy in Istanbul, but in Sakarya many of the local geoid solutions did not provide a better alternative to the regional geoid for GNSS levelling purposes. The numerical tests on local geoid modelling also provided an opportunity to compare the two surface approximation techniques. It is concluded that although ANFIS has a more developed computation algorithm and the potential to provide better results, it has handicaps from a practical point of view: its prediction capability varies with the adopted architecture, and it is very sensitive to the selection of the reference and test points. Therefore, when modelling a geoid with ANFIS, one must be very careful to employ an appropriate architecture and to choose the reference and test data; otherwise, overly optimistic and unrealistic statistics can appear with an over-fitted surface model.
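The polynomial surface fitting used for the Istanbul and Sakarya geoids can be illustrated with the lowest-order case, a fitted plane; the fourth- and fifth-order models above differ only in the number of monomial terms in the design matrix. A sketch with synthetic benchmark data (all names and values are illustrative, not the chapter's data):

```python
def fit_plane(points):
    """Fit N(x, y) = a0 + a1*x + a2*y to reference benchmarks by least
    squares.  points is a list of (x, y, N) tuples; returns [a0, a1, a2].
    Higher-order polynomial models extend the rows of the design matrix
    with the extra monomial terms (x*y, x**2, ...)."""
    rows = [[1.0, x, y] for x, y, _ in points]   # design matrix A
    b = [N for _, _, N in points]
    n = 3
    # Normal equations: (A^T A) a = A^T b
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    atb = [sum(r[i] * bi for r, bi in zip(rows, b)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atb[col], atb[piv] = atb[piv], atb[col]
        for r in range(col + 1, n):
            f = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= f * ata[col][c]
            atb[r] -= f * atb[col]
    a = [0.0] * n
    for r in range(n - 1, -1, -1):
        a[r] = (atb[r] - sum(ata[r][c] * a[c] for c in range(r + 1, n))) / ata[r][r]
    return a

# Synthetic benchmarks lying exactly on the plane N = 36.0 + 0.01*x - 0.02*y
pts = [(0, 0, 36.0), (1, 0, 36.01), (0, 1, 35.98), (1, 1, 35.99), (2, 3, 35.96)]
a0, a1, a2 = fit_plane(pts)
```

With real, noisy benchmark data the same solve returns the least-squares coefficients, and the residuals at withheld test points give the ±-accuracies quoted above.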

In the final part of this chapter, local improvement of geoid models is presented as another alternative solution for GNSS levelling. In the case study, local improvement of TG03 in Çankırı using precise GNSS/levelling data, by corrector surface fitting and by adjustment of the derived orthometric height differences on the baselines, is presented. The accuracy of the TG03 model in the region is ±26.2 cm. Least squares adjustment of the height differences derived from GNSS and TG03 on the baselines provided a 42% improvement in the model, and the RMSE of the orthometric heights derived from the improved version of TG03 is reported as ±15.0 cm.

Numerous advantages of GNSS techniques from a practical perspective, together with their high precision in geodetic positioning, have put this satellite-based positioning system in service across a very large spectrum of applications, ranging from routine engineering surveys to scientific research. On the other hand, the reference system definition of GNSS coordinates separates the geometry from the Earth's gravity field, and developing a solution for the transition between ellipsoidal and natural coordinates, especially in heights, has therefore been a challenge that geodesists have addressed in recent years by combining terrestrial and GNSS data. Reflecting advances in computation techniques and improved data resolutions and accuracies, the precision of geoid models increases, and hence GNSS levelling, as a new concept in vertical control, has become a viable alternative for practical height determination. All these developments lead to the modernization of geodetic infrastructures at the national and, consequently, global scale, and are causing the traditional, onerous surveying techniques to be left aside as a means of obtaining heights. Today, in many countries, the new vertical datum definition is based solely on the geoid, and vertical control is provided via GNSS levelling with a precise geoid model (see e.g. Rangelova et al., 2010).
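The height relation underlying the baseline adjustment summarised above can be written, in generic notation (a sketch of the standard formulation, not necessarily the chapter's exact one), as:

```latex
% Height relation at a benchmark, and on a baseline from i to j:
H = h - N, \qquad
\Delta H_{ij} = \Delta h_{ij} - \Delta N_{ij} = (h_j - h_i) - (N_j - N_i)

% Least-squares adjustment of the observed baseline height differences:
v = A\hat{x} - \ell, \qquad
\hat{x} = \left(A^{\mathsf{T}} P A\right)^{-1} A^{\mathsf{T}} P\,\ell
```

Adjusting the GNSS/TG03-derived differences over the baselines in this least-squares model is what yields the refined benchmark heights and the reported 42% improvement.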

In the light of recent developments in GNSS techniques and their tremendous impact on the definition of reference systems and hence on geodetic infrastructures, this chapter reviewed the principal geoid models and the widely used methodologies for the practical determination of regional heights using GNSS. With this purpose, the evaluations of the global models validated the improvement of the long- and medium-wavelength information of the gravity field, a result of the current state of technology with modernized GNSS as well as the new LEO missions dedicated to gravity field research (i.e., CHAMP, GRACE, GOCE). The improvements in the global models, as well as in terrestrial data quality, also contribute to the regional geoid models by reducing their errors in the total budget of the hybrid geoid representation. However, according to the results of this study, the accuracy of the regional geoid model of Turkey is not yet sufficient for deriving regional orthometric heights with centimetre precision from GNSS levelling, and therefore local solutions, such as modelling the local geoid with a geometric approach or improving the regional geoid model with local terrestrial data, are still required to provide heights with an accuracy better than 5 centimetres. Although local geoids provide high accuracies, there are handicaps related to their determination and use. The determination of a local geoid model requires specifically acquired reference data of good quality and adequate distribution, representing the topography well, and an appropriate modelling algorithm fitting the data. A disadvantage of local geoid models is that they can be applied with high precision only within a limited area and so are not suitable for extrapolation. Such local solutions do not contribute to a unified vertical datum definition in the country.
In this regard, the importance of a precise and reliable regional geoid model in the concept of GNSS levelling for the practical determination of precise regional heights is obvious. In Turkey, geoid modelling efforts continue as part of the modernization of the geodetic infrastructure, and with the enhanced data qualities, a precise regional geoid model, with its time-dependent variations, will become possible for GNSS levelling purposes in the near future.

#### **6. Acknowledgment**

GNSS/levelling data used in the local geoid modelling were provided by the Istanbul GPS Levelling Network-2005 and the Geodetic Infrastructure of Marmara Earthquake Region Land Information System (MERLIS) projects of Istanbul Technical University, Geodesy Division. Validation of the global geopotential models was done using the reference data published, and used for the same purpose, by Yılmaz & Karaali, Sci. Res. Essays 5 (2010). The global geopotential models were obtained from the International Centre for Global Earth Models (ICGEM) of the German Research Centre for Geosciences (GFZ). The validations and modelling computations were carried out using MATLAB ver. 7.11. Special thanks go to Dr. R.N. Çelik for his contributions to the local geoid modelling in this study.

#### **7. References**

Abd-Elmotaal, H.A. (2006). High-Degree Geopotential Model Tailored to Egypt, In: *Gravity Field of the Earth*, A. Kılıçoğlu & R. Forsberg (Eds.), 187-192, Map Journal, International Gravity Field Service, Turkey

Amos, M.J. & Featherstone, W.E. (2003). Comparisons of Global Geopotential Models with Terrestrial Gravity Field over New Zealand and Australia. *Geomatics Research Australasia*, Vol.78, pp. 67-84

Ayan, T. (1976). *Astrogeodätische Geoidberechnung für das Gebiet der Türkei*, PhD Thesis, Karlsruhe University, Karlsruhe, Germany [in German]

Ayan, T., Deniz, R., Çelik, R.N., Denli, H., Özlüdemir, M.T., Erol, S., Özöner, B., Akyılmaz, O. & Güney, C. (2001). *Izmir Geodetic Reference System–2001 (IzJRS 2001)* (Report ID-ITU 2000/2294), Istanbul Technical University, Turkey, 152 pp. [in Turkish]

Ayan, T., Deniz, R., Arslan, E., Çelik, R.N., Denli, H.H., Akyılmaz, O., Özşamlı, C., Özlüdemir, M.T., Erol, S., Erol, B., Acar, M., Mercan, H. & Tekdal, E. (2006). *Istanbul GPS Triangulation Network (IGNA) 2005-2006 Re-measurements and Data Processing* (Report ID-2005/3123), Volume 1, Istanbul Technical University, Turkey, 186 pp. [in Turkish]

Ayhan, M.E. (1993). Geoid determination in Turkey. *Bulletin Geodesique*, Vol.67, pp. 10-22

Ayhan, M.E. & Demir, C. (1992). Turkish National Vertical Control Network-1992 (TNVCN-92). *Map Journal*, Vol.109, pp. 22-42

Ayhan, M.E., Demir, C., Lenk, O., Kılıçoğlu, A., Aktuğ, B., Açıkgöz, M., Fırat, O., Şengün, Y.S., Cingöz, A., Gürdal, M.A., Kurt, A.I., Ocak, M., Türkezer, A., Yıldız, H., Bayazıt, N., Ata, M., Çağlar, Y. & Özerkan, A. (2002). Turkish National Fundamental GPS Network 1999 (TFGN-99). *Map Journal Special Issue*, Vol.16, pp. 47-50

Bruinsma, S.L., Marty, J.C., Balmino, G., Biancale, R., Foerste, C., Abrikosov, O. & Neumayer, H. (2010). GOCE Gravity Field Recovery by Means of the Direct Numerical Method, *Proceedings of ESA Living Planet Symposium 2010*, Bergen, Norway, June-July 2010

Çelik, R.N., Ayan, T. & Erol, B. (2002). *Geodetic Infrastructure Project of Marmara Earthquake Region Land Information System (MERLIS)* (Report ID-ITU 2002/06/20), Istanbul Technical University, Istanbul

Çepni, M.S. & Deniz, R. (2005). Examination of Continuity on Geodetic Transformations. *ITU Journal/d*, Vol.4, No.5, pp. 43-54 [in Turkish]

Deniz, R. & Çelik, R.N. (Eds.). (2008). *Explanations and Examples Book of Large Scale Map and Spatial Data Production Regulation (legalized in 15 June 2005)*, Chamber of Surveying Engineers, Ankara, Turkey, 86 pp. [in Turkish], 30.07.2011, Available from http://www.hkmo.org.tr/resimler/ekler/2CO1\_db11d259a9db7fb\_ek.pdf

Dermanis, A. & Rossikopoulos, D. (1991). Statistical Inference in Integrated Geodesy, *Proceedings of IUGG XXth General Assembly, International Association of Geodesy*, Vienna, August 1991

Draper, N.R. & Smith, H. (1966). *Applied Regression Analysis*, John Wiley & Sons, Inc., USA

Erol, B. (2007). *Investigations on Local Geoids for Geodetic Applications*, PhD Thesis, Institute of Science and Technology, Istanbul Technical University, Turkey

Erol, B. (2011). An automated height transformation using precise geoid models. *Scientific Research and Essays*, Vol.6, No.6, pp. 1351-1363

Erol, B. & Çelik, R.N. (2004). Precise Local Geoid Determination to Make GPS Technique More Effective in Practical Applications of Geodesy, *Proceedings of FIG Working Week 2004*, Athens, Greece, April 2004, 30.07.2011, Available from http://www.fig.net/pub/athens/papers/ts07/ts07\_3\_erol\_celik.pdf

Erol, B. & Çelik, R.N. (2006). Modelling Local GPS/Levelling Geoid: Assessment of Inverse Distance Weighting and Geostatistical Kriging Methods. *Geoinformation Science Journal*, Vol.6, No.1, pp. 78-83

Erol, B., Erol, S. & Çelik, R.N. (2005). Precise Geoid Model Determination Using GPS Technique and Geodetic Applications, In: *Proceedings 2nd International Conference on Recent Advances in Space Technologies*, S. Kurnaz, F. Ince, S. Inbasioglu, S. Basturk (Eds.), pp. 395-399, IEEE, Istanbul, Turkey, doi: 10.1109/RAST.2005.1512599

Erol, B., Erol, S. & Çelik, R.N. (2008). Height transformation using regional geoids and GPS/levelling in Turkey. *Survey Review*, Vol.40, No.307, pp. 2-18

Erol, B., Sideris, M.G. & Çelik, R.N. (2009). Comparison of Global Geopotential Models from the CHAMP and GRACE Missions for Regional Geoid Modeling in Turkey. *Studia Geophysica et Geodaetica*, Vol.53, pp. 419-441

Featherstone, W.E. (1998). Do we need a gravimetric geoid or a model of the base of the Australian Height Datum to transform GPS heights? *The Australian Surveyor*, Vol.43, No.4, pp. 273-280

Featherstone, W.E. (2001). Absolute and relative testing of gravimetric geoid models using Global Positioning System and orthometric height data. *Computers & Geosciences*, Vol.27, No.7, pp. 807-814, doi: 10.1016/S0098-3004(00)00169-2

Featherstone, W.E., Denith, M.C. & Kirby, J.F. (1998). Strategies for the accurate determination of orthometric heights from GPS. *Survey Review*, Vol.34, No.267, pp. 278-296

Forsberg, R. (1994). Terrain Effects in Geoid Computations, In: *Lecture Notes - International School for the Determination and Use of the Geoid*, 101-134, IGeS, DIIAR – Politecnico di Milano, Italy

Fotopoulos, G. (2003). *An Analysis on the Optimal Combination of Geoid, Orthometric and Ellipsoidal Height Data*, PhD Thesis, UCGE Report 20185, Geomatics Engineering Department, University of Calgary, Canada

Fotopoulos, G. (2005). Calibration of geoid error models via a combined adjustment of ellipsoidal, orthometric and gravimetric geoid height data. *Journal of Geodesy*, Vol.79, No.1-3, pp. 111-123

Fotopoulos, G., Kotsakis, C. & Sideris, M.G. (2001). How accurately can we determine orthometric height differences from GPS and geoid data? *Journal of Surveying Engineering*, Vol.129, No.1, pp. 1-10

Förste, C., Bruinsma, S., Shako, R., Marty, J.C., Flechtner, F., Abrikosov, O., Dahle, C., Lemoine, J.M., Neumayer, H., Biancale, R., Barthelmes, F., König, R. & Balmino, G. (2011). EIGEN-6 - A new combined global gravity field model including GOCE data from the collaboration of GFZ-Potsdam and GRGS-Toulouse. *Geophysical Research Abstracts*, Vol.13


Förste, C., Stubenvoll, R., König, R., Raimondo, J.C., Flechtner, F., Barthelmes, F., Kusche, J., Dahle, C., Neumayer, H., Biancale, R., Lemoine, J.M. & Bruinsma, S. (2009). Evaluation of EGM2008 by comparison with other recent global gravity field models. *Newton's Bulletin (Special Issue)*, Vol.4, pp. 26-37

GFZ (2006). The CHAMP Mission, 30.07.2011, Available from http://www.gfz-potsdam.de/pb1/op/champ/results/grav/010\_eigen-champ03s.html

GGM02 (2004). GRACE Gravity Model 02, Center of Space Research (CSR) of the University of Texas at Austin, U.S., 30.07.2011, Available from http://www.csr.utexas.edu/grace/gravity/

GOCE (2009). European Space Agency GOCE (Gravity field and steady-state Ocean Circulation Explorer) Project Website, 30.07.2011, Available from http://earth.esa.int/GOCE/

Goiginger, H., Hoeck, E., Rieser, D., Mayer-Guerr, T., Maier, A., Krauss, S., Pail, R., Fecher, T., Gruber, T., Brockmann, J.M., Krasbutter, I., Schuh, W.D., Jaeggi, A., Prange, L., Hausleitner, W., Baur, O. & Kusche, J. (2011). The combined satellite-only global gravity field model GOCO02S, *Proceedings of 2011 General Assembly of the European Geosciences Union*, Vienna, Austria, April 2011

Gruber, T. (2004). Validation concepts for gravity field models from new satellite missions, 30.07.2011, Available from http://earth.esa.int/workshops/goce04/participants/

Haagmans, R., de Min, E. & van Gelderen, M. (1993). Fast evaluation of convolution integrals on the sphere using 1D FFT and a comparison with existing methods for Stokes' integral. *Manuscripta Geodaetica*, Vol.18, No.5, pp. 227-241

Heiskanen, W.A. & Moritz, H. (1967). *Physical Geodesy*, W.H. Freeman and Company, San Francisco

Hirt, C. & Seeber, G. (2007). High-resolution local gravity field determination at the submillimetre level using a digital zenith camera system, In: *Dynamic Planet*, P. Tregoning & C. Rizos (Eds.), 316-321, Springer Verlag, Berlin, Heidelberg

Hofmann-Wellenhof, B. & Moritz, H. (2006). *Physical Geodesy* (2nd Edition), Springer, 978-3211335444, New York

ICGEM (2011). Table of Available Models. *International Center for Global Earth Models, GeoForschungsZentrum Potsdam* (GFZ), Germany, 30.07.2011, Available from http://icgem.gfz-potsdam.de/ICGEM/ICGEM.html

Jang, J.S. (1993). ANFIS: adaptive-network based fuzzy inference system. *IEEE Transactions on Systems, Man, and Cybernetics*, Vol.23, No.3, pp. 665-685, doi: 10.1109/21.256541

Kavzaoğlu, T. & Saka, M.H. (2005). Modelling local GPS/Levelling geoid undulations using artificial neural networks. *Journal of Geodesy*, Vol.78, pp. 520-527, doi: 10.1007/s00190-004-0420-3

Kiamehr, R. & Sjöberg, L.E. (2005). Comparison of the qualities of recent global and local gravimetric geoid models in Iran. *Studia Geophysica et Geodaetica*, Vol.49, pp. 289-304

Kılıçoğlu, A., Demir, C. & Fırat, O. (2005). Data and Methods Used in Computation of New Turkish Geoid 2003 (TG-03), *Proceedings of Turkish National Geodesy Commission 2005 Year Annual Meeting: Geoid and Vertical*, Trabzon, October 2005

Kılıçoğlu, A. & Fırat, O. (2003). Geoid Modelling with the Purpose of Determining Orthometric Heights using GPS for Large Scale Map Production and Case Studies, *Proceedings of Turkish National Geodesy Commission 2003 Year Scientific Conference - Invited Paper*, Konya, Turkey, September 2003, 30.07.2011, Available from http://www.harita.selcuk.edu.tr/arsiv/calistay2003/02ak\_jeoid.pdf

Klokočník, J., Reigber, C., Schwintzer, P., Wagner, C.A. & Kostelecký, J. (2002). Evaluation of pre-CHAMP gravity field models GRIM5-S1 and GRIM5-C1 with satellite crossover altimetry. *Journal of Geodesy*, Vol.76, pp. 189-198

Kotsakis, C. & Sideris, M.G. (1999). On the adjustment of combined GPS/levelling/geoid networks. *Journal of Geodesy*, Vol.73, No.8, pp. 412-421

Lambeck, K. & Coleman, R. (1983). The Earth's shape and gravity field: a report of progress from 1958 and 1982. *Geophysical Journal of the Royal Astronomical Society*, Vol.74, pp. 25-54

Lemoine, F.G., Kenyon, S.C., Factor, J.K., Trimmer, R.G., Pavlis, N.K., Chinn, D.S., Cox, C.M., Klosko, S.M., Luthcke, S.B., Torrence, M.H., Wang, Y.M., Williamson, R.G., Pavlis, E.C., Rapp, R.H. & Olson, T.R. (1998). *The Development of the Joint NASA GSFC and NIMA Geopotential Model EGM96*, Technical Paper, NASA/TP-1998-206861, National Aeronautics and Space Administration, Maryland, U.S., 575 pp.

LSMSDPR (2005). *Large Scale Map and Spatial Data Production Regulation*, Turkey

Merry, C.L. (2007). Evaluation of global geopotential models in determining the quasi-geoid for Southern Africa. *Survey Review*, Vol.39, pp. 180-192

Mikhail, E.M. & Ackermann, F. (1976). *Observations and Least Squares*, Harper & Row Publishers, ISBN 0-7002-2481-5, New York

Pavlis, N.K., Holmes, S.A., Kenyon, S.C. & Factor, J.K. (2008). An Earth Gravitational Model to Degree 2160: EGM2008, *Proceedings of 2008 General Assembly of the European Geosciences Union*, Vienna, Austria, April 2008

Rangelova, E., Fotopoulos, G. & Sideris, M.G. (2010). Implementing a Dynamic Geoid as a Vertical Datum for Orthometric Heights in Canada, In: *Gravity, Geoid and Earth Observation*, S.P. Mertikas (Ed.), 295-302, Springer Verlag, doi: 10.1007/978-3-642-10634-7\_38, Berlin, Heidelberg

Rao, C.R. (1971). Estimation of Variance Components – MINQUE Theory. *Journal of Multivariate Statistics*, Vol.1, pp. 257-275

Rodriguez-Caderot, G., Lacy, M.C., Gil, A.J. & Blazquez, B. (2006). Comparing recent geopotential models in Andalusia (Southern Spain). *Studia Geophysica et Geodaetica*, Vol.50, pp. 619-631

Roland, M. & Denker, H. (2003). Evaluation of Terrestrial Gravity Data by New Global Gravity Field, In: *Gravity and Geoid*, I.N. Tziavos (Ed.), 256-261, Ziti Publishing, Greece

Rummel, R., Balmino, G., Johannessen, J., Visser, P. & Woodworth, P. (2002). Dedicated gravity field missions - principles and aims. *Journal of Geodynamics*, Vol.33, pp. 3-20

Sadiq, M. & Ahmad, Z. (2009). On the selection of optimal global geopotential model for geoid modelling: A case study in Pakistan. *Advances in Space Research*, Vol.44, pp. 627-639

Schwarz, K.P., Sideris, M.G. & Forsberg, R. (1987). Orthometric Heights Without Leveling. *Journal of Surveying Engineering*, Vol.113, pp. 28-40

Sen, A. & Srivastava, R.M. (1990). *Regression Analysis: Theory, Methods and Applications*, Springer Texts in Statistics, Springer, New York

Sideris, M.G. (1994). Geoid Determination by FFT Techniques, In: *Lecture Notes - International School for the Determination and Use of the Geoid*, 213-272, IGeS, DIIAR – Politecnico di Milano, Italy

Sjöberg, L. (1984). Non-negative Variance Component Estimation in the Gauss-Helmert Adjustment Model. *Manuscripta Geodaetica*, Vol.9, pp. 247-280


**7** 



## **Precise Real-Time Positioning Using Network RTK**

Ahmed El-Mowafy
*Curtin University, Australia*

## **1. Introduction**



In the classic RTK method using a single reference station, the rover needs to work within a short range from the reference station due to the spatial decorrelation of distance-dependent errors induced by the ionosphere, troposphere and orbital errors. The operating range of RTK positioning is thus dependent on the existing atmospheric conditions and is usually limited to a distance of 10-20 km. In addition, no redundancy is usually available if the reference station experiences any malfunctioning. The constraint of the limited reference-to-rover range in RTK can be removed by using a method known as Network RTK (NRTK), whereby a network of reference stations with inter-station distances usually less than 100 km is used. The network stations continuously collect satellite observations and send them to a central processing facility, at which the station observations are processed in a common network adjustment and observation errors and their corrections are computed. The observation corrections obtained from the network are sent to the user, operating within the coverage area of the network, to mitigate their observation errors.
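The often-quoted rule of thumb that residual differential errors grow at roughly 1-2 ppm of the baseline length makes the 10-20 km limit concrete. The sketch below is purely illustrative (the function name and the 2 ppm growth rate are assumptions, not values from this chapter):

```python
# Illustrative only: assume distance-dependent errors (ionosphere,
# troposphere, orbits) leave a residual differential error of roughly
# 2 ppm of the baseline length after single-base differencing.
def differential_error_m(baseline_km: float, ppm: float = 2.0) -> float:
    """Approximate residual differential error (metres) for a rover
    working 'baseline_km' away from a single reference station."""
    return baseline_km * 1000.0 * ppm * 1e-6

for d in (10, 20, 50, 100):
    print(f"{d:>4} km baseline -> ~{differential_error_m(d) * 100:.0f} cm residual error")
```

Under this assumption a 10-20 km baseline leaves a few centimetres of residual error, consistent with cm-level RTK, while a 100 km baseline does not; mitigating that growth is precisely what the network corrections are for.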

In this chapter, the principles of Network RTK are first discussed and the advantages and disadvantages of the method are given. Next, the network design parameters are discussed, which include network baseline lengths and configuration, the communication method between the computing centre and the user, and the amount of calculations required by the network processing centre and by the user. Possible network processing techniques, their basic models, and a comparison between their advantages and disadvantages are described. Finally, some important NRTK applications are discussed, including the use of NRTK in engineering surveying, machine automation and airborne mapping and navigation. Results from real-time testing are discussed.

## **2. Principles of the network RTK**

The aim of network RTK is to minimise the influence of the distance-dependent errors on the computed position of a rover within the bounds of the network. NRTK provides redundancy of reference stations in the solution, such that if observations from one reference station are not available, a solution is still possible since the observations are gathered and processed in a common network adjustment. Figure 1 illustrates the concept of NRTK through the relationship between the modelled distance-dependent errors and their actual values. The error planes at the three shown reference stations are at different levels. The NRTK provides an error surface formed from the errors at the three reference stations (a plane in this case). The actual change of error between the reference stations is shown in red. If a user is close to any of the stations, assuming the same level of error as that reference station will give reasonable accuracy and result in small positioning errors at the rover. As the user moves away from the reference station, the magnitude of the differential error between the actual error and the reference station error level increases. On the other hand, the differential error between the actual error and the NRTK estimated error, interpolated on the NRTK error surface at the location of the rover, is significantly minimised.

Fig. 1. Relationship between errors in a small NRTK coverage area

In principle, the RTK network approach consists of four basic segments: data collection at the reference stations; manipulation of the data and generation of corrections at the network processing centre; broadcasting the corrections; and finally positioning at the rover utilizing information from the NRTK. In the first segment, multiple reference stations simultaneously collect GNSS satellite observations and send them to the control centre, where a main computer directly controls all the reference stations, mostly via the Internet. All reference stations should use geodetic-grade multi-frequency GNSS receivers. The incoming GNSS observation data from all operating reference stations are screened for blunders and next their ambiguities are fixed. The control computer uses these data in processing a network solution, and the data are archived for post-processing use. The network information is then broadcast to users. The network information depends on the processing algorithm and may include any of the following: observations from one reference station (physical or virtual), coefficients for interpolation of corrections within the coverage area, and observation corrections at a group of reference stations. To increase reliability, it is recommended to let a second computer work in real time as a backup to the main computer in the event of any malfunctioning.

NRTK usually requires a minimum of three reference stations to generate corrections for the network area. In general there is no restriction on the network size; it can be regional, national, or even international. However, reference station separation is usually restricted to less than 100 km to allow for quick and reliable ambiguity resolution. As the number of stations increases, redundancy increases, and better corrections can be estimated. If one or two reference stations fail at the same time, their contribution can be eliminated from the solution and the remaining reference stations can still provide the user with corrections and give reliable results (El-Mowafy *et al*., 2003, Hu *et al*., 2003). Typically, a NRTK server system would consist of the following components (e.g. Leica Geosystems, 2011):

• A site server connected to each reference station receiver.
• A network server that acquires the data from the site servers and sends it to the processing centre.
• A cluster server that hosts the network processing software. The software performs several tasks including: quality checking of data, applying antenna phase centre corrections, ambiguity fixing, modelling and estimation of systematic errors, and generation of virtual observations and interpolation of errors (corrections) in some techniques (e.g. VRS, PRS), model coefficients in others (FKP), or MAC data.
• A firewall, usually established to protect the above servers from being accessed by a user.
• An RTK proxy server to deal with requests from the users and send back network information.
• The user interface to send/receive data from the NRTK centre.

The main advantages of the Network RTK can be summarised as follows:

• Cost and labour reduction, as there is no need to set up a base reference station for each user.
• Accuracy of the computed rover positions is more homogeneous and consistent, as error mitigation refers to one processing software, which uses the same functional and stochastic modelling and assumptions, and uses the same datum.
• Accuracy is maintained over larger distances between the reference stations and the rover.
• The same area can be covered with fewer reference stations compared to the number of permanent reference stations required using single-reference RTK. The separation distances between network stations are tens of kilometres, usually kept less than 100 km.
• NRTK provides higher reliability and availability of RTK corrections with improved redundancy, such that if one station suffers from malfunctioning, a solution can still be obtained from the rest of the reference stations.
• Network RTK is capable of supporting multiple users and applications.

Network RTK has, though, some disadvantages, which are:

• The cost of subscription with a NRTK provider.
• The cost of wireless communication with the network (typically via a wireless mobile network, using for instance GPRS technology).
• The dependence on an external source to provide essential information.


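The error-surface idea of Figure 1 can be illustrated with a short numerical sketch. The snippet below fits a plane through the distance-dependent errors observed at three reference stations and evaluates it at the rover location; the station coordinates and error values are invented purely for illustration, and real NRTK software uses more elaborate interpolation (see section 3.3):

```python
import numpy as np

# Hypothetical east/north coordinates (km) of three reference stations
# and the distance-dependent error (m) observed at each -- invented values.
stations = np.array([[0.0, 0.0],
                     [60.0, 0.0],
                     [30.0, 50.0]])
errors = np.array([0.05, 0.09, 0.02])  # metres

# Fit the plane  e = a + b*E + c*N  through the three (E, N, error) points.
A = np.hstack([np.ones((3, 1)), stations])
a, b, c = np.linalg.solve(A, errors)

def nrtk_error_surface(east_km: float, north_km: float) -> float:
    """Interpolated network error (m) at an arbitrary rover location."""
    return a + b * east_km + c * north_km

rover = (25.0, 20.0)
print(f"interpolated error at rover: {nrtk_error_surface(*rover):.3f} m")
```

A rover in the middle of this triangle would use the interpolated value rather than the error of the nearest station, which is exactly how the differential error between the actual error and the NRTK surface is kept small away from the stations.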
## **3. Network design parameters**

Establishing a network RTK usually starts after a thorough cost/benefit analysis. At the design stage, the following main factors should be considered:

1. Baseline lengths (distances between the reference stations), station locations and network configuration (number and geometric distribution of reference stations).
2. The communication method between the computing centre and the user.
3. Calculations required by the network control centre and by the user (network algorithm).

These factors are discussed in the following sections.

### **3.1 Distance between the reference stations and network configuration**

The main advantage of the network approach is that it improves modelling of the distance-dependent errors over long distances (El-Mowafy *et al*., 2003 and Euler *et al*., 2004). The observation corrections (computed as the same value of the errors but with opposite sign) can be generated after removing cycle slips and determining double-differenced phase ambiguities between the reference stations. A major technical challenge in NRTK is ambiguity resolution within a reasonably short period over such large distances between reference stations. In order to achieve a fast and reliable ambiguity resolution, the distance between the reference stations is better chosen not to exceed 100 km (Wübbena and Willgalis, 2001). Typically, baseline lengths in NRTK range between 20 km and 100 km (70 km on average).

In principle, a minimum of three stations is required to generate RTK network corrections, but in practice this number should not be less than five. The increased redundancy of reference stations improves positioning accuracy and ambiguity resolution and helps to sustain network availability and reliability in the case of a temporary failure of any reference station. However, the degree of redundancy should be evaluated by means of a cost/benefit analysis, balancing the need to improve the economical aspects of establishing and running the network and keeping the required degree of redundancy. Hence, selection of baseline lengths by the network designers should satisfy the following conditions:

• covering the whole area of interest with reliable corrections;
• maintaining sufficient station redundancy;
• achieving a reliable ambiguity resolution with an acceptable confidence level at every location within the network area, as long as a minimum of five satellites are observed;
• ensuring reliable communications between the reference stations and the network centre (mostly via the internet through land lines, and in remote areas through satellite communication);
• choosing sites free from multipath and radio frequency interference. It is also preferable to have the reference stations situated at a similar altitude.

In network configuration, the following can be taken into consideration:

• For a limited number of reference stations, it is recommended to shape the network as a polygon with one or more central stations.
• A compact shape of the network is preferable (i.e. a circular network is better than a rectangular network).
• Some geographic regions (i.e. equatorial or high latitude) will require a denser network than in the mid latitudes due to poorer satellite geometry, satellite availability, ionosphere disturbance, etc.

In practice, a main factor affecting the choice of station distances and network configuration is finding suitable sites for the reference stations. The main considerations are availability of communication infrastructure and obtaining approval of site owners (whether a government or a private sector).

It is also possible to integrate observation corrections estimated from networks of different sizes. For instance, errors of regional or even global nature, such as satellite orbital errors, clocks and the regional behaviour of the ionosphere, which slowly change, can be estimated from regional networks. The local ionospheric and tropospheric errors, on the other hand, can be estimated from local networks. Thus, RTK networks can be configured such that areas of heavy usage can be covered by a close-meshed reference station network for highest accuracy and reliability in positioning, whereas less important areas are covered by a wide-meshed network of regional or national extension (Wübbena *et al*., 2001).

### **3.2 Communication method between the processing centre and the user**

Real-time applications require a communication link between a service provider and the user. Currently, there are two main modes of communication that can be used in network RTK: either duplex (bi-directional) communication or one-direction communication. Each method has its advantages and disadvantages. In choosing which communication method to use, the designer has to consider economical factors such as the operational cost to the user, the cost of maintaining existing infrastructure and/or building new infrastructure, and the amount of computations needed by the rover and the processing centre. The technical aspects that need to be addressed include:

• number of users,
• protocol,
• expected signal strength at different locations,
• range and coverage (Wu, 2009),
• transmission bandwidth,
• reliability and error correction,
• latency (one-second and shorter data transmission latencies are required for cm-level positioning accuracy).

At present, the duplex communication mode is the most widely used method. In this mode, a cellular modem using General Packet Radio Service (GPRS) or Global System for Mobile Communications (GSM) is used. GPRS is usually preferred as it is more economical than GSM, since the user only pays for the data packets received, not for the entire call duration as with GSM. GPRS can provide a stable and reliable connection with latencies less than one second (Hu *et al*., 2002). The duplex approach has a restriction on the number of users, as this number is limited by the ability of the NRTK processing centre to simultaneously perform calculations for all users. This may also result in extended latency in receiving the network information. For a limited number of users this latency is usually less than three seconds. On the other hand, the one-direction communication method mainly employs VHF or UHF broadcasting or encodes the RTK corrections into a broadcast TV audio sub-carrier
Precise Real-Time Positioning Using Network RTK 167

referencing is made to a non-physical reference station located in the vicinity of the approximate position of the rover and virtual observations are generated to refer to this non-physical reference station. The user typically has no information about the size of errors and their behaviour. In contrast to the non-physical network approach, FKP and MAC broadcast raw reference station observations and network information separately. The network information is represented by dispersive and non-dispersive corrections and the rover software decides how the network information is applied. A summary of these

Once the network errors are computed at the reference station, distance-dependent errors need to be interpolated at the location of the user receiver. Several methods can be used for such interpolation process including: the use of linear interpolation, using a linear combination model, applying an inverse-distance linear interpolation or a low-order surface model (used for example in the FKP technique), utilisation of the least-squares collocation approach, or using Kriging techniques (see for instance Fototpoulos, 2000, Dai *et al*., 2001,

**4. Estimation of the dispersive and non-dispersive errors at the network** 

The mathematical equation of the code and phase observations for the receiver (j) and the

s s s s s s s s ss j j j j jj j j j Pj s

s s s s s s s s s ss j j j j j jj j j i j s

<sup>δ</sup>tj, <sup>δ</sup>ts receiver and satellite clock errors, respectively; s T (t) modelled tropospheric refraction delay j (mainly the hydrostatic, dry component of

( ) <sup>s</sup> T t <sup>j</sup> δ residual tropospheric refraction delay (mainly the unmodelled wet troposphere);

<sup>j</sup> I t modelled ionospheric refraction delay if applied (frequency dependent);

<sup>R</sup> <sup>c</sup> R r c ( t t ) T (t) T (t) I (t) I (t) N p <sup>R</sup> <sup>f</sup> <sup>φ</sup> φ = + δ + δ −δ + +δ − −δ + + +ε

C R r c( t t ) T t T t I t I t p <sup>R</sup> = + δ + δ −δ + +δ − −δ + +ε

() () () ()

<sup>G</sup> <sup>G</sup> <sup>G</sup> <sup>G</sup> (1)

<sup>G</sup> <sup>G</sup> <sup>G</sup> <sup>G</sup> (2)

methods is given in section 5.

Wu, 2009 and Al-Shaery *et al*., 2010).

satellite (s) at time (t) can be written as:

s

R

s

j

<sup>j</sup> <sup>φ</sup> code and phase observations, respectively; s Rj

<sup>G</sup> geometric range between the user's antenna and the satellite;

<sup>j</sup> δI t residual ionospheric refraction (frequency dependent);

<sup>s</sup> p total site dependent errors (antenna phase centre and multipath j <sup>s</sup> Mj <sup>δ</sup> ); s

<sup>φ</sup><sup>j</sup> ε code and phase remaining random noise, respectively.

j

**reference stations** 

Where:

<sup>G</sup> orbit error; c speed of light;

the troposphere);

<sup>s</sup> N integer phase ambiguity; j

f signal frequency;

<sup>s</sup> C , j s

<sup>s</sup> δr

( ) <sup>s</sup>

( ) <sup>s</sup>

Pj ε , <sup>s</sup>

signal (Petrovski *et al*., 2001). For VHF broadcasting, allocation of suitable broadcast radio frequency and obtaining its license is an important issue in the early development of a network RTK. The main advantage of this method is that there is no restriction on the number of users concurrently using the NRTK service. However, the main disadvantage of the method is the high cost of the infrastructure needed to build radio signal repeaters, if needed, to cover the whole area. In addition, some problems can be experienced due to the possibility of receiving signals of varying strength in different locations, and possible frequency jamming. A mix of both communication methods is however possible (Cruddace *et al*., (2002).

The data transmission from the reference stations to the control centre server and from the control centre server to the user for RTK corrections is mostly carried out via the Network Transport of RTCM via Internet Protocal (Ntrip), BKG, 2011. Ntrip is an open source and can be downloaded from the internet (LENZ, 2004). Ntrip was built over the TCP/IP foundation and is an application–level protocol for streaming GNSS data over the internet. It was first developed by the German Federal Agency for Cartography and Geodesy (BKG). Ntrip uses HTTP and has three components: Ntrip Client, Server and Ntrip Caster. Ntrip is designed for disseminating differential correction data (e.g. in the RTCM-SC104 format) or other kind of GNSS streaming data to stationary or mobile users over the internet. It allows simultaneous PC, Laptop, PDA, or receiver connections to a broadcasting host. Ntrip supports wireless internet access through mobile IP networks like GSM, GPRS, EDGE, or UMTS (BKG, 2011).

To reduce latency, the amount of data transmitted to the rover should be minimised. One possible solution is to change (optimise) the update rates for the different parameters to follow their physical behaviour. Distance dependent errors can thus be separated into a dispersive component, consisting mainly of the ionospheric refraction, and a non-dispersive component consisting of the tropospheric refraction and orbit errors. Different proposals for optimising the update rates have been made. An update rate of 15 seconds seems reasonable for non-dispersive correction differences, while an update rate of only 10 seconds may be sufficient for the dispersive contribution (Euler *et al*., 2004). However, the impact of these rates on the Time-To-First-Fix (TTF) of carrier phase ambiguities should be carefully studied, as it lies at the top of the user interests (El-Mowafy, 2005).

The type of communications used also affects the network algorithm and the amount of calculations required at the processing centre and by the user. For instance, if a bidirectional communication is used, the processing centre can individualise the network information for a user based on his/her approximate location. Thus, the computations made at the user receiver are minimised. On the other hand, if the data link is one-directional, the user has to make the necessary interpolation of errors at his location and has to identify a suitable reference station to use.

### **3.3 NRTK solution methods**

Currently, several solution methods can be applied in Network RTK, including the Virtual Reference Station (VRS), Pseudo-Reference Station (PRS), individualised Master-Auxiliary corrections (iMAX), Area-Correction Parameters (Flächenkorrekturparameter, FKP, in its German origin), and the Master-Auxiliary (MAC) method. In VRS, PRS and iMAX, referencing is made to a non-physical reference station located in the vicinity of the approximate position of the rover, and virtual observations are generated to refer to this non-physical reference station. The user typically has no information about the size of the errors and their behaviour. In contrast to the non-physical network approach, FKP and MAC broadcast raw reference station observations and network information separately. The network information is represented by dispersive and non-dispersive corrections, and the rover software decides how the network information is applied. A summary of these methods is given in section 5.

Once the network errors are computed at the reference stations, the distance-dependent errors need to be interpolated at the location of the user receiver. Several methods can be used for this interpolation, including linear interpolation, a linear combination model, inverse-distance linear interpolation, a low-order surface model (used for example in the FKP technique), least-squares collocation, or Kriging techniques (see for instance Fotopoulos, 2000, Dai *et al*., 2001, Wu, 2009 and Al-Shaery *et al*., 2010).
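As an illustration of the inverse-distance option above, the following sketch interpolates a per-satellite network error at the rover's approximate position; the station coordinates and error values are invented for the example:

```python
import math

def idw_interpolate(stations, rover, power=1.0):
    """Inverse-distance-weighted interpolation of a distance-dependent
    error at the rover's approximate position.

    stations: list of (x, y, error) tuples in a local metric frame
    rover:    (x, y) approximate rover coordinates
    """
    weights, weighted_sum = 0.0, 0.0
    for x, y, err in stations:
        d = math.hypot(x - rover[0], y - rover[1])
        if d < 1e-6:              # rover coincides with a station
            return err
        w = 1.0 / d ** power
        weights += w
        weighted_sum += w * err
    return weighted_sum / weights

# Three reference stations with per-satellite error estimates (metres)
stations = [(0.0, 0.0, 0.04), (30000.0, 0.0, 0.06), (0.0, 40000.0, 0.02)]
result = idw_interpolate(stations, (10000.0, 10000.0))
```

The interpolated value always lies between the smallest and largest station errors, with the nearest station dominating; `power` controls how quickly a station's influence decays with distance.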

## **4. Estimation of the dispersive and non-dispersive errors at the network reference stations**

The mathematical equations of the code and phase observations for receiver *j* and satellite *s* at time *t* can be written as:

$$C_j^s = \left|\vec{R}_j^s\right| + \frac{\vec{R}_j^s}{\left|\vec{R}_j^s\right|}\,\delta\vec{r}^s + c\left(\delta t_j - \delta t^s\right) + T_j^s(t) + \delta T_j^s(t) + I_j^s(t) + \delta I_j^s(t) + p_j^s + \varepsilon_{P_j}^s \tag{1}$$

$$\phi_j^s = \left|\vec{R}_j^s\right| + \frac{\vec{R}_j^s}{\left|\vec{R}_j^s\right|}\,\delta\vec{r}^s + c\left(\delta t_j - \delta t^s\right) + T_j^s(t) + \delta T_j^s(t) - I_j^s(t) - \delta I_j^s(t) + \frac{c}{f}N_j^s + p_j^s + \varepsilon_{\phi_j}^s \tag{2}$$

Where:


- $C_j^s$, $\phi_j^s$: code and phase observations, respectively;
- $\vec{R}_j^s$: geometric range vector between the user's antenna and the satellite;
- $\delta\vec{r}^s$: orbit error;
- $c$: speed of light;
- $\delta t_j$, $\delta t^s$: receiver and satellite clock errors, respectively;
- $T_j^s(t)$: modelled tropospheric refraction delay (mainly the hydrostatic, dry component of the troposphere);
- $\delta T_j^s(t)$: residual tropospheric refraction delay (mainly the unmodelled wet troposphere);
- $I_j^s(t)$: modelled ionospheric refraction delay, if applied (frequency dependent);
- $\delta I_j^s(t)$: residual ionospheric refraction (frequency dependent);
- $f$: signal frequency;
- $N_j^s$: integer phase ambiguity;
- $p_j^s$: total site-dependent errors (antenna phase centre and multipath $\delta M_j^s$);
- $\varepsilon_{P_j}^s$, $\varepsilon_{\phi_j}^s$: remaining random noise of code and phase, respectively.
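The structure of Eq. (1) can be checked term by term with a small numerical sketch; all values below are illustrative (clock errors in seconds, every other term in metres), not real measurements:

```python
import math

def code_observation(sat_pos, user_pos, orbit_err, dt_rx, dt_sat,
                     tropo, d_tropo, iono, d_iono, site_err, noise):
    """Assemble a code observation term by term, following Eq. (1)."""
    c = 299_792_458.0                       # speed of light, m/s
    rng = math.dist(sat_pos, user_pos)      # |R_j^s|
    # Projection of the orbit error vector onto the line of sight.
    los = [(s - u) / rng for s, u in zip(sat_pos, user_pos)]
    orbit_term = sum(l * e for l, e in zip(los, orbit_err))
    return (rng + orbit_term + c * (dt_rx - dt_sat)
            + tropo + d_tropo + iono + d_iono + site_err + noise)

# Illustrative ECEF positions (m), orbit error (m) and error budget.
obs = code_observation((15600e3, 7540e3, 20140e3), (3875e3, 332e3, 5028e3),
                       (1.0, -0.5, 0.8), 1e-7, -2e-8,
                       2.3, 0.1, 3.5, 0.2, 0.05, 0.3)
```

With these numbers the clock term dominates the departure from the geometric range (about 36 m), which is why clock errors are handled separately from the atmospheric state in the network estimation.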

Precise Real-Time Positioning Using Network RTK 169


From the above equations, one can see that positioning accuracy from GNSS phase observations is limited by two types of errors: the distance dependent errors, which include orbit, ionosphere and troposphere errors, and station dependent errors, which include multipath, antenna phase centre variation, and receiver hardware biases. The network estimation methodology uses the known information of the antennae and site to reduce station related errors and focuses on estimating the distance-dependent errors.

For the station-dependent errors, multipath can be minimised by using choke rings and by modelling the site-specific multipath pattern, taking advantage of the fixed reflector-to-antenna geometry at reference stations and of the daily repeatability of multipath. This can be done utilising techniques such as the Hilbert-Huang transformation to decompose the time-shifted post-fit GPS phase signal residuals (Hsieh and Wu, 2008). Another approach is to include the multipath error in the network estimation process, which will average out the uncorrelated multipath errors. To minimise the antenna phase centre variation, the set-up of the network reference station antennae always has to be consistent. This can be done by using the same antenna model type for all reference stations and unifying antenna orientation. To eliminate the phase centre variation, an absolute calibration of each antenna is recommended. However, most current networks only apply relative calibration of the antennae, which is a standard calibration process that can be applied for the type of antenna used, determined relative to a reference antenna (typically a Dome Margolin Model T with choke ring).

The distance-dependent errors can be separated into a dispersive component (i.e. frequency dependent), which is the error induced by the ionosphere, and a non-dispersive component, which includes orbital and tropospheric errors. Estimation of the dispersive and non-dispersive errors at the network reference stations can be performed in several ways. In one approach, the state of the individual GPS errors can be estimated in real time by processing all stations of the network simultaneously using un-differenced observables (Wübbena and Willgalis, 2001, Zebhauser et al., 2002, Wübbena et al., 2005). Then, the state vector $\vec{X}_j$ at station *j* reads:

$$\vec{X}_j = \left( N_j^s,\; \delta t_j,\; \delta t^s,\; \delta\vec{r}^s,\; \delta T_j^s,\; \delta r_{j_I}^s,\; \delta M_j^s \right)^{\mathrm{T}} \tag{3}$$

The orbital and tropospheric errors are combined to form the geometric (non-dispersive) error $\delta r_o^s$, while the ionospheric dispersive error is $\delta r_I^s$ (replacing the terms $I_j^s$ and $\delta I_j^s$ in Equations 1 and 2). The state space approach has some advantages; the main one is its ability to constrain each bias by specific models (Wübbena and Willgalis, 2001). Also, a change in the network configuration caused by the breakdown of one of the reference stations can be compensated without much effort. Moreover, in the case of irregular conditions of one of the state parameters, warnings can be issued to the users.

Another popular method for the estimation of network errors uses single-difference linear combinations of observations. The dispersive and non-dispersive components are determined for satellite *s* and between the reference stations *j* and *k*, using dual-frequency receivers of L1 and L2, as follows:

$$\delta\Delta r_{jk_I}^{s} = \frac{f_2^2}{f_2^2 - f_1^2}\left( \delta\Delta r_{jk_{L1}}^{s} - \delta\Delta r_{jk_{L2}}^{s} \right) \tag{4}$$

$$\delta\Delta r_{jk_o}^{s} = \frac{f_1^2}{f_1^2 - f_2^2}\,\delta\Delta r_{jk_{L1}}^{s} - \frac{f_2^2}{f_1^2 - f_2^2}\,\delta\Delta r_{jk_{L2}}^{s} \tag{5}$$
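The dispersive/non-dispersive split can be verified numerically. The sketch below applies the geometry-free and ionosphere-free combinations to a pair of single-difference residuals using the GPS L1/L2 frequencies; the residual values are invented:

```python
# GPS L1/L2 carrier frequencies in Hz
F1, F2 = 1575.42e6, 1227.60e6

def split_residuals(dr_l1: float, dr_l2: float):
    """Split single-difference residuals on L1/L2 into a dispersive
    (ionospheric, Eq. 4) and a non-dispersive (geometric, Eq. 5) part."""
    dispersive = F2**2 / (F2**2 - F1**2) * (dr_l1 - dr_l2)
    geometric = (F1**2 * dr_l1 - F2**2 * dr_l2) / (F1**2 - F2**2)
    return dispersive, geometric

# Consistency check: the two components rebuild the L1 residual.
disp, geo = split_residuals(0.031, 0.048)
print(abs(geo + disp - 0.031) < 1e-9)  # True
```

The check works for any input pair because the two combinations are complementary: adding the dispersive part (expressed on L1) back to the geometric part reproduces the L1 residual exactly.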

## **5. Summary of network RTK processing techniques**

In this section, the most common network RTK techniques used at present are discussed, namely the VRS, FKP, and MAC methods.

### **5.1 The Virtual Reference Station (VRS) method**

The VRS technique is currently the most popular NRTK method due to the fact that it does not require changes in the user software, i.e. it is compatible with existing software. The rover applies the standard differential positioning of its observations with observations from a 'virtual' reference station. The distance-dependent errors are computed for each pair of satellites, and for each master-to-another-reference station. The VRS method requires bidirectional communication. The rover sends its approximate position via a wireless communication link (typically a cellular modem in NMEA format) to the network processing centre where computations are carried out for each user (Vollath *et al*., 2000, Hu *et al*., 2003). Some network providers use only the nearest three-to-five reference stations to compute the measurement errors for a specific user, see Figure 2. The estimated network measurement distance-dependent errors are interpolated for a virtual reference station (VRS). The VRS location is typically selected at the initial approximate position of the rover. For a kinematic user, this VRS location is kept to preserve the ambiguity values determined from its solution until the range between the VRS and the actual position of the rover becomes too long for precise differential positioning. Then, a new VRS at the most recent position of the user is established.

Fig. 2. VRS concept

To construct the observations at the VRS, the known positions of the VRS and of the satellite are first used to compute the range between the satellite and the VRS. Similarly, the range between the satellite and the master station is computed, where the master station is usually the reference station nearest to the VRS. The VRS observations are then built from the master-station observations and the interpolated correction differences:

$$\Delta R_{jv}^{s} = R_{v}^{s} - R_{j}^{s} \tag{6}$$

$$\phi_{v}^{s} = \phi_{j}^{s} + \left( \Delta R_{jv}^{s} - \delta\Delta r_{jv_I}^{s} + \delta\Delta r_{jv_o}^{s} + \Delta T_{jv}^{s} \right) / \lambda \tag{7}$$

$$P_{v}^{s} = P_{j}^{s} + \Delta R_{jv}^{s} + \delta\Delta r_{jv_I}^{s} + \delta\Delta r_{jv_o}^{s} + \Delta T_{jv}^{s} \tag{8}$$

where $v$ denotes the VRS, $j$ the master station, $\delta\Delta r_{jv_I}^{s}$ and $\delta\Delta r_{jv_o}^{s}$ the interpolated dispersive and non-dispersive single-difference errors, $\Delta T_{jv}^{s}$ the tropospheric correction difference, and $\lambda$ the carrier wavelength.
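In code terms, shifting a master-station code observation to the VRS (with *v* the VRS and *j* the master station) amounts to adding the geometric range difference plus the interpolated correction differences. A minimal sketch with invented coordinates and correction values:

```python
import math

def geometric_range(sat, rx):
    """Euclidean range between satellite and receiver ECEF positions (m)."""
    return math.dist(sat, rx)

def vrs_code_observation(p_master, sat, master, vrs, d_disp, d_geo, d_tropo):
    """Shift a master-station code observation to the VRS location (Eq. 8).

    p_master : code observation at the master station (m)
    sat, master, vrs : ECEF coordinates (m)
    d_disp, d_geo    : interpolated dispersive / non-dispersive
                       single-difference errors master-to-VRS (m)
    d_tropo          : tropospheric correction difference (m)
    """
    delta_r = geometric_range(sat, vrs) - geometric_range(sat, master)  # Eq. 6
    return p_master + delta_r + d_disp + d_geo + d_tropo

# Invented geometry and corrections, for illustration only.
sat = (15600e3, 7540e3, 20140e3)
master = (3875e3, 332e3, 5028e3)
vrs = (3877e3, 334e3, 5026e3)
p_vrs = vrs_code_observation(22.0e6, sat, master, vrs, 0.03, 0.05, 0.01)
```

The rover then processes `p_vrs` exactly as it would a short-baseline observation from a physical reference station, which is why no change to the rover software is needed.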

$$\delta r(t) = a(t)\left(\lambda - \lambda_R\right) + b(t)\left(\phi - \phi_R\right) + c(t) \tag{9}$$

$$
\begin{bmatrix}
\delta r\_{R-1} \\
\delta r\_{R-2} \\
\vdots \\
\delta r\_{R-n}
\end{bmatrix} = \begin{bmatrix}
\Delta \lambda\_{R-1} & \Delta \phi\_{R-1} & 1 \\
\Delta \lambda\_{R-2} & \Delta \phi\_{R-2} & 1 \\
\vdots & \vdots & \vdots \\
\Delta \lambda\_{R-n} & \Delta \phi\_{R-n} & 1
\end{bmatrix} \begin{bmatrix} a \\ b \\ c \end{bmatrix} \tag{10}
$$

$$
\begin{bmatrix}
\hat{a} \\
\hat{b} \\
\hat{c}
\end{bmatrix} = (A^T A)^{-1} A^T \vec{\delta r} \tag{11}
$$

$$A = \begin{bmatrix} \Delta \lambda_{R-1} & \Delta \phi_{R-1} & 1\\ \Delta \lambda_{R-2} & \Delta \phi_{R-2} & 1\\ \vdots & \vdots & \vdots\\ \Delta \lambda_{R-n} & \Delta \phi_{R-n} & 1 \end{bmatrix} \text{ and } \vec{\delta r} = \begin{bmatrix} \delta r_{R-1} \\ \delta r_{R-2} \\ \vdots \\ \delta r_{R-n} \end{bmatrix} \tag{12}$$
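The small 3-unknown least-squares problem of Eqs (10) and (11) can be solved without any matrix library. The sketch below forms the normal equations directly and recovers the plane coefficients from synthetic residuals generated with a known plane (a = 0.02, b = -0.01, c = 0.005; the station offsets are invented):

```python
def fit_correction_plane(stations):
    """Fit the plane of Eqs (9)-(11): dr = a*dlam + b*dphi + c,
    by solving the normal equations (A^T A) x = A^T dr.

    stations: list of (dlam, dphi, dr) relative to the master station R.
    """
    # Accumulate A^T A (3x3) and A^T dr (3x1) station by station.
    ata = [[0.0] * 3 for _ in range(3)]
    atd = [0.0] * 3
    for dlam, dphi, dr in stations:
        row = (dlam, dphi, 1.0)
        for i in range(3):
            atd[i] += row[i] * dr
            for j in range(3):
                ata[i][j] += row[i] * row[j]
    # Gaussian elimination with partial pivoting on the 3x3 system.
    m = [ata[i] + [atd[i]] for i in range(3)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for k in range(col, 4):
                m[r][k] -= f * m[col][k]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (m[i][3] - sum(m[i][j] * x[j] for j in range(i + 1, 3))) / m[i][i]
    return x  # [a, b, c]

# Residuals synthesised from the plane dr = 0.02*dlam - 0.01*dphi + 0.005
data = [(1.0, 0.0, 0.025), (0.0, 1.0, -0.005),
        (-1.0, 1.0, -0.025), (2.0, 2.0, 0.025)]
a, b, c = fit_correction_plane(data)
```

With four stations and three unknowns the system is overdetermined; because the synthetic residuals lie exactly on the plane, the estimator recovers the coefficients to machine precision.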


### **5.2 The FKP method**

Typically in FKP, two planes are computed for each satellite, centred at each reference station: one plane for the dispersive and another for the non-dispersive corrections. The corrections at the rover are determined through interpolation using the inclination parameters of the correction planes. Results of Euler *et al*. (2002) showed that the plane surface model gave good results when modelling the regional trends of the correction differences. Although low-order (linear-plane) surface models are usually utilised, longer baselines between reference stations may require polynomials of a higher order.

The surface area model was discussed in several publications, e.g. Varner, 2000, Fotopoulos and Cannon, 2001 and Wübbena and Bagge, 2002. An example is given by the latter study for generation of FKP, where the errors are computed as follows:

$$\delta r_o(t) = 6.37\left( \mathrm{FKP}_{N_0}\left(\phi - \phi_R\right) + \mathrm{FKP}_{E_0}\left(\lambda - \lambda_R\right)\cos(\phi_R) \right) \tag{13}$$

$$\delta r_I(t) = 6.37\,\alpha\left( \mathrm{FKP}_{N_I}\left(\phi - \phi_R\right) + \mathrm{FKP}_{E_I}\left(\lambda - \lambda_R\right)\cos(\phi_R) \right) \tag{14}$$

$$\alpha = 1 + 16\left(0.53 - \theta/\pi\right)^{3} \tag{15}$$

Where:

- $\delta r_o$: estimated non-dispersive geometric (orbital and tropospheric) error;
- $\delta r_I$: estimated dispersive ionospheric error;
- $\mathrm{FKP}_{N_0}$, $\mathrm{FKP}_{E_0}$: the FKP parameters in the north-south and east-west directions for the geometric signal "ionosphere-free", in ppm;
- $\mathrm{FKP}_{N_I}$, $\mathrm{FKP}_{E_I}$: the FKP parameters in the north-south and east-west directions for the ionospheric signal "narrow lane", in ppm;
- $\phi$, $\lambda$ and $\phi_R$, $\lambda_R$: latitude and longitude of the rover and of the reference station, respectively;
- $\theta$: the satellite elevation angle in radians.
After interpolating the dispersive and non-dispersive errors, they are combined to generate the range residuals for L1 and L2 frequency observations, which read:

$$\delta r_1 = \delta r_o + \frac{f_2}{f_1}\,\delta r_I \tag{16}$$

$$\delta r_2 = \delta r_o + \frac{f_1}{f_2}\,\delta r_I \tag{17}$$

Where:

- $\delta r_1$, $\delta r_2$: total measurement errors for the frequencies $f_1$ and $f_2$;
- $f_1$, $f_2$: frequencies of the L1 and L2 signals.

Finally, the ranges R derived from the carrier-phase measurements are corrected as follows:

$$R_{L1,corrected}^{s} = R_{L1}^{s} - \delta r_{1}^{s} \tag{18}$$

$$R_{L2,corrected}^{s} = R_{L2}^{s} - \delta r_{2}^{s} \tag{19}$$
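The rover-side FKP computation of Eqs (13)-(17) chains together directly. A sketch with invented FKP values (numerically in ppm) and angles in radians:

```python
import math

EARTH = 6.37  # Earth radius in 10^6 m, so ppm-scaled FKPs yield metres

def fkp_corrections(fkp_n0, fkp_e0, fkp_ni, fkp_ei, dphi, dlam, phi_r, elev):
    """Evaluate the FKP planes of Eqs (13)-(15) at the rover.

    dphi, dlam : rover minus reference-station latitude/longitude (rad)
    phi_r      : reference-station latitude (rad)
    elev       : satellite elevation angle (rad)
    Returns (non-dispersive, dispersive) errors in metres.
    """
    dr_o = EARTH * (fkp_n0 * dphi + fkp_e0 * dlam * math.cos(phi_r))          # Eq. 13
    alpha = 1.0 + 16.0 * (0.53 - elev / math.pi) ** 3                         # Eq. 15
    dr_i = EARTH * alpha * (fkp_ni * dphi + fkp_ei * dlam * math.cos(phi_r))  # Eq. 14
    return dr_o, dr_i

def l1_l2_residuals(dr_o, dr_i, f1=1575.42e6, f2=1227.60e6):
    """Combine the components into L1/L2 range residuals (Eqs 16-17)."""
    return dr_o + f2 / f1 * dr_i, dr_o + f1 / f2 * dr_i

# Invented FKP parameters and rover offsets, for illustration only.
dr_o, dr_i = fkp_corrections(0.8, -0.3, 1.5, 0.6, 0.005, 0.004, 0.9, math.pi / 4)
dr1, dr2 = l1_l2_residuals(dr_o, dr_i)
```

Because the elevation factor $\alpha$ grows toward the horizon, low-elevation satellites receive a larger dispersive correction, and the L2 residual exceeds the L1 residual whenever the ionospheric component is positive.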


The drawbacks of the FKP method include the need for the rover to perform the interpolation of measurement corrections, possible inconsistencies at the edge of two adjacent planes due to the use of linear plane surfaces, and the need for large data formats. In the Radio Technical Commission for Maritime Services (RTCM) format version 3.1, FKP corrections can be sent via message types 1034 and 1035 for GPS and GLONASS observations, respectively.

### **5.3 The Master-Auxiliary (MAC) method**

In the MAC approach, the rover sends its approximate position in NMEA format to the network processing centre. The centre determines for this specific user the appropriate master station, usually selected as the reference station closest to the user's position, and identifies the auxiliary reference stations. These stations are chosen within a catch circle of a predefined radius (e.g. 70 km) around the rover, and with a pre-set number (e.g. from 3 to 14). Figure 3 illustrates the MAC concept. In one MAC approach, a network RTK with a large number of reference stations can be subdivided into clusters (Leica Geosystems, 2011). The processing centre assigns the appropriate cluster to a user and defines the appropriate network corrections applicable to that user.

Fig. 3. The Master–Auxiliary NRTK

The rover can receive different types of information according to the strategy used by the MAC processing centre, which may include:

- The coordinates and raw measurements of the Master station.
- Measurement corrections at the Master station.
- Correction differences between the Master and Auxiliary stations. These differences, when added to the corrections of the Master, give the corrections at the Auxiliary stations.
The latter MAC corrections can be received via RTCM v3.1 message types 1014-1018, 1030-1031, 1034-1035, etc.

Similarly to Equations (7) and (8), the single observation differences between the Master station *j* and an Auxiliary station *k* for satellite *s* read (Takac and Lienhart, 2008):

$$\Delta\phi_{jk}^{s} = \phi_{j}^{s} + \left( \Delta R_{jk}^{s} - \delta\Delta r_{jk_I}^{s} + \delta\Delta r_{jk_o}^{s} + \Delta T_{jk}^{s} \right) / \lambda \tag{20}$$

Precise Real-Time Positioning Using Network RTK 175




$$P_{jk}^{s} = P_{j}^{s} + \Delta R_{jk}^{s} + \delta\Delta r_{jk_{\mathrm{I}}}^{s} + \delta\Delta r_{jk_{\mathrm{o}}}^{s} + \Delta T_{jk}^{s} \tag{21}$$

One characteristic of the Mac approach is that its data are sent to the user at the same ambiguity level. This can be explained as follows. In the Mac method the carrier-phase ambiguities are determined with respect to fixed single-difference ambiguity values. However, ambiguity fixing is more reliably performed using a double-difference approach. Thus, the ambiguity for satellite *s* can be determined from that of the reference satellite *q* and their double difference as follows:

$$N_{kj}^{s} = N_{kj}^{q} + N_{kj}^{q,s} \tag{22}$$

Therefore, the ambiguity bias, which is the difference between the true ambiguity and the estimated ambiguity for the reference satellite, usually known as the ambiguity level, should be estimated. It is common to all estimated ambiguities of satellites observed from one baseline and cancels out in double differencing.
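A small numerical sketch of Equation (22) (the ambiguity values below are invented for illustration): given the single-difference ambiguity of the reference satellite *q* and the double-difference ambiguities, the single-difference ambiguities of the other satellites follow, and a common bias on the reference ambiguity cancels when double differences are re-formed:

```python
# Illustrative values only: single-difference (SD) ambiguity of the
# reference satellite q between stations k and j, in cycles.
N_kj_q = 17

# Double-difference (DD) ambiguities N_kj^{q,s} for satellites s1..s3.
dd = {"s1": 3, "s2": -5, "s3": 8}

# Equation (22): N_kj^s = N_kj^q + N_kj^{q,s}
sd = {s: N_kj_q + n for s, n in dd.items()}

# A common ambiguity level (bias on the reference ambiguity) shifts every
# SD ambiguity by the same amount ...
bias = 4
sd_biased = {s: n + bias for s, n in sd.items()}

# ... but cancels out again in double differencing:
dd_recovered = {s: sd_biased[s] - (N_kj_q + bias) for s in sd_biased}
assert dd_recovered == dd
```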

After receiving the Mac information, the rover software is free to decide how to interpolate the corrections at its location. The processing centre can perform the interpolation if needed (individualised I-Max). The rover software is also free to decide how the Mac information is used to determine its position. For instance, the rover can apply double differencing with the Master reference station as the base, or it can do so after removing the errors from both the Master reference station and its own position.

### **6. PPP-RTK**

A more recent direction of NRTK implementation is its integration with the precise point positioning (PPP) technique (Wübbena *et al.*, 2005). In a standalone PPP mode, undifferenced observations are used and the satellite-related errors are mitigated by using satellite clock corrections and precise orbits, avoiding the orbital errors associated with the use of broadcast ephemeris. These satellite products are typically provided by a processing centre analysing global data, such as the International GNSS Service (IGS). Since only one receiver is used in PPP, the ambiguities are estimated as real-valued unknowns and are not fixed. As a result, several minutes of data are needed to achieve a reliable convergence of the solution, and only sub-decimetre accuracy, at best, is achievable from PPP. However, it is possible to integrate PPP and NRTK into a seamless positioning service, which can provide an accuracy of a few centimetres (Li et al., 2011). The concept of PPP-RTK is to augment PPP estimation with precise un-differenced atmospheric corrections and satellite clock corrections from a reference network, so that instantaneous ambiguity fixing is achievable for users within the network coverage.

A few techniques have been proposed for PPP-RTK. In the method presented in Teunissen et al., 2010, un-differenced observation equations for the network stations are used, and thus the design matrix of the network shows a rank defect. This rank defect is eliminated through an appropriate reparametrization (i.e. reduction and redefinition of the unknown parameters), which results in redefined satellite clocks and ambiguities. The tropospheric delay is lumped with the phase and pseudorange satellite clock errors, and the ambiguity becomes a between-receiver single-differenced ambiguity. Eventually, a full-rank system of observations is obtained.
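The rank defect and its removal by reparametrization can be illustrated with a toy clock-estimation example (two receivers, two satellites; a sketch of the idea only, not the actual PPP-RTK observation model):

```python
import numpy as np

# Toy model: each observation is dt_receiver - dt_satellite.
# Unknowns: [dt_r1, dt_r2, dt_s1, dt_s2].
A = np.array([[1, 0, -1,  0],   # receiver 1, satellite 1
              [1, 0,  0, -1],   # receiver 1, satellite 2
              [0, 1, -1,  0],   # receiver 2, satellite 1
              [0, 1,  0, -1]])  # receiver 2, satellite 2

# A common shift of all clocks leaves the observations unchanged,
# so the design matrix has a rank defect of one.
print(np.linalg.matrix_rank(A))          # 3, not 4

# Reparametrize: take receiver 1 as the clock datum (dt_r1 := 0), i.e.
# redefine the other clocks relative to it. The reduced system is full rank.
A_reduced = A[:, 1:]
print(np.linalg.matrix_rank(A_reduced))  # 3 == number of remaining unknowns
```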


In PPP-RTK, the function of the network is to provide the user with satellite clocks and interpolated ionospheric delays. When these precise estimates are passed on to the user, the above definition of these clocks ensures that the ambiguities of the user are also integer, so that ambiguity resolution is available at the user side. Satellite clocks for each epoch are added as pseudo-observations, with an appropriate variance matrix, and the precise IGS orbits are used. For the network processing in Teunissen et al., 2010, a Kalman filter is used, assuming the ambiguities are time-invariant, while for the user an epoch-by-epoch least-squares processing is used, thus providing truly instantaneous single-epoch solutions. The integer ambiguity resolution of both network and user is based on the LAMBDA method (Teunissen, 1995), with the Fixed Failure Ratio Test (Teunissen and Verhagen, 2009).
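The user-side epoch-by-epoch least-squares step can be sketched as an ordinary linearized position fix from corrected pseudoranges (synthetic geometry and noise-free measurements; a sketch only, not the actual PPP-RTK estimator, which also carries ambiguity and atmospheric terms):

```python
import numpy as np

def single_epoch_fix(sat_pos, pr, x0):
    """Iterated linearized least squares for position and receiver clock
    bias [X, Y, Z, c*dt] from corrected pseudoranges (illustrative)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(10):
        rho = np.linalg.norm(sat_pos - x[:3], axis=1)
        # Design matrix: unit vectors from satellite to user + clock column.
        A = np.hstack([(x[:3] - sat_pos) / rho[:, None],
                       np.ones((len(pr), 1))])
        dx = np.linalg.lstsq(A, pr - (rho + x[3]), rcond=None)[0]
        x += dx
        if np.linalg.norm(dx) < 1e-8:
            break
    return x

# Synthetic example: 5 satellites, a known truth, noise-free pseudoranges.
truth = np.array([1000.0, 2000.0, 500.0, 30.0])
sat_pos = np.array([[20000e3, 0, 0], [0, 20000e3, 0], [0, 0, 20000e3],
                    [15000e3, 15000e3, 0], [10000e3, 0, 15000e3]])
pr = np.linalg.norm(sat_pos - truth[:3], axis=1) + truth[3]
est = single_epoch_fix(sat_pos, pr, x0=[0, 0, 0, 0])
```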

## **7. Network RTK applications**

In this section important applications that can benefit from the few cm-level positioning precision and accuracy achievable by using a single GNSS receiver with NRTK are presented.

#### **7.1 NRTK in engineering surveying**

Surveying works in construction sites are usually dependent on determination of accurate coordinates and heights. The 3D positioning versatility and accuracy achievable from NRTK encourages the use of this technique for construction surveying works, particularly for large sites when a rapid survey is needed. The method helps in reducing field expenses and time due to reduction of the size of surveying crew, elimination of the need for frequent setups of the surveying instruments, and the reduction of the need for accurate local traverses or multiple control stations within the site. Studies showed that RTK GPS and the traditional techniques employing total stations gave statistically compatible results (El-Mowafy, 2000).

With a typical accuracy of 1-5 cm, the NRTK GPS technique can be utilised for medium-accuracy construction survey works such as:

- landscaping, fences, etc.
A NRTK GNSS system can be integrated with the total station for instantaneous determination of the total station location by mounting the GNSS antenna directly on top of the total station alidade in open sites. Thus, the need for establishing permanent horizontal control stations onsite can be minimised. For orientation determination, the total station can be sighted at a back station, whose coordinates can be instantaneously determined using the NRTK-GNSS technique. This process improves the economics of surveying work and reduces the overall surveying time, including the time required for the initialisation of the total station at each setup. However, one should note that the performance of surveying with RTK GNSS in construction sites is affected by satellite availability, multipath errors resulting from working near buildings, and latency of the reference data. The influence of the number of satellites in view, dilution of precision (DOP) and age of corrections on the accuracy and stability of the NRTK GPS solution was discussed in some studies, e.g. Aponte et al., 2009.

The performance of surveying with the network RTK approach in construction sites was evaluated by two tests. The first test checked performance in the determination of planimetric coordinates and the second test evaluated performance in height determination. The first test was carried out during construction of a large building in Dubai, checking positions of surveying marks of the footings, landscaping and the access road of the building. 48 points were used for checking purposes, including 18 points on the boundary of the road, 19 points for the footings, and 11 points for the landscaping. These points were set out using a calibrated total station of 1 second precision. Next, point coordinates were computed from the working drawings and uploaded to a GPS controller, where a single GPS dual-frequency receiver was independently used for positioning of the test marks by utilising data from the Dubai NRTK, known as the Dubai Virtual Reference System (DVRS). The network consists of five continuously operating reference stations with baseline lengths ranging from 23.4 km to 90.8 km and uses the VRS algorithm. The position of each test point was determined after 10 seconds of data collection, recorded at a 2-second interval. The shifts between the positions determined from the two methods, namely the calibrated total station and NRTK using observations from GPS, were measured and compared to represent the precision of the latter method if used in construction sites instead of the former. Figure 4 illustrates the differences between the two methods. The statistics of coordinate differences between the two methods in easting and northing are given in Table 1. The average norm of the spatial differences ($\sqrt{\mathrm{diff}_E^2 + \mathrm{diff}_N^2}$) was generally less than 1.45 cm, whereas the maximum difference was 3.45 cm.
The small errors can be explained by the presence of one of the network reference stations within a few kilometres, which would be picked up by the system as the master reference station. Thus, most orbital and atmospheric errors were cancelled and the remaining errors would mainly be due to data noise and multipath. These results show that the GPS-RTK network approach can be used in the setting out of medium-accuracy surveying marks.
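The comparison statistics reported in Table 1 (average, maximum and RMS of the easting/northing differences, and the per-point norm of the spatial differences) can be reproduced with a short helper; the sample values below are invented for illustration, not the actual test data:

```python
import math

def diff_stats(diffs_cm):
    """Average, maximum and RMS of a list of coordinate differences (cm)."""
    n = len(diffs_cm)
    avg = sum(abs(d) for d in diffs_cm) / n
    mx = max(abs(d) for d in diffs_cm)
    rms = math.sqrt(sum(d * d for d in diffs_cm) / n)
    return avg, mx, rms

# Invented per-point differences for illustration (cm).
diff_e = [0.5, -1.2, 0.9, 2.0]
diff_n = [1.0, 1.5, -0.8, 2.2]

stats_e = diff_stats(diff_e)
stats_n = diff_stats(diff_n)
# Norm of the spatial difference per point, as used in the text:
norms = [math.hypot(e, n) for e, n in zip(diff_e, diff_n)]
```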

Fig. 4. Positioning differences between GPS Network RTK and the total station



| | Average (cm) | Maximum (cm) | RMS (cm) |
|---|---|---|---|
| E | 0.95 | 2.35 | 1.00 |
| N | 1.12 | 2.52 | 1.18 |

Table 1. Statistics of positioning differences between GPS network RTK and the total station

To check the repeatability of results (internal accuracy), another independent survey was carried out after two days and 4 hours to re-determine the coordinates of the same marked points previously determined by GPS. The average and maximum values of the differences between the results of the two surveys are given in Table 2. As the table demonstrates, repeatability testing showed that the average value of the differences in the planimetric coordinate estimation was at the cm level, while for ellipsoidal height determination it was 1.56 cm. These differences can be attributed to changes in the quality of the measurements used, which mainly resulted from differences in the number of observed satellites and their geometric distribution. These parameters affected the quality of the network computations of the measurement corrections and the quality of coordinate estimation at the rover.


| | Average (cm) | Maximum (cm) |
|---|---|---|
| E | 0.85 | 1.23 |
| N | 1.15 | 1.88 |
| h | 1.56 | 3.22 |

Table 2. Statistics of coordinate discrepancies between different observing sessions

Unlike traditional levelling, GPS derived heights are referenced to an ellipsoidal datum (WGS 84) and do not depend on local gravity variations, whereas in most levelling works and mapping orthometric heights are used. Orthometric heights reflect changes in topography as well as local variations in gravity. They are referenced to the geoid, which is an equi-potential level surface of the Earth that is closely associated with the mean sea level on a global basis. To convert ellipsoidal heights from GPS (h_GPS) into orthometric heights (H), geoid heights are needed, such that:

$$H = h_{GPS} - N \tag{23}$$

where (N) is the geoid height. Thus, with the use of one receiver and employing NRTK to determine ellipsoidal heights, orthometric heights can be determined if a good geoid model is available.
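Equation (23) in code, with a bilinear interpolation of a gridded geoid model (the grid values and coordinates below are invented placeholders, not the Dubai geoid):

```python
def geoid_height(lat, lon, grid, lat0, lon0, step):
    """Bilinearly interpolate the geoid undulation N (m) from a regular grid.
    grid[i][j] holds N at latitude lat0 + i*step, longitude lon0 + j*step."""
    i = int((lat - lat0) / step)
    j = int((lon - lon0) / step)
    fi = (lat - lat0) / step - i
    fj = (lon - lon0) / step - j
    return ((1 - fi) * (1 - fj) * grid[i][j] + (1 - fi) * fj * grid[i][j + 1]
            + fi * (1 - fj) * grid[i + 1][j] + fi * fj * grid[i + 1][j + 1])

def orthometric_height(h_gps, lat, lon, grid, lat0, lon0, step):
    """Equation (23): H = h_GPS - N."""
    return h_gps - geoid_height(lat, lon, grid, lat0, lon0, step)

# Invented 2x2 geoid grid (m) starting at lat0=25.0, lon0=55.0, 0.5 deg spacing.
grid = [[-33.0, -32.5],
        [-32.8, -32.2]]
H = orthometric_height(42.10, 25.25, 55.25, grid, 25.0, 55.0, 0.5)
```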

To assess the accuracy of orthometric height determination by using NRTK, a second test was performed in Dubai using the DVRS network. The Dubai gravimetric geoid model was used, which was developed by integrating a comprehensive set of gravity measurements with GPS, levelling and digital elevation data. The computed geoid fits GPS/levelling at the 3-4 cm RMS level (Forsberg *et al*. 2001). The test was performed on a network consisting of 41 benchmarks of the second-order levelling network. Orthometric heights at these benchmarks were first estimated by combining ellipsoidal heights determined by using the Dubai NRTK with the local gravimetric geoid model data, and were next compared to the known orthometric heights of the benchmarks. The test area spanned approximately 22.7 km x 7.8 km in the Easting and Northing directions respectively, representing the area acquiring the most demanding survey works in the Emirate of Dubai. The height difference between the highest and lowest points in this test was approximately 34.5 metres. Each test point was occupied for a period of a few seconds, representing an ordinary working environment. The standard deviations of the ellipsoidal height determination for the occupied points of the test network ranged between 1.05 cm and 5.47 cm (El-Mowafy *et al*., 2005).

Figure 5 shows the differences in orthometric heights between using the "NRTK GPS + geoid heights" and the known orthometric heights of the benchmarks. On average, differences were within ±5 cm, with a maximum value of 7.04 cm. The statistical results of the differences are presented in Table 3. The average value of the absolute differences was 2.4 cm with 3.05 cm standard deviation. The differences towards the north-east were greater than those at the south-west region of the test. This can mainly be attributed to accuracy of the geoid model used in the test area. Figure 6 illustrates the surface plot of the height difference between the two methods. These results show that no significant systematic errors were present. The achieved accuracy is considered precise enough for third order levelling, which represents the majority of levelling works being carried out.

Fig. 5. Height differences between GPS Network RTK geoid heights and precise levelling (cm)


| | Average of absolute values | Max. difference | σ |
|---|---|---|---|
| H ([DVRS+Geoid] − Levelling) | 2.40 | 7.04 | 3.05 |

Table 3. Statistics of height differences between the GPS network RTK + geoid and precise levelling (cm)


Fig. 6. Surface plot of the height differences

#### **7.2 Using RTK GPS for remotely monitoring and controlling machine automation**

The use of real-time GNSS positioning for machine automation can enhance productivity and functionality. Real-time GNSS positioning can provide cm-level accuracy, facilitating high performance and output for machines that require positioning data. Supervisory Control and Data Acquisition (SCADA) is a good example of a framework supporting a field automated system. Having real-time accurate GNSS positioning information as an input to the SCADA system opens many opportunities to develop solutions for planning, design, construction and monitoring of field operations that require precise positioning.

An automated machine such as a field tractor, excavator, or driller can be automatically operated, unmanned, and fully remotely monitored and controlled. The machine can be controlled by the analogue and digital outputs of a Remote Terminal Unit (RTU). The RTU can have preloaded Programmable Logic Control (PLC) software that activates the outputs according to the field inputs from the machine's primary sensors as well as the real-time accurate coordinates fed from a GNSS unit receiving measurement corrections from a NRTK centre. The field operations, events, logs and alarms can be fully remotely monitored through a SCADA system. For the SCADA system programming software, standards such as IEC 61131-3 can be used, which is the international standard for controller programming languages; it specifies the syntax, semantics and display of the PLC programming languages. An open standard communication protocol such as the Distributed Network Protocol (DNP) can be used. DNP is a set of standards for interoperable communication used in process automation systems. These open standards and interoperable platforms allow the system design to be implemented in any of the industrially proven commercial SCADA systems.
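A minimal sketch of the control logic such an RTU might run each cycle (all sensor names, thresholds and commands are hypothetical; a real unit would run an IEC 61131-3 PLC program and communicate over a protocol such as DNP):

```python
def rtu_step(position, sensors, target, alarm_flags, tolerance=0.05):
    """One cycle of a sketched RTU loop: check alarms, compare the NRTK
    position against the target point, and emit an output command."""
    if alarm_flags:                      # any alarm -> emergency stop
        return {"command": "EMERGENCY_STOP", "log": sorted(alarm_flags)}
    if not sensors.get("engine_ok", False):
        return {"command": "HOLD", "log": ["engine not ready"]}
    # Planar distance (m) between the current NRTK position and the target.
    dx = target[0] - position[0]
    dy = target[1] - position[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= tolerance:                # within a few cm of the target
        return {"command": "NEXT_WAYPOINT", "log": []}
    return {"command": "MOVE", "vector": (dx, dy), "log": []}

out = rtu_step(position=(100.00, 200.00), sensors={"engine_ok": True},
               target=(100.02, 200.03), alarm_flags=set())
```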

The functionality and process automation of the system can be described in two scenarios. In the first scenario, the machine can be operated in a semi-automated mode with online telemetric control. The real-time GNSS positioning is computed and a Geographic Information System (GIS) is utilised at a control centre that operates the machine in a pre-set telemetric mode. In the second scenario, the system can be fully automated based on pre-set instructions and automated integration with GIS planning, database and geo-coded maps. In this case, the GIS system is uploaded in the field machine.

4. The centre sends primary information to the field RTU that controls the machine. These commands can be either full machine commands or only the main commands that transfer between phases of field operations if detailed information is pre-fed to the RTU.
5. The RTU, based on a preloaded automation process program, will produce the controlling outputs based on a set of variable inputs from the primary sensors as well as the input of the position information received from the centre.
6. Real-time data from the GNSS and the primary sensors are sent remotely from the field through the RTU to the SCADA system to update it with the actual work progress in a feedback loop.

The logical sequence of the soft PLC process program of the RTU is illustrated in Figure 8 as a flow chart. As the figure depicts, the program starts with initialising and testing the system availability and health status. It reads the current positioning information of the machine as well as the status of the inputs from primary sensors. It executes the RTU process program while monitoring any alarms or malfunction signals for an emergency stop. The RTU events, logs and alarms are sent to the remote SCADA centre.

The machine automation in this scenario can be fully automated in the field. In this case, the GNSS unit mounted on the field machine receives the corrections from the NRTK and continuously feeds the RTU with the real-time positioning information representing the exact location of the machine in the field. The automation system is uploaded on a computer fitted in the machine, all control is pre-programmed, and work is executed online in a feedback loop to keep up with the pre-set design. The real-time 3D accurate coordinates of the field points are input to a GIS system on board, where its output is fused with the PLC program. Work orders can be downloaded remotely to the field RTU at any stage if a change of plans is required. The RTU can also be connected through a modem to a remote monitoring centre with the ability to control and adjust the field process based on any new input. The field automated process can thus be fully monitored and controlled in a remote mode. Reports, alarms, trends and historical records can be archived at the centre.

To investigate the positioning precision that can be obtained from network RTK for machine automation, a test was performed in Abu Dhabi using the Abu Dhabi network RTK. The network consists of 20 reference stations with separating distances between stations ranging from 60 km to 209 km. Several types of NRTK techniques can be implemented as per user choice, including the VRS method, the MAC approach, the FKP, and standard RTK using a single nearby reference station. The proposed approach was tested in the marine mode for one hour, where a Leica 1200 GPS system dual-frequency receiver was mounted on a dredger working on a small island close to Abu Dhabi main island. The NRTK positioning system was operating at a sampling rate of 1 Hz. In addition, the data were internally stored for post-mission processing to act as a reference for comparison with network RTK results to assess its accuracy for the application being tested. The rover data were referenced in this case to station ADCC of the Abu Dhabi continuously operating reference network. The distance between this station and the test trajectory was 6 km on average, giving stable ambiguity fixing with precise positioning output. When comparing the two sets of positioning results (network RTK and post-mission processing), the differences were at the cm range. The average precision of the determined positions in NRTK mode was 2.85 cm for the horizontal components and 4.1 cm for the height. Statistics

Fig g. 7. Proposed net twork RTK GNSS S and SCADA int tegration

Fig rem ha req wi rep sen inf gure 7 illustrates motely controlled rdware and softw quired software a ith the GIS thro presenting the rea nt through the formation. The fo s a developed ar d from a SCADA ware can be auto and is placed ins ough an interfac ality of the field i RTU to the SCA ollowing procedu rchitecture for th centre. In this ca omated in the fie side the machine ce where the m n a full 3D mode ADA system to ure can be applied he first scenario, ase, SCADA mon ld with a built-in . The SCADA sy mimics are truly . The GNSS and t o update the GI d (El-Mowafy and where the mach nitoring and contr n computer that h ystem can be inte y geo-referenced the primary senso S map with rea d Al-Musawa, 200 hine is rolling has all grated maps ors are al-time 09):


18

0

emetric control. formation System emetric mode. In structions and au this case, the GIS

Globa

al Navigation Satelli

te Systems – Signa

al, Theory and Appl

lications

graphic ne in a pre-set maps.

ed and a Geog rates the machin mated based on p e and geo-coded

ning is compute centre that oper can be fully autom anning, database

machine.

GNSS position ed at a control ario, the system c ation with GIS pl ded in the field m

 The real-time m (GIS) is utilise n the second scen utomated integra S system is upload

twork RTK GNSS

S and SCADA int

tegration

he first scenario, ase, SCADA mon ld with a built-in . The SCADA sy mimics are truly . The GNSS and t o update the GI d (El-Mowafy and d machine and se computed at th

where the mach nitoring and contr n computer that h ystem can be inte y geo-referenced the primary senso S map with rea d Al-Musawa, 200 ends its observati he centre for tele

hine is rolling has all grated maps ors are al-time 09): ions to emetric

re and remote

dinates

trolling hardwar controlled in a r ved at the centre. D accurate coord

nitoring and cont monitored and rds can be retriev ks based on the 3

rchitecture for th centre. In this ca omated in the fie side the machine ce where the m n a full 3D mode ADA system to ure can be applied nted on the field g information is

he SCADA mon process is fully nd historical recor anned field work

S.

s a developed ar d from a SCADA ware can be auto and is placed ins ough an interfac ality of the field i RTU to the SCA ollowing procedu NSS will be moun entre. Positionin

machine.

entre operates th field automated alarms, trends an m chooses the pla nts and using GIS

tel Inf tel ins In

Fig

g. 7. Proposed net

gure 7 illustrates motely controlled rdware and softw quired software a ith the GIS thro presenting the rea nt through the formation. The fo The roving GN the network ce control of the m The control ce software. The mode. Report, SCADA system of the field poin

Fig rem ha req wi rep sen inf 1.

2.

3.


The logical sequence of the soft PLC process program of the RTU is illustrated in Figure 8 as a flow chart. As the figure depicts, the program starts with initialising and testing the system availability and health status. It reads the current positioning information of the machine as well as the status of the inputs from primary sensors. It executes the RTU process program while monitoring any alarms or malfunction signals for an emergency stop. The RTU events, logs and alarms are sent to the remote SCADA centre.

The machine automation in this scenario can be fully automated in the field. In this case, the GNSS unit mounted on the field machine receives the corrections from the NRTK and continuously feeds the RTU with the real-time positioning information representing the exact location of the machine in the field. The automation system is uploaded on a computer fitted in the machine and all control is pre-programmed and work is executed online in a feedback loop to keep up with the pre-set design. The real-time 3D accurate coordinates of the field points are to be input to a GIS system on board, where its output is fused with the PLC program. Work orders can be downloaded remotely to the field RTU at any stage if a change of plans is required. The RTU can also be connected through a modem to a remote monitoring centre with the ability to control and adjust the field process based on any new input. The field automated process can thus be fully monitored and controlled in a remote mode. Report, alarms, trends and historical records can be archived at the centre.

To investigate positioning precision that can be obtained from network RTK for machine automation, a test was performed in Abu Dhabi. The Abu Dhabi network RTK was used in this test. The network consists of 20 reference stations with separating distances between stations ranging between 60 km to 209 km. Several types of NRTK techniques can be implemented as per user choice, including the VRS method, the MAC approach, the FKP, and standard RTK using a single nearby reference station. The proposed approach was tested in the marine mode for one hour where a Leica 1200 GPS system dual-frequency receiver was mounted on a dredger working in a small island close to Abu Dhabi main island. The NRTK positioning system was operating at a sampling rate of 1 Hz. In addition, the data were internally stored for post-mission processing to act as a reference for comparison with network RTK results to assess its accuracy for the application being tested. The rover data were referenced in this case to station ADCC of the Abu Dhabi continuously operating reference network. The distance between this station and the test trajectory was 6 km on average, giving stable ambiguity fixing with precise positioning output. When comparing the two sets of positioning results (Network RTK and post-mission processing) the differences were at the cm range. The average precision of the determined positions in NRTK mode was 2.85 cm for the horizontal components and 4.1 cm for the height. Statistics
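The quoted 2.85 cm horizontal figure is consistent with combining the east and north standard deviations of Table 4 in quadrature; a quick check (assuming that is how the horizontal value was formed):

```python
import math

# Standard deviations of the positioning differences from Table 4 (cm)
sigma_e = 1.92  # east component
sigma_n = 2.11  # north component

# Combine the two horizontal components in quadrature
sigma_horizontal = math.hypot(sigma_e, sigma_n)

print(round(sigma_horizontal, 2))  # -> 2.85 cm, the reported horizontal precision
```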

Precise Real-Time Positioning Using Network RTK 183
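As an illustration only, the soft-PLC cycle that Figure 8 charts (initialise, read the NRTK position and primary-sensor inputs, run the process program, stop on alarm, forward events to the SCADA centre) might be sketched as follows; every name here is hypothetical and not part of any real RTU firmware:

```python
def rtu_cycle(position, sensor_inputs, alarm, log):
    """One pass of the RTU monitoring loop (hypothetical sketch):
    read the NRTK position and primary-sensor inputs, run the process
    program, and trigger an emergency stop if an alarm is raised."""
    if alarm:
        log.append("EMERGENCY STOP")  # events and alarms go to the SCADA centre
        return "stopped"
    log.append(f"pos={position} inputs={sensor_inputs}")
    # ... the preloaded process program would drive the machine outputs here ...
    return "running"

log = []
status = rtu_cycle((1.2, 3.4, 0.5), {"blade_depth": 0.3}, alarm=False, log=log)
# status == "running"; a later cycle with alarm=True returns "stopped"
```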

Statistics of the positioning differences between the two methods are given in Table 4. During testing, the availability of NRTK was higher than 95%. These results show that NRTK can be successfully used for positioning of field machines.

Fig. 8. Flowchart of the RTK GNSS & SCADA/GIS logic control

| S.D. | Maximum (cm) | Average (cm) |
|------|--------------|--------------|
| σ<sup>E</sup> | 3.60 | 1.92 |
| σ<sup>N</sup> | 3.93 | 2.11 |
| σ<sup>h</sup> | 9.03 | 4.10 |

Table 4. Difference between network RTK and post-mission positioning in machine automation testing

## **7.3 Using network RTK in the airborne mode**

The network RTK approach is mostly used in static or kinematic ground applications. In this section, the use of the NRTK approach in the airborne mode is discussed. At present, positioning by GNSS is a widely used technique in the airborne mode for geo-referencing of aerial mapping data and for surveillance by Unmanned Aerial Vehicles (UAVs). In aviation, it is estimated that from 2015, most new commercial aircraft will be fitted with GNSS to enhance precise navigation and make it safer (Pedreira, 2009). However, at the moment, GPS is the only approved system as a stand-alone aid for non-precision approaches (Radišić, 2010), e.g. as a supplementary navigation system and for positioning in non safety-of-life applications. This is mainly due to the need to achieve a high level of performance in terms of integrity, availability and reliability in airborne navigation, which GPS on its own cannot reach due to the limited number of satellites available at one site at any particular instance. This situation is expected to improve with the addition of new systems such as Galileo and Compass. When using network RTK in airborne navigation, additional concerns have to be addressed, which include:

• Due to the high dynamics involved in airborne navigation, a high update rate for sending the corrections is needed compared with the rate implemented for land applications. This rate has a direct impact on the Time-To-First-Fix of phase ambiguities, and thus on the overall positioning feasibility and accuracy (El-Mowafy, 2004).
• The format of GPS measurement corrections should be standardised to ensure that the system is independent of any single receiver manufacturer. The use of the RTCM Version 3.1 standards is thus recommended.

The main advantages of using network RTK for precise airborne positioning can be summarised as follows:

• No dedicated ground reference stations are needed for post-mission or real-time applications.
• Unlike standard differential positioning, the distance between the aircraft receiver and the nearest reference station does not present a concern as long as the aircraft flies within the network RTK area of coverage.
• In navigation, because RTK networks usually have an area of coverage that extends to several hundreds of kilometres, each network can cover more than one airport, including small airports, unlike the current Local Area Augmentation Systems (LAAS), implemented only in some major airports. NRTK systems can also be used in search and rescue operations, emergency landing, road traffic monitoring from the air, as well as emergency response.
• Compared to LAAS, no significant additional infrastructure cost is involved, as the hardware and software of the GNSS-NRTK are available in most developed countries and the establishment of new networks is currently underway or planned in different regions worldwide.
• Network RTK provides cm- to decimetre-level positioning accuracy even in the case of malfunctioning of some reference stations, particularly for dense networks.
• Network RTK can give better runway utilisation by improving airport surface navigation. It can also enhance air traffic management by increasing dynamic flight planning.

The use of the VRS technique in the airborne mode is not generally recommended, since in this high-velocity environment continuously updated approximate coordinates have to be used for the VRS computation. This is similar to having a moving reference station. A system reset should thus be frequently performed when the VRS coordinates are changing, which will result in frequent initialisation of the carrier-phase ambiguities. Therefore, it is preferable to keep the VRS location for the longest possible range. An alternative approach would be to apply the PRS technique, where the PRS points are chosen along the path of the final approach and close to and at the airport. Furthermore, the duplex communication mode used in the VRS technique is limited by the ability of the processing centre to simultaneously perform calculations for all users. As this number grows, extended latency in receiving the corrections may result. Additionally, the possibility of signal breaks in the duplex communication mode is higher than when using one-directional communication. Thus, the use of a one-directional communication method, e.g. applying the FKP method, would be more appropriate for the airborne mode. The PRS and MAC techniques can also be implemented in the one-directional mode, whereby the PRS or the Master-Auxiliary stations are selected to cover a specific area, such as the airport. The establishment of ground transmitters at the airport can improve the availability of the corrections.

The feasibility of using real-time reference networks for positioning in the airborne mode was examined using the DVRS NRTK over the city of Dubai. Flight tests using a helicopter and a small fixed-wing airplane were carried out. The trajectory of the fixed-wing aircraft test is illustrated in Figure 9. The main parameters under investigation were the achievable accuracy and availability of VRS measurements. In these tests, aircraft positions were determined using a dual-frequency GPS receiver (Leica SR530). The data were processed in real time at one-second intervals. The DVRS reference stations collect and process data at five-second intervals. Thus, the NRTK data were interpolated in time for the rover receiver to compute positions at the one-second interval.

Fig. 9. Trajectory of a fixed-wing aircraft test (latitude vs. longitude, deg.)

To assess the performance of the NRTK approach for this test, the results were compared with positions determined from a standard double-difference technique, whereby the observations of the aircraft receiver were stored and processed in a post-mission mode. The aircraft data in this case were referenced to one of the DVRS network stations located within a range of a few kilometres from the flight route. Precise IGS orbits were used in the post-mission processing. The differences between the two methods (NRTK and post-mission processing) are given in Figure 10. For the test at hand, the DVRS data were lost for some periods, which ranged from a few seconds to three minutes. The periods when the DVRS data were available are shown in the dashed areas in Figure 10. The temporary breaks in reception of the NRTK data can be attributed to the use of GSM signals as the means of communication between the DVRS centre and the aircraft at the time of the test. However, the GSM signals were only used for testing purposes. In practice, the problem of breaks in receiving the network corrections can be significantly alleviated by using more robust means of delivering the NRTK service to the aircraft.

When comparing the results of positioning obtained by the DVRS NRTK with the post-mission double-difference positioning for the periods where NRTK data were received and phase ambiguities were fixed, the average 2-D and height positioning discrepancies between the two methods were at the few-cm level, being 1.6 cm and 2.8 cm respectively. The differences can mainly be attributed to the model assumptions and procedures of the two techniques. During the period when phase ambiguities were only solved in a float solution, the differences were 26.3 cm and 52.5 cm. However, when the DVRS NRTK data were lost, positioning accuracy deteriorated to the metre level. In this case, supporting methods, such as good prediction algorithms and integration with other sensors, e.g. a geodetic-grade inertial system, are needed to cover the short periods when breaks in reception of the measurement corrections take place. Several methods for prediction of NRTK observation corrections as a time series were investigated in El-Mowafy, 2008. Different time-series prediction methods were investigated for different types of errors. The double exponential smoothing prediction approach performed best in most of the cases when studying the satellite clock error corrections. Winters' method and the Autoregressive Integrated Moving Average (ARIMA) model were the best methods for predicting the orbital and wet tropospheric errors, respectively.
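The time interpolation used in the flight tests, bridging the five-second network correction epochs to the rover's one-second epochs, can be done linearly between bracketing correction epochs. A minimal sketch (function name and units are illustrative, not from the original system):

```python
def interpolate_correction(t, t0, c0, t1, c1):
    """Linearly interpolate a network correction to rover epoch t,
    given corrections c0 at epoch t0 and c1 at epoch t1 (t0 <= t <= t1)."""
    w = (t - t0) / (t1 - t0)
    return c0 + w * (c1 - c0)

# Corrections arrive at 0 s and 5 s; the rover needs one at t = 2 s
c = interpolate_correction(2.0, 0.0, 0.10, 5.0, 0.20)  # -> 0.14 m
```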

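The double exponential smoothing mentioned above for bridging correction outages maintains a level and a trend estimate of the correction time series and extrapolates them forward. A generic sketch of the method (not the cited paper's actual implementation; the smoothing constants are arbitrary):

```python
def holt_forecast(series, alpha, beta, horizon):
    """Double exponential (Holt) smoothing: maintain a level and a trend
    estimate over the observed series, then extrapolate 'horizon' steps
    ahead, e.g. to predict a correction during a reception break."""
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend

# A perfectly linear correction series is extrapolated exactly
print(holt_forecast([0.0, 1.0, 2.0, 3.0, 4.0], 0.5, 0.5, 2))  # -> 6.0
```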

Dai, L., Han, S., Wang, J. & Rizos, C. (2001). A study on GPS/GLONASS multiple reference

El-Mowafy, A. (2000). Performance Analysis of the RTK Technique in an Urban

El-Mowafy, A., Fashir, H., Al Marzooqi, Y., Al Habbai, A. & Babiker, T. (2003). Testing of the

El-Mowafy, A. (2004). Using Multiple Reference Station GPS Networks for Aircraft Precision

El-Mowafy, A. (2008). Improving the Performance of RTK-GPS Reference Networks for

El-Mowafy, A. & Al-Musawa, M. (2009). Utilization of GIS and RTK GPS Reference

Euler, H.J., Townsend, B.R. & Wübbena, G. (2002). Comparison of Different Proposals for

Euler, H-J., Seeger, S., Zelzer, O., Takac, F. & Zebhauser, B. E. (2004). Improvement of

Fotopoulos, G. (2000). Parameterization of DGPS Carrier Phase Errors Over a Regional

Fotopoulos, G. & Cannon, M. E. (2001). An Overview of Multi-Reference Station Methods

Forsberg, R., Strykowski, G. & Tscherning, C. C. (2001). Geoid Model for Dubai Emirate,

Hsieha, C.H. & Wu, J. (2008). Multipath Reduction on Repetition in Time Series from the

Hu, G.R., Khoo, H.S., Goh, P.C. & Law, C.L. (2003). Development and Assessment of GPS

LENZ, E. (2004). Networked Transport of RTCM via Internet Protocol (NTRIP) - Application

*Mechatronics and its Applications,* Sharjah, UAE, March 24-26, 2009.

Environment, *the Australian Surveyor*, Vol. 45, No. 1, pp. 47-54.

Salt Lake City, UT, September 11-14, 2001.

(ION), Vol. 57, No. 3, pp. 215-223.

pp. 17-26.

September *2002*.

292–302.

Greece. May 22-27.

*Symposium*, Strasbourg, France, May 25-28, 2003.

of ION NTM, San Diego, CA, January 26-28, 2004.

Report No. SP296, Dubai Municipality.

*Systems*. Vol. 1, No. 2, pp. 113–120.

Engineering, University of Calgary, Calgary, Canada.

for cm-Level Positioning. GPS Solutions, Vol. 4, No. 3, pp. 1–10.

Leica Geo. Systems (2011). Using Network RTK, 15.07.2011, Available from: http://smartnet.leica-geosystems.eu/spiderweb/2fNetworkRTK.html.

station techniques for precise real-time carrier phase positioning, *Proceedings 14th Int. Tech. Meeting of the Satellite Division of the U.S. Inst. of Navigation*, pp. 392-403,

DVRS National GPS-RTK Network, *Proceedings of the 8th ISU International* 

Approach and Airport Surface Navigation, *Proceedings of GNSS 2004, The 2004 International Symposium on GNSS/GPS*, Sydney, Australia, December 6–8, 2004. El-Mowafy, A. (2005). Analysis of the Design Parameters of Multi-Reference Station RTK

GPS Networks, *Journal of Satellite and Land Information Science (SaLIS),* Vol. 65, No. 1,

Precise Airborne Navigation, *Navigation*, Journal of the Institute Of Navigation

Networks for Machine Automation*, Proceeding of the 6th International Symposium on* 

Reference Station Network Information Distribution Formats, *Proceedings of the International Technical Meeting, ION GPS-02*, pp. 2334 – 2341, Portland, Oregon.

Positioning Performance Using Standardized Network RTK Messages, Proceedings

Network of Reference Stations. Master thesis. Department of Geomatics

Permanent GPS Phase Residuals, *The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences*, Vol. XXXVII, Part B4, Beijing, pp. 911-916. Hu, G. R., Khoo, V. H. S., Goh, P. C. and Law, C. L. (2002). Internet-based GPS VRS RTK

positioning with a multiple reference station network. *Journal of Global Positioning* 

Virtual Reference Stations for RTK Positioning, *Journal of Geodesy*, Vol. 77, No. 5, pp.

and Benefit in Modern Surveying Systems. FIG Working Week 2004, Athens,

Fig. 10. Fixed-wing aircraft test results

#### **8. References**


186 Global Navigation Satellite Systems – Signal, Theory and Applications

Flying Height (m)

Hz. Error (m)

Height Error (m)

Al-Shaery, A.M., Lim, S., & Rizos, C. (2010). Functional models of ordinary kriging for

*Navigation*, , pp. 2513-2521, Portland, Oregan, USA, September 21-24, 2010. Aponte, J., Meng, X., Hill, C., Moore, T., Burbidge M. & Dodson A. (2009). Quality

BKG (2011). Networked Transport of RTCM via Internet Protocol. 21.08.2011, Available:

Cruddace, P., Wilson, I., Greaves, M., Euler, H-J., Keenan, R. & Wüebbena, G. (2002). The

medium range real-time kinematic positioning based on the Virtual Reference Station technique, *Proceedings of 23rd Int. Tech. Meeting of the Satellite Division of the U.S. Inst. of* 

0 250 500 750 1000 1250

0 250 500 750 1000 1250 Time (Sec.)

0 250 500 750 1000 1250

assessment of a network-based RTK GPS service in the UK. *Journal of Applied* 

Long Road to Establishing A National Network RTK Solution, *Proceedings FIG XXII* 

Fig. 10. Fixed-wing aircraft test results

*Geodesy*, Vol. 3, pp. 25 – 34.

http://igs.bkg.bund.de/ntrip.

*Int. Congress*, Washington, D.C., April 19-26, 2002.

**8. References** 


**8** 

*Italy* 

*Politecnico di Torino* 

**Achievable Positioning Accuracies** 

**in a Network of GNSS Reference Stations** 

The Network Real Time Kinematic (NRTK) positioning is nowadays a very common practice not only in academia but also in the professional world. Since its appearance, over 10 years ago, a growing number of people use this type of positioning not only for topographic applications, but also for the control of vehicles fleets, precision agriculture, land monitoring, etc.. To support these users several networks of Continuous Operating Reference Stations (CORSs) were born. These networks offer real-time services for NRTK positioning, providing a centimetric positioning accuracy with an average distance of 25-35

What is the effective distance between reference stations that allows to achieve the precision required for real-time positioning, using both geodetic and GIS receivers? How the positional accuracy changes with increasing distances between CORS? Can a service of geostationary satellites, such as the European EGNOS, be an alternative to the network positioning for medium-low cost receivers? These are only some of the questions that this chapter try to

First, the GNSS network positioning will be discussed, with particular attention to the differential GNSS corrections such as the Master Auxiliary Concept (MAC), Virtual Reference

After this short review, the results obtained during a national experiment designed to verify both the quality and the potential of existing real-time and post-processing positioning services will be presented, with particular attention to the variability of the same depending on the network geometry, the type of rover receiver and the duration of his survey, as well

This experiment was conducted using already existing CORSs. Three real-time networks, characterized by different distances between the stations (50, 100 and 150 kms), were designed. The real-time products were tested, for each network, by sessions during 24-hour on a centroid point, using both geodetic and GIS receivers provided by different companies. A so large time session is made to avoid, on final results, the constellation geometry based

In addition to the real-time network corrections, a post-processing analysis will be conducted, using the raw data acquired from geodetic and GIS receivers and combining

as the use of the different GNSS constellations currently available for our area.

**1. Introduction** 

answer.

kms between the reference stations.

Station (VRS) and Flächen Korrektur Parameter (FKP).

influence, making results fully comparable.

Paolo Dabove, Mattia De Agostino and Ambrogio Manzino


## **Achievable Positioning Accuracies in a Network of GNSS Reference Stations**

Paolo Dabove, Mattia De Agostino and Ambrogio Manzino *Politecnico di Torino Italy* 

## **1. Introduction**

**References**

Li, X., Zhang, Z. & Ge, M. (2011). Regional reference network augmented precise point positioning for instantaneous ambiguity resolution. *Journal of Geodesy*, Vol. 85, No. 3, pp. 151-158.

Pedreira, P. (2009). Optimistic Outlook for Galileo. *GIM International*, pp. 6-13.

Petrovski, I., Kawaguchi, S., Torimoto, H., Fujii, K., Sasano, K., Cannon, M.E. & Lachapelle, G. (2001). Practical Issues of Virtual Reference Station Implementation for Nationwide RTK Network. *Proceedings of GNSS 2001, The 5th GNSS International Symposium*, Seville, Spain, 8-11 May, 2001.

Radišić, T., Novak, D. & Bucak, T. (2010). The Effect of Terrain Mask on RAIM Availability. *Journal of Navigation*, Vol. 63, No. 1, pp. 105-117.

Takac, F. & Lienhart, W. (2008). SmartRTK: A Novel Method of Processing Standardised RTCM Network RTK Information for High Precision Positioning. *Proceedings of ENC GNSS 2008*, Toulouse, France, April 22-25, 2008.

Teunissen, P.J.G. (1995). The least squares ambiguity decorrelation adjustment: a method for fast GPS integer ambiguity estimation. *Journal of Geodesy*, Vol. 70, No. 1-2, pp. 65-82.

Teunissen, P.J.G. & Verhagen, S. (2009). The GNSS ambiguity ratio-test revisited. *Survey Review*, Vol. 41, No. 312, pp. 138-151.

Teunissen, P.J.G., Odijk, D. & Zhang, B. (2010). Results of CORS Network Based PPP with Integer Ambiguity Resolution. *Journal of Aeronautics, Astronautics and Aviation*, Series A, Vol. 42, No. 4, pp. 223-230.

Varner, C. (2000). DGPS carrier phase networks and partial derivative algorithms. Ph.D. Thesis, Dept. of Geomatics Engineering, University of Calgary, Calgary, Canada.

Vollath, U., Buecherl, A., Landau, H., Pagels, C. & Wager, B. (2000). Multi-base RTK positioning using virtual reference stations. *Proceedings of the 13th International Technical Meeting of the Satellite Division, US ION*, Salt Lake City, UT, 19-22 September, 2000.

Wu, S., Zhang, K. & Silcock, D. (2009). Differences in Accuracies and Fitting Surface Planes of Two Error Models for NRTK in GPSnet. *Journal of Global Positioning Systems*, Vol. 8, No. 2, pp. 154-163.

Wu, S. (2009). Performance of Regional Atmospheric Error Models for NRTK in GPSnet and the Implementation of NRTK System. Ph.D. Thesis, School of Mathematical and Geospatial Sciences, RMIT University, Melbourne, Australia.

Wübbena, G., Bagge, A. & Schmitz, M. (2001). Network-Based Techniques for RTK Applications. *Proceedings of the GPS JIN 2001 Symposium*, GPS Society, Japan Institute of Navigation, Tokyo, Japan, November 14-16, 2001.

Wübbena, G. & Willgalis, S. (2001). State Space Approach for Precise Real Time Positioning in GPS Reference Networks. *Proceedings of the International Symposium on Kinematic Systems in Geodesy, Geomatics and Navigation (KIS-01)*, Banff, Canada, June 5-8, 2001.

Wübbena, G., Schmitz, M. & Bagge, A. (2005). PPP-RTK: Precise Point Positioning Using State-Space Representation in RTK Networks. *Proceedings of the 18th International Technical Meeting of the Satellite Division of The Institute of Navigation, ION GNSS 2005*, Long Beach, California, September 13-16, 2005, pp. 2584-2594.

Wübbena, G. & Bagge, A. (2006). RTCM Message Type 59 - FKP for transmission of FKP, Version 1.1. Geo++ GmbH White Paper Nr. 2006.01. Available: http://www.geopp.de/download/geopprtcmfkp59-1.1.pdf

Zebhauser, B.E., Euler, H.-J., Keenan, C.R. & Wübbena, G. (2002). A Novel Approach for the Use of Information from Reference Station Networks Conforming to RTCM V2.3 and Future V3.0. *Proceedings of PLANS 2002*, Palm Springs, California, April 15-18, 2002.

Network Real Time Kinematic (NRTK) positioning is nowadays a very common practice, not only in academia but also in the professional world. Since its appearance, over 10 years ago, a growing number of people have used this type of positioning not only for topographic applications, but also for the control of vehicle fleets, precision agriculture, land monitoring, etc. To support these users, several networks of Continuously Operating Reference Stations (CORSs) have been created. These networks offer real-time services for NRTK positioning, providing centimetre-level positioning accuracy with an average distance of 25-35 km between the reference stations.

What is the largest distance between reference stations that still allows the precision required for real-time positioning to be achieved, using both geodetic and GIS receivers? How does the positional accuracy change with increasing distances between CORSs? Can a geostationary satellite service, such as the European EGNOS, be an alternative to network positioning for medium/low-cost receivers? These are only some of the questions that this chapter tries to answer.

First, GNSS network positioning will be discussed, with particular attention to differential GNSS correction techniques such as the Master Auxiliary Concept (MAC), the Virtual Reference Station (VRS) and the Flächen Korrektur Parameter (FKP).

After this short review, the results obtained during a national experiment designed to verify both the quality and the potential of existing real-time and post-processing positioning services will be presented, with particular attention to how the results vary with the network geometry, the type of rover receiver, the duration of the survey, and the use of the different GNSS constellations currently available in our area.

This experiment was conducted using already existing CORSs. Three real-time networks, characterized by different distances between the stations (50, 100 and 150 km), were designed. For each network, the real-time products were tested through 24-hour sessions on a centroid point, using both geodetic and GIS receivers provided by different companies. Such long sessions avoid any influence of the constellation geometry on the final results, making the results fully comparable.

In addition to the real-time network corrections, a post-processing analysis will be conducted, using the raw data acquired from the geodetic and GIS receivers and combining them with the RINEX files from the nearest network station and with the RINEX files of virtual stations generated by the network software close to the measurement site.

The ultimate goal of this chapter is to quantify the accuracy achievable today with geodetic and GIS receivers when they are used within a network of reference stations, as well as to verify (or refute) the possibility that, thanks to the continuous GNSS modernization programme, the development of new satellite constellations and new algorithms for computation and positioning, networks characterized by large distances between reference stations can be used for high-accuracy real-time positioning.

#### **2. The network positioning concept**

Between 1990 and 1995, carrier-phase differential positioning underwent an enormous evolution due to the phase ambiguity fixing method named "On The Fly" Ambiguity Resolution (Landau & Euler, 1992). Using this technique, cycle slip recovery, also for moving receivers, was no longer problematic, but the positioning problems that arise when the distance between master and rover exceeds 10-15 km were not solved. For this reason, at the end of the 90's, the Network Real Time Kinematic (NRTK) or, more generally, Ground Based Augmentation System (GBAS) concept was realized (Vollath et al., 2000, Raquet & Lachapelle, 2001, Rizos, 2002).

To understand the network positioning concept, it is first necessary to keep in mind some concepts about differential positioning. To do this, it is possible to write the carrier-phase equation in metric form:

$$\phi_k^p(i) = \rho_k^p - cdT_k + cdt^p - \alpha_i I_k^p + T_k^p + M_{i,k}^p + E_k^p + \lambda_i N_{i,k}^p + \varepsilon_k^p \tag{1}$$

In this equation, the term $\phi_k^p(i)$ represents the carrier-phase measurement on the *i*-th frequency. On the right-hand side of the equation, in addition to the geometric range $\rho_k^p$ between the satellite *p* and the receiver *k*, it is possible to find the biases related to the receiver and satellite clocks multiplied by the speed of light ($cdT_k$ and $cdt^p$), the ionospheric propagation delay $\alpha_i I_k^p$ (with a known coefficient $\alpha_i = f_1^2/f_i^2$ that depends on the *i*-th frequency), the tropospheric propagation delay $T_k^p$, the multipath error $M_{i,k}^p$, the ephemeris error $E_k^p$, the carrier-phase ambiguity multiplied by the wavelength, $\lambda_i N_{i,k}^p$, and, finally, the random errors $\varepsilon_k^p$.

Single differences can be written considering two receivers (*h* and *k*). Neglecting the multipath error, which depends only on the rover site and therefore cannot be modelled, it is possible to write:

$$\phi_{hk}^p(i) = \phi_h^p(i) - \phi_k^p(i) = \rho_{hk}^p + \lambda_i N_{i,hk}^p - cdT_{hk} - \alpha_i I_{hk}^p + T_{hk}^p + E_{hk}^p + \varepsilon_{hk}^p \tag{2}$$

After that, the double-difference equation can be written considering two receivers (*h* and *k*) and two satellites (*p* and *q*). Subtracting the single difference calculated for satellite *q* from the one calculated for satellite *p*, and neglecting the random error contribution, it is possible to obtain:

$$\phi_{hk}^{pq}(i) = \phi_{hk}^p(i) - \phi_{hk}^q(i) = \rho_{hk}^{pq} + \lambda_i N_{i,hk}^{pq} - \alpha_i I_{hk}^{pq} + T_{hk}^{pq} + E_{hk}^{pq} \tag{3}$$
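As a minimal numerical sketch of (2) and (3) (plain Python, with invented phase values in metres), the single and double differences reduce to simple subtractions once the observations are available:

```python
# Sketch of equations (2) and (3): between-receiver single differences and
# between-satellite double differences of carrier-phase observations.
# All phase values are invented and expressed in metres.

def single_difference(phi_h: float, phi_k: float) -> float:
    # Between-receiver difference for one satellite: the satellite clock
    # error, common to both receivers, cancels (equation (2)).
    return phi_h - phi_k

def double_difference(sd_p: float, sd_q: float) -> float:
    # Between-satellite difference of two single differences: the residual
    # receiver clock term cancels as well (equation (3)).
    return sd_p - sd_q

# Illustrative carrier-phase observations: receivers h, k; satellites p, q.
phi = {("h", "p"): 21_321_456.218, ("k", "p"): 21_321_512.416,
       ("h", "q"): 22_987_120.734, ("k", "q"): 22_987_177.012}

sd_p = single_difference(phi[("h", "p")], phi[("k", "p")])   # -56.198 m
sd_q = single_difference(phi[("h", "q")], phi[("k", "q")])   # -56.278 m
dd_pq = double_difference(sd_p, sd_q)                        # 0.080 m
```

The residual double difference (here 8 cm) is what remains of the ionospheric, tropospheric, ephemeris and ambiguity terms of (3).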


When the distance between the two receivers is less than 10 km, the atmospheric propagation delays and the ephemeris errors are almost irrelevant, allowing a centimetre-level accuracy to be achieved. Beyond this distance, these errors grow and cannot be neglected. However, these errors are highly spatially correlated and can therefore be spatially modelled (Wübbena et al., 1996). To be able to predict and use these biases in real time, three conditions must be satisfied: the knowledge of the master positions with centimetre accuracy, a control centre able to process the data of all the stations in real time, and continuous carrier-phase ambiguity fixing even when the inter-station distances reach 80-100 km. This concept is equivalent to bringing the first two terms on the right-hand side of (3) to the left-hand side, among the known terms, i.e.:

$$\phi_{hk}^{pq}(i) - \rho_{hk}^{pq} - \lambda_i N_{i,hk}^{pq} = \alpha_i I_{hk}^{pq} + T_{hk}^{pq} + E_{hk}^{pq} \tag{4}$$

In this way, it is possible to model the residual ionospheric and tropospheric biases and the ephemeris error, not only between stations *h* and *k*, but also among all the reference stations of the network. Once these errors are modelled, they can be broadcast to any rover receiver.

## **3. From the concept to the implementation**

First, it should be noted that (4) was written using two satellites. However, if a network of GNSS reference stations is considered, the satellites that are visible and usable from two or more master stations are not necessarily visible from the rover receiver. Therefore, it is better to move from ionospheric and tropospheric delays that depend on a pair of satellites to quantities that depend only on a single satellite.

The network biases can be calculated in real time using double differences, single differences or non-differential (undifferenced) equations. The achievement of a common ambiguity level is required. A theoretical proof of the equivalence between the non-differential and differential methods can be found in Schaffrin & Grafarend (1986). The use of a differential method has pros and cons: the main advantage is that there are fewer unknown parameters, while the main disadvantage is that a correlation problem arises.

Although the approaches are equivalent, in recent years the trend has been to use a non-differential approach, in which the network state parameters are estimated by means of a Kalman filter.

This methodology is obviously more complex, since both the dispersive and non-dispersive components of each station are considered as unknowns in the Kalman filter state vector. This increased complexity is balanced by many advantages. The use of a Kalman filter increases the number of equations available at each epoch, including for example measurements related to satellites that are not tracked by all the stations, making the network estimation more robust even when one or more permanent stations are not available (e.g. because of transmission problems).
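As a sketch of this flexibility (a toy three-element state and invented measurements, not the actual network filter), a standard Kalman measurement update accepts a different number of observation rows at each epoch:

```python
import numpy as np

# Sketch: one Kalman-filter measurement update in which the number of
# observation rows varies per epoch. Satellites missing at some stations
# simply contribute no rows. All values are invented toy numbers.
def kf_update(x, P, z, H, R):
    # Standard update equations; they work for any number of rows in H.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

x = np.zeros(3)          # toy state (e.g. a few bias parameters)
P = np.eye(3)            # prior covariance

# Epoch 1: three satellites observed -> three measurement rows.
H1 = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0],
               [1.0, 1.0, 1.0]])
z1 = np.array([0.10, 0.20, 0.35])
x, P = kf_update(x, P, z1, H1, 0.01 * np.eye(3))

# Epoch 2: one satellite dropped -> only two rows, same update code.
H2 = H1[:2]
z2 = np.array([0.11, 0.19])
x, P = kf_update(x, P, z2, H2, 0.01 * np.eye(2))
```

The design matrices `H1`, `H2` and the measurements are placeholders; the point is only that the update handles a variable observation set without any change to the algorithm.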

#### **3.1 The non-differential model**

Starting from the pseudorange and carrier-phase equations written for the L1 ($P_{1,k}^p$ and $\phi_{1,k}^p$) and L2 ($P_{2,k}^p$ and $\phi_{2,k}^p$) GPS frequencies, and considering only one receiver (*k*) and one satellite (*p*), it is possible to separate the unknowns that depend on the receiver and on the satellite:


$$
\begin{bmatrix}
P_{1,k}^p \\
P_{2,k}^p \\
\phi_{1,k}^p \\
\phi_{2,k}^p
\end{bmatrix} = \begin{bmatrix}
1 & 1 & 0 & 0 \\
1 & \alpha & 0 & 0 \\
1 & -1 & \lambda_1 & 0 \\
1 & -\alpha & 0 & \lambda_2
\end{bmatrix} \begin{bmatrix}
\rho_k^p + cdt^p - cdT_k + T_k^p \\
I_k^p \\
N_{1,k}^p \\
N_{2,k}^p
\end{bmatrix} \tag{5}
$$

Although it stands to reason that the pseudorange measurements give a beneficial contribution to the estimation of the unknowns, only the phase equations are now considered. In addition, the geometric range $\rho_k^p$ can be moved to the left-hand side of the equation:

$$
\begin{bmatrix}
\phi_{1,k}^p - \rho_k^p \\
\phi_{2,k}^p - \rho_k^p
\end{bmatrix} = \begin{bmatrix}
1 & -1 & \lambda_1 & 0 \\
1 & -\alpha & 0 & \lambda_2
\end{bmatrix} \begin{bmatrix}
cdt^p - cdT_k + T_k^p \\
I_k^p \\
N_{1,k}^p \\
N_{2,k}^p
\end{bmatrix} \tag{6}
$$

After that, on the right-hand side, it is possible to separate the tropospheric bias from the clock errors:

$$
\begin{bmatrix}
\phi_{1,k}^p - \rho_k^p \\
\phi_{2,k}^p - \rho_k^p
\end{bmatrix} = \begin{bmatrix}
1 & 1 & -1 & \lambda_1 & 0 \\
1 & 1 & -\alpha & 0 & \lambda_2
\end{bmatrix} \begin{bmatrix}
cdt^p - cdT_k \\
T_k^p \\
I_k^p \\
N_{1,k}^p \\
N_{2,k}^p
\end{bmatrix} \tag{7}
$$

Through (7), and after a few mathematical steps that are not shown here, it is possible to separate the tropospheric propagation delay from the clock errors, solving the network positioning problem in a non-differential way.
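The 4×4 system in (5) is determined, so with dual-frequency code and phase observations of one satellite the combined unknowns can be recovered directly. A minimal numerical sketch (invented truth values; GPS L1/L2 constants):

```python
import numpy as np

# Sketch of the 4x4 system in equation (5): dual-frequency code and phase
# observations of one satellite, expressed in the combined unknowns
# [rho + c*dt - c*dT + T,  I (L1 iono delay),  N1,  N2].
c = 299_792_458.0
f1, f2 = 1_575.42e6, 1_227.60e6
lam1, lam2 = c / f1, c / f2
alpha = (f1 / f2) ** 2          # alpha_2 = f1^2 / f2^2

A = np.array([[1.0,  1.0,   0.0,  0.0],   # P1 (code, L1)
              [1.0,  alpha, 0.0,  0.0],   # P2 (code, L2)
              [1.0, -1.0,   lam1, 0.0],   # phi1 (phase, L1)
              [1.0, -alpha, 0.0,  lam2]]) # phi2 (phase, L2)

# Invented truth: combined geometric/clock/tropo term (m), iono delay (m),
# and the two integer ambiguities (cycles).
x_true = np.array([21_325_678.412, 3.218, 12_345_678.0, 9_876_543.0])
obs = A @ x_true                 # synthetic observations (metres)

x_hat = np.linalg.solve(A, obs)  # recover the unknowns
```

The values are purely illustrative; in the real network adjustment these equations are stacked over many satellites, stations and epochs and filtered, rather than inverted one satellite at a time.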

#### **4. The biases interpolation**

After the estimation of the dispersive and non-dispersive biases, three solutions can be followed:

• to consider data from the reference stations of the network and to interpolate these data at the rover position, generating a virtual reference station close to the rover (VRS positioning);
• to model the biases with a plane and to broadcast the model parameters to the rover (FKP positioning);
• to broadcast to the rover the estimated biases together with data from a master reference station of the network (MAC positioning).


#### **4.1 The VRS positioning**

When the previous biases are estimated, the easiest and oldest way to broadcast differential corrections is the VRS (Virtual Reference Stations).

As mentioned before, the idea is to create a synthetic correction, generated as if the reference station were close to the rover. For this reason, the rover communicates its approximate position (e.g. through an NMEA GGA message). Using this position, it is possible to interpolate the data following different strategies, e.g. using:

• plane triangles (Vollath et al., 2000, Landau et al., 2002);
• an Inverse Distance Weighting (IDW) estimation;
• the least squares method for estimating polynomial coefficients;
• collocation (Raquet et al., 2001).

Fig. 1. The VRS positioning concept

This strategy can also be applied to post-processing positioning, by means of a virtual data file (usually in RINEX format). This file contains the observations that a virtual reference station would acquire in a well-known position selected by the network user.

As said above, VRS positioning is the oldest network positioning strategy. Even if it has some advantages and is widely applied, there are also some disadvantages. The VRS method does not allow, for example, a multi-base positioning (unlike other methods), and it is not always well-regulated and repeatable.

## **4.2 The FKP positioning**

Another differential network strategy is to calculate areal interpolation parameters and to broadcast them together with the data from one reference station. This allows a one-way communication system and keeps the transmission load relatively low.

The idea was first used by Wübbena et al. (1996, 2002), who used a plane to interpolate the network biases in a given area. This positioning strategy is called FKP, the acronym of the German expression "Flächen Korrektur Parameter" (plane correction parameters in English).
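As a sketch of the underlying idea (not the actual FKP algorithm), fitting a correction plane by least squares to invented residual biases at four reference stations, and then evaluating it at the rover, might look like:

```python
import numpy as np

# Sketch: least-squares fit of a correction plane to the residual biases
# estimated at the reference stations, then evaluation at the rover.
# Coordinates (degrees) and bias values (metres) are invented.
stations = np.array([[45.00, 7.50],    # lat, lon of reference stations
                     [45.40, 7.90],
                     [45.05, 8.10],
                     [45.45, 7.40]])
bias = np.array([0.012, 0.034, 0.027, 0.019])   # residual bias per station

ref = stations.mean(axis=0)                      # plane reference point
d = stations - ref
A = np.column_stack([np.ones(len(d)), d[:, 0], d[:, 1]])   # 1, dlat, dlon
coeff, *_ = np.linalg.lstsq(A, bias, rcond=None)           # plane parameters

rover = np.array([45.20, 7.80])
dr = rover - ref
bias_at_rover = coeff[0] + coeff[1] * dr[0] + coeff[2] * dr[1]
```

The fitted offset and the two slopes play the role of the broadcast plane parameters; the rover only needs these three numbers (per component, per satellite) to reconstruct its own correction.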

Achievable Positioning Accuracies in a Network of GNSS Reference Stations 195

These single differences are numerically small and have not a relevant transmission size, considering for example that the tropospheric corrections can be transmitted with a lower

This new strategy implies that the network software should not perform any interpolation of the estimated biases. This interpolation, however, is only shifted to the rover, which has the possibility to choose different interpolative models or to apply a multi-base positioning. Therefore, the rover receiver must have more computing power, so this positioning mode

For very large networks, it is possible to transmit data from a subset of the network stations (sub-network or cell). Even in this case the positioning performed by the rover is accurate and fast. Even in this case, the result of the rover positioning is independent of the used cell.

It wonders if, due to the GPS and GLONASS modernization and the development of the Compass and Galileo constellations, the NRTK positioning will become obsolete. Over the

• After a large number of simulations, it is possible to conclude that, in the master-rover differential real-time positioning, the phase ambiguity solution will be almost instantaneous, making it unnecessary the use of a network of GNSS reference stations. However, in high ionospheric activity scenarios, the ambiguity fixing probability in the master-rover positioning will be very low. With a reference station network the

• In differential positioning, the maximum distance between the master and the rover will be increased from 10 kms to 20 kms with the same reliability. Even the network

last ten years, several authors (e.g. Chen et al., 2004) ask this question. In summary:

ionosphere bias will be reduced (Stankov & Jakowskia, 2007).

Fig. 3. The MAC positioning concept

does not fit well to older receivers.

**5. NRTK developments and problems** 

rate than the ionospheric ones (e.g. 2 - 5 seconds).

The application of the positioning strategy is very simple. Four parameters, named *E*0, *N*0, *E*1, *N*1 can be computed considering the estimated values of geometric and ionospheric delays, using a given reference position (ϕ*R*, λ*<sup>R</sup>*). After that, it is possible to calculate the terms:

$$\begin{aligned} \delta \tau\_0 &= 6.37 \Big( N\_0 \left( \varphi - \varphi\_{\mathbb{R}} \right) + E\_0 \left( \mathcal{A} - \mathcal{A}\_{\mathbb{R}} \right) \cos \varphi\_{\mathbb{R}} \Big) \\ \delta \tau\_l &= 6.37 \, H \left( N\_1 \left( \varphi - \varphi\_{\mathbb{R}} \right) + E\_1 \left( \mathcal{A} - \mathcal{A}\_{\mathbb{R}} \right) \cos \varphi\_{\mathbb{R}} \right) \end{aligned} \tag{8}$$

where:

$$H = 1 + 16\left(0.53 - E \;/\,\pi\right)^3\tag{9}$$

where *E* is the satellite elevation (in radians). Finally, the two carrier-phase corrections (in meters) are:

$$\begin{aligned} \delta r\_{f1} &= \delta r\_0 + \left(60 \, / \, 77 \, \right) \delta r\_l \\ \delta r\_{f2} &= \delta r\_0 + \left(77 \, / \, 60 \right) \delta r\_l \end{aligned} \tag{10}$$

Fig. 2. The FKP positioning concept

#### **4.3 The MAC positioning**

Euler et al. (2001) proposed a new approach to the use and transmission of network corrections, called the Master Auxiliary Concept (MAC). The concept is the same as above: a common level of network ambiguity fixing is estimated and the corrections are transmitted to the rover, separating the dispersive and non-dispersive components.

In the MAC positioning, the coordinates and the biases of a single reference station (the master station) are broadcasted to the rover, in addition to the single differences (both corrections and coordinates) of the other stations in the network (the auxiliary stations).

#### Fig. 3. The MAC positioning concept

These single differences are numerically small and do not require a significant transmission bandwidth, considering for example that the tropospheric corrections can be transmitted at a lower rate than the ionospheric ones (e.g. every 2 - 5 seconds).

This new strategy implies that the network software does not perform any interpolation of the estimated biases. The interpolation is simply shifted to the rover, which then has the possibility to choose different interpolation models or to apply a multi-base positioning. Therefore, the rover receiver must have more computing power, so this positioning mode does not fit well with older receivers.

For very large networks, it is possible to transmit data from a subset of the network stations (a sub-network, or cell). Even in this case the positioning performed by the rover is accurate and fast, and its result is independent of the cell used.
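To illustrate the rover-side interpolation mentioned above, the following sketch fits a correction plane through the master station using the auxiliary single differences and evaluates it at the rover. All names and the data layout are our own assumptions for illustration, not a MAC/RTCM API:

```python
import math

R_E = 6_371_000.0  # mean Earth radius [m], used to map lat/lon to local north/east

def local_ne(phi, lam, phi0, lam0):
    """Approximate north/east offsets [m] of (phi, lam) from (phi0, lam0), in radians."""
    return R_E * (phi - phi0), R_E * (lam - lam0) * math.cos(phi0)

def mac_rover_correction(master, aux, rover):
    """Interpolate a network correction at the rover (hypothetical data layout).

    master : (phi, lam, c)         master-station position and its correction [m]
    aux    : [(phi, lam, dc), ...] auxiliary positions and their single
                                   differences w.r.t. the master [m]
    rover  : (phi, lam)

    Fits dc = a*dN + b*dE (a plane through the master, where dc = 0 by
    construction) via least squares, then evaluates it at the rover.
    """
    phi0, lam0, c0 = master
    # Accumulate the 2x2 normal equations for the gradients a (north), b (east)
    snn = sne = see = sn_dc = se_dc = 0.0
    for phi, lam, dc in aux:
        dn, de = local_ne(phi, lam, phi0, lam0)
        snn += dn * dn; sne += dn * de; see += de * de
        sn_dc += dn * dc; se_dc += de * dc
    det = snn * see - sne * sne
    a = (see * sn_dc - sne * se_dc) / det
    b = (snn * se_dc - sne * sn_dc) / det
    dn_r, de_r = local_ne(rover[0], rover[1], phi0, lam0)
    return c0 + a * dn_r + b * de_r
```

A rover could equally apply a higher-order surface or a full multi-base adjustment here; the plane fit is simply the cheapest model consistent with the single differences it receives.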

## **5. NRTK developments and problems**

Achievable Positioning Accuracies in a Network of GNSS Reference Stations 197

One may wonder whether, due to the GPS and GLONASS modernization and the development of the Compass and Galileo constellations, the NRTK positioning will become obsolete. Over the last ten years, several authors (e.g. Chen et al., 2004) have asked this question. In summary:

• The network spacing will be increased with more frequencies (up to 80-100 km), but the rover receiver will improve the reliability of fixing.

• The tropospheric biases will not be removed by using three or more frequencies. With a higher number of satellites tracked by a network of reference stations, instead, these errors will be estimated with a greater accuracy (Zhang & Lachapelle, 2001).

• Regardless of the GNSS future improvements, the multipath error in the rover will still be present. However, this error can be modelled on the reference stations, giving the benefit of a more reliable estimation of the other biases.

In conclusion, to achieve a high real-time positioning accuracy, a network solution will be required. The new GNSS constellations will decrease the time needed to fix the ambiguities, for both the network reference stations and the rover receiver.

A major technical problem of the network positioning is the broadcasting of the correction signal. A GPRS/GSM coverage of the survey site is not always available: in these cases, a radio link between the master and the rover receiver is a common solution, although it does not exploit the network positioning. A possible solution, especially for large networks, would be the integration of the NRTK corrections into the SBAS architecture. This integration, in fact, does not require additional antennas but only the payment of an access fee for the satellite band. Another solution might be to use the digital subcarriers of TV channels, which fits very well, for example, with the MAC technique in the RTCM3 format.

On the other hand, a two-way internet communication could allow the network manager to offer additional services that are not usually provided. It may be possible, for example, to broadcast the number of satellites with a fixed network ambiguity, maps of the survey site, the geoid undulation, etc. At the same time, the network user could transmit measurement data, updating the survey in real time, in addition to the quality of the data and of the fixed ambiguities, and other parameters that can provide useful information to increase the reliability of GNSS positioning.

## **6. The experiments**

In the previous sections, the network positioning concept and the different correction strategies were presented. But what is the effective distance between reference stations that allows the precision required for real-time positioning to be achieved, using both geodetic and GIS receivers? And how does the positional accuracy change with increasing distances between Continuously Operating Reference Stations (CORSs)? These are only some of the questions that the experiments reported in the following try to answer.

The experiments were based on three different networks, with different inter-station distances: the first one (in the following, "red network" or "small network"), with distances of about 50 km, is comparable with the existing GNSS networks in Italy. The second network ("green network" or "medium network") is characterized by distances of about 100 km, which is the average spacing of the national geodetic network which materializes the Italian reference system (*Rete Dinamica Nazionale* - RDN). The last one ("blue network" or "large network") has inter-station distances of about 150 km and it is used to verify the possibility of using less dense networks.

The experiments were conducted using, as the rover site, a reference point located on the roof of the headquarters of the Politecnico di Torino at Vercelli. For this reason, the other reference stations have been chosen in the north-west of Italy, so that the rover lies approximately at the centroid of the three different GNSS networks (Fig. 4). The reference stations involved in this test (see Table 1) belong to networks operated by public entities (such as administrative regions) and by private organizations (for example Surveyor Colleges or private companies).

| *Station ID* | *Station Name* | *Receiver type (IGS name)* | *Antenna type (IGS name)* |
|---|---|---|---|
| ALES | Alessandria | LEICA GRX1200+GNSS | LEIAR25.R3 |
| CRES | Crescentino | LEICA GRX1200+GNSS | LEIAR25.R3 |
| BIEL | Biella | LEICA GRX1200+GNSS | LEIAR25.R3 |
| LENT | Lenta | TPS NETG3 | TPSCR.G3 |
| VIGE | Vigevano | TPS ODYSSEY\_E | TPSCR3\_GGD |
| TORI | Torino | LEICA GRX1200+GNSS | LEIAR25.R3 |
| CHAT | Chatillon | TPS NETG3 | TPSCR.G3 |
| LUIN | Luino | TPS NETG3 | TPSG3\_A1 |
| CREA | Crema | TPS ODYSSEY\_E | TPSCR3\_GGD |
| SESC | Serravalle Scrivia | TPS NETG3 | TPSCR.G3 |
| CARP | Carpenedolo | LEICA GRX1200+GNSS | LEIAS10 |
| BUSL | Bussoleno | LEICA GRX1200+GNSS | LEIAR25.R3 |
| LOAN | Loano | TPS NETG3 | TPSCR.G3 |
| TARO | Borgo val di Taro | TPS ODYSSEY\_E | TPSCR3\_GGD |
| DOMO | Domodossola | TPS NETG3 | TPSCR.G3 |

Table 1. Characteristics of permanent stations

The reference coordinates of all the stations were computed by processing 15 days of data with the Bernese GPS scientific software (version 5.0), linking the networks' reference system to the ETRF2000 (2008.0), that is the Italian reference system materialized by the RDN. The different antennas used for the rover receivers were mounted on a pillar, as mentioned above, located on the roof of the Politecnico di Torino at Vercelli (Fig. 5).

The network software used is GNSMART (GNSS State Monitoring and Representation Technique), distributed by Geo++®. This software quantifies and estimates the tropospheric and ionospheric errors and, in addition, models the satellite ephemeris errors, the multipath and the satellite and receiver clock errors. Even if it has no theoretical limit on the minimum number of permanent stations, at least 5 stations are suggested for a correct functioning.

In the experiments, dual-frequency, geodetic GNSS receivers of the main companies operating in Italy were used. The characteristics of the instruments are shown in Table 2.

Fig. 4. Types of networks

Fig. 5. Rover installation

| | *GX1230+GNSS (Leica Geosystems)* | *GRS-1 (Topcon)* | *S9 GNSS (Stonex)* |
|---|---|---|---|
| *Antenna* | LEIAX1203+GNSS | TPSPG\_A1 | TRM55970.00 |
| *Nr. of channels* | 120 | 72 | 220 |
| *Constellations* | GPS+GLONASS | GPS+GLONASS | GPS+GLONASS |
| *Position update rate* | 20 Hz | N/A | 1 Hz |
| *Internal modem* | Yes | Yes | Yes |
| *Type of connection* | GSM | GSM | GSM/GPRS |
| *Type of protocols* | RTCM 2.x, RTCM 3.0, CMR/CMR+ | RTCM 2.x, RTCM 3.0, CMR/CMR+ | RTCM 2.x, RTCM 3.0, CMR/CMR+ |

Table 2. Geodetic receivers

In addition to the geodetic receivers, GIS L1 receivers with their external antennas were used in the experiments. As previously, the characteristics of these instruments are summarized in Table 3.

| | *Zeno 10 (Leica Geosystems)* | *GRS-1 (Topcon)* | *GeoXH (Trimble)* |
|---|---|---|---|
| *Antenna* | LEIAT502 | TPSPG\_A5 | TRM53406.00 |
| *Nr. of channels* | 14 | 72 | 220 |
| *Constellations* | GPS+GLONASS | GPS+GLONASS | GPS only |
| *Position update rate* | 5 Hz | N/A | 1 Hz |
| *External antenna* | Yes | Yes | Yes |
| *Time to first position* | 35 ÷ 120 s | N/A | 45 s |
| *Phase corrections* | Yes | Yes | No |
| *Internal modem* | Yes | Yes | No |
| *Type of connection* | GSM/UMTS 3.5G | GSM | No |
| *Type of protocols* | RTCM 2.x, RTCM 3.0, CMR/CMR+ | RTCM 2.x, RTCM 3.0, CMR/CMR+ | RTCM 2.x, RTCM 3.0, CMR/CMR+ |

Table 3. GIS instruments

## **7. Real time positioning accuracies**

The results reported below are average values, considered significant for the two sets of instruments used in the experiments. All data collected during the experiment were analysed using two types of charts:


• Cumulative Distribution Function (CDF), i.e. a curve that describes the probability that a variable X with a given probability distribution will be found at a value less than or equal to x.

• Cumulative moving average, i.e. the arithmetic mean of a series of values over a period that increases with time. Assuming equidistant measuring or sampling times, it can be computed as the sum of the values over a period divided by the number of values.

In the following, the planimetric and elevation positioning errors are analysed for both instrument categories and for the different network sizes and products.

## **7.1 Geodetic receivers**

The tests carried out using geodetic receivers have involved the use of the three types of NRTK corrections analysed in the previous paragraphs: VRS, MAC and FKP. For each receiver and each NRTK correction, 24 hours of real-time positioning results have been stored. For this analysis, only the positions with both fixed ambiguities and a HDOP (Horizontal Dilution Of Precision) index lower than 4 have been considered.

Analysing the stored positions, it was possible to highlight the behaviour of each receiver depending on the type of differential correction: Fig. 6 shows the quality of the planimetric and height positioning achieved by one of the receivers within the "red" network.

Fig. 6. CDF of planimetric (left) and elevation (right) errors of a geodetic receiver

In addition to the different behaviour that a receiver has with respect to the differential correction, it is possible to consider also the variation of the position quality among different receivers. Fig. 7 shows the planimetric and elevation error distributions of the three different geodetic receivers presented in Table 2, with a VRS correction broadcasted by the "green" network.

The analysis of the curves above highlights a homogeneous behaviour among the receivers, which separate only at the end of the distribution (above about 85% of probability). The planimetric accuracy, for example, changes from about 2 cm (95% of probability) for "Receiver 2" to about 7 cm for "Receiver 1" and "Receiver 3".

Fig. 7. CDF of planimetric (left) and elevation (right) errors of the three geodetic receivers

For this reason, it is acceptable to consider the behaviour of an "average" receiver, focusing on the variation in the positioning quality with respect to the size of the network.

The following sections show the results of these tests with the different NRTK corrections used.

#### **7.1.1 VRS positioning**


The VRS is without doubt the most used differential correction in real-time positioning, as well as the easiest to manage for every receiver. As seen above, in fact, this type of differential correction is based on generating, starting from the data of the network CORSs, a virtual reference station close to the measurement site.

A deterioration of the positioning quality is therefore expected when the inter-station distances increase: in fact, the larger these distances, the larger the inaccuracies that can be introduced by the interpolation step during the generation of the VRS correction.

Fig. 8 shows the average behaviour of a geodetic receiver in the case of a VRS correction broadcasted by networks of different sizes.

The CDF analysis brings out an effective increase of the errors (both planimetric and altimetric) when the size of the GNSS network grows. The planimetric error, for example, changes from values below 5 cm (95% of reliability) considering the "red" network up to 10 and 15 cm with the "green" and the "blue" one, respectively. A similar behaviour can be observed for the elevation error, with values from 6 cm ("red" network) to 10 cm ("green" network) and to about 25 cm ("blue" network).

Also with regard to the cumulative moving average, the VRS positioning with a "red" network achieves a centimetre accuracy after a few minutes, with a trend that remains constant over time.

It is also interesting to analyse the trend of the cumulative moving average when the "green" and "blue" networks are used: in both cases, there was a significant improvement of the position quality when the measurement period increases. This trend allows a centimetre accuracy to be reached after a few hours of measurement. This behaviour is evident, for example, when the "blue" network is used, where the effect of a few outlier positions, not detected in real time by the receiver, disappears only after 5 hours of measurements.
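The two statistics used throughout this analysis can be reproduced from a series of per-epoch errors with a sketch like the following (the function names are ours; we assume east/north residuals with respect to the reference position are available):

```python
import math

def planimetric_errors(residuals):
    """2D errors [m] from per-epoch (dE, dN) residuals w.r.t. the reference position."""
    return [math.hypot(de, dn) for de, dn in residuals]

def percentile_95(errors):
    """Empirical CDF read at 95%: the error bound holding for 95% of the epochs."""
    s = sorted(errors)
    k = max(0, math.ceil(0.95 * len(s)) - 1)
    return s[k]

def cumulative_moving_average(errors):
    """Running mean of the error series, one value per epoch."""
    out, total = [], 0.0
    for i, e in enumerate(errors, start=1):
        total += e
        out.append(total / i)
    return out
```

Reading the empirical CDF at 95% gives exactly the "95% of reliability" figures quoted above, while the running mean reproduces the cumulative moving average curves.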

Fig. 8. Positioning quality of a geodetic receiver when a VRS correction is used: CDF of the planimetric error (top left) and of the elevation error (top right), cumulative moving average of the planimetric error (bottom left) and of the three-dimensional positioning error (bottom right)

## **7.1.2 MAC positioning**

As said previously, the MAC correction is realized using the observations of a single reference station (the master) with additional information from the other CORSs within a well-defined cell of the network (the auxiliary stations). For this reason, this correction should be less sensitive to the variation of the GNSS network size. As long as the distance between the master station and the rover is kept below the permissible values for differential real-time positioning, the variation of the network size represents only a minor contribution to the differential correction (i.e., the contribution due to the auxiliary stations). The tests allow the extraction of an "average" behaviour of a geodetic dual-frequency receiver when a MAC correction is used. This behaviour is summarized in Fig. 9.


Fig. 9. Positioning quality of a geodetic receiver when a MAC correction is used: CDF of the planimetric error (top left) and of the elevation error (top right), cumulative moving average of the planimetric error (bottom left) and of the three-dimensional positioning error (bottom right)

The curves above confirm what was expected. Analysing, for example, the CDF curves of the planimetric and elevation errors, it is possible to see that the positioning errors do not increase excessively when switching from the "red" to the "green" network. In these cases, the positioning quality is similar, and reaches about 5 cm (95% of observations) in planimetry and about 10 cm in altitude. A significant positioning deterioration occurs when the differential MAC corrections broadcasted by the "blue" network are used. In this case, the master station is very far from the rover, causing problems in the quality of the positioning (15 cm in planimetry and 25 cm in elevation).

The trend of the cumulative moving averages highlights once again the similar behaviour of the MAC positioning performed with a "red" and a "green" network, as seen in the bottom right of Fig. 9. The cumulative moving average also shows that the MAC positioning with the "blue" network is not perfectly consistent over time: as it is possible to see, after about 8 hours of measurement there is a worsening of the three-dimensional positioning quality, due to measurement error variations that are not well modelled by such a wide network.

Achievable Positioning Accuracies in a Network of GNSS Reference Stations 205

With regard to the performance of the cumulative moving average, it is possible to see that a positioning error always lower than 5 cm can be achieved only by averaging several hours of data. The analysis of the average length of the lines shows the small number of epochs with a fix ambiguity values (almost always less than 50% of the total measured times). This is not due to NRTK corrections transmission problems, but to the use of FKP corrections by


## **7.1.3 FKP positioning**

FKP positioning, as seen in the previous sections, consists of broadcasting to the rover the parameters of the flat bias model estimated by the GNSS network software. The hypothesis that the spatial variations of the delays can be arranged along a plane is certainly reliable for small networks, but it becomes weak when the inter-station distances grow too large. In that case, in fact, local atmospheric phenomena, which can considerably disturb the GNSS observations, are not taken into account. The positioning results obtained with a geodetic receiver using FKP corrections are shown in Fig. 10.

Fig. 10. Positioning quality of a geodetic receiver when a FKP correction is used: CDF of the planimetric error (top left) and of the elevation error (top right), cumulative moving average of the planimetric error (bottom left) and of the three-dimensional positioning error (bottom right)

As the previous figures show, a flat interpolation model achieves a positioning error equal to or slightly greater than 10 cm (95% reliability) only when small networks (e.g. the "red" one) are used. When medium-sized and large-sized networks (the "green" and "blue" ones, respectively) are used, the average planimetric error exceeds 20 cm. A very similar trend is found for the elevation error, which increases from 15 cm ("red" network) to over 30 cm ("green" and "blue" networks). In this case there are no significant differences between the two wider networks.

With regard to the cumulative moving average, a positioning error consistently below 5 cm can be achieved only by averaging several hours of data. The average length of the lines reveals the small number of epochs with fixed ambiguities (almost always less than 50% of the measured epochs). This is not due to problems in the transmission of the NRTK corrections, but to the receiver's use of the FKP corrections: the flat model does not fit the rover measurement errors well.
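As a rough sketch (the function name and units are illustrative assumptions, not from the chapter), the flat FKP model amounts to evaluating a plane of correction gradients at the rover's horizontal offset from the reference station:

```python
def fkp_correction_m(d_north_km, d_east_km, grad_north_ppm, grad_east_ppm):
    """Evaluate a planar (FKP-style) correction at the rover's horizontal
    offset from the reference station. Gradients are in ppm: 1 ppm over
    a 1 km baseline corresponds to 1 mm of correction."""
    return 1e-3 * (grad_north_ppm * d_north_km + grad_east_ppm * d_east_km)
```

A single plane cannot represent a localised atmospheric disturbance inside a wide network, which is the intuition behind the degradation discussed above.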

## **7.2 GIS receivers**


The tests carried out on the three GIS receivers listed in Table 3 were designed to study their accuracy within GNSS networks with different inter-station distances. The corrections from a VRS (used by all receivers in this class) and from the nearest reference station (NRT) were tested. Given the receiver category and the expected metre-level accuracies, the EGNOS1 corrections were also used, in order to assess whether, for GIS receivers, a network of GNSS reference stations offers any benefit over the area corrections broadcast by a constellation of geostationary satellites.

In the following, the results obtained using VRS corrections are discussed first; these results are then compared with those obtained using corrections from the NRT station and from the EGNOS satellites.

## **7.2.1 VRS positioning**

First, the planimetric and elevation accuracies achievable with a GIS receiver within networks with different inter-station distances are analysed. As stated above, 24 hours of measurements were considered (to be independent of the satellite geometry), and only positions with an HDOP index lower than or equal to 4 (to exclude outliers).
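The epoch screening and the 95% statistic used throughout the tests can be sketched as follows (a minimal illustration; the field names and the nearest-rank percentile are my own assumptions):

```python
def screen_epochs(epochs, hdop_max=4.0):
    # Keep only the epochs whose HDOP does not exceed the threshold,
    # discarding outliers as done in the tests (HDOP <= 4).
    return [e for e in epochs if e["hdop"] <= hdop_max]

def error_at_95(errors):
    # Error level not exceeded in 95% of the epochs, read off the
    # empirical CDF with the nearest-rank method.
    s = sorted(errors)
    k = max(0, int(round(0.95 * len(s))) - 1)
    return s[k]
```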

Fig. 11 shows the results obtained for an "average" receiver. The analysis of the plots shows that the positioning accuracy changes as the inter-station distance increases. However, unlike the geodetic receivers, in this case there is no significant degradation of the positioning as the network size grows (from the "red" network to the "blue" one). The planimetric error at 95% reliability, for example, goes from 80 cm ("red" network) to 60 cm ("green" network) and to about 1 m ("blue" network). The improvement obtained with the medium-sized ("green") network is not surprising, but it must be assessed in light of the quality of GIS receivers. This behaviour shows a substantial stability of the positioning accuracy, which always remains around metre-level values. The trend is even more evident for the elevation error: the cumulative moving average lines reach sub-decimetric accuracy only after about 5 hours, with no particular differences between the three networks.

## **7.2.2 NRT and EGNOS positioning**

The positioning quality obtained with VRS corrections was then compared with that obtained using the corrections from the nearest reference station and from the European geostationary satellite constellation EGNOS. To highlight the benefits of the differential corrections, the stand-alone positioning results are also reported in the figures.

<sup>1</sup> http://www.esa.int/esaNA/egnos.html

Achievable Positioning Accuracies in a Network of GNSS Reference Stations 207


Fig. 11. Positioning quality of a GIS receiver when a VRS correction is used: CDF of the planimetric error (top left) and of the elevation error (top right), cumulative moving average of the planimetric error (bottom left) and of the three-dimensional positioning error (bottom right)

Fig. 12 compares the stand-alone positioning error with that obtained using the two corrections mentioned above. The figure shows the results for the network with inter-station distances of about 100 km (the "green" network), which gave the best results in the previous tests. As shown, both the NRT and the EGNOS corrections yield a positioning quality fully comparable to that achievable with VRS corrections. This result, although it may seem at odds with the virtual station and GNSS network positioning concepts, should not surprise: common GIS receivers cannot properly exploit carrier-phase corrections, which are hard to model when the reference stations are far from the measurement site.

The figures above also highlight the benefits of the differential corrections with respect to stand-alone positioning. The planimetric error (at 95% reliability), for example, decreases from values close to 1.7 m for stand-alone positioning to about 70 cm when differential corrections are used. The improvement is even more evident in the height accuracy (which decreases from about 4.5 m to 1 m) and when the cumulative moving average is considered (Fig. 13).
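The cumulative moving average used in these comparisons is simply the running mean of the positioning error over the session; a minimal sketch:

```python
def cumulative_moving_average(errors):
    # errors: per-epoch positioning errors in chronological order;
    # returns the running mean after each epoch.
    out, total = [], 0.0
    for i, e in enumerate(errors, start=1):
        total += e
        out.append(total / i)
    return out
```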


Fig. 12. Positioning quality comparison of a GIS receiver: CDF of planimetric error (left) and of elevation error (right) when a NRT correction is used (top) and when an EGNOS area correction is involved (bottom)

Fig. 13. Positioning quality comparison of a GIS receiver: cumulative moving average of the positioning error when NRT (left) and EGNOS (right) corrections are used



## **8. Post-processing positioning accuracies**

Raw data files in RINEX format were stored in order to estimate the accuracies achievable in post-processing, and their evolution as the average inter-station distance increases.

These files, 24 hours long, were split into many shorter files of different durations, in order to statistically evaluate the planimetric and altimetric accuracy.
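The splitting of the 24 h files into fixed-length static sessions can be sketched as follows (the epoch rate and the handling of the trailing remainder are my own assumptions):

```python
def split_sessions(epochs, epochs_per_session):
    # Split a long observation file into consecutive fixed-length
    # static sessions; an incomplete trailing session is discarded.
    n = len(epochs) // epochs_per_session
    return [epochs[i * epochs_per_session:(i + 1) * epochs_per_session]
            for i in range(n)]
```

For 1 Hz data, a 5 minute session corresponds to `epochs_per_session=300`, and a 24 h file yields 288 such sessions.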

Fig. 14. Data file splitting scheme

The data files were processed with a commercial software package (Leica Geomatics Office™ v.8.0) based on the double-difference approach, using as master station both the permanent station nearest to each considered network and a VRS generated by the network software close to the measurement site.
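The double-difference observable on which such software is based can be sketched as follows (the dictionary layout and satellite IDs are illustrative):

```python
def double_difference(rover_obs, base_obs, sat, ref_sat):
    """Classical double difference for a satellite pair observed by the
    rover and the master (base) station: receiver clock errors cancel
    in the between-receiver single differences, and satellite clock
    errors cancel when the two single differences are subtracted."""
    sd_sat = rover_obs[sat] - base_obs[sat]
    sd_ref = rover_obs[ref_sat] - base_obs[ref_sat]
    return sd_sat - sd_ref
```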

The post-processing results show no significant difference among the three geodetic receivers, thanks to the good quality of the raw data. The same behaviour was also observed for the three GIS receivers. For this reason, an "average" instrument for each class of receivers is considered in the following analysis.

## **8.1 Geodetic receivers**

Raw data files of a geodetic receiver were split into files 5 and 10 minutes long, and post-processed as described above. The CDFs of the planimetric and altimetric errors (calculated with respect to the "true position" obtained from the network adjustment previously described) were computed for each session length.

Fig. 15 shows the results obtained using the nearest station for the three considered networks. Only a slight deterioration of the positioning accuracy can be observed when reference stations at different distances from the rover are used as master.

This can be seen, for instance, in the planimetric accuracy obtained by post-processing the 5 minute long data: for the "green" and the "blue" networks, a significant degradation can be observed only in the last 10% of the distribution.

Considering the 10 minutes long files, no significant improvements are observed in the "red" network, as expected, while a better accuracy can be seen with the "green" and "blue" networks. The percentage of epochs with fixed ambiguities is also similar between the "red" and the "green" networks (98-99% of the epochs using 5 minute files and 99-100% using 10 minute files). For the "blue" network, this percentage decreases to 92% and 97%, respectively.

Fig. 15. Positioning quality of a geodetic receiver after the post-processing with the nearest reference station. CDF of the planimetric (left) and altimetric (right) error using static time sessions of 5 (top) and 10 (bottom) minutes

These results clearly show the limit (about 2 cm) of the post-processing approach when the master station is located farther than 25-30 km from the rover. This distance is, to date, comparable with the actual inter-station distances of GNSS networks. To improve on the 2 cm limit with reasonable reliability, two strategies can be adopted:

• increase the static measurement length, at the cost of productivity;
• use a post-processing network product, i.e. a virtual RINEX file generated from the error models estimated by the GNSS network.
In the latter case, the main advantage for the user is having a raw data file located close to the rover.

In this way, the rover has a higher probability of fixing the phase ambiguities. Otherwise, this product shows the problems already discussed for real-time VRS positioning. The VRS RINEX files, in fact, are generated by interpolating the error model estimated by the network software. When the inter-station distances grow, a deterioration of the positioning quality is expected, due to the approximations made when interpolating over a wider area. Fig. 16, which shows the results obtained using a VRS RINEX file as master, confirms this expectation.

Fig. 16. Positioning quality of a geodetic receiver after the post-processing with the VRS. CDF of the planimetric (left) and altimetric (right) error using static time sessions of 5 (top) and 10 (bottom) minutes

It is easy to note that VRS post-processing improves the planimetric and altimetric accuracy only for the small- ("red") and medium-sized ("green") networks (about 1 cm and 4 cm respectively, considering the planimetric error).

These results confirm the value of this product when the GNSS permanent stations are about 50-100 km apart. When these distances exceed 100 km, the positioning quality is comparable with, or even worse than, that obtained using the nearest reference station as master.

The percentage of fixed ambiguities is about 100% of the epochs for both the 5 minute and the 10 minute files in the "red" network, while it is very low for the "green" (50%) and "blue" (12%) networks.

## **8.2 GIS receivers**


As in the previous section, an "average" GIS receiver was considered and the same processing methods were adopted. However, it was decided a priori to increase the static session length, splitting the raw data into files 10 or 20 minutes long. This is the average time an operator could be willing to wait to achieve a sub-decimetre positioning accuracy with a low-cost receiver.

Fig. 17 shows the post-processing results of the raw data files, obtained using the nearest reference station.

Fig. 17. Positioning quality of a GIS receiver after the post-processing with the nearest reference station. CDF of the planimetric (left) and altimetric (right) error using static time sessions of 10 (top) and 20 (bottom) minutes

A slight deterioration in both the planimetric and the altimetric accuracy can be observed when the master is farther than 30 km from the rover. This deterioration is not due to low raw data quality, but to the fact that GIS receivers cannot track the L2 frequency, which allows measurements to be linearly combined to reduce some of the biases (e.g. the iono-free combination).
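The ionosphere-free combination mentioned here exploits the 1/f² dependence of the first-order ionospheric delay; a minimal sketch with the GPS L1/L2 frequencies:

```python
F1, F2 = 1575.42e6, 1227.60e6  # GPS L1 and L2 carrier frequencies, Hz

def iono_free(p1, p2):
    # Linear combination of L1/L2 pseudoranges that cancels the
    # first-order ionospheric delay (which scales as 1/f**2).
    # A single-frequency GIS receiver cannot form it.
    g = (F1 / F2) ** 2  # ratio of the squared frequencies, ~1.65
    return (g * p1 - p2) / (g - 1.0)
```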


It should also be noted that increasing the measurement time from 10 to 20 minutes does not bring a real improvement in the positioning accuracy, which reaches values from about 2-3 cm ("red" network) to 7 cm ("green" network) at 90% reliability.

As before, a better accuracy can be obtained using VRS RINEX files generated by the network software. The analysis of the positioning accuracy, shown in Fig. 18, confirms the behaviour already seen for the geodetic receivers.

A reduction of the maximum planimetric and altimetric errors is observed for the "red" and "green" networks (a few centimetres at 90% reliability).

The percentage of measurement sessions with fixed ambiguities goes from 68% ("red" network) to 48% ("green" network) and down to 31% ("blue" network). These percentages do not change appreciably when the 20 minute files are considered.

Fig. 18. Positioning quality of a GIS receiver after the post-processing with the VRS file. CDF of the planimetric (left) and altimetric (right) error using static time sessions of 10 (top) and 20 (bottom) minutes

The combined use of single-frequency instruments and virtual data is really useful only for the "red" network, while the virtual data generated by the "green" and the "blue" networks significantly increase the errors, as clearly shown in Fig. 18.

The planimetric positioning achieved with the VRS generated by the widest ("blue") network is representative: about 20-30% of the data lies outside the maximum axis value (30 cm). In this case, the percentage of measurement sessions with fixed ambiguities is 100% for the "red" network, and collapses to 40% ("green" network) and only 10% ("blue" network).

## **9. Conclusions**


In this chapter, the accuracy of geodetic and GIS receivers in small-, medium- and large-sized networks of GNSS reference stations was analysed, comparing the results obtained with different network products. The accuracies achieved at 95% reliability, with respect to a well-known rover position and using 24 hours of measurements, were considered.

Geodetic receivers can benefit from the VRS corrections transmitted by networks with inter-station distances up to 100 km, achieving planimetric accuracies from 2 to 8 cm and elevation accuracies from 5 to 12 cm. A similar behaviour is found when MAC corrections are used: this network product provides comparable results for the small- and medium-sized networks (about 5 cm in planimetry and 10 cm in elevation).

If large networks are considered, NRTK positioning is often inefficient and unreliable. Due to their limited ability to model the biases of large areas, FKP corrections are not suitable for positioning even in medium-sized networks.

The real-time performance of GIS receivers is only slightly influenced by the size of the network. The planimetric error reaches accuracies from 65 to 85 cm in the three considered networks, and the elevation error is always about 1 m. This is a noticeable improvement over the stand-alone position, whose accuracies are 1.7 m in planimetry and 4.5 m in altitude. Even with the EGNOS corrections it is possible to reach the same altitude accuracy (1 m at 95%) and a planimetric accuracy of about 75 cm. Using the network differential corrections, a planimetric accuracy of 50 cm can be achieved by averaging a few minutes of real-time positions.

Regarding post-processing positioning, no substantial differences in accuracy were noted between static sessions 5 and 10 minutes long for geodetic receivers, and between sessions 10 and 20 minutes long for GIS receivers.

For geodetic instruments, positioning with a VRS RINEX file brings an improvement only when small-sized networks are involved. For wider networks, the best accuracies are always obtained using the RINEX file from the nearest reference station, although the number of ambiguity fixes may drop to about 30% of the epochs.

Considering GIS receivers, the best performance is obtained when the nearest-station data are used in a small-sized network (inter-station distances of about 50 km), with a planimetric error of 2 cm and an elevation error of 3 cm. A VRS RINEX file generated by a large network does not improve the position accuracy with respect to the results obtained from the nearest station, while some advantage can be found when a medium-sized network is involved: the planimetric accuracy improves from 10 cm, when data from the nearest station are used, to about 4 cm with virtual data generated by a GNSS network. A similar behaviour is found for the elevation accuracy (from 15 cm to 8 cm).


The experiments were funded as part of the National Research Project PRIN 2008 "The new Italian Geodetic Reference System: continuous monitoring and application of control management of the territory", financed by the Italian Ministry of Education, University and Research in 2008.



**9**

## **A Decision-Rule Topological Map-Matching Algorithm with Multiple Spatial Data**

Carola A. Blazquez

*Universidad Andres Bello*
*Department of Engineering Science*
*Chile*

## **1. Introduction**


Intelligent Transportation System (ITS) applications such as congestion and traffic management employ Global Positioning System (GPS) technology to collect two- or three-dimensional positioning data on events, incidents, or vehicles. This information is integrated with Geographic Information Systems (GIS) to determine the roadway upon which events and incidents occur, point features such as traffic signs are located, or vehicles are traveling.

Vehicle trajectories displayed on a digital map are not necessarily situated on top of the roadway centerlines that represent the real-world roads. Therefore, when both the GPS measurements and the roadway centerline map are very accurate, a GPS data point is associated with the nearest roadway by calculating the minimum perpendicular distance between each roadway representation and the GPS data point. This process is called "snapping". Unfortunately, a spatial mismatch occurs when a GPS data point is snapped to an incorrect roadway centerline due to roadway network complexities, inadequate GPS data collection procedures, inaccuracies in the digital roadway map or in the GPS measurements, or combinations of these factors (Chen et al., 2005). Figure 1 shows an example where errors in the location of the measured GPS data point cause an incorrect snap to the nearest road 2 instead of snapping to road 1.
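The "snapping" computation just described — minimum perpendicular distance from a GPS point to each candidate centerline — can be sketched as a simple point-to-segment projection. This is a minimal illustration in planar coordinates; the road segments and the point below are hypothetical, and a real implementation would first project latitude/longitude into a planar coordinate system.

```python
import math

def snap_to_segment(p, a, b):
    """Project point p onto segment a-b and return (snapped_point, distance).

    p, a, b are (x, y) tuples in a planar (projected) coordinate system.
    """
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:          # degenerate zero-length segment
        t = 0.0
    else:
        # Parameter of the orthogonal projection, clamped to the segment
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    sx, sy = ax + t * dx, ay + t * dy
    return (sx, sy), math.hypot(px - sx, py - sy)

def snap_to_nearest_road(p, roads):
    """Snap p to the closest centerline among (name, start, end) segments."""
    return min((snap_to_segment(p, a, b) + (name,) for name, a, b in roads),
               key=lambda r: r[1])

# Two hypothetical parallel centerlines, 10 m apart
roads = [("road 1", (0.0, 0.0), (100.0, 0.0)),
         ("road 2", (0.0, 10.0), (100.0, 10.0))]
snapped, dist, road = snap_to_nearest_road((50.0, 4.0), roads)
```

Here a point 4 m from road 1 and 6 m from road 2 snaps to road 1; a point whose GPS error pushed it past the midline between the two roads would snap to road 2, which is exactly the kind of mismatch shown in Figure 1.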

Fig. 1. Measured GPS Data Point with Error Snapped to the Wrong Roadway Centerline

Generally, spatial mismatches or map-matching problems occur at overpasses and underpasses, converging and diverging roadways such as ramps and divided highways, or when roads are close together. Figure 2 presents GPS measurements of a vehicle traveling at a major highway interchange containing ramps, overpasses, and underpasses. This example indicates that multiple spatial mismatches may occur at interchanges.

As a consequence of the map-matching problem, any subsequent usage, visualization, computation, evaluation, analysis, planning, and decision-making may be impacted negatively and produce erroneous perceptions. For example, the calculated cumulative distance traveled by a vehicle along a roadway network is incorrect, and therefore so are performance measures such as fuel consumption and decision-management tools that depend upon cumulative distance. Additionally, any non-spatial data collected from vehicle sensors, such as speed data or emission levels, are associated with incorrect roadway centerlines. Furthermore, GPS data points might be incorrectly assigned to roadways along which no measurements were ever taken, affecting transportation applications such as road-use charging based on the total mileage driven by a vehicle (Cozzens, 2009; Sheridan, 2011). The need to overcome spatial mismatches in ITS applications is a major motivation for implementing map-matching algorithms.

Fig. 2. GPS Data Points Collected by a Vehicle While Traveling at a Highway Interchange

Section 2 presents a literature review of map-matching algorithms developed to solve spatial ambiguities. Section 3 describes the proposed topological decision-rule map-matching algorithm and an example of its implementation. Results of the performance analysis with real spatial data are presented in section 4. Finally, section 5 presents a summary and the main conclusions of this chapter, along with further research topics to be addressed.

## **2. Map-matching methods**

The problem of resolving spatial ambiguities has been widely studied over the years. The following map-matching algorithms are described in the literature with different levels of complexity ranging from simple geometric techniques to complex, advanced approaches.

## **2.1 Semi-deterministic map-matching**

The earliest map-matching algorithm, which predates the development of GPS in the 1970s, followed a semi-deterministic model (French, 1989). This model assumes that the vehicle has an initial location on a roadway and a given direction of travel. Conditional tests are applied to determine whether the vehicle is traveling on the known road by comparing turns from the vehicle location to a segment of the digital road map. A correction is performed whenever the heading of the vehicle changes (Morisue & Ikeda, 1989). However, for this technique to work, the vehicle is generally assumed to follow a predetermined road. There is considerable uncertainty when the vehicle travels off-road, because there is no longer any way to correct for errors (Zhao, 1997; Czerniak, 2002).

## **2.2 Probabilistic map-matching**

The probabilistic approach has the advantage of not assuming that the vehicle is always on a road. The vehicle heading error is calculated with an elliptical or rectangular confidence region, and error models are developed within which the true vehicle location can be determined. If the region around the vehicle position contains one intersection or road segment, a match is made and the coordinates on the road are used in the next position calculation. If more than one road or intersection lies within the region, connectivity checks are made to determine the most probable location of the vehicle given earlier vehicle positions. As a result, the algorithm yields the best-match segment along with the most probable matching point on the segment (Zhao, 1997; Czerniak, 2002).
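The confidence-region test at the core of the probabilistic approach can be sketched as a point-in-ellipse check with the major axis aligned to the vehicle heading. The semi-axis lengths and candidate coordinates below are illustrative values, not taken from any cited implementation.

```python
import math

def in_error_ellipse(candidate, estimate, semi_major, semi_minor, heading_rad):
    """Return True if a candidate road point lies inside the elliptical
    confidence region centred on the estimated vehicle position, with the
    major axis aligned to the vehicle heading."""
    dx = candidate[0] - estimate[0]
    dy = candidate[1] - estimate[1]
    # Rotate the offset into the ellipse's own axes
    u = dx * math.cos(heading_rad) + dy * math.sin(heading_rad)
    v = -dx * math.sin(heading_rad) + dy * math.cos(heading_rad)
    return (u / semi_major) ** 2 + (v / semi_minor) ** 2 <= 1.0

# Vehicle estimated at the origin, heading due east; 30 m x 10 m ellipse
estimate, heading = (0.0, 0.0), 0.0
candidates = {"road A": (20.0, 2.0), "road B": (5.0, 15.0)}
matches = [name for name, pt in candidates.items()
           if in_error_ellipse(pt, estimate, 30.0, 10.0, heading)]
```

With a single road inside the region the snap is immediate; with several, the connectivity checks described above break the tie using earlier vehicle positions.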

## **2.3 Fuzzy logic map-matching**

Fuzzy logic is an effective way to deal with tasks that involve qualitative terms and concepts, vagueness, and human intervention. Expert knowledge and experience employed by a fuzzy-logic-based map-matching algorithm are represented as a set of rules to determine the vehicle location (e.g., if the difference between the orientation of the roadway segment and the heading of the vehicle is small, then the resemblance between the vehicle travel path and the candidate route is high).
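A rule of that form can be sketched with triangular membership functions and the usual minimum operator for fuzzy AND. The 45-degree and 50-metre breakpoints below are illustrative choices, not values from the cited algorithms.

```python
def small_heading_difference(delta_deg):
    """Membership of 'the heading difference is small': 1 at 0 degrees,
    fading linearly to 0 at 45 degrees."""
    d = abs(delta_deg) % 360.0
    if d > 180.0:               # wrap-around: 350 degrees apart is really 10
        d = 360.0 - d
    return max(0.0, 1.0 - d / 45.0)

def route_resemblance(heading_diff_deg, distance_m, max_dist_m=50.0):
    """Fuzzy AND (minimum) of two antecedents: small heading difference
    and small perpendicular distance to the candidate road."""
    near = max(0.0, 1.0 - distance_m / max_dist_m)
    return min(small_heading_difference(heading_diff_deg), near)
```

Evaluating every candidate road this way and keeping the one with the highest score corresponds to the defuzzified "best match" decision.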

S. Kim and J. H. Kim (2001) propose an adaptive fuzzy-network-based C-measure algorithm that identifies the roadway on which a vehicle is traveling by comparing C-measures associated with each candidate roadway. These measures are membership functions that represent the certainty of the existence of a vehicle on a specific roadway. After the roadway is identified, the algorithm determines the vehicle position on the roadway by orthogonal projection. The algorithm requires the distance between the vehicle's GPS coordinates and its projected position on the roadway to be small. Furthermore, the shape of the roadway must be similar to the trajectory of the vehicle.

Jagadeesh et al. (2004) developed a map-matching algorithm based on inference with a simple fuzzy rule set. This algorithm evaluates the likelihood that each candidate road is the actual traveled road. Three fuzzy rules are employed for this purpose: heading comparison, road resemblance, and verification of off-road vehicles. Test results with simulated data indicate that the algorithm is capable of achieving high accuracy.

Quddus et al. (2006) describe a map-matching algorithm based on fuzzy logic theory. The proposed algorithm employs an integrated navigation system and digital map data to identify the correct link and determine the vehicle location on the selected link. Although the algorithm was tested successfully in different road networks, the authors consider that future evaluation of the algorithm is required under urban conditions.



Yet another map-matching algorithm based on fuzzy theory is proposed by Guo and Luo (2009). First, the algorithm compares the similarity degree between the trajectory curve of the road and all candidate roads to identify the road on which a vehicle is traveling. Subsequently, fuzzy preference relations are adopted to perform a multi-criteria decision and a look-ahead technique is employed to improve the matching accuracy. The algorithm requires testing and analysis with GPS data in addition to cell phone positions.

## **2.4 Kalman filter approach**

There has been abundant research on the application of Kalman filters in combination with GPS and dead-reckoning signals to solve spatial mismatches. This integrated technology improves positioning accuracy by estimating the white noise and error in the GPS solution and then correcting the vehicle's position (Jo et al., 1996; W. Kim et al., 2000; Zhao et al., 2003). For example, Quddus et al. (2003) present a general map-matching algorithm that integrates GPS and dead-reckoning sensor data (position, velocity, and time) through an extended Kalman filter and uses them as input to improve the performance of the algorithm. The physical location of the vehicle on a roadway link is determined empirically from the weighted average of two state determinations of the vehicle position, based on topological information and external sensors.
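As a minimal, self-contained illustration of the filtering idea (not the extended Kalman filter of the cited work), the sketch below smooths noisy one-dimensional position fixes with a constant-velocity Kalman filter. All noise variances are illustrative.

```python
import numpy as np

def kalman_smooth(positions, dt=1.0, meas_var=25.0, accel_var=1.0):
    """Constant-velocity Kalman filter over scalar position fixes.

    State x = [position, velocity]; only position is measured.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])             # state transition
    H = np.array([[1.0, 0.0]])                        # measurement model
    Q = accel_var * np.array([[dt**4 / 4, dt**3 / 2],
                              [dt**3 / 2, dt**2]])    # process noise
    R = np.array([[meas_var]])                        # measurement noise
    x = np.array([[positions[0]], [0.0]])
    P = np.eye(2) * 100.0                             # large initial uncertainty
    out = []
    for z in positions:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update
        y = np.array([[z]]) - H @ x                   # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return out
```

Fed with fixes from a stationary or uniformly moving receiver, the filtered positions settle onto the true track; a map-matching algorithm would then snap these smoothed positions rather than the raw fixes.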

Yang et al. (2003) present an improved map-matching algorithm that employs Kalman filtering to filter unreasonable GPS data and the Dempster-Shafer (D-S) theory to correctly snap GPS vehicle coordinates to the digital roadway map. The D-S theory allows explicit representation of ignorance and combination of evidence and operates with a smaller set of uncertainties. Although the authors report satisfying results, they suggest additional research to verify the accurate performance of the algorithm.

Nassreddine et al. (2009) describe a map-matching method based on D-S theory and interval analysis to compute accurate vehicle positions from an initial estimated position on a digital road network. The authors state that the proposed technique proves to be successful at junctions and on parallel roads; however, real-world data need to be examined in addition to simulated data.

## **2.5 Particle filtering and map-matching**

Particle filtering, based on a stochastic process, is another approach to the map-matching problem. Particle filters are recursive implementations of Monte Carlo-based statistical signal processing (Crisan & Doucet, 2002). Gustafsson et al. (2002) evaluate in real time a map-matching particle filter used to match a vehicle's horizontal driven path to a digital roadway map. They conclude that the particle filter converged relatively rapidly, after a few iterations of the algorithm. The challenge of this map-matching technique is to find the nonlinear relations and non-Gaussian sensor models that provide the most information about the vehicle's position. The authors assert that research is still needed to seek a reliable way to detect divergence and to restart the filter.
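A toy version of the recursion (a generic bootstrap filter, not the cited authors' implementation) can be sketched for position along a single road, with odometry driving the prediction step and a GPS fix driving the weights. All noise values and the scenario are illustrative.

```python
import math
import random

def particle_filter_step(particles, weights, odo_step, gps_meas,
                         motion_noise=2.0, gps_sigma=5.0):
    """One predict-weight-resample cycle for 1-D along-road position."""
    # Predict: propagate each particle with noisy odometry
    particles = [p + odo_step + random.gauss(0.0, motion_noise)
                 for p in particles]
    # Weight: Gaussian likelihood of the GPS fix given each particle
    weights = [w * math.exp(-0.5 * ((p - gps_meas) / gps_sigma) ** 2)
               for p, w in zip(particles, weights)]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample to concentrate particles in the likely region
    particles = random.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles)

random.seed(0)
n = 500
particles = [random.uniform(0.0, 100.0) for _ in range(n)]  # no prior knowledge
weights = [1.0 / n] * n
for step in range(1, 6):            # vehicle starts near 50 m, moves 10 m/step
    truth = 50.0 + 10.0 * step
    particles, weights = particle_filter_step(particles, weights, 10.0, truth)
estimate = sum(p * w for p, w in zip(particles, weights))   # near 100 m
```

After a few cycles the initially uniform particle cloud collapses around the true along-road position, mirroring the rapid convergence reported above.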

Toledo-Moreo et al. (2009) present a multiple-hypothesis particle-filter-based algorithm to solve the map-matching problem with integrity provision at the lane level. The proposed system joins measurements from a GPS receiver, an odometer, and a gyroscope along with road information in digital maps. A set of six experiments was conducted with real data over a period of 30 minutes, proving the feasibility of the approach for lane-level applications. The authors mention that outlier removal, multipath effect mitigation, and additional method validation are tasks that need to be addressed in the future.

## **2.6 Personal navigation assistants and map-matching**


White et al. (2000) discuss solutions to the map-matching problem for personal navigation assistants (PNA). Four different map-matching algorithms were implemented and tested: 1) use of minimum distance (point-to-curve), 2) comparison of heading information with arc and trajectory, 3) use of topology to select roads that are reachable from the current road, and 4) construction of piece-wise linear curves from different paths, followed by comparison of them to centerline curves using points (curve-to-curve matching). The authors conclude that these algorithms performed better when the distance between the GPS point and the closest road is small and that correct matches tend to occur at greater speeds on straight roadways.

Freitas et al. (2009) explain the necessity of map-matching algorithms to correctly locate GPS positions on a map when using PNAs, particularly for dynamic route guidance systems. The authors describe an approach to update digital maps through the use of GPS points, in order to identify map incongruences. The proposed system was designed as a prototype and lacks extensive testing; however, it correctly processes and implements methods for map-matching and for detecting discrepancies between the real network and digital maps.

## **2.7 Topological network-based algorithms**

Taylor et al. (2001) describe an algorithm called "Road Reduction Filter (RRF)" that uses differential corrections and height aids. RRF identifies all possible roadway candidates while systematically removing incorrect ones. RRF is improved by using shortest path network analysis and drive restriction information. A shortest path network routine calculates the distance through the roadway network from a vehicle's previous position to each potential present position offered by the algorithm. The drive restriction information routine selects roadways using direction and access information.
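The shortest-path routine such topological methods depend on can be sketched with a textbook Dijkstra search over a directed graph, where directed edges naturally encode the one-way and access restrictions mentioned above. The toy network below is hypothetical.

```python
import heapq

def shortest_path_length(graph, start, goal):
    """Dijkstra over a directed graph {node: [(neighbor, length_m), ...]}.

    Directed edges model one-way streets and turn/access restrictions:
    a road you may not drive simply has no edge in that direction.
    """
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        for nbr, length in graph.get(node, []):
            nd = d + length
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return float("inf")                   # goal unreachable

# Toy network: every link is one-way in the direction listed
roads = {"A": [("B", 100.0)],
         "B": [("C", 50.0)],
         "C": [("D", 80.0)],
         "D": []}
```

Running it from A to D follows the one-way chain (230 m in total), while D to A is correctly reported as unreachable, which is how direction restrictions prune candidate positions.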

Greenfeld (2002) presents a map-matching procedure that consists of two algorithms. One algorithm assesses similarity between characteristics of the roadway network and the positioning pattern of the vehicle. The second algorithm performs topological analysis and applies a weighting scheme to match each GPS data point to the roadway network. The highest weighted score determines the most likely candidate for a correct match. The author indicates that further research is needed to determine the correct position of the vehicle along a roadway segment and to verify the accuracy and performance of the algorithms.

Doherty et al. (2000) studied an algorithm that automatically matches GPS data points to roadway segments along a network. First, the algorithm joins GPS points to create a linear object forming the vehicle's track. Subsequently, it creates a buffer zone around the linear object, and then identifies all the roadways that are totally included within the buffer to select the correct one.

Marchal et al. (2005) present an innovative map-matching algorithm that relies on GPS measurements and network topology. The algorithm consists of maintaining a set of candidate paths as GPS data are processed and computing matching scores for each path. The path with the best score represents the correct vehicle route. According to the authors, further research is needed to improve the robustness of the algorithm.

A Decision-Rule Topological Map-Matching Algorithm with Multiple Spatial Data 221

Yet another topological map-matching algorithm is proposed by Wang and Yang (2009). The algorithm achieves high accuracy and resolves spatial ambiguities in complex roadway networks, specifically near intersections and parallel roads. Nevertheless, the topological algorithm was tested on only four road intersections with a 2-second sampling interval of GPS measurements.

Velaga et al. (2009) describe an enhanced weight-based topological map-matching algorithm for ITS. The algorithm was tested with real data under different operational environments. However, the optimal algorithmic weights for different factors such as heading, proximity, connectivity, and turn restrictions still need to be estimated with a range of real-world field data from different road environments.

Blazquez and Vonderohe (2005) propose a topological map-matching algorithm that resolves spatial ambiguities that occur with intelligent winter maintenance vehicle data collected in Wisconsin. The algorithm computes shortest paths between snapped GPS data points using network topology and turn restrictions. If similarity exists between calculated and recorded vehicle speed values, then the path is feasible and the snapped GPS locations are correct. If the path is not viable, then GPS data points are snapped to alternative roadway centerlines, shortest paths are recalculated, and speeds are again compared. The authors studied this problem further and published the effects of controlling parameters on the performance of the map-matching algorithm (Blazquez & Vonderohe, 2009). The current chapter discusses and describes in more detail the performance analysis of this map-matching algorithm.

### **2.8 Other map-matching algorithms**

According to Zhao (1997), many pattern recognition methods (e.g., neural network) could be used for map-matching. Neural networks are dynamic systems that consist of many interconnected layered nodes (neurons). These networks need to be trained to arrange the layers and interconnections to model real-world applications. Other pattern recognition methods can be used to work with positioning sensors such as GPS. The underlying principle of these methods is that the digital map is used to filter out vehicle sensor errors and to determine the best position.

Schlingelhof et al. (2008) present a two-dimensional map-matching algorithm based on a lane-level model. The output of this algorithm is the road segment identification number, the relative vehicle position along this segment, and the relative transversal vehicle position with respect to one of the border lines. The road selection algorithm consists of extracting candidate segments, computing positioning solution residuals, and selecting the most likely segment. The authors state that the first results obtained with real measurements are encouraging. However, these should be generalized to enhanced maps.

Li et al. (2005) present a novel map-matching method that uses least-squares position estimation, digital mapping, and height data to augment the vehicle position calculation. Experimental results indicate that combining the algorithm with height aiding improves the vehicle position accuracy when the number of visible satellites is reduced.

## **3. Decision-rule topological map-matching algorithm**

### **3.1 Description**


The decision-rule topological map-matching algorithm determines the correct roadway centerline for vehicle travel by obtaining feasible shortest paths between snapped GPS data points in post-processing mode. The algorithm selects all roadways within a buffer around a GPS data point and snaps the point to the closest roadway by obtaining the minimum perpendicular distance from the data point to each roadway. Figure 3 illustrates that GPS data points 1 and 2 (shown as circles) are snapped to ramp 2 because it is the closest roadway contained with the buffers around the points. Subsequently, the shortest path (displayed with a bold arrow) is obtained between the two snapped GPS data points S1 and S2 (shown as squares). Only paths that follow allowable traffic directions and allowable turns are employed. The travel speed between these two snapped GPS points is determined by the length of the shortest path and the difference in time stamps for the points. The computed speed is compared to the average of the speeds at the data points collected by the vehicle while traveling. If the computed speed is within a specified tolerance of the average recorded speed, then the obtained shortest path is viable and the snapped locations for points 1 and 2 are accepted as correct.
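The snapping and speed-feasibility steps described above can be sketched in code. This is a minimal illustration under simplifying assumptions, not the authors' implementation: roadways are reduced to single straight segments in a plane, and all function names are hypothetical.

```python
import math

def snap_to_segment(p, a, b):
    """Project point p onto segment a-b and return (snapped_point, distance)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    # Clamp the projection parameter so the snap stays on the segment
    t = 0.0 if seg_len2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    sx, sy = ax + t * dx, ay + t * dy
    return (sx, sy), math.hypot(px - sx, py - sy)

def snap_to_closest_road(p, roads, buffer_ft):
    """Snap p to the nearest centerline whose perpendicular distance lies
    within the buffer; return None when no roadway falls inside the buffer."""
    best = None
    for road_id, (a, b) in roads.items():
        snapped, d = snap_to_segment(p, a, b)
        if d <= buffer_ft and (best is None or d < best[2]):
            best = (road_id, snapped, d)
    return best

def path_is_feasible(path_len_ft, dt_s, avg_recorded_mph, tol_mph=25.0):
    """Accept a shortest path when the speed it implies over the time-stamp
    difference is within the tolerance of the average recorded speed."""
    calc_mph = (path_len_ft / dt_s) * 3600.0 / 5280.0
    return abs(calc_mph - avg_recorded_mph) <= tol_mph
```

For instance, a 5,125.9-ft shortest path between points time-stamped 5 s apart implies roughly 699 mi/h, so `path_is_feasible(5125.9, 5, 33.0)` rejects it.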

Fig. 3. Example of Snapping to the Correct Roadway for Two GPS Data Points Using the Map-Matching Algorithm

The map-matching algorithm advances to GPS data point 3, snaps this point to the closest roadway centerline within its buffer, and calculates the shortest path between snapped point S2 and the newly-snapped GPS data point S3. If the path between S2 and S3 is not feasible because the speed comparison yields a large disparity, then the algorithm determines if feasible routes exist between the preceding and subsequent points bounding the GPS data points of concern, as illustrated in the example of Figure 4. This example shows that there is no feasible path between snapped points S2 and S3 when network topology and turn restrictions are employed. Therefore, the map-matching algorithm looks ahead by snapping point 4 to the nearest roadway centerline within its buffer, and determines if the shortest path between snapped points S3 and S4 is possible. Since the tested path is not feasible, the algorithm snaps point 3 to the next nearest roadway centerline within its buffer, obtaining point alt3, shown as a triangle.

Fig. 4. Example of an Alternative Roadway Centerline Snapping

Subsequently, the upper part of the algorithm (shown in Figure 5) for alternative roadway centerline search and feasibility path check is initiated. This algorithm verifies if a path is feasible between the alternative snapped location for point 3 (where Ki = 3), and former and succeeding neighboring snapped points 2 and 4 (where Ki-1 = 2 and Kj = 4). If the shortest paths between these three points are not feasible because the speed comparison fails, then the algorithm searches for other roadway centerlines within the buffer around point 3 that have not already been used in a feasibility path check. When finding a new candidate, point 3 is then snapped to it and the feasibility of shortest paths between snapped points 2, 3, and 4 (Ki-1, Ki, Kj) is checked again. If these paths are feasible, then the spatial ambiguity is resolved, and the algorithm terminates. If no alternative roadway centerline exists within the buffer for GPS data point 3, then the algorithm continues by snapping data point 4 to alternative roadway candidates contained within its buffer, and the upper part of the algorithm is executed again. If no other roadway centerlines exist within the buffer of GPS data point 4 or no feasible paths are obtained, then the lower part of the algorithm is executed and feasible paths between preceding and subsequent data points are examined. If none of the consecutive data points aid in solving the spatial mismatch between the snapped points for 2 and 3, then it is likely that no roadway centerlines within their buffers yield a feasible path and larger buffers and/or more consecutive data points need to be utilized by the algorithm. Once a feasible path is obtained, the intermediate points not employed during the map-matching process are snapped to the roadway along that feasible path.
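The alternative-centerline search just described can be condensed into a small backtracking loop. This is a sketch under simplifying assumptions: the candidate lists are precomputed and ordered by perpendicular distance, `feasible` wraps the shortest-path speed comparison, and all names are hypothetical.

```python
def resolve_ambiguity(snap_prev, candidates_i, candidates_j, feasible):
    """Search alternative snaps for point i, then for the subsequent point j,
    until the paths K(i-1)->K(i) and K(i)->K(j) both pass the speed check.
    Returns the accepted pair (snap_i, snap_j), or None when larger buffers
    or more consecutive data points are needed."""
    for snap_j in candidates_j:        # outer loop: alternatives for point j
        for snap_i in candidates_i:    # inner loop: alternatives for point i
            if feasible(snap_prev, snap_i) and feasible(snap_i, snap_j):
                return snap_i, snap_j
    return None
```

A sketch only; the chapter's flow diagram (Figure 5) additionally falls back to feasibility checks between preceding and subsequent data points when both candidate lists are exhausted.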

Fig. 5. Flow Diagram with the Step Sequence of the Map-Matching Algorithm

### **3.2 Example of an implementation of the algorithm**

The example illustrated in Figure 6 includes a set of Differential GPS (DGPS) data points collected every five seconds by a winter maintenance vehicle during the 2002-2003 winter season in Columbia County, Wisconsin. The spatial mismatch, occurring at the diverging roadways in this figure, is resolved by implementing the decision-rule map-matching algorithm. Points 0, 2, 3, and 4 are snapped to the nearest roadway within their 35-foot buffers, resulting in points S0, S2, S3, and S4 (shown as rectangles). Points S0, S3, and S4 are on the Interstate 39 centerline, while point S2 is situated on the ramp centerline. Note that no roadways are contained within the buffer for GPS data point 1; thus, this point is not used in determining the feasible path.

The shortest path between points S0 and S2 is computed using network topology and allowable turns. Consequently, the speed comparison shown in Table 1 is performed to determine if this path is feasible. In this case, the obtained path is feasible since the difference between the average calculated and recorded speeds (26.8 and 31.5 mi/h, respectively) is within tolerance (25 mi/h). Therefore, the current snapped positions for points 0 and 2 are initially assumed to be correct. The main algorithm continues by finding the shortest path between the next pair of snapped points, S2 and S3. This path is not feasible when using network topology because if the vehicle was located at S2, it would have to exit the ramp and travel approximately 5,125.9 feet in 5 seconds at an average speed of 699 mi/h to reach snapped point S3. Hence, either point S2 or S3 or both were snapped to an incorrect roadway centerline. The map-matching algorithm now obtains the shortest path between points S3 and S4 and determines that the difference between calculated and average recorded speeds with values of 29 mi/h and 35 mi/h, respectively, is within tolerance. Therefore, an alternative roadway centerline is sought within the buffer around point 2. Interstate 39 is found to be the next nearest roadway, resulting in alternative point alt2, shown as a triangle in Figure 6. Consequently, feasibility is checked for paths between the preceding points S0 and alt2, and between alt2 and its successor, snapped point S3. As indicated in Table 1, both computed shortest paths are feasible. The calculated speeds along these paths are within 25 mi/h of their respective average recorded speeds for the vehicle. Therefore, the spatial ambiguity at the diverging roadway is resolved and the correct roadway for point 2 is Interstate 39. Data point S1 is then obtained by snapping point 1 to the Interstate 39 centerline.

Fig. 6. Example of Map-Matching Algorithm at Diverging Roadways

| Path | Shortest Path Distance (ft) | Calculated Speed (mi/h) | Average Recorded Speed (mi/h) | Is Path Feasible? |
|---|---|---|---|---|
| S0 → S2 | 392.6 | 26.8 | 31.5 | YES |
| S2 → S3 | 5,125.9 | 699 | 33 | NO |
| S3 → S4 | 213 | 29 | 35 | YES |
| S0 → alt2 | 392.8 | 26.8 | 31.5 | YES |
| alt2 → S3 | 215.7 | 29.4 | 33 | YES |

Table 1. Speed Comparison for Determining Feasibility of Shortest Paths
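The feasibility column of Table 1 reduces to a unit conversion applied to the shortest-path distance and the time-stamp difference; a worked check (values taken from Table 1, helper name hypothetical):

```python
def implied_speed_mph(dist_ft, dt_s):
    """Speed (mi/h) implied by covering dist_ft between time stamps dt_s apart."""
    return (dist_ft / dt_s) * 3600.0 / 5280.0

# S2 -> S3 in Table 1: 5,125.9 ft with a 5-second time-stamp difference
print(round(implied_speed_mph(5125.9, 5)))  # 699 mi/h, far beyond the 25 mi/h
                                            # tolerance around the recorded 33 mi/h
```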

## **4. Performance analysis of the decision-rule map-matching algorithm**

Success in solving spatial ambiguities depends on the values assigned to each variable of the map-matching algorithm. The analysis in this chapter examines the performance of the map-matching algorithm as values of the following parameters vary: 1) buffer size, 2) speed range, 3) number of consecutive data points, 4) temporal resolution, and 5) DGPS positional error.

### **4.1 Spatial data description**


The data employed in this study were collected by winter maintenance vehicles in Columbia and Portage Counties, Wisconsin, and Polk County, Iowa. These counties have roadway centerline maps of different accuracy, with 1:2,400, 1:12,000, and 1:100,000 nominal scales, respectively, and employ different AVL/DGPS systems for data collection. Selected data sets with sampling intervals of 2 and 10 seconds were collected for different storm events and vehicle operators driving through various routes over the 2000-2001, 2001-2002, and 2002-2003 winter seasons. These routes include federal, state, and interstate highways, and local roads. Figures 7, 8, and 9 display examples of data collected in Columbia, Portage, and Polk counties every 2, 10, and 10 seconds, respectively. Notice that none of the counties employed an integrated dead-reckoning system, so heading information was not available during the data collection process.

Fig. 7. DGPS Data Points Collected in Portage County Every 10 seconds


Fig. 8. DGPS Data Points Collected in Columbia County Every 2 seconds

Fig. 9. DGPS Data Points Collected in Polk County Every 10 seconds

### **4.2 DGPS data point classification**

This section identifies different cases (i.e., false negatives, false positives, no solution, incorrect and correct snap, and solved spatial ambiguities) obtained from comparing snapping results to the true roadway centerline on which a vehicle is traveling. The true vehicle path was obtained by performing a visual examination of the collected data. Data points are classified in these cases before and after applying the map-matching algorithm.

#### **4.2.1 False negatives and false positives**

False Negatives (FN) occur when data points fail to snap to any roadway centerline when they should have snapped to one. False Positives (FP) are data points that snap to some roadway centerline when they should not have snapped to any. Figure 10 shows an example of three successive GPS data points (1, 2, and 3) considered as FN. They should have snapped to the Interstate 39 eastbound direction; however, their buffers with radius r are too small to include any roadway centerline.

Fig. 10. Example of Three Consecutive GPS Data Points Considered False Negatives

#### **4.2.2 Solved / not solved cases**

If roadway centerlines exist within the buffer of a data point, then a correct snap occurs when this point snaps along the true route of the vehicle. Conversely, an incorrect snap is obtained when a data point snaps to a roadway that is not on the true route of the vehicle. Correct and incorrect snaps are computed before and after applying the map-matching algorithm.

Figure 11 presents the cases of snapped and not snapped data points before and after applying the map-matching algorithm. The group of data points that does not snap to any roadway contains either FN or points that have no solution. Data points that have roadway centerlines within their buffers are either snapped correctly or incorrectly, or are FP. A data point that snaps incorrectly before applying the algorithm and snaps correctly afterwards is regarded as a solved case. If a data point is snapped incorrectly both before and after applying the algorithm, then the spatial mismatch is not solved. If this occurs, then some neighboring data points may be left incorrectly snapped. Note that FN, FP, and no solution cases are not included in the solved and not solved case analysis.

Fig. 11. Cases for Data Points Snapped and Not Snapped Before and After Applying the Algorithm

Fig. 12. FN Percentages Before and After Applying Algorithm by Buffer Size for Columbia, Portage, and Polk Counties (x-axis: Buffer Size (ft); y-axis: Percentage of DGPS Data Points; series: Before/After for each county)

Fig. 13. Percentages of Solved Cases After Applying Algorithm by Buffer Size for Columbia, Portage, and Polk Counties (x-axis: Buffer Size (ft); y-axis: Percentage of DGPS Data Points)
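One possible reading of the before/after cases above as a classification rule, for illustration only; this is not the authors' code, and the treatment of the "no solution" case in particular is an assumption:

```python
def classify_point(before, after, true_road):
    """Label a data point from its snap results before/after the algorithm.
    `before`/`after` are roadway ids or None (no centerline in the buffer);
    `true_road` is None when no mapped roadway matches the true position."""
    if after is None:
        # Not snapped: FN if it should have snapped; otherwise no solution
        return "false negative" if true_road is not None else "no solution"
    if true_road is None:
        return "false positive"          # snapped when it should not have
    if after == true_road:
        # Incorrect before but correct after counts as a solved case
        return "solved" if before != true_road else "correct snap"
    return "not solved"
```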

In the following section of this chapter, the aim is to minimize FN data points and maximize solved spatial mismatches after applying the algorithm. Although FP and no solution cases occur due to spatial database incompleteness, they amount to less than 0.5% of the total number of data points examined in this study; therefore, these two cases were not taken into account in the analysis.

#### **4.3 Analysis of the impact of variables on the performance of the map-matching algorithm**

This section examines each algorithmic variable independently to determine its effect on the performance of the map-matching algorithm. These variables are classified into two groups. One group consists of parameters controlled by the user (i.e., buffer size, speed range, number of consecutive data points) and the other group comprises parameters controlled through the data (i.e., temporal resolution and DGPS error).

## **4.3.1 Buffer size**

The appropriate buffer size employed during the snapping process when solving spatial ambiguities depends on the quality and geometry of the spatial data. This proximity parameter, used to select roadway centerlines around data points, is critical for solving the map-matching problem and, therefore, for the success of the algorithm. Buffers that are overly small might not include any roadways, while extremely large buffers make the algorithm less efficient, since it must examine more roadways, many of which will not be correct.
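For concreteness, the snapping step can be sketched as a nearest-centerline search constrained by the buffer. The geometry here is simplified to single segments on a planar (x, y) grid, and all names are illustrative; a real implementation would use polylines and a spatial index:

```python
import math

def _dist_point_segment(p, a, b):
    """Perpendicular distance from point p to segment ab (all (x, y) tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:
        return math.hypot(px - ax, py - ay)
    # Projection parameter of p onto the line through a and b, clamped
    # to the segment so we measure distance to the segment itself.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    qx, qy = ax + t * dx, ay + t * dy
    return math.hypot(px - qx, py - qy)

def snap(point, centerlines, buffer_size):
    """Return the name of the nearest centerline within `buffer_size` of
    `point`, or None (a false-negative candidate) if the buffer contains
    no centerline.  `centerlines` is a list of (name, (a, b)) segments."""
    best, best_d = None, buffer_size
    for name, (a, b) in centerlines:
        d = _dist_point_segment(point, a, b)
        if d <= best_d:
            best, best_d = name, d
    return best
```

A point whose buffer contains no centerline returns `None`, which is exactly the FN case the buffer-size analysis counts.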

Roadways are typically represented by centerlines that do not account for lane widths. Therefore, data points will almost always appear offset some distance from roadway centerlines in addition to being affected by errors in the DGPS measurements and digital roadway maps (Wolf & Ghilani, 1997). Hence, the buffer size parameter was tested at 10-ft increments from 20 ft to 60 ft for data collected in Columbia and Portage Counties, and at 20-ft increments from 20 to 100 ft for data collected in Polk County. The latter is due to the smaller scale of the Polk County roadway centerline map. These buffer size values were predetermined through the computation of average distance percentages between the data points and roadway centerlines. As different buffer sizes were analyzed and tested against the map-matching algorithm, the speed range tolerance and number of consecutive data points were maintained constant with values 25 mi/h and 5, respectively.

Figure 12 shows a chart with the average percentages of FN before and after applying the algorithm as the buffer size varies for Columbia, Portage, and Polk counties. This figure indicates that lower FN percentages are obtained after applying the algorithm for all three counties. Portage and Polk counties present the largest decrease in FN percentages, with an average difference of 20% before and after executing the algorithm. Overall, average percentages of FN data points diminish as the buffer size increases, since more data points are snapped to roadway centerlines.



Figure 13 presents the percentage of solved spatial ambiguities after applying the map-matching algorithm for Columbia, Portage, and Polk counties. This chart indicates that over 90% of incorrectly snapped data points collected in Columbia County were solved by the algorithm when employing a 30-foot buffer size, whereas solved cases reached their maximum values (68% and 64%) for Portage and Polk counties with 50- and 60-foot buffers, respectively. As mentioned earlier, Polk County data was tested for buffer sizes every 20 feet; thus, there is no data for buffer sizes equal to 30 and 50 feet.

#### **4.3.2 Speed**

The map-matching algorithm determines the correct roadway centerline on which a vehicle is traveling by computing feasible shortest paths between snapped data points. This feasibility is sensitive to the allowable range utilized when comparing computed and recorded speeds. The analysis of this variable examines the effect that it has on the performance of the map-matching algorithm.

The average recorded speed (v) is computed using the recorded speeds (v1 and v2), as shown in Equation 1. Equation 2 presents the computed speed calculation (s) given the shortest distance traveled (D) and timestamps (t1 and t2) between a pair of snapped data points. Subsequently, the algorithm accepts a tested path as feasible if the average recorded speed is within the equally distributed speed range shown in Equation 3.

$$v = \frac{v_1 + v_2}{2} \tag{1}$$

$$s = \frac{D}{t_2 - t_1} \tag{2}$$

$$v \in \left[\, s - \frac{\text{SpeedRange}}{2},\; s + \frac{\text{SpeedRange}}{2} \,\right] \tag{3}$$
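Equations 1–3 amount to a simple feasibility test, which might be sketched as follows (function and parameter names are our own):

```python
def path_is_feasible(v1, v2, distance, t1, t2, speed_range):
    """Apply Equations 1-3: a shortest path between two snapped data
    points is feasible when the average recorded speed lies within the
    allowed band around the computed speed.  Units must be consistent
    (e.g., distance in miles, time in hours, speeds in mi/h)."""
    v = (v1 + v2) / 2.0                       # Equation 1: average recorded speed
    s = distance / (t2 - t1)                  # Equation 2: computed speed
    return abs(v - s) <= speed_range / 2.0    # Equation 3: speed-range test
```

For example, recorded speeds of 30 and 40 mi/h are consistent with a 35-mile shortest path traversed in one hour, but not with a 60-mile path, for a 25 mi/h speed range.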

FN curves were computed for various buffer sizes and different speed range tolerances from 5 to 35 mi/h with increments of 5 mi/h for the three counties. Analysis results for this variable show that feasible paths are rejected when small speed ranges are employed leaving FN data points not snapped to any roadway centerline. On the contrary, as speed range increases, FN percentages diminish since feasible paths are found during the speed comparison process. Figure 14 shows FN curves for Columbia County with data collected every 2 seconds. These curves are approximately parallel as the speed range varies, and stabilize for speed ranges greater than 15 mi/h. Speed ranges equal to or greater than 25 mi/h are needed to minimize FN percentages in Portage and Polk counties. Further speed range increase does not improve the results because all feasible paths are accepted. In general, FN curves are steeper for small buffer sizes, and approach near-zero slope for buffer sizes equal to or greater than 40 feet.

Analysis results for this variable indicate that the percentage of solved cases increases as the speed range increases. The percentage of solved cases has the highest value of approximately 90% when the algorithm employs speed ranges equal to or greater than 20 mi/h and a 30-foot buffer for Columbia County data. Conversely, there is no considerable increase in the percentage of solved cases, which remains at 68% for Portage County data when speed range values equal to or greater than 25 mi/h are employed. The percentage of solved cases for Polk County remained constant at 50% for speed ranges equal to or greater than 15 mi/h, independent of buffer size. Thus, the map-matching algorithm is sensitive to speed range values, particularly when small speed ranges are employed, since feasible paths are rejected.

Fig. 14. FN Percentages After Applying Algorithm for Different Speed Ranges by Buffer Size for Columbia County

## **4.3.3 Number of consecutive GPS data points**

If no feasible paths are obtained between a pair of snapped data points, then the algorithm tests for viable routes between preceding and subsequent data points, as described in Figure 5. If a small buffer size is utilized, several successive data points do not snap to any roadway centerline, generating FN data points. Thus, the number of consecutive data points used by the algorithm needs to be incremented to consider adjacent data points that are correctly snapped and minimize FN percentages.
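One plausible reading of this fallback is a widening search over neighboring snapped points. The sketch below assumes a caller-supplied `feasible(i, j)` predicate and a hypothetical `max_consecutive` limit mirroring the consecutive-points parameter; it is illustrative only, not the authors' implementation:

```python
def find_feasible_pair(i, j, feasible, n_points, max_consecutive=5):
    """If no feasible path links snapped points i and j (k = 0), widen
    the pair to preceding and subsequent points, up to `max_consecutive`
    steps, and return the first feasible index pair, or None."""
    for k in range(max_consecutive + 1):
        a, b = i - k, j + k
        if a < 0 or b >= n_points:
            break  # ran off the ends of the trace
        if feasible(a, b):
            return a, b
    return None
```

A larger `max_consecutive` examines more combinations of snapped points, which is why increasing this parameter tends to solve more spatial ambiguities.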

Although the map-matching algorithm may employ any number of consecutive data points, its performance was analyzed with between three and eight consecutive data points. A previous test determined that this range is sufficient for solving spatial ambiguities with the spatial and temporal data employed in this study.

Similar to the FN curve behavior due to speed range variations, FN curves for different numbers of consecutive data points are parallel for the three counties and converge to constant values as the buffer size increases. Figure 15 shows the percentage of FN data points as the number of consecutive data points varies by county with a 40-foot buffer size. No significant improvements are identified in the percentage of FN for the three counties as the number of consecutive data points varies, except for Portage County data, which presents a decrease in the amount of FN when increasing the number of consecutive data points from three to four.

Fig. 15. Average Percentages of FN After Applying Algorithm by Number of Consecutive Data Points and County

The percentage of solved spatial mismatches increased as the number of consecutive data points increased. Eight consecutive data points solve almost 100% of initial incorrect snaps for Columbia County data when employing a 40-foot buffer. The largest percentage of solved mismatches (over 70%) after applying the algorithm occurs with a 50-foot buffer for Portage County, while the percentage of solved cases in Polk County remained constant at 50% as the buffer size and number of consecutive data points increased. The results of this analysis show that increasing the number of consecutive data points solves a larger number of spatial ambiguities. By increasing this number, the algorithm resolves ambiguities that arise when alternative roadway centerlines are equally viable.

#### **4.3.4 Temporal resolution**

The outcome of the map-matching technique is affected not only by spatial inaccuracies but also by the collection frequency of the data points. As temporal resolution increases, the tracking of the vehicle becomes more accurate. On the other hand, the sampling interval impacts the sizes of the data sets: processing large data sets takes significant CPU time and increases storage requirements. Hence, there is a tradeoff between the quality of the collected data and the cost of decreasing the sampling interval.

Data sets collected in Columbia County with an original 2-second time interval were processed to generate data files with lower temporal resolutions varying from 2 to 30 seconds in increments of 4 seconds. Similarly, data collected every 10 seconds in Portage and Polk counties was processed to create data files with temporal resolutions of 10, 20, and 30 seconds. The speed range and number of consecutive points remained constant at 25 mi/h and 5, respectively.
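Deriving the lower-resolution files is straightforward decimation. A minimal sketch, assuming the trace is an evenly sampled list of points:

```python
def downsample(points, original_dt, target_dt):
    """Thin a DGPS trace collected every `original_dt` seconds down to
    one point every `target_dt` seconds, as done to derive the 10-, 20-,
    and 30-second data sets from higher-frequency collections.  The
    target interval must be a multiple of the original interval."""
    if target_dt % original_dt != 0:
        raise ValueError("target interval must be a multiple of the original")
    step = target_dt // original_dt
    return points[::step]  # keep every `step`-th point, starting at the first
```

For instance, a 2-second trace thinned to a 10-second resolution keeps every fifth point.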


Fig. 16. FN Percentages Before and After Applying Algorithm for Different Temporal Resolutions by Buffer Size for Polk County

Fig. 17. Percentages of Solved Cases for Different Temporal Resolutions by Buffer Size for Portage County



Figure 16 illustrates FN curves before and after applying the algorithm for different temporal resolutions with data originally collected every 10 seconds in Polk County. The graph presents relatively parallel FN curves for all data collection frequencies. These curves show that as temporal resolution increases, the percentage of FN data points decreases. FN curves after applying the algorithm show lower percentages of FN data points compared to before executing the algorithm. FN curves for Columbia and Portage counties behave similarly with different sampling intervals. All county cases illustrate that a larger number of FN points occurs when using smaller buffers, independent of data collection frequency. Figure 17 shows the variation of solved cases as temporal resolution increases in Portage County. Percentages of solved spatial ambiguities increase as data is collected at higher frequencies, reaching the largest value, 68%, at a 50-foot buffer. This percentage decreases on average for Columbia County data from approximately 80% to 20% as sampling intervals increase from 5 to 30 seconds for all buffer sizes. The same behavior is apparent for solved case percentages in Polk County as data is collected more frequently.

#### **4.3.5 GPS error**

GPS measurements are affected by both systematic and random errors, whose combined magnitudes affect the accuracy of the positioning results. Systematic errors obey physical or mathematical laws, and can be computed and applied to measurements to eliminate their effects (Ghilani & Wolf, 2006). Random errors occur because of stochastic noise in the measurement process, producing different coordinates each time a measurement is made, even during short intervals. This type of error is assumed to be Gaussian, affecting both latitude and longitude (X, Y) coordinates. DGPS is a method that increases the accuracy of C/A code measurements by canceling some of the inherent systematic errors. Any potentially remaining systematic errors were not modeled in this study, and only the effects of random errors were examined.

Random errors were simulated by using a normal distribution random number generator (Box & Muller, 1958) for known means and different standard deviations. If U1 and U2 are a pair of independent uniformly distributed random numbers drawn from the rectangular density function on the interval (0, 1), then a pair of independent random numbers (X1 and X2) from a normal distribution with mean zero and standard deviation σ is generated using Equations 4 and 5.

$$X_1 = (-2 \log U_1)^{1/2} \cos(2\pi U_2) \tag{4}$$

$$X_2 = (-2 \log U_1)^{1/2} \sin(2\pi U_2) \tag{5}$$
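Equations 4 and 5 are the Box–Muller transform (with log the natural logarithm). A minimal sketch of how the perturbed coordinates might be generated follows; the `perturb` helper and the sigma scaling are our own additions:

```python
import math
import random

def box_muller(u1, u2, sigma=1.0):
    """Equations 4-5: transform two independent uniform(0, 1) samples
    into a pair of independent zero-mean normal samples, scaled to
    standard deviation `sigma`."""
    r = math.sqrt(-2.0 * math.log(u1))
    return (sigma * r * math.cos(2.0 * math.pi * u2),
            sigma * r * math.sin(2.0 * math.pi * u2))

def perturb(point, sigma):
    """Add simulated random DGPS error to an (x, y) coordinate pair;
    sigma of 2 or 5 metres matches the levels tested in the study."""
    # 1.0 - random() maps [0, 1) onto (0, 1], keeping log() well-defined.
    dx, dy = box_muller(1.0 - random.random(), random.random(), sigma)
    return point[0] + dx, point[1] + dy
```

For U1 = 0.5 and U2 = 0.25, Equation 4 gives X1 ≈ 0 (since cos(π/2) = 0) and Equation 5 gives X2 = √(2 ln 2).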

Experiments conducted by the Wisconsin Winter Maintenance Concept Vehicle project concluded that random DGPS errors were on the order of 2 to 5 meters, root-mean-square (Vonderohe et al., 2001). Therefore, a mean value of zero and standard deviations of ±2 and ±5 meters were employed in this analysis. Speed range and number of consecutive points values were held fixed as 2- and 5-meter standard deviation errors were introduced in the DGPS data points.

Percentages for FN and solved cases were computed to compare the performance of the algorithm for original and perturbed DGPS data points. Figure 18 presents variations in the percentage of FN data points for original and perturbed data by county for a 40-foot buffer 234 Global Navigation Satellite Systems – Signal, Theory and Applications

Figure 16 illustrates FN curves before and after applying the algorithm for different temporal resolutions with data originally collected every 10 seconds in Polk County. The graph presents relatively parallel FN curves for all data collection frequencies. These curves show that as temporal resolution increases, the percentages of FN data points decreases. FN curves after applying the algorithm show lower percentages of FN data points compared to before executing the algorithm. FN curves for Columbia and Portage counties behave similarly with different sampling intervals. All county cases illustrate that larger amount of FN points occur when using smaller buffers independent of data collection frequency. Figure 17 shows the variation of solved cases as temporal resolution increases in Portage County. Percentages of solved spatial ambiguities increase as data is collected at higher frequencies, being the largest at a 50-foot buffer with 68%. This percentage decreases in average for Columbia County data from approximately 80% to 20% as sampling intervals increase from 5 to 30 second for all buffer sizes. The same behavior is apparent for solved

GPS measurements are affected by both systematic and random errors. Their combined magnitudes will affect the accuracy of the positioning results. Systematic errors obey physical or mathematical law, and can be computed and applied to measurements to eliminate their effects (Ghilani & Wolf, 2006). Random errors occur because of stochastic noise in the measurement process producing different coordinates each time a measurement is achieved, even during short intervals. This type of error is assumed to be Gaussian affecting both latitude and longitude or X, Y coordinates. DGPS is a method that increases the accuracy of CA code measurements by canceling some of the inherent systematic errors. Any potentially remaining systematic errors were not modeled in this study, and only the

Random errors were simulated by using a normal distribution random number generator (Box & Muller, 1958) for known means and different standard deviations. If U1 and U2 are a pair of independent uniformly-distributed random numbers from the rectangular density function on the interval (0, 1), then a pair of independent random numbers (*X1* and *X2*) from a normal distribution with mean zero and standard deviation σ are generated using

Experiments conducted by the Wisconsin Winter Maintenance Concept Vehicle project concluded that random DGPS errors were on the order of 2 to 5 meters, root-mean-square (Vonderohe et al., 2001). Therefore, a mean value of zero and standard deviations of ±2 and ±5 meters were employed in this analysis. Speed range and number of consecutive points values were held fixed as 2- and 5-meter standard deviation errors were introduced in the

Percentages for FN and solved cases were computed to compare the performance of the algorithm for original and perturbed DGPS data points. Figure 18 presents variations in the percentage of FN data points for original and perturbed data by county for a 40-foot buffer

*X*1 *= (-2 logU*1*)*1/2 *cos(2πU*2*)* (4)

*X*2 *= (-2 logU*1*)*1/2 *sin(2πU*2*)* (5)

case percentages in Polk County as data is collected more frequently.

**4.3.5 GPS error** 

Equations 4 and 5.

DGPS data points.

effects of random errors were examined.

Fig. 18. FN Percentages Before and After Applying the Algorithm for Original Data, 2 m, and 5 m Error with a 40-foot Buffer by County

Fig. 19. Percentage of Solved Cases After Applying the Algorithm for Original Data, 2 m, and 5 m Error by Buffer Size in Columbia County

A Decision-Rule Topological Map-Matching Algorithm with Multiple Spatial Data 237


before and after applying the algorithm. All FN percentages decrease after executing the algorithm, independent of the spatial data quality. Average FN percentages computed with the original data are smaller than those computed with data perturbed with 2- and 5-m error, both before and after applying the algorithm. For example, FN percentages increase from 22% to 48% for Polk County after executing the algorithm when a 5-m error is introduced. In general, the percentage of data points that should snap to a roadway centerline increases when there is larger error in the DGPS data points.

Figure 19 presents the percentages of spatial ambiguities solved by the algorithm before and after perturbing the DGPS data points with simulated random errors (2- and 5-meter standard deviation) for Columbia County. This figure shows that the percentages of incorrect snaps solved after applying the algorithm to the original Columbia County data are larger than those computed with perturbed data. On average, the percentage of solved cases decreases by approximately 20% and 40% for data with 2- and 5-meter error, respectively, for all buffer sizes except the 20-foot buffer. This small buffer is not able to accommodate the spatial ambiguities that arise with the simulated data. Similarly, Portage and Polk counties present a drop in the percentages of solved data points from approximately 68% and 50% for the original data to approximately 10% and 15%, respectively, for data perturbed with 5-m error.
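The perturbation experiment just described can be outlined in a few lines. This is an illustrative sketch, not the study's code: the function name and sample coordinates are assumptions, and only the zero-mean Gaussian error model with 2 m and 5 m standard deviations comes from the text.

```python
import random

def perturb_points(points_m, sigma_m, seed=0):
    """Add zero-mean Gaussian error (standard deviation sigma_m, in meters)
    independently to the easting and northing of each DGPS point."""
    rng = random.Random(seed)  # seeded so the experiment is repeatable
    return [(x + rng.gauss(0.0, sigma_m), y + rng.gauss(0.0, sigma_m))
            for x, y in points_m]

# Perturb two sample points with the 2-m and 5-m error levels from the text.
original = [(500000.0, 4300000.0), (500030.0, 4300015.0)]
for sigma in (2.0, 5.0):
    print(sigma, perturb_points(original, sigma))
```

The perturbed points would then be re-snapped to the roadway centerlines and the algorithm re-run to measure the drop in solved cases.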

## **5. Summary and conclusions**

Transportation applications employ AVL/DGPS technology to collect vehicle positions and other sensor data. Normally, DGPS data points are associated with roadways by snapping them to the nearest centerline in a GIS environment. The map-matching problem, or spatial ambiguity, arises during this association due to errors in the DGPS measurements and digital cartography. Such ambiguities are common at underpasses and at converging or diverging roadways. They can result in DGPS data points being snapped to incorrect roadway centerlines, affecting the calculation of the cumulative distance traveled by vehicles along a roadway network, or the allocation of non-spatial data collected from vehicle sensors to incorrect roadways. This problem thus propagates to the computation of performance measures and to decision management tools.

This study contributes the development and implementation of a post-processing decision-rule map-matching algorithm that resolves many of these spatial ambiguities by examining the feasibility of paths between pairs of snapped data points. A viable path is the shortest-distance path between two snapped points that a vehicle can travel, while following network topology and turn restrictions, at a speed comparable to its average recorded speed. If a given shortest path is not feasible, then the DGPS data points are related to other roadway centerlines within their buffers and new shortest paths are calculated, or adjacent DGPS data points are used to determine feasible paths. Examples were presented to describe the step-by-step process of the map-matching algorithm. Five variables were studied independently to analyze the performance of the map-matching algorithm: buffer size, speed range tolerance, number of consecutive points, temporal resolution, and positional error in the DGPS data points. Data collection frequency and DGPS error are controlled externally through the data, while buffer size, speed range, and number of consecutive data points are algorithm parameters controlled by the user.
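The speed-based feasibility rule described above can be sketched as follows. The function and its exact acceptance test are illustrative assumptions for exposition, not the study's actual implementation.

```python
def path_is_feasible(path_length_mi, t1_hr, t2_hr, avg_speed_mph, speed_range_mph):
    """Return True when the shortest path between two snapped DGPS points
    is viable: the speed implied by traversing path_length_mi between the
    two timestamps must fall within speed_range_mph of the vehicle's
    average recorded speed."""
    dt = t2_hr - t1_hr
    if dt <= 0:
        return False  # timestamps must be strictly increasing
    implied_speed_mph = path_length_mi / dt
    return abs(implied_speed_mph - avg_speed_mph) <= speed_range_mph

# A 0.5-mi path covered in one minute implies 30 mi/hr, which is within
# a 10 mi/hr tolerance of a 35 mi/hr average recorded speed.
print(path_is_feasible(0.5, 0.0, 1.0 / 60.0, 35.0, 10.0))  # True
```

If the test fails, the algorithm would try alternative centerlines within the points' buffers, or fall back on adjacent DGPS points, as described above.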

The results of this study indicate that the success of the map-matching algorithm in solving spatial ambiguities depends not only on the variables employed by the algorithm, but also on the sampling interval, the quality of the spatial measurements, and the roadway map scale. If lower spatial data qualities and less frequent sampling intervals are used, then the algorithm requires larger buffers and speed ranges to obtain the best results. On the other hand, if GPS data points collected more frequently are snapped to higher-accuracy maps, as in the Columbia County case, then larger percentages of incorrect snaps are solved and smaller buffer sizes are adequate. By increasing the number of consecutive data points, a larger number of spatial ambiguities are solved, particularly when alternative roadway centerlines are equally viable, and FN percentages are reduced since more combinations are examined between pairs of snapped DGPS data points. However, no significant variations in the solved results for Polk County are apparent as the number of consecutive data points increases, since lower spatial data accuracies were used in this county. Table 2 presents the best and worst variable values encountered when solving incorrect snaps after applying the map-matching algorithm, by county. This table indicates that larger speed ranges and numbers of consecutive points provide better results in maximizing solved cases. Stable percentage values are reached as both the speed range and the number of consecutive points reach certain values. While small speed ranges tend to reject tested paths, larger speed ranges accept most of these paths without improving the performance of the algorithm. Similarly, larger percentages of solved cases are obtained as the number of consecutive points increases, since additional combinations between pairs of snapped data points are examined. Overall, higher parameter values yield better results as data are collected less frequently and snapped to lower quality roadway maps.


| County | Buffer Size (ft), Best | Buffer Size (ft), Worst | Speed Range (mi/hr), Best | Speed Range (mi/hr), Worst | Consecutive Points, Best | Consecutive Points, Worst |
|---|---|---|---|---|---|---|
| Columbia | 30 | ≥50 | 35 | 5 | 8 | 3 |
| Portage | 50 | 20 | ≥25 | 5 | 8 | 3 |
| Polk | 40 | 20 | ≥15 | 5 | ≥3 | ≥3 |

Table 2. Best and Worst Variable Values for Solved Cases by County

Introducing positional error in the DGPS data points decreases the percentage of solved incorrect snaps and the total number of snapped data points obtained before and after applying the algorithm. As the positional error increases from 2 to 5 meters in standard deviation, the percentage of solved cases decreases and FN percentages increase for all counties. Thus, larger buffer sizes and speed ranges are needed for lower quality data. Future research is required to explore these parameter values against additional spatial data qualities derived from multiple ITS applications. Further research may involve an online implementation of the map-matching algorithm, in which spatial ambiguities are solved as GPS measurements are collected in real time.




**10**



## **Beyond Trilateration: GPS Positioning Geometry and Analytical Accuracy**

Mohammed Ziaur Rahman

*University of Malaya*

*Malaysia*

### **1. Introduction**


Trilateration/multilateration is the fundamental basis for most GPS positioning algorithms. It begins by finding range estimates to known satellite positions, each of which provides a spherical Locus of Position (LOP) for the receiver. Ideally, four such spherical LOPs can be solved to precisely determine the receiver position. Thus, it is an analytical approach that finds the receiver position by solving the required number of linear/quadratic equations. This method can determine the receiver position precisely when the equations are perfectly formulated. However, determining the exact range is nearly impossible in real life due to many external factors such as noise interference, signal fading, multipath propagation, weather conditions and clock synchronization problems (Strang & Borre, 1997). Hence, trilateration fails to achieve sufficient accuracy under real-world conditions. It has also been argued that GPS algorithms are not trilateration/multilateration at all; rather, they are hyperbolic formulations based on differences of measurements (time differences or second-order differences of two ranges) (Chaffee & Abel, 1994). However, there are widely used range-based algorithms, such as Bancroft's (1985) method. Therefore, trilateration is still predominantly associated with positioning (Bajaj et al., 2002).

In this chapter, we first discuss the analytical accuracy of trilateration-based positioning algorithms. Subsequently, we show how noise can impact positioning accuracy in the real world. In Section 3, we present existing analytical algorithms for GPS along with two new analytical approaches using the Paired Measurement Localization (PML) of Rahman & Kleeman (2009). PML approaches can cope with the improper range-based equations encountered in practice and are computationally efficient enough for implementation on conventional and resource-constrained GPS receivers. Section 4 draws some conclusions for this chapter.

## **2. Trilateration: its problems and alternative approaches**

As alluded to before, analytical approaches to positioning are based on accurate distance measurements from satellites at known positions. Trilateration is the basis of these techniques, where range measurements from *n* + 1 satellites are used for an *n*-dimensional position estimation (Caffery, 2000).

In the ideal scenario, when we can measure precise range estimates for the GPS receiver, we can formulate a spherical locus of position for the receiver. The fundamental positioning geometry, using three satellites placed in a hypothetical 2-dimensional space, is shown in Fig. 1(a).

(a) Ideal 2-D trilateration scenario where linear form LOPs are found from the corresponding two circular LOPs (Caffery, 2000). (b) LOP from equidistant satellites in presence of equal noise.

Fig. 1. Depiction of observation 1.

The circles surrounding the satellites with known positions **p1** (*x*1, *y*1), **p2** (*x*2, *y*2) and **p3** (*x*3, *y*3) denote the LOPs obtained from the individual range measurements for each satellite. Ideally, the LOP surrounding satellite *i* is given by

$$r\_i^2 = \|\mathbf{p}\_i - \rho\|^2 = \left( (x - x\_i)^2 + (y - y\_i)^2 \right) \tag{1}$$

In 2-D, it is feasible to calculate the exact receiver position using only three range measurements. Two range measurements can result in two solutions corresponding to the intersection of two circular LOPs. The third measurement resolves this ambiguity.

However, equating two circular LOPs will result in a straight-line equation (in 3-D, a planar equation) passing through the two intersection points of the circular LOPs. This line does not represent the actual locus of the receiver position, as will be clarified later. However, following (Caffery, 2000), this line is referred to as the *Linear Form LOP* in the subsequent discussion. In Fig. 1, *L*<sup>1</sup> and *L*<sup>2</sup> are determined from the circular LOPs corresponding to satellite pairs (**p1**, **p2**) and (**p1**, **p3**) respectively, with the intersection point (*x*, *y*) of *L*<sup>1</sup> and *L*<sup>2</sup> denoting the actual position of the receiver.
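As an illustration of this construction (a sketch of the ideal noise-free case, not code from the chapter), the two linear form LOPs can be built from satellite pairs (**p1**, **p2**) and (**p1**, **p3**) and intersected:

```python
import math

def trilaterate_2d(p1, p2, p3, r1, r2, r3):
    """Recover (x, y) by intersecting the linear form LOPs L1 and L2.

    Each linear form LOP comes from equating two circular LOPs
    (x - x_i)^2 + (y - y_i)^2 = r_i^2, which cancels the quadratic terms."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = x2 - x1, y2 - y1
    c1 = (x2**2 + y2**2 - x1**2 - y1**2 + r1**2 - r2**2) / 2.0
    a2, b2 = x3 - x1, y3 - y1
    c2 = (x3**2 + y3**2 - x1**2 - y1**2 + r1**2 - r3**2) / 2.0
    det = a1 * b2 - a2 * b1  # zero when the three satellites are collinear
    if det == 0.0:
        raise ValueError("collinear satellites: no unique intersection")
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

# With exact ranges, the true receiver position (3, 4) is recovered.
sats = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
ranges = [math.hypot(3.0 - x, 4.0 - y) for x, y in sats]
print(trilaterate_2d(*sats, *ranges))  # approximately (3.0, 4.0)
```

Any noise on the ranges shifts c1 and c2, and hence the computed intersection, which is precisely the fragility discussed next.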

As shown in Fig. 1, the positioning geometry works correctly in the ideal case, where exact range estimates are measured by the positioning devices. In reality, however, it is quite difficult to measure the exact range, both because of external noise and because of internal errors such as receiver clock bias and satellite clock skews. We also show below that accurate positioning can be obtained if the noise effect is exactly the same for two satellites. When the noise on the two satellites' range estimates differs, however, the usual linear form LOP obtained from the circular LOPs deviates significantly from the true position of the receiver and leads to bad positioning geometry. This is further explained as follows.

As clarified before, the range equations are mostly inaccurate in practical scenarios. Although trilateration is a mathematical approach that can ideally find the exact receiver position, it cannot locate the position well when the range estimates are perturbed by noise. In this section we specifically identify the problems of trilateration with inaccurate range equations. For ease of understanding, we still limit this discussion to 2 dimensions only.



Fig. 2. The hyperbolic and linear form LOP of a receiver from range estimates by a pair of satellites under equal noise assumption. (a) The general case when two observed circular LOPs physically intersect. (b) The case when circular LOPs do not intersect due to noise and underestimation of the ranges.

At first, we present the following observation that identifies the case when the conventional trilateration works in consideration of noise.

**Observation 1.** *Assuming a receiver uses range estimates from two satellites that are located at the same distance from the receiver and have equal noise components, it is shown below that the locus of positions for that receiver (as the error components vary) is a straight line whose equation is independent of range estimates.*

Assume that due to noise, the range measurements for **p1** (*x*1, *y*1), **p2** (*x*2, *y*2) and **p3** (*x*3, *y*3) are corrupted to give respective LOPs of radii $\tilde{r}\_1 = r\_1 + \xi\_1$, $\tilde{r}\_2 = r\_2 + \xi\_2$ and $\tilde{r}\_3 = r\_3 + \xi\_3$, where $\tilde{r}\_i$ and $r\_i$ denote the observed and actual distance (pseudorange and actual range) between the $i$th satellite and the receiver, and $\xi\_i$ is the measurement noise at the receiver for that measurement. The circular LOP can then be expressed as:

$$\tilde{r}\_i^2 = (r\_i + \xi\_i)^2 = \|\mathbf{p}\_i - \rho\|^2 \tag{2}$$

where ρ = (*x*, *y*) is the receiver position to be determined.

Equating the circular LOPs for **p**<sup>1</sup> and **p**<sup>2</sup> using (2), *L*<sup>1</sup> becomes:

$$(x\_2 - x\_1)\,x + (y\_2 - y\_1)\,y = \frac{1}{2}\left(\|\mathbf{p}\_2\|^2 - \|\mathbf{p}\_1\|^2 + (r\_1 + \xi\_1)^2 - (r\_2 + \xi\_2)^2\right) \tag{3}$$

where the right hand side becomes independent of the range measurements $\tilde{r}\_1$ and $\tilde{r}\_2$ whenever $\tilde{r}\_1 = \tilde{r}\_2$, *i.e.*, $r\_1 + \xi\_1 = r\_2 + \xi\_2$. One particular case in which this condition is fulfilled is equidistant satellites with equal noise.

The importance of this observation lies in the fact that it eliminates the signal propagation dependent parameters and receiver clock bias under assumed conditions completely. GPS measurements are mostly susceptible to these errors which are both device and environmentally dependent.
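Observation 1 can be checked numerically. In the sketch below (hypothetical 2-D coordinates; NumPy assumed available), two satellites are equidistant from the receiver, and the coefficients of the linear form LOP of equation 3 come out identical for any common noise value:

```python
import numpy as np

# Two satellites equidistant from the receiver (hypothetical coordinates).
p1, p2 = np.array([0.0, 10.0]), np.array([10.0, 0.0])
receiver = np.array([5.0, 5.0])        # at distance sqrt(50) from both satellites

def linear_lop(noise):
    # Coefficients [A, B, C] of the linear form LOP (equation 3): A x + B y = C
    r1 = np.linalg.norm(receiver - p1) + noise   # equal noise on both ranges
    r2 = np.linalg.norm(receiver - p2) + noise
    rhs = 0.5 * (p2 @ p2 - p1 @ p1 + r1**2 - r2**2)
    return np.array([p2[0] - p1[0], p2[1] - p1[1], rhs])

print(linear_lop(0.0))   # coefficients (10, -10, 0): the line x = y, through the receiver
print(linear_lop(3.0))   # identical coefficients: the common noise cancels out
```

The returned line is the perpendicular bisector of the two satellites, independent of the common error, which is exactly the claim of Observation 1.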


Beyond Trilateration: GPS Positioning Geometry and Analytical Accuracy 245


Fig. 3. The hyperbolic and linear form LOPs for unequal noise presence. (a) The general case when two observed circular LOPs physically intersect. (b) The case when observed circular LOPs do not intersect due to underestimation of the ranges. (c) The case when observed circular LOPs do not intersect but overlap completely due to overestimation of the ranges. (d) The case when ranging errors are of opposite signs.

Assuming equal noise presence, it is useful to explore paired measurements rather than individual ranges to mitigate the effect of noise. As the difference of the range estimates equals the actual difference under equal noise (*e.g.*, $\tilde{r}\_2 - \tilde{r}\_1 = r\_2 - r\_1$), the LOP for the receiver position is found as the locus of positions maintaining a constant difference from the pair of satellites. Hence, the hyperbolic LOP of the receiver can be found independent of the noise parameters, as shown in Fig. 2 and formulated below:

$$\sqrt{(x - x\_2)^2 + (y - y\_2)^2} - \sqrt{(x - x\_1)^2 + (y - y\_1)^2} = \tilde{r}\_2 - \tilde{r}\_1 = r\_2 - r\_1 \tag{4}$$

After algebraic manipulations, it takes the general hyperbolic form as follows for $\mathbf{p}\_1 = (0, 0)$, $\mathbf{p}\_2 = (a, 0)$, and $\tilde{r}\_1 - \tilde{r}\_2 = c$.

$$\left(x - \frac{a}{2}\right)^2 - \frac{y^2}{\frac{a^2}{c^2} - 1} = \frac{c^2}{4} \tag{5}$$
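The same kind of numeric check (hypothetical coordinates; equal noise on both ranges) shows that the hyperbolic LOP of equation 5 keeps passing through the true receiver position, while the linear form LOP of equation 3 drifts away whenever the two ranges differ:

```python
import math

# Satellites at p1 = (0, 0) and p2 = (a, 0); true receiver at (x, y).
a = 10.0
x, y = 3.0, 4.0
r1, r2 = math.hypot(x, y), math.hypot(x - a, y)

for xi in (0.0, 0.7, -0.4):                      # equal noise added to both ranges
    c = (r1 + xi) - (r2 + xi)                    # observed difference = true difference
    # Residual of the hyperbolic LOP (equation 5) at the true position: always zero.
    hyp = (x - a / 2) ** 2 - y ** 2 / (a ** 2 / c ** 2 - 1) - c ** 2 / 4
    # Residual of the linear form LOP (equation 3) at the true position: xi * (r2 - r1).
    lin = a * x - 0.5 * (a ** 2 + (r1 + xi) ** 2 - (r2 + xi) ** 2)
    print(round(hyp, 9), round(lin, 6))
```

With unequal ranges (r1 ≠ r2), any nonzero common noise shifts the linear form LOP off the receiver, while the hyperbola is unaffected because the observed range difference is noise-free.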



The hyperbolic LOP represents the actual LOP for a pair of satellites under the equal noise assumption. The linear form LOP does not truly represent the locus of the receiver in presence of noise unless both ranges to the satellites are equal as clarified in Fig. 2. Two possible cases could arise due to equal noise presence: *a*) the circular ranges have a physical intersection and *b*) the circular ranges do not have any physical intersection. In both cases, the hyperbolic LOP is able to represent the original receiver position whereas linear form LOP deviates from receiver position significantly. As establishing the LOP is the first step in positioning, any error present at this step could aggravate the result significantly and hence finding a LOP closer to the original receiver position is fundamental to achieving high accuracy positioning.

It is also crucial to compare the hyperbolic and linear form LOPs for unequal noise components in the individual measurements, as in reality the equal-noise assumption can be void. In these general situations three possible cases could arise: *a*) the observed circular ranges have a physical intersection; *b*) the observed circular ranges do not have any common intersection region; and *c*) one of the observed circular ranges overlaps completely within the other circular region.

These cases are shown in Fig. 3, where Fig. 3(a), (b) show the hyperbolic and linear form LOPs for a noise ratio (ξ1/ξ2) of 2, while Fig. 3(c) shows the LOPs for a noise ratio of 4. Fig. 3(c) also shows that for completely overlapped ranges the hyperbolic formulation turns into an elliptic formulation. This is the case when the coefficient of *y*<sup>2</sup> in (5) changes sign, as the range difference becomes greater than the distance between the satellites (*c* > *a*). Noise generally attenuates the signal more than in the ideal propagation scenario, causing overestimation of the range. However, it is theoretically possible to imagine the case where the range is underestimated due to noise. The simultaneous overestimation and underestimation of ranges is expected to be the most detrimental for LOP estimation, and hence this case is shown in Fig. 3(d). It is evident from the figures that for all three cases of unequal noise, as well as for noise terms of opposite signs, the hyperbolic formulation is better suited than the linear form, and the impact of noise is less detrimental on hyperbolic LOPs than it is on linear form LOPs.

#### **3. Analytical approaches for global positioning**

We have discussed the mathematical basis for positioning and presented the problems of regular trilateration from the viewpoint of noisy measurements. Positioning algorithms for GPS need greater care with noise and are often augmented by a filtering process to mitigate its effect. However, they still largely depend on basic analytical positioning, both for the initial estimate and for the error-correcting/filtering phase. In this chapter, we present the different analytical algorithms for GPS.

We begin with the 3-D analogue of equation 2, which represents a sphere.

$$\tilde{r}\_i^2 = (r\_i + \xi\_i)^2 = \|\mathbf{p}\_i - \rho\|^2 = (x - x\_i)^2 + (y - y\_i)^2 + (z - z\_i)^2 \tag{6}$$

A generally acceptable modeling of the ranging error ξ*<sup>i</sup>* is described by the following equation (Strang & Borre, 1997).

$$\xi\_{i} = I\_{i} + T\_{i} + c \left( dt\_{i}(t - \tau\_{i}) - dt(t) \right) - e\_{i} \tag{7}$$

where *Ii* is the ionospheric error, *Ti* is the tropospheric error, *c* is the speed of light, *dti* is the satellite clock offset, *dt* is the receiver clock offset, *t* is the receiver time and τ*<sup>i</sup>* is the signal propagation time and *ei* represents all other unmodelled error terms.


Equation 6 can be solved iteratively using Newton's method. However, the iterative approach is computationally expensive. Moreover, the positioning accuracy will be poor, as there is no proper formalism to identify and mitigate the error components.

#### **3.1 Ordinary trilateration for positioning**

Let $\mathbf{P}^i = \left\{\mathbf{p}^i\_1, \mathbf{p}^i\_2\right\}$ be an arbitrary satellite pair, where $\mathbf{p}^i\_1 = \left(x^i\_1, y^i\_1, z^i\_1\right)$ and $\mathbf{p}^i\_2 = \left(x^i\_2, y^i\_2, z^i\_2\right)$ represent the satellite positions of the $i$th pair. Analogous to the 2-D linear form LOP of equation 3, a 3-D planar form LOP is found as follows.

$$\left(x\_2^i - x\_1^i\right)x + \left(y\_2^i - y\_1^i\right)y + \left(z\_2^i - z\_1^i\right)z = \frac{1}{2}\left(\|\mathbf{p}\_2^i\|^2 - \|\mathbf{p}\_1^i\|^2 + \left(r\_1^i\right)^2 - \left(r\_2^i\right)^2 + 2\xi\left(r\_1^i - r\_2^i\right)\right) \tag{8}$$

where it is assumed that the noise components are equal and constant for a particular satellite pair, *i.e.*, $\xi\_1 = \xi\_2 = \xi$.

The equation becomes linear in terms of *x*, *y*, *z* and ξ if the noise is represented by a single parameter ξ for all pairs. In that case there are four unknowns, and therefore four equations are required to solve for them. In practice, this assumption is susceptible to large positioning errors, and hence the iterative refinement approach described next is adopted in real implementations.
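Under the single-ξ assumption, the planar form LOPs give a linear system in (*x*, *y*, *z*, ξ). A small sketch (hypothetical satellite coordinates; NumPy assumed; equation 8 is rewritten here in terms of the measured pseudoranges $\tilde{r}\_i = r\_i + \xi$):

```python
import numpy as np

# Hypothetical geometry: five satellites, a true receiver position, and a
# single common noise/bias term xi shared by all range measurements.
rho_true = np.array([1.0, 2.0, 0.5])
xi_true = 0.37
P = np.array([
    [10.0,  0.0,  5.0],
    [ 0.0, 10.0,  6.0],
    [-9.0,  1.0,  7.0],
    [ 2.0, -8.0,  8.0],
    [ 7.0,  7.0,  9.0],
])
r = np.linalg.norm(P - rho_true, axis=1) + xi_true   # pseudoranges r~_i = r_i + xi

# One planar form LOP per pair (1, i), linear in (x, y, z, xi):
# (p_i - p_1) . rho - xi (r~_i - r~_1) = (|p_i|^2 - |p_1|^2 + r~_1^2 - r~_i^2) / 2
A = np.array([np.append(P[i] - P[0], -(r[i] - r[0])) for i in range(1, 5)])
b = np.array([(P[i] @ P[i] - P[0] @ P[0] + r[0]**2 - r[i]**2) / 2 for i in range(1, 5)])
x, y, z, xi = np.linalg.solve(A, b)
print(x, y, z, xi)   # recovers rho_true and xi_true
```

Four pairs (e.g. from five satellites) make the system square; the common bias ξ plays the same role as the receiver clock term in the iterative formulation of section 3.2.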

#### **3.2 Iterative least squares estimate**

The iterative approach works by having a preliminary estimate of the receiver position ($\rho^0 = \left[x^0\; y^0\; z^0\right]^T$). Let the rotation rate of the earth be ω. Let the position vector of the receiver in the earth centered earth fixed (ECEF) system be denoted by ρ(*t*)*ECEF* and the position vector of satellite *i* in the geocentric frame be denoted by **p***i*(*t*)*geo*, where the argument *t* denotes the dependence on time. The range equation can be written as:

$$r\_i = \|R\_3(\omega \tau\_i) \mathbf{p}\_i(t - \tau\_i)\_{\text{geo}} - \rho(t)\_{\text{ECEF}}\|\tag{9}$$

where *R*<sup>3</sup> is the earth's rotation matrix as defined below.

$$R\_3(\omega \tau\_i) = \begin{bmatrix} \cos(\omega \tau\_i) & \sin(\omega \tau\_i) & 0\\ -\sin(\omega \tau\_i) & \cos(\omega \tau\_i) & 0\\ 0 & 0 & 1 \end{bmatrix}$$
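As a small helper sketch (pure Python; the numeric value of ω below is the WGS-84 earth rotation rate, an assumption not stated in the text), *R*3 can be applied to a satellite position to account for the earth's rotation during signal propagation:

```python
import math

OMEGA_E = 7.2921151467e-5   # earth rotation rate omega (rad/s), WGS-84 value

def R3(theta):
    # Rotation about the z-axis, matching the matrix definition above.
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]

def rotate_sat(p, tau):
    # Apply R3(omega * tau) to a satellite position vector p, accounting for
    # the earth's rotation during the propagation time tau (~0.07 s for GPS).
    R = R3(OMEGA_E * tau)
    return [sum(R[i][j] * p[j] for j in range(3)) for i in range(3)]
```

With ωτ on the order of 5 × 10⁻⁶ rad for a GPS propagation time of about 0.07 s, the correction moves a satellite position of roughly 26,000 km by on the order of 100 m, which is far from negligible at metre-level accuracy.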

Let

$$
\begin{bmatrix} x\_i \\ y\_i \\ z\_i \end{bmatrix} = R\_3(\omega \tau\_i)\,\mathbf{p}\_i(t - \tau\_i)\_{geo} \quad \text{and} \quad \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \rho(t)\_{ECEF} \tag{10}
$$

Now, omitting the refraction terms *Ii* and *Ti* and linearizing equation 6, we get

$$-\frac{x\_i - x^0}{(\tilde{r}\_i)^0}\,\delta x - \frac{y\_i - y^0}{(\tilde{r}\_i)^0}\,\delta y - \frac{z\_i - z^0}{(\tilde{r}\_i)^0}\,\delta z + \delta c\,dt = \tilde{r}\_i - (\tilde{r}\_i)^0 - \varepsilon\_i = b\_i - \varepsilon\_i \tag{11}$$

where *bi* denotes the correction to the preliminary range estimate.

When four or more observations are available we can compute the correction values $\left(\delta x, \delta y, \delta z, \delta c\,dt\right)$ for the preliminary estimate. The least squares formulation can be concisely written as follows.

$$\mathbf{A}\mathbf{x} = \begin{bmatrix} -\frac{x\_1 - x^0}{(\tilde{r}\_1)^0} & -\frac{y\_1 - y^0}{(\tilde{r}\_1)^0} & -\frac{z\_1 - z^0}{(\tilde{r}\_1)^0} & 1 \\ -\frac{x\_2 - x^0}{(\tilde{r}\_2)^0} & -\frac{y\_2 - y^0}{(\tilde{r}\_2)^0} & -\frac{z\_2 - z^0}{(\tilde{r}\_2)^0} & 1 \\ \vdots & \vdots & \vdots & \vdots \\ -\frac{x\_m - x^0}{(\tilde{r}\_m)^0} & -\frac{y\_m - y^0}{(\tilde{r}\_m)^0} & -\frac{z\_m - z^0}{(\tilde{r}\_m)^0} & 1 \end{bmatrix} \begin{bmatrix} \delta x \\ \delta y \\ \delta z \\ \delta c\,dt \end{bmatrix} = \mathbf{b} - \boldsymbol{\varepsilon} \tag{12}$$

The least squares solution is


$$\begin{bmatrix} \delta \mathbf{x} \\ \delta y \\ \delta z \\ \delta c \, dt \end{bmatrix} = (A^T \Sigma^{-1} A)^{-1} A^T \Sigma^{-1} \mathbf{b} \tag{13}$$

If the code observations are independent and assumed to have equal variance, then the above can be simplified to

$$\begin{bmatrix} \delta \mathbf{x} \\ \delta y \\ \delta z \\ \delta c \, dt \end{bmatrix} = (A^T A)^{-1} A^T \mathbf{b} \tag{14}$$

The final position vector can be estimated by $\rho = \left[x^0 + \delta x,\; y^0 + \delta y,\; z^0 + \delta z\right]^T$.
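Equations 11 to 14 can be exercised end to end with synthetic data. The sketch below (hypothetical geometry and units; NumPy assumed; refraction terms omitted as in equation 11) iterates the linearized least squares update from a zero initial estimate:

```python
import numpy as np

# Hypothetical geometry: satellites far from a receiver near the origin.
P = np.array([
    [25.0,   0.0, 18.0],
    [ 0.0,  25.0, 20.0],
    [-22.0,  5.0, 19.0],
    [ 4.0, -23.0, 21.0],
    [16.0,  16.0, 17.0],
])
rho_true, cdt_true = np.array([1.0, -2.0, 0.3]), 0.8
r = np.linalg.norm(P - rho_true, axis=1) + cdt_true   # pseudoranges with clock term

x = np.zeros(4)                                       # preliminary estimate [x, y, z, c dt]
for _ in range(10):
    d = np.linalg.norm(P - x[:3], axis=1)             # predicted ranges (r~_i)^0
    A = np.hstack([-(P - x[:3]) / d[:, None], np.ones((len(P), 1))])  # design matrix (eq. 12)
    b = r - d - x[3]                                  # range corrections b_i
    x += np.linalg.lstsq(A, b, rcond=None)[0]         # (A^T A)^{-1} A^T b (eq. 14)
print(x)   # converges to [1.0, -2.0, 0.3, 0.8], the true state
```

Because the receiver offset is small compared to the satellite ranges, the linearization is nearly exact and the iteration converges in a few steps.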

#### **3.3 Bancroft's method (least squares solution)**

We want to turn positioning into a linear algebra problem. Here is a clever method due to Bancroft (1985) that uses some algebraic manipulations to reduce the equations to a least squares problem. Multiplying things out in equation 6 and using the receiver clock bias, $-b = \xi\_i$ for all $i$, as the only noise parameter, we get

$$\mathbf{x}\_i^2 - 2\mathbf{x}\_i\mathbf{x} + \mathbf{x}^2 + y\_i^2 - 2y\_i y + y^2 + z\_i^2 - 2z\_i z + z^2 = r\_i^2 - 2r\_i b + b^2 \tag{15}$$

Rearranging,

$$\left(x\_i^2 + y\_i^2 + z\_i^2 - r\_i^2\right) - 2\left(x\_i x + y\_i y + z\_i z - r\_i b\right) + \left(x^2 + y^2 + z^2 - b^2\right) = 0 \tag{16}$$

Let $\rho = [x\; y\; z\; b]^T$ denote the receiver position vector and $\mathbf{p}\_i = [x\_i\; y\_i\; z\_i\; r\_i]^T$ denote the $i$th satellite position and range vector.

Using *Lorentz inner product* for 4-space defined by:

$$
\langle \mathbf{u}, \mathbf{v} \rangle = u\_1 v\_1 + u\_2 v\_2 + u\_3 v\_3 - u\_4 v\_4
$$

Equation 16 can be rewritten as:

$$\frac{1}{2}\langle \mathbf{p}\_i, \mathbf{p}\_i \rangle - \langle \mathbf{p}\_i, \rho \rangle + \frac{1}{2}\langle \rho, \rho \rangle = 0 \tag{17}$$

In order to apply least squares estimation the equations for each satellite are organized as follows:

$$\mathbf{B} = \begin{bmatrix} x\_1 & y\_1 & z\_1 & -r\_1 \\ x\_2 & y\_2 & z\_2 & -r\_2 \\ \vdots & \vdots & \vdots & \vdots \\ x\_m & y\_m & z\_m & -r\_m \end{bmatrix},$$

$$\mathbf{a} = \frac{1}{2} \begin{bmatrix} \langle \mathbf{p}\_1, \mathbf{p}\_1 \rangle \\ \langle \mathbf{p}\_2, \mathbf{p}\_2 \rangle \\ \vdots \\ \langle \mathbf{p}\_m, \mathbf{p}\_m \rangle \end{bmatrix}, \quad \mathbf{e} = \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix}, \quad \text{and} \quad \wedge = \frac{1}{2}\langle \rho, \rho \rangle$$

We can now rewrite equation 17 as:

$$\begin{aligned} \mathbf{a} - \mathbf{B}\rho + \wedge \mathbf{e} &= 0 \\ \Rightarrow \mathbf{B}\rho &= \mathbf{a} + \wedge \mathbf{e} \end{aligned} \tag{18}$$


For more than 4 satellites, we can have a closed form least squares solution as follows:

$$
\rho = \mathbf{B}^{+}\left(\mathbf{a} + \wedge \mathbf{e}\right) \tag{19}
$$

where $\mathbf{B}^{+} = (\mathbf{B}^T\mathbf{B})^{-1}\mathbf{B}^T$ is the pseudoinverse of the matrix **B**.

However, the solution ρ involves ∧ which is defined in terms of unknown ρ. This problem is avoided by substituting ρ into the definition of the scalar ∧ and using the linearity of the Lorentz inner product as follows:

$$\wedge = \frac{1}{2} \left\langle \mathbf{B}^+ \left( \mathbf{a} + \wedge \mathbf{e} \right) , \mathbf{B}^+ \left( \mathbf{a} + \wedge \mathbf{e} \right) \right\rangle$$

After rearranging,

$$
\wedge^2 \left\langle \mathbf{B}^{+}\mathbf{e}, \mathbf{B}^{+}\mathbf{e} \right\rangle + 2 \wedge \left( \left\langle \mathbf{B}^{+}\mathbf{e}, \mathbf{B}^{+}\mathbf{a} \right\rangle - 1 \right) + \left\langle \mathbf{B}^{+}\mathbf{a}, \mathbf{B}^{+}\mathbf{a} \right\rangle = 0 \tag{20}
$$

This is a quadratic equation in ∧ with coefficients $\left\langle \mathbf{B}^{+}\mathbf{e}, \mathbf{B}^{+}\mathbf{e} \right\rangle$, $2\left(\left\langle \mathbf{B}^{+}\mathbf{e}, \mathbf{B}^{+}\mathbf{a} \right\rangle - 1\right)$, and $\left\langle \mathbf{B}^{+}\mathbf{a}, \mathbf{B}^{+}\mathbf{a} \right\rangle$. All three values can be computed, so we can solve for the two possible values of ∧ using the quadratic formula. Given the two solutions ∧1 and ∧2, we can solve for two possible solutions ρ1 and ρ2 in equation 19. One of these solutions will make sense because it lies near the surface of the earth (which has a radius of approximately 6371 km); the other will not.

The major advantage of Bancroft's method is that it provides a closed form least squares solution of the GPS equations. It shares the advantage of the least squares approach of using all the available satellites for location estimation. On the other hand, it uses the fundamental equation of spherical ranging, which in the course of the solution leads to planar form LOPs that are less accurate than hyperboloid LOPs. Therefore, as discussed before, this method cannot be used for high-accuracy positioning in the presence of noise.
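The closed form recipe of equations 18 to 20 can be sketched numerically. The following is a minimal illustration, not the chapter's implementation: the helper names `lorentz` and `bancroft`, and the synthetic pseudorange model $r_i = \|u - s_i\| + b$ with clock bias $b$, are assumptions for the example.

```python
import numpy as np

def lorentz(u, v):
    # Lorentz inner product: <u, v> = u1*v1 + u2*v2 + u3*v3 - u4*v4
    return u[:3] @ v[:3] - u[3] * v[3]

def bancroft(sat_pos, pseudoranges):
    """Closed-form solution; returns the candidate (x, y, z, clock bias) vectors."""
    B = np.column_stack([sat_pos, -pseudoranges])   # rows (x_i, y_i, z_i, -r_i)
    P = np.column_stack([sat_pos, pseudoranges])    # p_i = (x_i, y_i, z_i, r_i)
    a = 0.5 * np.array([lorentz(p, p) for p in P])
    e = np.ones(len(pseudoranges))
    Bp = np.linalg.pinv(B)                          # B+ = (B^T B)^-1 B^T
    u, v = Bp @ e, Bp @ a
    # quadratic (20): L^2 <B+e,B+e> + 2L(<B+e,B+a> - 1) + <B+a,B+a> = 0
    roots = np.roots([lorentz(u, u), 2.0 * (lorentz(u, v) - 1.0), lorentz(v, v)])
    return [Bp @ (a + lam.real * e) for lam in roots if abs(lam.imag) < 1e-6]
```

With noise-free synthetic pseudoranges this recovers the user position and clock bias exactly; of the two roots, the physically meaningful one yields a position with norm near the Earth's radius of about 6371 km.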

#### **3.4 Kleusberg's algorithm**


Kleusberg (1994) provided a vector algebraic solution for GPS. The geometry of the 3-D positioning is shown in figure 4. It begins with the fundamental equation 6 for range estimates. It also uses the difference equation given below, analogous to equation 4, between two satellite measurements.

$$\sqrt{(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2} - \sqrt{(x - x_1)^2 + (y - y_1)^2 + (z - z_1)^2} = \tilde{r}_i - \tilde{r}_1 = d_i \tag{21}$$

This represents one sheet of a hyperboloid. We can find three such hyperboloids for *i* = 2, 3 and 4 that can be solved to determine the receiver position. Mathematically, there will be two solutions, one of which can be discarded from the knowledge of the earth's proximity.

Let *b*2, *b*3, *b*<sup>4</sup> be the known distances from satellite 1 to satellites 2, 3, 4 along unit vectors **e**2, **e**3, **e**4. From the cosine law for triangle 1 − *i* − ρ,

$$
\tilde{r}_i^2 = b_i^2 + \tilde{r}_1^2 - 2 b_i \tilde{r}_1 \mathbf{e}_1 \cdot \mathbf{e}_i \tag{22}
$$

Squaring equation 21 and equating with $\tilde{r}_i^2$ of equation 22, we get

$$2\tilde{r}_1 = \frac{b_i^2 - d_i^2}{d_i + b_i \mathbf{e}_1 \cdot \mathbf{e}_i} \tag{23}$$
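For completeness, the algebra behind (23) can be written out. From (21), $\tilde{r}_i = \tilde{r}_1 + d_i$; squaring this and equating with (22) gives

$$\tilde{r}_1^2 + 2 d_i \tilde{r}_1 + d_i^2 = b_i^2 + \tilde{r}_1^2 - 2 b_i \tilde{r}_1 \mathbf{e}_1 \cdot \mathbf{e}_i \;\Rightarrow\; 2 \tilde{r}_1 \left( d_i + b_i \mathbf{e}_1 \cdot \mathbf{e}_i \right) = b_i^2 - d_i^2$$

which is (23) after dividing by the bracketed term.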

Using satellite pairs (1, 2), (1, 3) and (1, 4), we can get three equations for $\tilde{r}_1$ as follows:

$$\frac{b_2^2 - d_2^2}{d_2 + b_2 \mathbf{e}_1 \cdot \mathbf{e}_2} = \frac{b_3^2 - d_3^2}{d_3 + b_3 \mathbf{e}_1 \cdot \mathbf{e}_3} = \frac{b_4^2 - d_4^2}{d_4 + b_4 \mathbf{e}_1 \cdot \mathbf{e}_4} \tag{24}$$

The only unknown in the above equation is the unit vector **e**1. Some rewriting results in the two scalar equations as follows:

$$\begin{aligned} \mathbf{e}_1 \cdot \mathbf{f}_2 &= u_2 \quad \text{and} \\ \mathbf{e}_1 \cdot \mathbf{f}_3 &= u_3 \end{aligned} \tag{25}$$


Beyond Trilateration: GPS Positioning Geometry and Analytical Accuracy 251


where, for *m* = 2, 3:

$$\mathbf{F}_m = \frac{b_m}{b_m^2 - d_m^2} \mathbf{e}_m - \frac{b_{m+1}}{b_{m+1}^2 - d_{m+1}^2} \mathbf{e}_{m+1}$$

$$\mathbf{f}_m = \frac{\mathbf{F}_m}{\|\mathbf{F}_m\|}$$

$$u_m = \frac{1}{\|\mathbf{F}_m\|} \left( \frac{d_{m+1}}{b_{m+1}^2 - d_{m+1}^2} - \frac{d_m}{b_m^2 - d_m^2} \right)$$

The unit vector **f**<sup>2</sup> lies in the plane through satellites 1, 2 and 3. This plane is spanned by **e**<sup>2</sup> and **e**3. Similarly **f**<sup>3</sup> is in the plane determined by satellites 1, 3 and 4.

Equation 25 determines the cosines of the angles between the two unit vectors **f**2 and **f**3 and the desired unit vector **e**1. It will have two solutions, one above and one below the plane spanned by **f**2 and **f**3. If these two vectors are parallel, their cross product is zero; there are then infinitely many solutions and the position cannot be determined.

The algebraic solution to equation 25 can be derived using the vector triple product identity,

$$\mathbf{e}_1 \times (\mathbf{f}_2 \times \mathbf{f}_3) = \mathbf{f}_2 \left( \mathbf{e}_1 \cdot \mathbf{f}_3 \right) - \mathbf{f}_3 \left( \mathbf{e}_1 \cdot \mathbf{f}_2 \right)$$

All the terms on the right-hand side of the above equation are readily computed using $u_2$, $u_3$. Substituting **h** for the right-hand side and **g** for $\mathbf{f}_2 \times \mathbf{f}_3$, we get

$$\mathbf{e}_1 \times \mathbf{g} = \mathbf{h} \tag{26}$$

Multiplying both sides of the equation by **g** and applying the vector triple product identity,

$$\mathbf{e}_1 \left( \mathbf{g} \cdot \mathbf{g} \right) - \mathbf{g} \left( \mathbf{g} \cdot \mathbf{e}_1 \right) = \mathbf{g} \times \mathbf{h} \tag{27}$$

The scalar product in the second term of the left-hand side can be written in terms of the angle θ between the unit vector **e**1 and **g** as follows:

$$\mathbf{g} \cdot \mathbf{e}_1 = \left[ \mathbf{g} \cdot \mathbf{g} \right]^{\frac{1}{2}} \cos \theta$$

The sine of the angle can be found from equation 26 as follows:

$$\left[ \mathbf{h} \cdot \mathbf{h} \right]^{\frac{1}{2}} = \left[ \left( \mathbf{e}_1 \times \mathbf{g} \right) \cdot \left( \mathbf{e}_1 \times \mathbf{g} \right) \right]^{\frac{1}{2}} = \left[ \mathbf{g} \cdot \mathbf{g} \right]^{\frac{1}{2}} \sin \theta$$

Using the sine value in the cosine formula above, we obtain

$$\mathbf{g} \cdot \mathbf{e}_1 = \pm \left[ \mathbf{g} \cdot \mathbf{g} \right]^{\frac{1}{2}} \left[ 1 - \frac{\mathbf{h} \cdot \mathbf{h}}{\mathbf{g} \cdot \mathbf{g}} \right]^{\frac{1}{2}} = \pm \left[ \mathbf{g} \cdot \mathbf{g} - \mathbf{h} \cdot \mathbf{h} \right]^{\frac{1}{2}}$$

Substituting the above into equation 27, we obtain the desired solution:

$$\mathbf{e}_1 = \frac{1}{\mathbf{g} \cdot \mathbf{g}} \left( \mathbf{g} \times \mathbf{h} \pm \mathbf{g} \sqrt{\mathbf{g} \cdot \mathbf{g} - \mathbf{h} \cdot \mathbf{h}} \right) \tag{28}$$

The two values can be put in equation 24 to check their correctness. The correct value will result in an intersection point that lies on the earth's surface and hence is at a distance of about 6371 km from the origin. We can eventually get the receiver coordinates using the correct value of **e**1 as follows:

$$
\rho = \mathbf{p}\_1 + \tilde{r}\_1 \mathbf{e}\_1 \tag{29}
$$

Kleusberg's method is geometrically oriented and uses a minimum number of satellites. On the other hand, it cannot utilize additional satellites even when they are available. The method also depends on a proper geometrical orientation of the satellites. Moreover, it often gives different results for different sets of satellites, depending on the order of the satellites in solving the equations.
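The chain of equations 21 to 29 can be sketched in code. This is an illustrative sketch, not the original implementation; the function name `kleusberg` and the satellite geometry are hypothetical, and the pseudoranges are assumed to share a common clock bias, which cancels in the differences $d_i$.

```python
import numpy as np

def kleusberg(sat_pos, pseudoranges):
    """Vector-algebraic fix from 4 satellites; returns the two candidate positions."""
    s1, r1 = sat_pos[0], pseudoranges[0]
    b = np.linalg.norm(sat_pos[1:] - s1, axis=1)           # b_2, b_3, b_4
    ev = (sat_pos[1:] - s1) / b[:, None]                   # unit vectors e_2, e_3, e_4
    d = pseudoranges[1:] - r1                              # d_i (common bias cancels)
    w = b / (b**2 - d**2)                                  # weights b_i / (b_i^2 - d_i^2)
    F2, F3 = w[0]*ev[0] - w[1]*ev[1], w[1]*ev[1] - w[2]*ev[2]
    f2, f3 = F2 / np.linalg.norm(F2), F3 / np.linalg.norm(F3)
    t = d / (b**2 - d**2)
    u2 = (t[1] - t[0]) / np.linalg.norm(F2)                # scalar equations (25)
    u3 = (t[2] - t[1]) / np.linalg.norm(F3)
    g, h = np.cross(f2, f3), f2*u3 - f3*u2                 # eq. 26: e1 x g = h
    disc = np.sqrt(g @ g - h @ h)
    cands = []
    for sign in (+1.0, -1.0):
        e1 = (np.cross(g, h) + sign * g * disc) / (g @ g)  # eq. 28
        rt1 = (b[0]**2 - d[0]**2) / (2.0 * (d[0] + b[0] * (e1 @ ev[0])))  # eq. 23
        cands.append(s1 + rt1 * e1)                        # eq. 29
    return cands
```

With consistent (noise-free) measurements, one of the two candidates coincides with the true receiver position; the other is rejected by the Earth-proximity test described above.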

#### **3.5 Paired measurement localization**


In trilateration, positioning works by simultaneous solution of three spherical LOP equations. Similar to the 2-D case, we can equate two spherical LOP equations to find the equation of a plane representing the planar locus of position. Analogous to the 2-D case, three such planar equations can be solved to find the ultimate receiver position.

As shown in section 2, the effect of noise has a detrimental impact on the aforementioned simple solution. On the other hand, instead of equating the two imprecise range equations, we can maintain an equi-distant locus of position from two satellites, as formulated in equation 21, for a hyperboloid LOP. This will be more accurate than traditional positioning based on a planar LOP.

Solving the nonlinear hyperbolic/hyperboloid equations is difficult. Moreover, existing hyperbolic positioning methods proceed by linearizing the system of equations using either Taylor-series approximation (Foy, 1976; Torrieri, 1984) or by linearizing with another additional variable (Chan & Ho, 1994; Friedlander, 1987; Smith & Abel, 1987). However, while linearizing works well for existing approaches, it is not readily adaptable to the proposed paired approach, since linearizing is in effect pairing with an arbitrarily chosen hyperbolic LOP. The assumption of equal noise cannot be held for an arbitrary selection of pairs, and hence an alternate way to solve such LOPs for paired measurements is now formulated.

#### **3.6 PML with single reference satellite**

Chan & Ho (1994) provided a closed form least squares solution for non-linear hyperbolic LOPs by linearizing with reference to a single satellite. Analogous to their approach, a closed form solution is found for PML using pairs that share a common reference satellite. The solution is simpler than Chan & Ho's approach, as the effect of noise is considered early, in the paired measurement formulation.

Let $r_{\widehat{ij}}$ represent the difference in the observed ranges for the satellite pair (*i*, *j*). In the case of equal noise presence, it follows:

$$r_{\widehat{ij}} = r_{ij} = r_i - r_j \tag{30}$$

After squaring and rearranging,

$$r_i^2 = r_{\widehat{ij}}^2 + 2 r_{\widehat{ij}} r_j + r_j^2$$

Hence, the actual spherical LOP can be transformed as follows:

$$(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2 = (r_{\widehat{ij}})^2 + 2 r_{\widehat{ij}} r_j + (r_j)^2 \tag{31}$$

Using (31) for pairs (**p***i*, **p***j*)=(**p***k*, **p**1) and (**p***l*, **p**1) and subtracting the second from the first,

$$-\left( x_k - x_l \right) x - \left( y_k - y_l \right) y - \left( z_k - z_l \right) z - \left( r_{\widehat{k1}} - r_{\widehat{l1}} \right) r_1 = \frac{1}{2} \left( \left( r_{\widehat{k1}} \right)^2 - \left( r_{\widehat{l1}} \right)^2 - \left\| \mathbf{p}_k \right\|^2 + \left\| \mathbf{p}_l \right\|^2 \right) \tag{32}$$

where $\|\mathbf{p}_k\|^2 = x_k^2 + y_k^2 + z_k^2$. The above formulation represents a set of linear equations with unknowns $x$, $y$, $z$ and $r_1$ for all combinations of two pairs of satellites having satellite 1 in common. Let $x_{\widehat{ij}}$, $y_{\widehat{ij}}$, $z_{\widehat{ij}}$ represent the differences $x_i - x_j$, $y_i - y_j$, $z_i - z_j$ respectively, $\mathbf{C}_i$ represent the $i^{th}$ combination, and $m$ the total number of combinations, with $\mathbf{C}_i = \{(\mathbf{p}_{k_i}, \mathbf{p}_1), (\mathbf{p}_{l_i}, \mathbf{p}_1)\}$. The system of linear equations for these $m$ combinations can be concisely written as follows:

$$\mathbf{A}\mathbf{X} = \mathbf{B} \tag{33}$$


where,

$$\mathbf{A} = -\begin{bmatrix} x_{\widehat{k_1 l_1}} & y_{\widehat{k_1 l_1}} & z_{\widehat{k_1 l_1}} & \left( r_{\widehat{k_1 1}} - r_{\widehat{l_1 1}} \right) \\ x_{\widehat{k_2 l_2}} & y_{\widehat{k_2 l_2}} & z_{\widehat{k_2 l_2}} & \left( r_{\widehat{k_2 1}} - r_{\widehat{l_2 1}} \right) \\ \vdots & \vdots & \vdots & \vdots \\ x_{\widehat{k_m l_m}} & y_{\widehat{k_m l_m}} & z_{\widehat{k_m l_m}} & \left( r_{\widehat{k_m 1}} - r_{\widehat{l_m 1}} \right) \end{bmatrix},$$

$$\mathbf{X} = \begin{bmatrix} x \\ y \\ z \\ r_1 \end{bmatrix}, \quad \mathbf{B} = \frac{1}{2} \begin{bmatrix} (r_{\widehat{k_1 1}})^2 - (r_{\widehat{l_1 1}})^2 - \|\mathbf{p}_{k_1}\|^2 + \|\mathbf{p}_{l_1}\|^2 \\ (r_{\widehat{k_2 1}})^2 - (r_{\widehat{l_2 1}})^2 - \|\mathbf{p}_{k_2}\|^2 + \|\mathbf{p}_{l_2}\|^2 \\ \vdots \\ (r_{\widehat{k_m 1}})^2 - (r_{\widehat{l_m 1}})^2 - \|\mathbf{p}_{k_m}\|^2 + \|\mathbf{p}_{l_m}\|^2 \end{bmatrix}$$

For $m \geq 3$, the system of equations can be solved. However, $r_1$ is related to $x$, $y$, $z$ by (6). Because of the pairing and the equivalence $\tilde{r}_i - \tilde{r}_1 = r_i - r_1$, observed ranges are always used in the equations, and thus the system of equations is essentially independent of the relationship between $(x, y, z)$ and $r_1$. This is also verified by iterative refinement of $r_1$, where $\tilde{r}_1$ is replaced by the obtained $r_1$ in successive runs: the results show no difference in the position estimates $(x, y, z)$ over successive iterations.
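As an illustrative sketch of building and solving (32)–(33): the helper name `pml_single_reference`, the least-squares solver, and the equal-noise pseudorange model are assumptions for the example; satellite 1 (index 0 here) is the common reference.

```python
import numpy as np
from itertools import combinations

def pml_single_reference(sat_pos, pseudoranges):
    """Build and solve AX = B from pairs ((k,1),(l,1)) sharing reference satellite 1."""
    n = len(pseudoranges)
    rhat = pseudoranges - pseudoranges[0]        # observed differences r_{k1}
    sq = np.sum(sat_pos**2, axis=1)              # ||p_k||^2
    rows, rhs = [], []
    for k, l in combinations(range(1, n), 2):    # reference satellite is index 0
        dx, dy, dz = sat_pos[k] - sat_pos[l]
        rows.append([-dx, -dy, -dz, -(rhat[k] - rhat[l])])   # LHS coefficients of (32)
        rhs.append(0.5 * (rhat[k]**2 - rhat[l]**2 - sq[k] + sq[l]))
    X, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return X[:3], X[3]   # receiver position and range r_1 to the reference satellite
```

If every observed range carries the same additive noise term (e.g. a common clock bias), it cancels in the differences and the linear system recovers the exact position.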

The equal noise assumption cannot be applied to an arbitrary selection of pairs, although it is quite reasonable for satellites observed at near-equal ranges to have equal noise components. The selection of pairs with near-equal ranges from a single reference satellite may not be feasible under low visibility, where only very few satellites are available for positioning. This is the motivation for the next solution approach.

#### **3.7 PML with refinement of the locus of positions**

The linearization using one single reference satellite raises a performance issue: while it is superior to trilateration in most cases, occasionally it performs worse. In search of a positioning approach that gives consistently better estimates than basic trilateration, a locus refinement approach is now presented.

A refined and better approximation to the planar form LOP is found from two imprecise planar form LOPs, assuming equal noise presence (due to receiver bias and ionospheric error) in each pair for a specific instance of measurement, as follows.


Fig. 5. Refining the locus of the receiver position under noisy measurement conditions.

Fig. 5 shows the ideal scenario where the position of the receiver to be determined, ρ, and the three respective planar form LOPs *Oi*, *Oj* and *Ok* are obtained from any three arbitrary satellite pairs **P***i*, **P***<sup>j</sup>* and **P***k*.

The equations for $L_i$, $L_j$, $L_k$ can be found using (8). For a specific measurement instance, $\xi$ is constant due to the identical receiver clock bias and exposure to similar atmospheric noise. Hence, $L_i$, $L_j$, $L_k$ vary from the ideal noise-free LOPs $O_i$, $O_j$, $O_k$ by the extra constant terms $2\xi(r_1^i - r_2^i)$, $2\xi(r_1^j - r_2^j)$ and $2\xi(r_1^k - r_2^k)$ respectively. Crucially, their slopes remain unchanged (the left-hand side of (8)), and they are shown as the solid planes $L_i$, $L_j$, $L_k$ parallel to $O_i$, $O_j$ and $O_k$ in Fig. 5. For non-coplanar satellite pairs, $L_i$, $L_j$ and $L_k$ will have a physical intersection point $\mathbf{I}_{ijk} = (x_{ijk}, y_{ijk}, z_{ijk})$.
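For the non-coplanar case, the intersection point $\mathbf{I}_{ijk}$ of three planes can be computed by solving a small linear system; a minimal sketch with hypothetical plane coefficients $\mathbf{n}_m \cdot \mathbf{x} = c_m$:

```python
import numpy as np

def plane_intersection(normals, offsets):
    # Solve n_m . x = c_m for the single point shared by three non-coplanar planes.
    return np.linalg.solve(np.array(normals, dtype=float),
                           np.array(offsets, dtype=float))

# e.g. the planes x = 1, y = 2, z = 3 meet at the point (1, 2, 3)
```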

Another plane $L'_i$ parallel to $L_i$ can be found by modifying the term $2\xi(r_1^i - r_2^i)$ to $-q(r_1^i - r_2^i)$, where $q$ is an arbitrary positive constant:

$$\left( x_2^i - x_1^i \right) x + \left( y_2^i - y_1^i \right) y + \left( z_2^i - z_1^i \right) z = \frac{1}{2} \left( \|\mathbf{p}_2^i\|^2 - \|\mathbf{p}_1^i\|^2 + \left( r_1^i \right)^2 - \left( r_2^i \right)^2 - q \left( r_1^i - r_2^i \right) \right) \tag{34}$$

The original LOP $O_i$ will then pass between the planes $L'_i$ and $L_i$, as the constants have opposite signs. A similar argument applies to $L'_j$, $L'_k$, so that the parallelepiped bounded by the planes $O_i$, $L_i$, $O_j$, $L_j$, $O_k$, $L_k$ will have an aspect ratio $AR = (r_1^i - r_2^i) : (r_1^j - r_2^j) : (r_1^k - r_2^k)$, since $L_i$, $L_j$, $L_k$ are $2\xi(r_1^i - r_2^i)$, $2\xi(r_1^j - r_2^j)$ and $2\xi(r_1^k - r_2^k)$ distances away from $O_i$, $O_j$ and $O_k$ respectively, differing only by the constant terms in (8). The parallelepiped bounded by the planes $O_i$, $L'_i$, $O_j$, $L'_j$, $O_k$, $L'_k$ will have exactly the same aspect ratio, indicating $\mathbf{I}_{ijk}$, $\mathbf{I}'_{ijk}$ and $\mathbf{I}$ to be the

**Algorithm 1** Satellite Selection for PML Refinement Approach (only fragments of the algorithm box survive in this extraction; the recoverable steps are):

- **if** Number of selected pairs < Required number of pairs **then** add the current pair (**p***i*, **p***j*) to the selected pairs
- Replace the previous co-planar pair with the current pair
- Replace the worst-ranking selected pair with the current pair

The selection algorithm can be run on-demand, only when the satellite positions change or after considerable movement of the receiver. Given the small number of visible satellites in range, this will incur negligible overhead. Finally, as the new PML method itself is an analytical approach, the order of computational complexity is O(1) once satellite selection has been completed.

Summarizing, the PML approaches are an improvement over basic trilateration in that they consider noisy measurement conditions in their formulation. Thus, this new strategy performs significantly better for real-time GPS and tracking performance.

This chapter presented a detailed discussion on the analytical approaches for GPS positioning. Trilateration is the basis for most analytical positioning approaches and hence this chapter begins with fundamental discussion on trilateration. However, it performs poorly under noisy conditions which is analyzed in detail from theoretical and simulated scenarios. We also showed how difference of two range measurements can result in better positioning formulations. Subsequently, we present existing analytical approaches of Bancroft's method and Kleusberg's method that uses least squares and vector algebra respectively for solution of GPS equations. Later we present two newer approaches that are based on using better Locus Of Position (LOP) for the receiver than customary spherical locus in presence of noise. The first of these, called Paired Measurement Localization (PML) with single reference satellite uses hyperboloid planar locus of positions. The solution of these non-linear hyperboloids are found by linearizing with reference to a single satellite. The other PML approach obtains a better LOP from ordinary planar LOPs using a LOP refinement technique. Both of the PML based approaches have the advantage that they can utilize all the available satellites using least squares solution. If only four/three LOPs are used for PML single reference or PML LOP refinement respectively, the receiver position can be calculated by simple algebra. This has the advantage of avoiding matrix inversion for least squares solution and particularly suitable when the receiver has constraint computational support such as mobile embedded GPS receivers. Alternatively, when sufficient computational resources are available and better

**p***i*, **p***<sup>j</sup>* ;

Beyond Trilateration: GPS Positioning Geometry and Analytical Accuracy 255

is co-planar with any previous selected pair and (�) of present pair is lower

**for all** pair of neighboring satellites **do** Calculate rank (�) for the pair

than selected pair **then**

**if p***i*, **p***<sup>j</sup>* 

**else**

**else**

**end if end if end for**

negligible cost.

**4. Conclusion**

diagonal points of the parallelopiped where **I** � *ijk* denotes the intersection point of planes *L* � *i* , *L* � *j* and *L* � *k* .

Hence, the equation of the actual LOP **I***ijk***I** � *ijk* passing through **I** is found from the three intersection points **I***ijk*,**I***ij*� *<sup>k</sup>* and **I***<sup>i</sup>* � *j* � *k* � which are available from equations (8) and (34) and analogous equations for LOPs *Lj*, *Lk*, *L* � *<sup>j</sup>* and *L* � *k* .

As the LOPs obtained in this way are expressed by linear equations with unknowns *x*, *y* and *z*, they can be solved using simple algebraic or least squares methods.
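As a concrete illustration of this step, the sketch below builds planar LOPs of the form of Eq. (34) from satellite pairs and solves the stacked linear system by least squares. This is an illustrative reconstruction, not the authors' implementation: the pair constant *q* is treated as a known parameter (zero here, i.e. no clock term), and the helper names (`pair_plane`, `solve_lops`) are ours.

```python
import numpy as np

def pair_plane(p1, p2, r1, r2, q=0.0):
    """Planar LOP n . x = c built from one satellite pair, following the
    form of Eq. (34); q is the pair constant (taken as known here)."""
    n = p2 - p1
    c = 0.5 * (p2 @ p2 - p1 @ p1 + r1**2 - r2**2 - q * (r1 - r2))
    return n, c

def solve_lops(pairs):
    """Stack the planes from several satellite pairs and solve the linear
    system for the receiver coordinates (x, y, z) in least squares sense."""
    planes = [pair_plane(*pair) for pair in pairs]
    A = np.array([n for n, _ in planes])
    b = np.array([c for _, c in planes])
    x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x_hat

# Synthetic, noise-free check (arbitrary units): ranges generated from a
# known receiver position should be recovered by the stacked solution.
rng = np.random.default_rng(1)
x_true = np.array([1.0, -2.0, 3.0])
sats = rng.uniform(-10.0, 10.0, size=(6, 3))
ranges = np.linalg.norm(sats - x_true, axis=1)
pairs = [(sats[i], sats[i + 1], ranges[i], ranges[i + 1]) for i in range(5)]
x_hat = solve_lops(pairs)
```

With noise-free ranges the stacked system is consistent, so the least squares solution coincides with the true receiver position.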

The locus refinement formulation assumes noise to be present in the formulae. However, if noise is absent, the diagonal points **I**<sub>*ijk*</sub> and **I**′<sub>*ijk*</sub> would be very close; accordingly, during the calculation process, whenever a pair of diagonal points less than 2 m apart is observed, the estimated location is taken as the mean of these two points.

The planar-form LOPs obtained from each satellite pair must be linearly independent, so that they do not represent either the same or a parallel planar LOP. Such satellite pairs are referred to as mutually independent, and a key objective is to identify such satellite pairs in which each satellite has a nearly similar distance from the receiver. PML may be intuitively viewed as positioning that exploits bearing measurements, as the LOPs effectively denote a directional line. It is known that angular measurements are consistently more accurate than TOF range measurements, and in (Chintalapudi et al., 2004) a combination of range and angular measurements has been shown to achieve better positioning results, providing valuable insight as to why the LOP refinement furnishes better location estimation.

#### **3.8 Selection of satellite pairs for PML**

Observation 1 implies that a pair of satellites at equal distances from the receiver position experiences equal atmospheric noise exposure, a prerequisite that is relaxed and generalized by the LOP refinement approach. Observation 1 highlights the significance of pairing the satellites for better noise cancellation, and a better selection process can result in considerable improvement. With practical range estimations there is no explicit way to determine the best possible pairs following the observation. However, the range estimation ratios can be used as a rough measure for adhering to observation 1, which is the basis for the following empirically defined ranking criterion. The ranking criterion also considers the closeness of the satellites: if the two satellites are too close to each other they might have the best range estimation ratio while effectively acting like two satellites placed at the same location, providing no additional redundancy to help positioning. Utilizing these two principles, the following empirical ranking criterion is introduced.

$$\mathcal{R} = \frac{\tilde{r}_1}{\tilde{r}_2} \left( \frac{1}{\|\mathbf{p}_1 \mathbf{p}_2\|} \right) \tag{35}$$

where *r̃*<sub>1</sub> and *r̃*<sub>2</sub> are the observed range estimates for the satellite pair (**p**<sub>1</sub>, **p**<sub>2</sub>), such that *r̃*<sub>1</sub> ≥ *r̃*<sub>2</sub>, and ‖**p**<sub>1</sub>**p**<sub>2</sub>‖ is the Euclidean distance between the two satellites. Pairs having lower ranks (ℛ) are preferred over ones with higher ranks. The complete satellite selection procedure is given in Algorithm 1.

Algorithm 1 searches all available satellite pairs for a particular receiver, so its computational complexity is *O*(*available satellites*<sup>2</sup>) if an exhaustive search is applied. This selection process can be run on demand, only when satellite positions have changed or after considerable movement of the receiver. Given the small number of visible satellites in range, this incurs negligible cost.
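The ranking of Eq. (35) and the selection loop can be sketched as follows. This is a simplified stand-in for Algorithm 1 with hypothetical helper names: it keeps the lowest-ranked pairs greedily and omits the co-planarity replacement rule.

```python
import numpy as np
from itertools import combinations

def pair_rank(p1, r1, p2, r2):
    """Empirical rank of Eq. (35): ratio of the larger to the smaller
    observed range, divided by the inter-satellite distance (lower is
    better)."""
    r_big, r_small = max(r1, r2), min(r1, r2)
    return (r_big / r_small) / np.linalg.norm(np.asarray(p1) - np.asarray(p2))

def select_pairs(positions, ranges, n_pairs):
    """Greedy stand-in for Algorithm 1: keep the n_pairs lowest-ranked
    satellite pairs. The co-planarity replacement rule is omitted."""
    ranked = sorted(
        combinations(range(len(positions)), 2),
        key=lambda ij: pair_rank(positions[ij[0]], ranges[ij[0]],
                                 positions[ij[1]], ranges[ij[1]]))
    return ranked[:n_pairs]

# Demo: three well-separated satellites plus a near-duplicate of the first;
# the near-duplicate pair (0, 3) gets a poor (large) rank and is discarded.
positions = [(0.0, 0.0, 26.0), (26.0, 0.0, 0.0), (0.0, 26.0, 0.0),
             (0.001, 0.0, 26.0)]
ranges = [25.0, 25.0, 25.0, 25.0]
best = select_pairs(positions, ranges, 3)
```

The demo illustrates the second principle above: two almost co-located satellites have an excellent range ratio but a tiny inter-satellite distance, so the criterion pushes their rank up and the pair is not selected.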

Finally, as the new PML method itself is an analytical approach, the order of computational complexity is O(1) once satellite selection has been completed.

Summarizing, the PML approaches improve over basic trilateration in that they account for noisy measurement conditions in their formulation. Thus, this new strategy performs significantly better for real-time GPS positioning and tracking.

### **4. Conclusion**


This chapter presented a detailed discussion of the analytical approaches for GPS positioning. Trilateration is the basis for most analytical positioning approaches, and hence this chapter began with a fundamental discussion of trilateration. However, trilateration performs poorly under noisy conditions, which was analyzed in detail through theoretical and simulated scenarios. We also showed how the difference of two range measurements can result in better positioning formulations. Subsequently, we presented the existing analytical approaches of Bancroft's method and Kleusberg's method, which use least squares and vector algebra respectively to solve the GPS equations. Later we presented two newer approaches that are based on using a better Locus Of Position (LOP) for the receiver than the customary spherical locus in the presence of noise. The first of these, called Paired Measurement Localization (PML) with a single reference satellite, uses a hyperboloid locus of positions; the solutions of these non-linear hyperboloids are found by linearizing with reference to a single satellite. The other PML approach obtains a better LOP from ordinary planar LOPs using a LOP refinement technique. Both of the PML-based approaches have the advantage that they can utilize all the available satellites through a least squares solution. If only four or three LOPs are used, for PML single reference or PML LOP refinement respectively, the receiver position can be calculated by simple algebra. This has the advantage of avoiding the matrix inversion of a least squares solution and is particularly suitable when the receiver has constrained computational resources, such as mobile embedded GPS receivers. Alternatively, when sufficient computational resources are available and better precision is needed, a full-fledged least squares solution and further filtering techniques can be applied.

## **5. Acknowledgment**

Part of this research is supported by University of Malaya high-impact research grant number UM.C/HIR/MOHE/FCSIT/04.

#### **6. References**

Bajaj, R., Ranaweera, S. & Agrawal, D. (2002). GPS: Location-tracking technology, *Computer* 35(4): 92–94.

Bancroft, S. (1985). An algebraic solution of the GPS equations, *IEEE Transactions on Aerospace and Electronic Systems* 21(6): 56–59.

Caffery, J. J. (2000). A new approach to the geometry of TOA location, *52nd Vehicular Technology Conference*.

Chaffee, J. & Abel, J. (1994). On the exact solutions of pseudorange equations, *IEEE Transactions on Aerospace and Electronic Systems* 30: 1021–1030.

Chan, Y. & Ho, K. (1994). A simple and efficient estimator for hyperbolic location, *IEEE Transactions on Signal Processing* 42: 1905–1915.

Chintalapudi, K. K., Dhariwal, A., Govindan, R. & Sukhatme, G. (2004). Ad-hoc localization using ranging and sectoring, *INFOCOM*.

Foy, W. H. (1976). Position-location solutions by Taylor-series estimation, *IEEE Trans. Aerosp. Electron. Syst.* 12: 187–194.

Friedlander, B. (1987). A passive localization algorithm and its accuracy analysis, *IEEE J. Ocean. Eng.* 12: 234–245.

Kleusberg, A. (1994). Analytical GPS navigation solution, pp. 1905–1915.

Rahman, M. Z. & Kleeman, L. (2009). Paired measurement localization: A robust approach for wireless localization, *IEEE Transactions on Mobile Computing* 8(8).

Smith, J. O. & Abel, J. S. (1987). Closed-form least-squares source location estimation from range-difference measurements, *IEEE Trans. Acoust., Speech, Signal Process.* 35: 1661–1669.

Strang, G. & Borre, K. (1997). *Linear Algebra, Geodesy, and GPS*, Wellesley-Cambridge.

Torrieri, D. J. (1984). Statistical theory of passive location systems, *IEEE Trans. Aerosp. Electron. Syst.* 20: 183–197.

**11**

## **Improved Inertial/Odometry/GPS Positioning of Wheeled Robots Even in GPS-Denied Environments**

Eric North<sup>1</sup>, Jacques Georgy<sup>2</sup>, Umar Iqbal<sup>3</sup>, Mohammed Tarbochi<sup>4</sup> and Aboelmagd Noureldin<sup>5</sup>

<sup>1</sup>*Canadian Forces Aerospace and Telecommunications Engineering Support Squadron*
<sup>2</sup>*Trusted Positioning Inc.*
<sup>3</sup>*Electrical and Computer Engineering Department, Queen's University*
<sup>4</sup>*Electrical and Computer Engineering Department, Royal Military College*
<sup>5</sup>*Electrical and Computer Engineering Department, Royal Military College/Queen's University, Canada*

#### **1. Introduction**

As described by Pacis et al (Pacis et al., 2004), the control strategy from a navigational viewpoint used in a mobile platform ranges from tele-operated to autonomous. A tele-operated platform is a platform having no on-board intelligence, whose navigation is guided in real time by a remote human operator. An autonomous platform is one that makes its own decisions using onboard sensors and processors. According to Pacis et al (Pacis et al., 2005), the problems that must be dealt with for autonomous mobile robot navigation are localization, path planning, obstacle avoidance and map building. The focus of this work is on the localization problem.

Localization is the problem of estimating a robot's pose relative to its environment from sensor observations. Localization is a necessity for successful mobile robot systems; it has been referred to as "the most fundamental problem to providing a mobile robot with autonomous capabilities" (Cox, 1991). Furthermore, as confirmed in (Pacis et al., 2004), to achieve autonomous navigation the robot must maintain an accurate knowledge of its position and orientation. Successful achievement of all other navigation tasks depends on the robot's ability to know its position and orientation accurately. According to a review by Borenstein et al (Borenstein et al., 1997) of mobile robot positioning technologies, positioning systems are divided into seven categories falling into two groups. They classified the positioning techniques as relative position measurement and absolute position measurement. The former includes odometry and inertial navigation, while the latter includes magnetic compass, active beacons, global positioning system (GPS), landmark navigation and map-based positioning.

An unprecedented surge of developments in mobile robot outdoor navigation was witnessed after the US government removed selective availability (SA) of GPS. Examples of applications


for these robots are autonomous lawnmowers and motorized wheelchairs. These devices are low-cost and are used on terrain that is not flat. GPS can be used to provide three-dimensional knowledge of the mobile robot's position. Unfortunately, GPS suffers from outages when the line-of-sight between the robot and the GPS satellites is blocked. These outages are caused by operating the robot in and around buildings, dense foliage and other obstructions. An inertial measurement unit (IMU), with three accelerometers and three gyroscopes, is a good choice in lieu of GPS during outages for providing a 3-D positioning solution. Since a low-cost solution is needed for certain mobile robots, a low-cost IMU based on a micro-electro-mechanical system (MEMS) has to be used. However, MEMS-based inertial sensors suffer from several complex errors such as biases; moreover, these errors have influential stochastic parts. Since inertial navigation systems (INS) involve integration operations on the sensor readings, the subsequent errors accumulate and cause a rapid degradation in the quality of the position estimate. Odometry using wheel encoders is another type of dead reckoning that provides limited localization information, mostly two-dimensional (2-D). This information is not subject to the same magnitude of errors as the IMU, provided that the vehicle does not encounter excessive skidding or slipping. But these 2-D solutions will not be adequate if the robot often moves outside the horizontal plane.

While 2-D and 3-D solutions using sensors in a full-sized vehicle have been demonstrated in the work to date, further research is needed in the area of 3-D localization of small wheeled mobile robots operating in large 3-D terrain. In the majority of the previous work using small mobile robots, the terrain is flat and the paths of the robots are short (for example (Ohno et al., 2003)(Ollero et al., 1999)(Chong & Kleeman, 1997)). This work attempts to bridge the gap between full-sized vehicle navigation in 3-D and navigation of small wheeled mobile robots over large paths in uneven terrain. Furthermore, this work will provide a 3-D solution for a small wheeled mobile robot that is required to travel distances in excess of 1 km over hilly terrain.

This work aims at combining the advantages of inertial sensors and odometry while mitigating their disadvantages to provide enhanced low-cost mobile robot 3-D localization capabilities during GPS outages. This will be achieved through the use of a Kalman Filter (KF) that integrates odometry from wheel encoders, low cost MEMS-based inertial sensors and GPS in a loosely-coupled scheme. To enhance the performance and lower the cost further, the proposed technique uses a reduced inertial sensor system (RISS). To further enhance the solution during GPS outages, velocity updates computed from wheel speeds are used to reduce the drift of the estimated solution. Moreover, this work proposes the development of a predictive error model used in a KF for estimating the errors in positions, velocities and azimuth angle from RISS mechanization. The experimental results will show that this error model when combined in a KF with 3-D measurement updates of velocities using forward speed from encoders together with pitch and azimuth estimates is a good technique for greatly reducing localization errors.
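The loosely-coupled integration described above rests on the standard discrete Kalman filter recursion. The sketch below shows only the generic predict/update equations; it is not the authors' RISS error-state model, whose specific state vector and matrices belong to the methodology of Section 2.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Propagate the error state and its covariance with the model F."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    """Fuse one measurement z = H x + v, e.g. a GPS position/velocity fix
    or an encoder-derived velocity update."""
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# One-dimensional sanity demo: a single unknown observed directly.
x1, P1 = kf_predict(np.zeros(1), np.eye(1), np.eye(1), 0.01 * np.eye(1))
x1, P1 = kf_update(x1, P1, np.array([1.0]), np.eye(1), np.eye(1))
```

In the loosely-coupled scheme, `kf_predict` runs at the mechanization rate using the error model, while `kf_update` is invoked whenever a GPS fix (or a wheel-speed-derived velocity update during outages) becomes available.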

The structure of the rest of this chapter is as follows: Section 2 presents the methodology used. It describes the equations used to implement the RISS mechanization and KF error-models. Section 3 is a description of the mobile robot and the setup used in the experiments. Section 4 presents the results and discussion of this work. Finally, Section 5 is the conclusion and discussion of future work.

## **2. Methodology**


## **2.1 Reduced inertial sensor system**

In addition to MEMS-based sensors, the concept of RISS is used in a navigation scheme for a full-sized vehicle (Iqbal et al., 2008) in order to further lower the cost of the positioning solution. The RISS used in (Iqbal et al., 2008) involves a single-axis gyroscope and a two-axis accelerometer together with a built-in vehicle speed sensor to provide a 2-D navigation solution in GPS-denied environments. With the assumption that the vehicle remains mostly in the horizontal plane, the vehicle's speed sensor is used with heading information obtained from the vertically-aligned gyroscope to determine the velocities along the East and North directions. Consequently, the vehicle's longitude and latitude are determined. If pitch and roll angles are needed, the two accelerometers pointing towards the forward and transverse directions are used together with the odometer-derived speed and a reliable gravity model to determine these angles independently of the integration filter. In (Iqbal et al., 2009), 2-D RISS/GPS integration was presented using a Kalman filter (KF) for a full-sized vehicle.
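The 2-D RISS idea above reduces to resolving the odometer speed through the gyro-derived azimuth. A minimal sketch, assuming the azimuth is measured clockwise from North and using the standard curvature-radius position rates (function names are ours):

```python
import math

def en_velocity(speed, azimuth):
    """Resolve odometer-derived speed into East and North velocity
    components using the gyro-derived azimuth (assumed measured
    clockwise from North, in radians)."""
    return speed * math.sin(azimuth), speed * math.cos(azimuth)

def step_position(lat, lon, speed, azimuth, dt, R_m, R_n):
    """One 2-D dead-reckoning step: update latitude/longitude (radians)
    from the East/North velocities. R_m and R_n are the meridian and
    normal radii of curvature (plus altitude)."""
    v_e, v_n = en_velocity(speed, azimuth)
    return lat + v_n * dt / R_m, lon + v_e * dt / (R_n * math.cos(lat))
```

For example, a robot heading due North converts all of its speed into the North velocity component, while a robot heading due East converts it into the East component.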

In this work a low-cost navigation system using a KF to integrate MEMS-based RISS with GPS in a loosely-coupled scheme is described. The RISS used herein is 3-D: it includes a three-axis accelerometer and a single-axis gyroscope aligned with the vertical axis of the body frame of the robot together with two wheel encoders. Here accelerometers are used to calculate 3-D velocity and position while the vertical gyroscope is used to calculate the azimuth angle (i.e. the heading of the robot). Pitch and roll are calculated based on the idea presented in (Noureldin et al., 2002)(Noureldin et al., 2004) using the two horizontal accelerometers and the forward velocity obtained from wheel encoders. This constitutes the RISS mechanization.

The benefits of eliminating the other two gyroscopes in this RISS mechanization scheme are as follows: (1) further decreases in system costs and (2) improvements in positioning accuracy by employing fewer inertial sensors and thus less contribution of sensor errors towards positional errors. Of particular importance is the reduction in error in pitch and roll calculations. Whereas full mechanization of pitch and roll from gyroscopes involves integration, their calculation in RISS mechanization using accelerometers does not. This last fact decreases the portion of positional error originating from pitch and roll errors.

### **2.2 Coordinate transformation from local level frame (LLF) to body frame (b-frame)**

The local level frame (LLF) serves to represent mobile robot attitude and velocity for operation on or near the surface of the earth and is defined by an origin and x-, y- and z-axes. The origin coincides with the center of the sensor frame (the origin of the inertial sensor triad). The y-axis points to true north, the x-axis points east, and the z-axis completes the right-handed coordinate system, pointing up, perpendicular to the reference ellipsoid.

One of the important direction cosine matrices for specifying the rotation from one coordinate frame to another is $\mathbf{R}_b^l$, which transforms a vector from the b-frame to the LLF, a requirement during the mechanization process. Expressed in terms of the yaw, pitch and roll Euler angles, $\mathbf{R}_b^l$ is defined as:


$$\mathbf{R}\_b^l = \begin{bmatrix} \cos\psi\cos\theta - \sin\psi\sin\rho\sin\theta & -\sin\psi\cos\rho & \cos\psi\sin\theta + \sin\psi\sin\rho\cos\theta \\ \sin\psi\cos\theta + \cos\psi\sin\rho\sin\theta & \cos\psi\cos\rho & \sin\psi\sin\theta - \cos\psi\sin\rho\cos\theta \\ -\cos\rho\sin\theta & \sin\rho & \cos\rho\cos\theta \end{bmatrix} \tag{1}$$
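The matrix in Equation 1 is easy to get wrong by hand. A minimal Python sketch (the helper name is ours, not the chapter's) builds $\mathbf{R}_b^l$ from yaw *ψ*, pitch *ρ* and roll *θ*, and can be checked for orthonormality:

```python
import math

def rotation_b_to_l(yaw, pitch, roll):
    """Direction cosine matrix R_b^l of Equation 1 (yaw=psi, pitch=rho, roll=theta)."""
    sy, cy = math.sin(yaw), math.cos(yaw)
    sp, cp = math.sin(pitch), math.cos(pitch)
    sr, cr = math.sin(roll), math.cos(roll)
    return [
        [cy * cr - sy * sp * sr, -sy * cp, cy * sr + sy * sp * cr],
        [sy * cr + cy * sp * sr,  cy * cp, sy * sr - cy * sp * cr],
        [-cp * sr,                sp,      cp * cr],
    ]
```

Since $\mathbf{R}_b^l$ is a rotation, its rows must be orthonormal; evaluating that property at a few angle triples is a quick regression test for transcription errors.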

#### **2.3 Mobile robot odometry equation**

The conventions and notation presented in (Chong & Kleeman, 1997) are used to create a kinematic model for the mobile robot. In this work, a simple model for mobile robot kinematics is considered. The wheels must be as thin as possible (one rolling point of contact between the terrain and each wheel), there must be no slipping along the longitudinal direction, and there must be no sliding along the transverse direction.

Define the instantaneous center of curvature (ICC) as a means of describing the curvilinear motion that the mobile robot makes on a plane. In a two-dimensional environment, the plane that the robot travels on remains fixed for all possible positions and orientations of the mobile robot (some authors refer to the reference frame enclosed in this plane as the "global reference frame" (Chong & Kleeman, 1997)). In this work, motion of the mobile robot on possibly distinct planes for each time interval is considered. The mobile robot travels on a plane that is fixed from time *k* − 1 ≤ *t* ≤ *k* + 1.

$$V\_{T\_k} = \frac{r\_R \omega\_{R\_k} + r\_L \omega\_{L\_k}}{2} \tag{2}$$

where:

*VTk* is the velocity of the robot measured from its center and tangent to the circular path contained on a plane from time *k* − 1 ≤ *t* ≤ *k* + 1;

*rR* is the radius of the right drive wheel;

*rL* is the radius of the left drive wheel;

*ωRk* is the angular velocity of the right drive wheel from time *k* − 1 ≤ *t* ≤ *k* + 1;

*ωLk* is the angular velocity of the left drive wheel from time *k* − 1 ≤ *t* ≤ *k* + 1;

*k* represents discrete time epochs; and

Δ*t* is the sampling time.

Rotational speeds of the left and right drive wheels are measured using encoders which are used to calculate the forward velocity of the robot. The forward velocity is transformed into velocities in the local frame using the equations below. Equation 2 is expressed in the mobile robot frame. In order for us to use the velocities of the robot's wheels as a measurement update we must transform these quantities to the local frame. Using the following transformation we have:

$$V\_{e\_k}^{odo} = V\_{T\_k} \cos \left(\rho\_k \right) \sin \left(\alpha\_k \right) \tag{3}$$

$$V\_{n\_k}^{odo} = V\_{T\_k} \cos \left(\rho\_k \right) \cos \left(\alpha\_k \right) \tag{4}$$

$$V\_{u\_k}^{odo} = V\_{T\_k} \sin \left( \rho\_k \right) \tag{5}$$
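The odometry chain of Equations 2-5 can be sketched as a single function. The function name and the example wheel parameters below are illustrative, not from the chapter:

```python
import math

def odometry_velocity_llf(r_r, r_l, w_r, w_l, pitch, azimuth):
    """Wheel-encoder rates -> East/North/Up velocity observations (Eqs. 2-5)."""
    v_t = (r_r * w_r + r_l * w_l) / 2.0              # Eq. (2): tangential speed
    v_e = v_t * math.cos(pitch) * math.sin(azimuth)  # Eq. (3): East component
    v_n = v_t * math.cos(pitch) * math.cos(azimuth)  # Eq. (4): North component
    v_u = v_t * math.sin(pitch)                      # Eq. (5): Up component
    return v_e, v_n, v_u
```

For example, with 0.1 m wheels spinning at 5 rad/s, zero pitch and zero azimuth, the robot moves due north at 0.5 m/s.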

#### **2.4 RISS mechanization**

#### **2.4.1 Attitude equations**

The equations for calculating pitch and roll from the accelerometers are based on the idea presented in (Noureldin et al., 2002; Noureldin et al., 2004). The robot acceleration derived from the wheel encoder measurements is removed from the forward accelerometer measurement before computing the pitch angle. The equation for pitch *ρ*, neglecting acceleration along the forward direction since the robot travels at very low speeds, is as follows:

$$
\rho = \sin^{-1}\left(\frac{f\_y - a\_f}{g}\right) \approx \sin^{-1}\left(\frac{f\_y}{g}\right) \tag{6}
$$

Where:


*fy* is the forward accelerometer reading;

*g* is the acceleration due to gravity; and

*a <sup>f</sup>* is the forward acceleration and is derived from the forward velocity of the robot calculated from the average velocity measured by the wheel encoders from Equation 2.

The transverse accelerometer has to be compensated for the normal acceleration of the vehicle and then it is used to calculate the roll angle. The equation for roll *θ*:

$$\theta = -\sin^{-1}\left(\frac{f\_{\text{x}} + V\_f \omega\_z}{g \cos \rho}\right) \tag{7}$$

Where:

*fx* is the transversal accelerometer reading; and

*ω<sup>z</sup>* is the vertical gyroscope reading.

The equation for the time-rate-of-change of yaw according to (Iqbal et al., 2009), using the previous value for *Ve* from the RISS mechanization, is:

$$
\dot{\psi} = \omega\_z - \omega\_e \sin \phi - V\_e \frac{\tan \phi}{R + h} \tag{8}
$$

Integrating in discrete time gives us:

$$
\psi\left(k\right) = \psi\left(k-1\right) + \dot{\psi}\left(k\right)\Delta t\tag{9}
$$
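Equations 6-9 amount to a few lines of code. The sketch below uses our own function names; WGS-84 values for *g*, *ω<sub>e</sub>* and *R* are assumptions for illustration:

```python
import math

G = 9.80665         # gravity magnitude (m/s^2), assumed
W_E = 7.292115e-5   # Earth rotation rate (rad/s), assumed WGS-84 value
R_EARTH = 6378137.0 # single Earth radius R (m), as in the text

def pitch_from_accel(f_y, a_f, g=G):
    """Eq. (6): pitch from the forward accelerometer, vehicle acceleration removed."""
    return math.asin((f_y - a_f) / g)

def roll_from_accel(f_x, v_f, w_z, pitch, g=G):
    """Eq. (7): roll from the transverse accelerometer, compensated for the
    normal acceleration V_f * w_z."""
    return -math.asin((f_x + v_f * w_z) / (g * math.cos(pitch)))

def yaw_update(yaw_prev, w_z, v_e, lat, h, dt, w_e=W_E, r=R_EARTH):
    """Eqs. (8)-(9): compute the yaw rate and integrate it over one time step."""
    yaw_rate = w_z - w_e * math.sin(lat) - v_e * math.tan(lat) / (r + h)
    return yaw_prev + yaw_rate * dt
```

At rest on level ground the accelerometer-derived pitch and roll are zero, and at the equator the yaw simply integrates the gyroscope reading.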

#### **2.4.2 Velocity equations**

There are three accelerometers that can be used to measure acceleration in the body frame of the mobile robot. Use these acceleration values to compute a velocity increment in the current time-step in order to compute an estimate for the velocities. Use roll, pitch and yaw to calculate the rotation matrix $\mathbf{R}_b^l$ from the body frame to the local frame in Equation 1. Calculate a skew-symmetric matrix $\omega_{ie}^l$ for the earth's rotation rate since the last velocity calculation. In addition, calculate the skew-symmetric matrix $\omega_{el}^l$ for the LLF change-of-orientation since the last calculation.


Using $\mathbf{R}_b^l$ from Equation 1, calculate the skew-symmetric matrix for the earth rotation rate $\omega_{ie}^l$ since the last velocity calculation:

$$
\omega\_{ie}^{l} = \begin{bmatrix}
0 & -\omega\_e \sin\phi & \omega\_e \cos\phi \\\\ \omega\_e \sin\phi & 0 & 0 \\\\ -\omega\_e \cos\phi & 0 & 0
\end{bmatrix} \tag{10}
$$

In addition, calculate the skew-symmetric matrix for the LLF change of orientation $\omega_{el}^l$ since the last calculation:

$$
\omega\_{el}^{l} = \begin{bmatrix}
0 & -\frac{V\_e \tan \phi}{N+h} & \frac{V\_e}{N+h} \\\\
\frac{V\_e \tan \phi}{N+h} & 0 & \frac{V\_n}{M+h} \\\\
-\frac{V\_e}{N+h} & -\frac{V\_n}{M+h} & 0
\end{bmatrix} \tag{11}
$$

Use the following equation to provide velocity increments of the mobile robot in the body frame:

$$
\Delta \vec{V}^b = \begin{bmatrix} f\_x \Delta t \\\\ f\_y \Delta t \\\\ f\_z \Delta t \end{bmatrix} \tag{12}
$$

With the three matrices $\mathbf{R}_b^l$, $\omega_{ie}^l$ and $\omega_{el}^l$, the effect of gravity in the local frame, and the body-frame velocity increments $\Delta\vec{V}^b$, the new velocities are calculated by first determining the velocity increments $\Delta\vec{V}^l$ in the local frame as follows:

$$
\Delta \vec{V}^{l} = \mathbf{R}\_{b}^{l} \Delta \vec{V}^{b} - \left(2\omega\_{ie}^{l} + \omega\_{el}^{l}\right) \vec{V}^{l} \Delta t + \vec{g}^{\,l} \Delta t \tag{13}
$$

Where $\vec{g}^{\,l} = \begin{bmatrix} 0 & 0 & -g \end{bmatrix}^T$. Integration is performed using the previous value of the velocity $\vec{V}^l$ at time $k-1$ to get $\vec{V}^l$ at time $k$ using $\Delta\vec{V}^l$ as follows:

$$
\vec{V}^{l}(k) = \vec{V}^{l}(k-1) + 0.5 \left[ \Delta \vec{V}^{l}(k) + \Delta \vec{V}^{l}(k-1) \right] \tag{14}
$$
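The velocity mechanization of Equations 10-14 can be sketched compactly. The helper names are ours, and WGS-84-style constants are assumed for illustration:

```python
import math

W_E = 7.292115e-5  # Earth rotation rate (rad/s), assumed WGS-84 value

def mat_vec(m, v):
    """3x3 matrix times 3-vector."""
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def delta_v_llf(r_bl, f_b, v_l, lat, h, dt, n=6378137.0, m=6356752.3, g=9.80665):
    """Local-frame velocity increment (Eq. 13) from body-frame specific force."""
    dv_b = [f * dt for f in f_b]                        # Eq. (12)
    s, c = W_E * math.sin(lat), W_E * math.cos(lat)
    w_ie = [[0.0, -s, c], [s, 0.0, 0.0], [-c, 0.0, 0.0]]  # Eq. (10)
    v_e, v_n, _ = v_l
    a = v_e * math.tan(lat) / (n + h)
    b = v_e / (n + h)
    d = v_n / (m + h)
    w_el = [[0.0, -a, b], [a, 0.0, d], [-b, -d, 0.0]]     # Eq. (11)
    coriolis = [[2 * w_ie[i][j] + w_el[i][j] for j in range(3)] for i in range(3)]
    rot = mat_vec(r_bl, dv_b)           # R_b^l * dV^b
    cor = mat_vec(coriolis, v_l)        # (2*w_ie + w_el) * V^l
    g_l = [0.0, 0.0, -g]
    return [rot[i] - cor[i] * dt + g_l[i] * dt for i in range(3)]

def integrate_velocity(v_prev, dv_now, dv_prev):
    """Trapezoidal velocity update (Eq. 14)."""
    return [v_prev[i] + 0.5 * (dv_now[i] + dv_prev[i]) for i in range(3)]
```

A quick sanity check: a stationary, level robot senses the gravity reaction $f^b = [0, 0, g]$, and the gravity term in Equation 13 cancels it, so the velocity increment is zero.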

#### **2.4.3 Position equations**

The equations for altitude *h*, latitude *φ* and longitude *λ* are as follows:

$$h\left(k\right) = h\left(k-1\right) + 0.5\left[V\_{u}\left(k\right) + V\_{u}\left(k-1\right)\right]\Delta t\tag{15}$$

$$
\phi\left(k\right) = \phi\left(k-1\right) + 0.5\left[V\_{n}\left(k\right) + V\_{n}\left(k-1\right)\right]\frac{\Delta t}{R+h} \tag{16}
$$

$$
\lambda\left(k\right) = \lambda\left(k - 1\right) + 0.5\left[V\_{e}\left(k\right) + V\_{e}\left(k - 1\right)\right]\frac{\Delta t}{\left(R + h\right)\cos\phi} \tag{17}
$$

It should be noted that any uncompensated bias or drift error in the accelerometer data will lead to growing errors when integrating acceleration to get velocity and again when integrating to get position. Furthermore, any uncompensated bias or drift error in the vertical gyroscope reading will lead to error growth when integrating to get yaw and again (together with velocity) to get position.
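The position update of Equations 15-17 can be sketched as follows, using a single Earth radius *R* as in the text (the function name is ours):

```python
import math

def position_update(lat, lon, h, v_now, v_prev, dt, r=6378137.0):
    """Trapezoidal update of altitude, latitude and longitude (Eqs. 15-17).
    v_now / v_prev are [V_e, V_n, V_u] at epochs k and k-1."""
    v_e, v_n, v_u = [(a + b) / 2.0 for a, b in zip(v_now, v_prev)]
    h_new = h + v_u * dt                                  # Eq. (15)
    lat_new = lat + v_n * dt / (r + h)                    # Eq. (16)
    lon_new = lon + v_e * dt / ((r + h) * math.cos(lat))  # Eq. (17)
    return lat_new, lon_new, h_new
```

For example, moving due north at 10 m/s for one second near the equator advances latitude by roughly 10 / 6378137 rad, i.e. about 1.6 microradians.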

#### **2.5 Kalman filtering**

KF is the most commonly used technique for INS/GPS integration (Farrell & Barth, 1998; Grewal et al., 2007). Fig. 1 shows a top-level view of the KF-based system used in this chapter for outdoor mobile robot localization. As mentioned previously, a loosely-coupled integration scheme is adopted in this chapter.

Fig. 1. An overview of the KF-based system used for outdoor mobile robot localization.

#### **2.5.1 Error state vector**

Since the KF requires linearized models, it estimates the error states rather than the navigation states themselves. The errors in the navigation states estimated by the filter are then used to correct the mechanization output and provide corrected navigation states. Leveraging the benefits of wheel encoders during GPS outages, the KF presented in this section uses an error vector containing eleven states.

The linearized error-state system model used by the KF in this work is in the form:

$$
\delta \dot{\vec{\mathbf{x}}} = \mathbf{F} \delta \vec{\mathbf{x}} + \mathbf{G} \mathbf{W} \left( t \right) \tag{18}
$$

The error state vector in Equation 18 consists of position errors (for latitude, longitude and altitude), velocity errors (along East, North and vertical Up), yaw error and inertial sensor stochastic drift for the single gyroscope and three accelerometers:

$$
\delta\dot{\vec{x}} = \begin{bmatrix} \delta\dot{\phi} & \delta\dot{\lambda} & \delta\dot{h} & \delta\dot{V}\_{e} & \delta\dot{V}\_{n} & \delta\dot{V}\_{u} & \delta\dot{\psi} & \delta\dot{\omega}\_{z} & \delta\dot{f}\_{x} & \delta\dot{f}\_{y} & \delta\dot{f}\_{z} \end{bmatrix}^{T} \tag{19}
$$
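A hedged sketch of how the continuous model of Equation 18 can be discretized for the filter's prediction stage, using a first-order transition matrix. The state ordering follows Equation 19; the labels and function name are ours, and **F**, **G**, **Q** are left as inputs since their entries are developed in the following subsections:

```python
import numpy as np

# State ordering per Equation 19 (labels are ours):
# [d_lat, d_lon, d_h, dV_e, dV_n, dV_u, d_psi, d_wz, d_fx, d_fy, d_fz]
N_STATES = 11

def kf_predict(x, P, F, G, Q, dt):
    """One discrete prediction of the error model dx/dt = F x + G w (Eq. 18):
    first-order transition matrix Phi = I + F*dt, process noise G Q G^T dt."""
    Phi = np.eye(N_STATES) + F * dt
    x_pred = Phi @ x
    P_pred = Phi @ P @ Phi.T + (G @ Q @ G.T) * dt
    return x_pred, P_pred
```

With **F** = 0 the error estimate stays put while the covariance grows by the injected process noise, which is the expected open-loop behaviour during a GPS outage.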


The following sections contain derivations of the equations for each error state in the model. These equations use first order terms from the Taylor series expansion of the mechanization equations.

#### **2.5.2 Position errors**

From (Noureldin et al., 2009) the position components of the mechanization equations are linearized, yielding three error equations for latitude, longitude and altitude. Neglecting higher-order terms of the Taylor Series and writing in matrix form gives:

$$
\delta\dot{\vec{r}}^{\,l} = \begin{bmatrix}
\delta\dot{\phi} \\
\delta\dot{\lambda} \\
\delta\dot{h}
\end{bmatrix} = \begin{bmatrix}
0 & 0 & -\frac{V\_{n}}{(M+h)^{2}} \\
\frac{V\_{e}\tan\phi}{(N+h)\cos\phi} & 0 & -\frac{V\_{e}}{(N+h)^{2}\cos\phi} \\
0 & 0 & 0
\end{bmatrix} \begin{bmatrix}
\delta\phi \\
\delta\lambda \\
\delta h
\end{bmatrix}
$$

$$
+ \begin{bmatrix}
0 & \frac{1}{M+h} & 0 \\
\frac{1}{(N+h)\cos\phi} & 0 & 0 \\
0 & 0 & 1
\end{bmatrix} \begin{bmatrix}
\delta V\_{e} \\
\delta V\_{n} \\
\delta V\_{u}
\end{bmatrix} \tag{20}
$$

Where *M* and *N* are the meridian and normal radii of curvature of the Earth, respectively.
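The two coefficient matrices of Equation 20 translate directly into code; a sketch with our own naming:

```python
import math

def position_error_dynamics(v_e, v_n, lat, h, m, n):
    """Coefficient matrices of Eq. (20): F_r maps [d_phi, d_lam, d_h] and
    F_v maps [dV_e, dV_n, dV_u] to the position-error rates."""
    F_r = [
        [0.0, 0.0, -v_n / (m + h) ** 2],
        [v_e * math.tan(lat) / ((n + h) * math.cos(lat)), 0.0,
         -v_e / ((n + h) ** 2 * math.cos(lat))],
        [0.0, 0.0, 0.0],
    ]
    F_v = [
        [0.0, 1.0 / (m + h), 0.0],
        [1.0 / ((n + h) * math.cos(lat)), 0.0, 0.0],
        [0.0, 0.0, 1.0],
    ]
    return F_r, F_v
```

Note that at zero velocity the position errors are driven only by the velocity errors, as the first matrix vanishes.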

#### **2.5.3 Velocity errors**

The velocity components from the mechanization equations are linearized to provide the velocity error equations; these equations are presented in (Noureldin et al., 2009). The velocity errors are a function of errors in position, velocity and attitude, as well as accelerometer stochastic drift errors.

$$
\delta\dot{\vec{V}}^{l} = \begin{bmatrix} \delta\dot{V}\_e \\ \delta\dot{V}\_n \\ \delta\dot{V}\_u \end{bmatrix} =
\begin{bmatrix}
2\omega\_e V\_u \sin\phi + 2\omega\_e V\_n \cos\phi + \frac{V\_n V\_e}{(N+h)\cos^2\phi} & 0 & 0 \\
-2\omega\_e V\_e \cos\phi - \frac{V\_e^2}{(N+h)\cos^2\phi} & 0 & 0 \\
-2\omega\_e V\_e \sin\phi & 0 & \frac{2g}{M+h}
\end{bmatrix}
\begin{bmatrix} \delta\phi \\ \delta\lambda \\ \delta h \end{bmatrix}
$$

$$
+ \begin{bmatrix}
-\frac{V\_u}{N+h} + \frac{V\_n \tan\phi}{N+h} & 2\omega\_e \sin\phi + \frac{V\_e \tan\phi}{N+h} & -2\left(\omega\_e \cos\phi + \frac{V\_e}{N+h}\right) \\
-2\left(\omega\_e \sin\phi + \frac{V\_e \tan\phi}{N+h}\right) & -\frac{V\_u}{M+h} & -\frac{V\_n}{M+h} \\
2\left(\omega\_e \cos\phi + \frac{V\_e}{N+h}\right) & \frac{2V\_n}{M+h} & 0
\end{bmatrix}
\begin{bmatrix} \delta V\_e \\ \delta V\_n \\ \delta V\_u \end{bmatrix}
+ \begin{bmatrix}
0 & f\_u & -f\_n \\
-f\_u & 0 & f\_e \\
f\_n & -f\_e & 0
\end{bmatrix}
\begin{bmatrix} \delta\rho \\ \delta\theta \\ \delta\psi \end{bmatrix}
+ \mathbf{R}\_b^l \begin{bmatrix} \delta f\_x \\ \delta f\_y \\ \delta f\_z \end{bmatrix} \tag{21}
$$


In this work, the errors in pitch *δρ* and roll *δθ* are not modelled inside the KF. This is because they do not suffer from error growth, due to the lack of integration operations. Therefore there are no dynamic error states for pitch and roll errors. Instead, expressions for *δρ* and *δθ* in $\delta\dot{\vec{V}}^{l}$, composed of other error states, are derived from the pitch and roll equations of the RISS mechanization. The equation for the velocity error is then re-arranged to accommodate the error terms belonging to *δρ* and *δθ*.

The following is a derivation for the expression for *δρ* in Equation 21 using Equation 6. Take the derivative of the error component of *ρ* to give *δρ*:

$$
\delta\rho = \left[\frac{\mathrm{d}}{\mathrm{d}f\_y}\rho\right]\delta f\_y = \left[\frac{\mathrm{d}}{\mathrm{d}f\_y}\sin^{-1}\left(\frac{f\_y}{g}\right)\right]\delta f\_y = \frac{1}{g\sqrt{1-\left(\frac{f\_y}{g}\right)^2}}\,\delta f\_y \tag{22}
$$

A similar operation is performed for the error in roll, keeping in mind that the aim is to find an alternate expression for *δθ* that contains error terms other than *δθ* and *δρ*. Using Equation 7, the partial derivatives of each component of *θ* in Equation 21 are used to give:

$$
\delta\theta = \left[\frac{\partial}{\partial\theta}\theta\right]\delta\theta = -\left[\frac{\partial}{\partial\theta}\sin^{-1}\left(\frac{f\_x + V\_f\omega\_z}{g\cos\rho}\right)\right]\delta\theta\tag{23}
$$

Where $\delta\theta = \begin{bmatrix} \delta V_f & \delta\rho & \delta\omega_z & \delta f_x \end{bmatrix}^T$. Taking partial derivatives results in:

$$\delta\theta = -\frac{1}{\sqrt{1 - \left(\frac{f\_x + V\_f \omega\_z}{g \cos \rho}\right)^2}} \left(\frac{\omega\_z}{g \cos \rho} \delta V\_f + \frac{f\_x + V\_f \omega\_z}{g} \left(\frac{\sin \rho}{\cos^2 \rho}\right) \delta \rho + \frac{V\_f}{g \cos \rho} \delta \omega\_z + \frac{1}{g \cos \rho} \delta f\_x\right) \tag{24}$$

Express *δρ* and its associated terms in Equation 24 using the components from Equation 22 as follows:

$$\frac{f\_x + V\_f \omega\_z}{g} \left(\frac{\sin \rho}{\cos^2 \rho}\right) \delta \rho = \frac{f\_x + V\_f \omega\_z}{g} \left(\frac{\sin \rho}{\cos^2 \rho}\right) \left(\frac{1}{g\sqrt{1 - \left(\frac{f\_y}{g}\right)^2}} \delta f\_y\right)$$

$$= \left(\frac{\left(f\_x + V\_f \omega\_z\right) \sin \rho}{g^2 \cos^2 \rho \sqrt{1 - \left(\frac{f\_y}{g}\right)^2}}\right) \delta f\_y \tag{25}$$

Express *δVf* contained in *δθ* from Equation 24 in terms of the three velocities along the east, north and up channels:

$$
\begin{split}
\delta V_f &= \left[\frac{\partial V_f}{\partial V_f}\right]\delta V_f = \left[\frac{\partial}{\partial V_f}\sqrt{V_e^2 + V_n^2 + V_u^2}\right]\delta V_f \\
&= \frac{1}{2\sqrt{V_e^2 + V_n^2 + V_u^2}}\begin{bmatrix} 2V_e & 2V_n & 2V_u \end{bmatrix}\begin{bmatrix} \delta V_e \\ \delta V_n \\ \delta V_u \end{bmatrix} = \frac{1}{V_f}\begin{bmatrix} V_e & V_n & V_u \end{bmatrix}\begin{bmatrix} \delta V_e \\ \delta V_n \\ \delta V_u \end{bmatrix}
\end{split}\tag{26}
$$
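As a sanity check on this linearization, the bracketed sensitivities of Equation 24 can be compared against numerical derivatives of $\theta = \sin^{-1}\left((f_x + V_f\omega_z)/(g\cos\rho)\right)$. The sketch below is illustrative only: the value of $g$ and the operating point are assumptions, and the overall sign convention carried over from Equation 23 is omitted.

```python
import math

g = 9.80665  # gravitational acceleration (assumed value)

def theta(Vf, rho, wz, fx):
    # Pitch angle from the argument of Equation 23
    return math.asin((fx + Vf * wz) / (g * math.cos(rho)))

def grad_theta(Vf, rho, wz, fx):
    """Bracketed sensitivities of Equation 24 (without the overall minus sign)."""
    X = (fx + Vf * wz) / (g * math.cos(rho))
    c = 1.0 / math.sqrt(1.0 - X ** 2)
    return (
        c * wz / (g * math.cos(rho)),                                 # w.r.t. V_f
        c * (fx + Vf * wz) / g * math.sin(rho) / math.cos(rho) ** 2,  # w.r.t. rho
        c * Vf / (g * math.cos(rho)),                                 # w.r.t. w_z
        c / (g * math.cos(rho)),                                      # w.r.t. f_x
    )

def numeric_grad(Vf, rho, wz, fx, h=1e-7):
    """Central finite differences of theta with respect to each argument."""
    p = (Vf, rho, wz, fx)
    grads = []
    for i in range(4):
        hi, lo = list(p), list(p)
        hi[i] += h
        lo[i] -= h
        grads.append((theta(*hi) - theta(*lo)) / (2 * h))
    return tuple(grads)

Vf, rho, wz, fx = 2.0, 0.05, 0.1, 1.5   # assumed operating point
ana = grad_theta(Vf, rho, wz, fx)
num = numeric_grad(Vf, rho, wz, fx)
assert all(abs(a - n) < 1e-6 for a, n in zip(ana, num))
```

Each analytic term agrees with its finite-difference counterpart, which confirms the algebra of the chain-rule expansion.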


Using Equations 25 and 26, Equation 24 can be rewritten as:

$$
\begin{split}
\delta\theta = &-\left(\frac{\omega_z}{V_f\,g\cos\rho\sqrt{1-\left(\frac{f_x+V_f\omega_z}{g\cos\rho}\right)^2}}\right)\begin{bmatrix} V_e & V_n & V_u \end{bmatrix}\begin{bmatrix} \delta V_e \\ \delta V_n \\ \delta V_u \end{bmatrix} - \left(\frac{V_f}{g\cos\rho\sqrt{1-\left(\frac{f_x+V_f\omega_z}{g\cos\rho}\right)^2}}\right)\delta\omega_z \\
&- \begin{bmatrix} \dfrac{1}{g\cos\rho\sqrt{1-\left(\frac{f_x+V_f\omega_z}{g\cos\rho}\right)^2}} & \dfrac{\left(f_x+V_f\omega_z\right)\sin\rho}{g^2\cos^2\rho\sqrt{1-\left(\frac{f_y}{g}\right)^2}\sqrt{1-\left(\frac{f_x+V_f\omega_z}{g\cos\rho}\right)^2}} \end{bmatrix}\begin{bmatrix} \delta f_x \\ \delta f_y \end{bmatrix}
\end{split}\tag{27}
$$

Using Equations 22 and 27, the equation for the attitude components of the velocity error can be re-arranged to accommodate the error terms belonging to *ρ* and *θ*. Similar terms will be grouped to produce a set of equations that describe the attitude errors within $\delta\dot{V}^l$. The term $\delta\dot{V}^l_{att}$ will be used to describe the attitude portion of the velocity error states. Once the equation for the components of velocity errors due to attitude errors is described, it can be easily combined with the other terms in the velocity error states.

$$
\delta\dot{V}^l_{att} = \begin{bmatrix} 0 & f_u & -f_n \\ -f_u & 0 & f_e \\ f_n & -f_e & 0 \end{bmatrix}\begin{bmatrix} \delta\rho \\ \delta\theta \\ \delta\psi \end{bmatrix} = \begin{bmatrix} -f_n \\ f_e \\ 0 \end{bmatrix}\delta\psi + \begin{bmatrix} \dfrac{-f_u V_f}{g\cos\rho\sqrt{1-\left(\frac{f_x+V_f\omega_z}{g\cos\rho}\right)^2}} \\ 0 \\ \dfrac{f_e V_f}{g\cos\rho\sqrt{1-\left(\frac{f_x+V_f\omega_z}{g\cos\rho}\right)^2}} \end{bmatrix}\delta\omega_z + \begin{bmatrix} \dfrac{-f_u}{g\cos\rho\sqrt{1-\left(\frac{f_x+V_f\omega_z}{g\cos\rho}\right)^2}} & \dfrac{-f_u\left(f_x+V_f\omega_z\right)\sin\rho}{g^2\cos^2\rho\sqrt{1-\left(\frac{f_y}{g}\right)^2}\sqrt{1-\left(\frac{f_x+V_f\omega_z}{g\cos\rho}\right)^2}} \\ 0 & \dfrac{-f_u}{g\sqrt{1-\left(\frac{f_y}{g}\right)^2}} \\ \dfrac{f_e}{g\cos\rho\sqrt{1-\left(\frac{f_x+V_f\omega_z}{g\cos\rho}\right)^2}} & \dfrac{f_e\left(f_x+V_f\omega_z\right)\sin\rho}{g^2\cos^2\rho\sqrt{1-\left(\frac{f_y}{g}\right)^2}\sqrt{1-\left(\frac{f_x+V_f\omega_z}{g\cos\rho}\right)^2}} + \dfrac{f_n}{g\sqrt{1-\left(\frac{f_y}{g}\right)^2}} \end{bmatrix}\begin{bmatrix} \delta f_x \\ \delta f_y \end{bmatrix} + \begin{bmatrix} \sqrt{\sigma^2_{V_f}} \\ 0 \\ \sqrt{\sigma^2_{V_f}} \end{bmatrix}W(t)\tag{28}
$$

Experimental data shows that the forward speed originating from the encoders does not suffer from stochastic errors such as a stochastic scale factor. This is due to the fact that the robot's wheels are essentially rigid. Therefore the error in forward speed $\delta V_f$ contained in $\delta\boldsymbol{\theta}$ is expressed as a white noise term using the standard deviation of $\delta V_f$. The standard deviation is added to the process noise coupling state vector **G**.
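The skew-symmetric coupling in the first equality of Equation 28 is easy to exercise numerically. A minimal sketch (the specific-force and attitude-error values below are assumptions for illustration, not measured data):

```python
import numpy as np

def attitude_coupling(f_l):
    """Skew-symmetric matrix of the specific-force components f^l = (f_e, f_n, f_u)
    appearing in the first equality of Equation 28."""
    fe, fn, fu = f_l
    return np.array([[0.0,  fu, -fn],
                     [-fu, 0.0,  fe],
                     [fn, -fe, 0.0]])

f_l = (0.2, 0.1, 9.8)                    # assumed specific-force sample (m/s^2)
d_att = np.array([1e-3, -2e-3, 5e-3])    # assumed [d_rho, d_theta, d_psi] (rad)

F = attitude_coupling(f_l)
dV_att = F @ d_att                       # velocity-error rates from attitude errors

# With d_rho = d_theta = 0 only the third column acts, i.e. the
# [-f_n, f_e, 0]^T * d_psi term written out explicitly in Equation 28.
psi_only = F @ np.array([0.0, 0.0, 5e-3])
assert np.allclose(psi_only, np.array([-0.1, 0.2, 0.0]) * 5e-3)
```

The check on `psi_only` confirms that the $\delta\psi$ column of the skew matrix is exactly the $[-f_n,\ f_e,\ 0]^T$ vector isolated in Equation 28.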

#### **2.5.4 Heading error**


As mentioned earlier, pitch and roll do not have an error model in this work. This is because errors in pitch and roll do not accumulate as they do for the other measurements, owing to the lack of integration operations. To obtain an expression for heading error, take the yaw expression from the mechanization equations and linearize it by keeping the first-order terms of the Taylor series expansion. An expression for the error in yaw (and consequently azimuth) is determined:

$$
\delta\dot{\psi} = \begin{bmatrix} -\left(\omega^e\cos\phi + \dfrac{V_e\sec^2\phi}{R+h}\right) & \dfrac{V_e\tan\phi}{\left(R+h\right)^2} & \dfrac{\tan\phi}{R+h} & 1 \end{bmatrix}\begin{bmatrix} \delta\phi \\ \delta h \\ \delta V_e \\ \delta\omega_z \end{bmatrix}\tag{29}
$$
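The contrast drawn above — pitch and roll computed algebraically every epoch versus heading obtained by integrating a rate — can be illustrated with a small Monte Carlo sketch. All rates, durations and noise levels below are assumed values, chosen only to make the qualitative behavior visible:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n, runs = 0.01, 30_000, 200   # 100 Hz for 300 s, 200 Monte Carlo runs
sigma = 1e-3                       # assumed noise level (rad and rad/s)

# Pitch/roll: recomputed algebraically from the accelerometers each epoch,
# so the error at any instant is just that epoch's noise (it stays bounded).
pitch_err_final = rng.normal(0.0, sigma, runs)

# Heading: obtained by integrating a rate, so rate noise random-walks and
# the error standard deviation grows like sqrt(t).
rate_noise = rng.normal(0.0, sigma, (runs, n))
heading_err_half = rate_noise[:, : n // 2].sum(axis=1) * dt
heading_err_final = rate_noise.sum(axis=1) * dt

assert heading_err_final.std() > heading_err_half.std()  # keeps growing
assert heading_err_final.std() > pitch_err_final.std()   # already larger
```

The ensemble spread of the integrated heading error keeps growing with time, while the algebraically computed pitch error stays at the sensor noise floor — which is why only heading needs the error model of Equation 29.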

#### **2.5.5 Measurement model**

Section 2.5.1 described the error-state system model for the KF. The KF also needs a measurement model to be used in the update stage. There are two measurement update models used in this work. The first is when GPS is available and the second is used when there is a GPS outage. During GPS availability both GPS position and velocity are used and the differences between the RISS mechanization position and velocity and those of GPS are used as a measurement. The measurement model is as follows:

$$
\vec{z}_k^{GPS} = \mathbf{H}_k^{GPS}\vec{x}_k + \vec{v}_k^{GPS}\tag{30}
$$

where the measurement vector $\vec{z}_k^{GPS}$ is defined as:

$$
\vec{z}_k^{GPS} = \begin{bmatrix} \phi_k^{INS} - \phi_k^{GPS} \\ \lambda_k^{INS} - \lambda_k^{GPS} \\ h_k^{INS} - h_k^{GPS} \\ V_{e_k}^{INS} - V_{e_k}^{GPS} \\ V_{n_k}^{INS} - V_{n_k}^{GPS} \\ V_{u_k}^{INS} - V_{u_k}^{GPS} \\ \omega_{z_k}^{INS} - \omega_{z_k}^{GPS} \end{bmatrix}\tag{31}
$$

The design matrix **H** is:

$$
\mathbf{H}_k^{GPS} = \begin{bmatrix} 1&0&0&0&0&0&0&0&0&0&0&0&0&0&0 \\ 0&1&0&0&0&0&0&0&0&0&0&0&0&0&0 \\ 0&0&1&0&0&0&0&0&0&0&0&0&0&0&0 \\ 0&0&0&1&0&0&0&0&0&0&0&0&0&0&0 \\ 0&0&0&0&1&0&0&0&0&0&0&0&0&0&0 \\ 0&0&0&0&0&1&0&0&0&0&0&0&0&0&0 \end{bmatrix}\tag{32}
$$

and $\vec{v}_k^{GPS}$ is a white noise process with zero mean and unity variance. When GPS is not available, forward velocity derived from the wheel encoders, together with pitch and azimuth estimates, is used as a measurement update. The measurement model is as follows:

$$
\vec{z}_k^{odo} = \mathbf{H}_k^{odo}\vec{x}_k + \vec{v}_k^{odo}\tag{33}
$$

where the measurement vector $\vec{z}_k^{odo}$ is defined as:

$$
\vec{z}_k^{odo} = \begin{bmatrix} V_{e_k}^{INS} - V_{e_k}^{odo} \\ V_{n_k}^{INS} - V_{n_k}^{odo} \\ V_{u_k}^{INS} - V_{u_k}^{odo} \end{bmatrix}\tag{34}
$$


The design matrix **H** is:

$$
\mathbf{H}_k^{odo} = \begin{bmatrix} 0&0&0&1&0&0&0&0&0&0&0&0&0&0&0 \\ 0&0&0&0&1&0&0&0&0&0&0&0&0&0&0 \\ 0&0&0&0&0&1&0&0&0&0&0&0&0&0&0 \end{bmatrix}\tag{35}
$$

and $\vec{v}_k^{odo}$ is a white noise process having zero mean and unity variance.
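Both design matrices are simple selection matrices, so the two update modes differ only in which error states the Kalman filter observes directly. A minimal sketch of the two updates follows. It assumes a 15-element error state with position errors in components 1–3 and velocity errors in components 4–6, per the selection patterns of Equations 32 and 35; the covariances, noise levels and innovation values are illustrative assumptions, and the $\omega_z$ row of Equation 31 is omitted for brevity:

```python
import numpy as np

N = 15  # error-state dimension used in this work

# Design matrices as selection matrices (assumed state ordering:
# position errors 1-3, velocity errors 4-6), per Equations 32 and 35.
H_gps = np.hstack([np.eye(6), np.zeros((6, N - 6))])                      # pos + vel
H_odo = np.hstack([np.zeros((3, 3)), np.eye(3), np.zeros((3, N - 6))])    # vel only

def kf_update(x, P, z, H, R):
    """Standard Kalman filter measurement update on the error state."""
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

x, P = np.zeros(N), np.eye(N)           # illustrative initial conditions

# GPS available: update with the INS-minus-GPS position/velocity differences.
z_gps = np.array([1e-6, -2e-6, 0.5, 0.1, -0.2, 0.05])
x, P = kf_update(x, P, z_gps, H_gps, np.eye(6))

# GPS outage: only the encoder-derived velocity differences are available.
z_odo = np.array([0.05, -0.1, 0.02])
x, P = kf_update(x, P, z_odo, H_odo, np.eye(3))
```

After each update the covariance of the directly observed states shrinks, while the remaining states are corrected only through whatever cross-covariance the filter has accumulated.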

#### **3. Experimental setup**

#### **3.1 Wheeled mobile robot**

Outdoor trajectory tests are conducted to assess the performance of the developed navigation solution; the tests are conducted using a mobile robot shown in fig 2. The robot was developed by a member of NavINST and members of the Electrical and Computer Engineering Department (ECE) at Royal Military College (RMC) of Canada. The mobile robot is three-wheeled and differentially-driven with a quadrature optical encoder coupled to the drive outputs of each motor. Appropriate scaling in the navigation scheme is used to provide angular velocity estimates of each wheel from these encoders. Power on-board the mobile robot comes from three sources, namely (1) two 12V batteries connected to the drive amplifiers and motors, (2) two 12V batteries connected to the sensors and encoder processor board and (3) a battery internal to the laptop mounted on the top level of the robot.
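The "appropriate scaling" from quadrature counts to wheel angular velocity and forward speed can be sketched as follows. The counts-per-revolution, wheel radius and sampling interval below are illustrative assumptions, not the actual parameters of this robot:

```python
import math

# Illustrative parameters (NOT the actual robot's values):
TICKS_PER_REV = 2048 * 4   # quadrature-decoded counts per wheel revolution
WHEEL_RADIUS = 0.10        # wheel radius in meters
DT = 0.01                  # sampling interval in seconds

def wheel_angular_velocity(delta_ticks, dt=DT):
    """Angular velocity (rad/s) of one wheel from counts accumulated over dt."""
    return (delta_ticks / TICKS_PER_REV) * 2.0 * math.pi / dt

def forward_speed(delta_left, delta_right, dt=DT):
    """Forward speed (m/s) of a differential-drive robot as the mean rim speed."""
    w_avg = 0.5 * (wheel_angular_velocity(delta_left, dt)
                   + wheel_angular_velocity(delta_right, dt))
    return w_avg * WHEEL_RADIUS

v = forward_speed(130, 134)   # roughly 1 m/s for these assumed counts
```

Averaging the two rim speeds gives the forward speed of the robot's center, which is the quantity used for the velocity updates described earlier.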

Fig. 2. The mobile robot used for the experiments in this work, custom-built by the author and members of the Electrical and Computer Engineering Department at Royal Military College of Canada.

#### **3.2 Equipment**


The inertial sensors used in this work include a MEMS-grade IMU made by Crossbow, model IMU300CC-100. Specifications of this IMU are in table 1 and detailed specifications can be found in (*IMU300CC - 6DOF Inertial Measurement Unit*, 2009). Velocity updates are provided by the forward speed of the robot, derived from encoders coupled to the drive output of each motor. The results of the presented navigation solution are evaluated with respect to a reference solution made by NovAtel, where a Honeywell HG1700 high-end tactical grade IMU is integrated with a NovAtel OEM4 GPS receiver. The IMU and GPS receiver are integrated with a G2 Pro-Pack SPAN unit, which is an off-the-shelf system developed by NovAtel. The details of this system are described in (*SPAN Technology System User Manual OM-20000062*, 2005). Biases and scale factors for the HG1700 IMU are in table 2 and detailed specifications can be found in (*HG1700 Inertial Measurement Unit*, 2009). The high-cost NovAtel SPAN system provides a reference solution to validate the proposed method, which uses the low-cost MEMS-based sensors. The SPAN system is also used to examine the overall performance during some GPS outages intentionally introduced in post-processing. A basic block diagram of the sensor electronics on-board the mobile robot appears in fig 3.

Fig. 3. Block diagram for the electronics on-board the mobile robot used in the experiments.

| Crossbow IMU (IMU300CC) | Gyroscopes | Accelerometers |
|---|---|---|
| Bias | < ±2.000 °/sec | < ±30.000 mg |
| Scale Factor | < 1.000 % | 1.000 % |
| Random Walk | 2.250 °/√hr | 0.150 m/(s√hr) |

Table 1. Bias, scale factor error and random walk for the Crossbow IMU300CC IMU. Adapted from (*IMU300CC - 6DOF Inertial Measurement Unit*, 2009).

| Honeywell IMU (HG1700) | Gyroscopes | Accelerometers |
|---|---|---|
| Bias | 1.000 °/hr | 1.000 mg |
| Scale Factor | 150.000 ppm | 300.000 ppm |
| Random Walk | 0.125 °/√hr | |

Table 2. Bias, scale factor error and random walk for the Honeywell HG1700 IMU found in Novatel GPS/INS. Adapted from *SPAN Technology System User Manual OM-20000062* (2005) and *HG1700 Inertial Measurement Unit* (2009).

### **4. Results and discussion**

Trajectories are carried out using the mobile robot described in Section 3 and sensor data is collected to test the developed solution in post-processing. Four navigation solutions are compared in order to show the benefit of using RISS instead of a full IMU and the benefit of using velocity updates from wheel encoders during GPS outages. Each of the four navigation solutions is described as follows:

- KF using RISS and velocity updates during GPS outages;
- KF using RISS without updates during outages;
- KF using full IMU with velocity updates during outages; and
- KF using full IMU without updates during outages.

The errors in all the estimated solutions are calculated with respect to the NovAtel reference solution. Results for two trajectories are shown in this work. The first trajectory is shown in fig 4 and is located on-campus at RMC. It forms a loop with start and end at the same position and contains two different sections with hills both at an incline and decline to the robot's trajectory.

The second trajectory for this experiment is shown in fig 5 and is also located on-campus at RMC. This trajectory forms a loop with start and end at the same position and is much longer than the trajectory in fig 4. It contains several different sections which include hills both at an incline and decline to the robot's trajectory.

#### **4.1 Trajectory 1**

The ultimate check for the proposed system's accuracy is during GPS signal blockage, which can be intentionally introduced in post-processing. Since the presented solution is loosely coupled, the outages represent complete blockages of GPS updates. Seven GPS outages are simulated with durations of 60 seconds each. The simulated outages are chosen such that they encompass straight portions, turns, and slopes.

Table 7 shows the root mean square (RMS) error in both the estimated 2-D horizontal position and the estimated altitude during the seven GPS outages for the four compared solutions. The errors are calculated with respect to the NovAtel reference solution. Table 8 shows the maximum errors in the estimated 2-D horizontal position and the estimated altitude during these outages. Fig 6 shows a 2-D plot of four tracks, namely: (1) the reference solution, (2) KF with full IMU and velocity updates during GPS outages, (3) KF with RISS and without updates during outages and (4) KF with RISS and velocity updates during outages. The KF with full IMU and without updates during GPS outages is not shown because the position errors would dramatically change the scale of the plot and make comparison of the other solutions very difficult.

Fig. 4. The first trajectory for assessing each navigation solution.

Fig. 5. The second trajectory for assessing each navigation solution.

The results in table 8 and fig 6 clearly show the advantage of RISS over a full IMU. There is a big difference in 2-D positional errors when one compares the results of KF with full IMU without updates with the results of KF with RISS without updates during GPS outages. While the former has an average of the maximum positional error for the seven GPS outages equal to 139.3 meters, the latter shows an error of 18.67 meters. The reason for this difference is the use of accelerometers, instead of two additional gyroscopes, to obtain pitch and roll in the RISS.

Fig. 6. Three solutions and reference for first trajectory: Red for reference, Yellow for KF using full IMU with velocity updates, Green for KF using RISS without updates, Blue for KF using RISS with velocity updates.

Fig. 7. RMS errors for altitude and 2D position for the seven outages in first trajectory.

The benefit of using wheel encoders to provide velocity updates during GPS outages can be seen by two comparisons. The first comparison uses KF with full IMU and considers two sets of results, one with and the second without velocity updates. With velocity updates the solution has an average of the maximum positional error for the seven GPS outages equal to 8.77 meters, while the case without updates has 139.3 meters of error. The second comparison uses KF with RISS and considers two sets of results, one with and the second without velocity updates. With velocity updates the solution has an average of the maximum positional error for the seven GPS outages equal to 3.35 meters, while the case without updates has 18.67 meters of error.
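Bookkeeping of this kind — masking GPS updates over chosen windows in post-processing and tabulating the RMS and maximum 2-D horizontal error against the reference — can be sketched as follows. The drifting estimate below is synthetic data for illustration, not results from the robot:

```python
import numpy as np

def outage_error_stats(est_en, ref_en, t, outages):
    """RMS and maximum 2-D horizontal error during simulated GPS outage windows.

    est_en, ref_en : (N, 2) arrays of east/north positions in meters
    t              : (N,) array of timestamps in seconds
    outages        : list of (start, end) outage windows in seconds
    """
    err = np.linalg.norm(est_en - ref_en, axis=1)
    stats = []
    for t0, t1 in outages:
        m = (t >= t0) & (t < t1)
        stats.append((float(np.sqrt(np.mean(err[m] ** 2))), float(err[m].max())))
    return stats

# Synthetic demo: the estimate drifts east at 0.1 m/s relative to the reference.
t = np.arange(0.0, 300.0, 1.0)
ref = np.zeros((t.size, 2))
est = np.column_stack((0.1 * t, np.zeros(t.size)))
(rms, peak), = outage_error_stats(est, ref, t, [(60.0, 120.0)])
```

Running the same bookkeeping over each of the seven outage windows, for each of the four solutions, produces tables with the structure of Tables 7, 8, 10 and 11.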

Fig. 8. Maximum errors for altitude and 2D position for the seven outages in first trajectory.

When comparing KF with RISS and velocity updates to KF with full IMU and velocity updates, the advantage of RISS can be clearly noticed. The former has an average of the maximum positional error for the seven GPS outages equal to 3.35 meters, while the latter shows an error of 8.77 meters.

These results, together with the trajectory plots in fig 6, demonstrate that the proposed 3-D localization solution using KF for RISS/GPS integration and employing velocity updates from wheel encoders outperforms all the other compared solutions. Furthermore, this solution provides very good results when compared to the MEMS-based INS/GPS integration results in the literature.

#### **4.2 Trajectory 2**

The second trajectory is much longer than the first one and enables the examination of long GPS outages. Seven outages are simulated with the duration of each outage equal to 150 seconds. This duration is chosen to test the performance of the proposed navigation solution in long GPS outages. The simulated outages are also chosen such that they encompass straight portions, turns and slopes.

Table 10 shows the root mean square (RMS) error in both the estimated 2-D horizontal position and the estimated altitude during the seven GPS outages for the four compared solutions. Table 11 shows the maximum errors in the estimated 2-D horizontal position and the estimated altitude during each outage. Fig 9 shows a 2-D plot of four tracks, namely: (1) the reference solution, (2) KF with full IMU and velocity updates during GPS outages, (3) KF with RISS and without updates during outages and (4) KF with RISS and velocity updates during outages. As mentioned earlier, the KF with full IMU and without updates during GPS outages is not shown because the position errors would dramatically change the scale of the plot and make comparison of the other solutions very difficult.

The results in table 10 and table 11 confirm the results of the first trajectory and further demonstrate the advantage of RISS over a full IMU. One can see a great difference in 2-D positional errors when comparing the results of KF with full IMU without updates with the results of KF with RISS without updates during GPS outages. While the former technique has an average of the maximum positional error for the seven GPS outages equal to 789.2 meters, the latter shows an error of 68.29 meters. The reason for this huge enhancement of performance is the use of accelerometers, instead of two additional gyroscopes, to obtain pitch and roll in the RISS.

Fig. 9. Three solutions and reference for second trajectory: Red for reference, Yellow for KF using full IMU with velocity updates, Green for KF using RISS without updates, Blue for KF using RISS with velocity updates.

Fig. 10. RMS errors for altitude and 2D position for the seven outages in second trajectory.

As seen in the first trajectory, the benefit of using velocity updates derived from the wheel encoders during GPS outages is seen by two comparisons. The first comparison uses KF with full IMU and considers two sets of results, one with and the second without velocity updates. With velocity updates the solution has an average of the maximum positional error for the seven GPS outages equal to 11.44 meters, while the case without updates has 789.2 meters of error.

The second comparison uses KF with RISS and considers two sets of results, one with and the second without velocity updates. With velocity updates the solution has an average of

Fig. 9. Three solutions and reference for second trajectory: Red for reference, Yellow for KF using full IMU with velocity updates, Green for KF using RISS without updates, Blue for KF using RISS with velocity updates.


Fig. 10. RMS errors for altitude and 2D position for the seven outages in second trajectory.

The second comparison uses KF with RISS and considers two sets of results, one with and the second without velocity updates. With velocity updates the solution has an average of

with velocity updates during outages, and of approximately 82.0% over KF with RISS without any updates during outages. Considering the maximum error in horizontal positioning in the first trajectory, the KF with RISS and velocity updates during GPS outages achieved an average improvement of approximately 99.0% over KF with full IMU without any updates during outages, of approximately 33.2% over KF with full IMU with velocity updates during outages, and of approximately 88.8% over KF with RISS without any updates during outages.

<sup>277</sup> Improved Inertial/Odometry/GPS Positioning

One problem unique to small wheeled robots with strap-down navigation systems is that there is a great deal of chassis rigidity that passes along any disturbances felt at the wheels of the robot. Small, low-cost robots do not have suspension systems found on full-size vehicles which prevent many disturbances from being measured by the accelerometers of a strap-down IMU. A future investigation is required regarding low-cost measures for dampening some of the vibrations caused by small obstacles and imperfections on the road surface. Prospective researchers should make a careful selection of tires for their small mobile robot that allow moderate deformation to small obstacles while preserving sufficient shape to

Kalman filtering is a good technique for reducing the stochastic error of a system since it requires little processing time compared to other algorithms. It is a suitable choice for deployment in low-cost, low-power, low-form-factor systems such as those found on small mobile robots. Further study is required to determine the performance of the techniques

To our family and friends for their love, support and commitment. This chapter wouldn't

Borenstein, J., Everett, H., Feng, L. & Wehe, D. (1997). Mobile robot positioning: Sensors and

Chong, K. S. & Kleeman, L. (1997). Accurate odometry and error modelling for a mobile robot,

Cox, I. J. (1991). Blanche - an experiment in guidance and navigation of an autonomous robot

Farrell, J. A. & Barth, M. (1998). *The Global Positioning System & Inertial Navigation*,

Grewal, M. S., Weill, L. R. & Andrews, A. P. (2007). *Global Positioning Systems, Inertial*

vehicle, *IEEE Transactions on Robotics and Automation* 7(2): 193–204.

*Proceedings of the 1997 IEEE International Conference on Robotics and Automation*, Vol. 4,

URL: *http://www51.honeywell.com/aero/common/documents/myaerospacecatalog-documents*

URL: *www.xbow.com/Products/Product\_pdf\_files/Inertial\_pdf/IMU300CC\_Datasheet.pdf*

These results show the superiority of the proposed localization solution.

of Wheeled Robots Even in GPS-Denied Environments

maintain reliable estimates for velocities measured by the wheel encoders.

techniques, *Journal of Robotic Systems* 14(4): 231–249.

*Navigation, and Integration*, John Wiley and Sons.

*/Missiles-Munitions/HG1700\_Inertial\_Measurement\_Unit.pdf*

Albuquerque, NM, pp. 2783–2788.

*IMU300CC - 6DOF Inertial Measurement Unit* (2009).

**6. Acknowledgements**

**7. References**

have been possible without them.

McGraw-Hill.

*HG1700 Inertial Measurement Unit* (2009).

outlined in this work in the context of an embedded system operating in real-time.


Fig. 11. Maximum errors for altitude and 2D position for the seven outages in second trajectory.

the maximum positional error for the seven GPS outages equal to 7.64 meters while the case without updates has 68.29 meters of error.

When comparing KF with RISS and velocity updates to KF with full IMU and velocity updates, the advantage of RISS can be seen especially in the altitude component. The former has an average maximum positional error over the seven GPS outages of 7.64 meters, while the latter shows an error of 11.44 meters; the former has an average maximum altitude error over the seven GPS outages of 3.68 meters, while the latter shows an error of 19.53 meters.

These results together demonstrate that the 3-D localization solution using KF for RISS/GPS integration and employing velocity updates from wheel encoders outperforms all the other compared solutions. Furthermore, this solution provides very good results when compared to the MEMS-based INS/GPS integration results in the literature.

## **5. Conclusion and future work**

This chapter presented an outdoor 3-D localization solution for mobile robots using low-cost MEMS-based sensors, wheel encoders and GPS. A reduced inertial sensor system was used for both decreasing the cost and improving the performance. The integration was achieved using a loosely-coupled KF. In this work, a predictive error model for KF was developed for estimating the errors in positions, velocities and attitude provided by RISS mechanization. Using this error model inside the KF gives good results during GPS outages that outperformed the full IMU results. Furthermore, when this KF is used with measurement updates using forward velocity from encoders together with pitch and azimuth estimates (during GPS outages) it provides better results and outperforms all the compared solutions.
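The RISS mechanization referred to above can be sketched in a few lines. The snippet below is our own highly simplified illustration, not the authors' implementation: it assumes a non-accelerating vehicle, neglects Earth rotation and transport-rate terms, and uses a single spherical Earth radius; all function and variable names are hypothetical.

```python
import math

G = 9.80665  # gravity magnitude (m/s^2)

def riss_step(lat, lon, h, azimuth, fx, fy, wz, v_od, dt):
    """One simplified RISS mechanization step (illustrative only).

    fx, fy : transversal/forward accelerometer specific forces (m/s^2)
    wz     : vertical gyro rate (rad/s)
    v_od   : forward speed from the wheel encoders (m/s)
    """
    # Pitch and roll from the two accelerometers (vehicle assumed unaccelerated).
    pitch = math.asin(max(-1.0, min(1.0, fy / G)))
    roll = -math.asin(max(-1.0, min(1.0, fx / (G * math.cos(pitch)))))
    # Azimuth by integrating the single vertical gyro (Earth rate neglected).
    azimuth = azimuth + wz * dt
    # Velocity resolved from the encoder-derived forward speed.
    ve = v_od * math.sin(azimuth) * math.cos(pitch)
    vn = v_od * math.cos(azimuth) * math.cos(pitch)
    vu = v_od * math.sin(pitch)
    # Position update on a spherical Earth (radii merged into one constant).
    R = 6378137.0
    lat += vn * dt / (R + h)
    lon += ve * dt / ((R + h) * math.cos(lat))
    h += vu * dt
    return lat, lon, h, azimuth, pitch, roll
```

The appeal of the approach is visible even in this sketch: only one gyroscope is integrated over time, so only one attitude channel accumulates drift, while pitch and roll are recomputed afresh from the accelerometers at every step.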

The positioning solutions in this work were tested with two real trajectories, each with seven simulated GPS outages whose duration was 60 seconds in the first trajectory and 150 seconds in the second trajectory. The proposed solutions were discussed and compared with one another, and each solution was also compared against a reference solution. Considering the maximum error in horizontal positioning in the first trajectory, the KF with RISS and velocity updates during GPS outages achieved an average improvement of approximately 97.6% over KF with full IMU without any updates during outages, of approximately 61.8% over KF with full IMU with velocity updates during outages, and of approximately 82.0% over KF with RISS without any updates during outages. Considering the maximum error in horizontal positioning in the second trajectory, the KF with RISS and velocity updates during GPS outages achieved an average improvement of approximately 99.0% over KF with full IMU without any updates during outages, of approximately 33.2% over KF with full IMU with velocity updates during outages, and of approximately 88.8% over KF with RISS without any updates during outages. These results show the superiority of the proposed localization solution.
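The improvement percentages quoted above can be checked directly from the reported average maximum positional errors; for example, for the second trajectory:

```python
def improvement(baseline_err, new_err):
    """Relative error reduction of new_err w.r.t. baseline_err, in percent."""
    return 100.0 * (baseline_err - new_err) / baseline_err

# Average maximum 2-D positional errors (meters) for the second trajectory,
# as reported in the text above.
riss_vel = 7.64          # KF with RISS and velocity updates
full_imu_no_upd = 789.2  # KF with full IMU, no updates during outages
full_imu_vel = 11.44     # KF with full IMU and velocity updates
riss_no_upd = 68.29      # KF with RISS, no updates during outages

print(round(improvement(full_imu_no_upd, riss_vel), 1))  # 99.0
print(round(improvement(full_imu_vel, riss_vel), 1))     # 33.2
print(round(improvement(riss_no_upd, riss_vel), 1))      # 88.8
```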

One problem unique to small wheeled robots with strap-down navigation systems is that there is a great deal of chassis rigidity that passes along any disturbances felt at the wheels of the robot. Small, low-cost robots do not have suspension systems found on full-size vehicles which prevent many disturbances from being measured by the accelerometers of a strap-down IMU. A future investigation is required regarding low-cost measures for dampening some of the vibrations caused by small obstacles and imperfections on the road surface. Prospective researchers should make a careful selection of tires for their small mobile robot that allow moderate deformation to small obstacles while preserving sufficient shape to maintain reliable estimates for velocities measured by the wheel encoders.

Kalman filtering is a good technique for reducing the stochastic error of a system since it requires little processing time compared to other algorithms. It is a suitable choice for deployment in low-cost, low-power, low-form-factor systems such as those found on small mobile robots. Further study is required to determine the performance of the techniques outlined in this work in the context of an embedded system operating in real-time.
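As a minimal illustration of this low computational cost, a scalar KF needs only a handful of arithmetic operations per measurement. The sketch below (synthetic data and tuning values of our own choosing) filters noisy observations of a constant:

```python
def scalar_kf(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Minimal 1-D Kalman filter for a (nearly) constant state.

    q: process noise variance, r: measurement noise variance.
    Each step costs a few multiplications and additions, which is what
    makes the KF attractive for low-power embedded targets.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                 # predict: state modeled as constant
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with measurement z
        p = (1.0 - k) * p         # posterior variance
        estimates.append(x)
    return estimates

# Noisy observations of a true value of 5.0 (synthetic example).
zs = [5.3, 4.8, 5.1, 4.9, 5.2, 5.0, 4.7, 5.4, 5.05, 4.95]
est = scalar_kf(zs)
```

After only ten measurements the estimate settles near the true value, with no matrix algebra required in this scalar case.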

## **6. Acknowledgements**

To our family and friends for their love, support and commitment. This chapter wouldn't have been possible without them.







## **Emerging New Trends in Hybrid Vehicle Localization Systems**

Nabil Drawil and Otman Basir
*Department of Electrical and Computer Engineering, University of Waterloo, Canada*

## **1. Introduction**



Over the last decade, vehicle localization has been attracting attention in a wide range of applications. A number of localization techniques have been developed to serve a variety of applications Al-Bayari & Sadoun (2005); Aono et al. (1998); Bouju et al. (2002); Cramer (1997); Dao et al. (2002); Drawil & Basir (2010); Jabbour, Cherfaoui & Bonnifait (2006); Lai & Tsai (2003); Nishimura et al. (1996); Sliety (2007); Stockus et al. (2000). In recent years, the focus has been on improving localization accuracy – an issue considered crucial, especially in mission-critical applications. For instance, for emergency response systems, such as the eCall system, to deliver on their task they need reliable and accurate localization capabilities. These capabilities are becoming just as important in other applications, including accident avoidance and management, navigation systems, location-sensitive billing, and location-based services.

The focus of much recent research in localization has been on improving accuracy through the use of multiple localization modalities. This chapter provides a review of multi-modality-based localization techniques and establishes a categorization of such techniques based on the type of measurement and the strategy employed to fuse measurements from multiple localization sources.

Although these techniques have demonstrated significant performance improvement, there remain situations that give rise to degraded localization accuracy. Moreover, current localization systems lack the ability to reliably quantify the accuracy of their location estimates, as well as the means by which sources of localization information can be properly discounted based on their reliability/accuracy merits.

In this chapter, a novel framework is proposed to tackle the aforementioned issues. The proposed framework fuses different localization techniques in order to improve their location estimates, and provides a location reliability assessment that captures the integrity of the estimates. Knowledge about estimate integrity allows the system to plan the use of its localization resources so as to match the target accuracy of the application. The proposed framework provides the tools that would allow for modeling the impact of the operation conditions on estimate integrity, as such it enables more robust system performance.

## **2. Motion and GPS measurement data fusion**

Differential GPS (DGPS) and Assisted GPS (A-GPS) are two advanced types of GPS technologies that provide a high level of accuracy and fast retrieving rate. Nevertheless, using

encoders are used as motion sensors. The positioning accuracy is improved by compensating for the error for each sensor. The error is determined by means of a KF, which is also utilized

Emerging New Trends in Hybrid Vehicle Localization Systems 281

In Sharaf et al. (2005) an Artificial Neural Network (ANN) is chosen as a tool for detecting errors and noises in INS measurements using a DGPS as a guide to the true location of the vehicle during a training phase. The work reported in Sharaf et al. (2005) is similar to that reported in Bouvet & Garcia (2000) in that preprocessing operations are performed on the measurements before they are fused. An assumption that is made in this method is that the DGPS data is always either available or unavailable due to an outage in satellite signal. However, in urban areas, satellite signals are often available but quite often are contaminated

Detecting and recognizing landmarks provide spatial information related to the local environment. It is therefore possible to integrate spatial information with localization measurements from DR and GPS in order to improve localization accuracy Fuerstenberg & Weiss (2005); Jabbour, Bonnifait & Cherfaoui (2006); Jabbour, Cherfaoui & Bonnifait (2006); Rae & Basir (2007); Weiss et al. (2005). Two approaches for detecting and augmenting landmarks to vehicle localization systems are presented next along with another localization

Due to the accumulated error caused by the long satellite outages in GPS/DR localization systems, digital maps are utilized to perform localization during such outages Weiss et al. (2005). A laser scanner mounted on a vehicle scans major objects in the vehicle environment. The system matches these landmarks with other landmarks in the digital map that represent the region of interest. If there is a match, the vehicle location is estimated by correlating the

However, segmentation is not a trivial job specially in situations where landmarks are merged with background objects. Moreover, the system must be trained by having it traverse the regions of interest Fuerstenberg & Weiss (2005) to extract landmarks (features, such as traffic

In Jabbour, Bonnifait & Cherfaoui (2006), a vehicle equipped with an autonomous navigation system and a laser scanner is reported. The laser scanner is used to detect the edges of sidewalks and estimate the distance between the edge of the sidewalk and the vehicle. Distance measurements are utilized to improve the accuracy of a localization system that comprises GPS, DR, and Geographic Information System (GIS). The GIS data contains digitized information such as abstract road maps, road edges, and other landmarks. Landmark information is created through a learning stage. During the testing stage, the EKF fusion technique produces an innovation value from which the system determines whether to accept the fusion location estimate. If the GPS data is corrupted by multipath signals or is unavailable, only the DR location estimate utilized. The vehicle location estimate is used to select the region of interest from the GIS database that contains the landmark information. To improve the vehicle location estimate, a matching scheme is performed to compare the GIS-extracted landmarks (i.e., sidewalk edges) with those extracted by the laser scanner, and

signs and the posts of traffic lights) that can later be used as a reference points.

technique that attempts to detect visible satellites for use in the positioning process.

by multipath noises, which effects the quality of the ANN learning.

**3. Fusion of landmark, INS, and GPS measurements**

**3.1 Laser scanners, digital maps, and GPS/DR**

identified landmarks.

as a fusion unit.

a GPS receiver as the sole vehicle localization measurement source may turn to be unreliable, especially in urban canyons and other areas where the satellite signal can be distorted or lost. A number of solutions have been reported in the literature that proposed augmenting GPS measurements with information about the vehicle's motion in order to improve localization accuracy. In what follows we provide a summary of a number of such solutions.

## **2.1 Dead Reckoning (DR) and GPS integration**

A DR is a localization method that estimates the next location of a mobile object over a series of short time intervals, given the object's direction, speed, and previous location. DR is simple and known for producing incremental error and hence needs to be reset periodically. It is therefore suitable for use over short periods of time.

One approach to resetting the accumulative localization error is to combine DR with GPS whereby GPS measurements are used to reduce the DR accumulative error; when the GPS measurement is unavailable, the DR estimates the location using sensors such as wheel odometers, a flux-gate compass, a gyroscope, and an accelerometer Kao (1991).

## **2.2 Inertial Navigation System (INS) and GPS fusion**

Basically, INS operates as a DR system. INS employs a computing unit and motion sensors to estimate its location without relying on any external reference once it is initialized using for example a GPS measurement. To avoid the accumulated error caused by the measurements of internal sensors in INS, the INS location estimate is fused with measurement data from other sources. As discussed in Skog & Handel (2009) fusing INS and GPS can take the form of a loosely or tightly coupled system architecture.

An example of a system that fuses INS and GPS is the real-time kinematic global positioning system (RTK GPS) Bouvet & Garcia (2000) which uses an Extended Kalman Filter (EKF) to fuse data. In this system, GPS latency is defined as the time required for the satellite signals to travel to Earth and the time required for the computation of the location; GPS latency varies with the number of observed satellites. Therefore, the GPS latency is encapsulated in the EKF state so that the fusion of the INS and GPS data is synchronized with the readings of the sensors.

It is possible to fuse standard GPS and INS by means of a KF as well Honghui & Moore (2002). In this case the computational complexity of the EKF can be reduced by preprocessing the INS measurements and inputting them into the KF as a linear component. However, preprocessing the INS measurement adds to the computational cost of the solution.

#### **2.3 Other motion sensors and DGPS fusion**

Integrating the INS of a dynamic model with a DGPS is also investigated inRezaei & Sengupta (2005). To deal with the nonlinearity of the dynamic model, an EKF is used. Due to the accelerometer noise other motion sensors, such as six wheel-speed encoders, a steering angle encoder, and an optical yaw rate gyro, are used instead. Localization accuracy of 0.9 m on 100 m driving track was reported for situations where the system relies on the dynamic model more than it does on the GPS measurements. The multipath effect is not addressed as the experiment was conducted in an open space environment.

In Aono et al. (1998) a method of positioning a vehicle on undulating ground by fusing DGPS data and motion sensor data is proposed. A fibre optic gyro, a roll pitch sensor, and wheel 2 Will-be-set-by-IN-TECH

a GPS receiver as the sole vehicle localization measurement source may turn to be unreliable, especially in urban canyons and other areas where the satellite signal can be distorted or lost. A number of solutions have been reported in the literature that proposed augmenting GPS measurements with information about the vehicle's motion in order to improve localization

A DR is a localization method that estimates the next location of a mobile object over a series of short time intervals, given the object's direction, speed, and previous location. DR is simple and known for producing incremental error and hence needs to be reset periodically. It is

One approach to resetting the accumulative localization error is to combine DR with GPS whereby GPS measurements are used to reduce the DR accumulative error; when the GPS measurement is unavailable, the DR estimates the location using sensors such as wheel

Basically, INS operates as a DR system. INS employs a computing unit and motion sensors to estimate its location without relying on any external reference once it is initialized using for example a GPS measurement. To avoid the accumulated error caused by the measurements of internal sensors in INS, the INS location estimate is fused with measurement data from other sources. As discussed in Skog & Handel (2009) fusing INS and GPS can take the form of a

An example of a system that fuses INS and GPS is the real-time kinematic global positioning system (RTK GPS) Bouvet & Garcia (2000) which uses an Extended Kalman Filter (EKF) to fuse data. In this system, GPS latency is defined as the time required for the satellite signals to travel to Earth and the time required for the computation of the location; GPS latency varies with the number of observed satellites. Therefore, the GPS latency is encapsulated in the EKF state so that the fusion of the INS and GPS data is synchronized with the readings of the

It is possible to fuse standard GPS and INS by means of a KF as well Honghui & Moore (2002). In this case the computational complexity of the EKF can be reduced by preprocessing the INS measurements and inputting them into the KF as a linear component. However, preprocessing

Integrating the INS of a dynamic model with a DGPS is also investigated inRezaei & Sengupta (2005). To deal with the nonlinearity of the dynamic model, an EKF is used. Due to the accelerometer noise other motion sensors, such as six wheel-speed encoders, a steering angle encoder, and an optical yaw rate gyro, are used instead. Localization accuracy of 0.9 m on 100 m driving track was reported for situations where the system relies on the dynamic model more than it does on the GPS measurements. The multipath effect is not addressed as the

In Aono et al. (1998) a method of positioning a vehicle on undulating ground by fusing DGPS data and motion sensor data is proposed. A fibre optic gyro, a roll pitch sensor, and wheel

the INS measurement adds to the computational cost of the solution.

experiment was conducted in an open space environment.

accuracy. In what follows we provide a summary of a number of such solutions.

odometers, a flux-gate compass, a gyroscope, and an accelerometer Kao (1991).

**2.1 Dead Reckoning (DR) and GPS integration**

therefore suitable for use over short periods of time.

**2.2 Inertial Navigation System (INS) and GPS fusion**

loosely or tightly coupled system architecture.

**2.3 Other motion sensors and DGPS fusion**

sensors.

encoders are used as motion sensors. The positioning accuracy is improved by compensating for the error for each sensor. The error is determined by means of a KF, which is also utilized as a fusion unit.

In Sharaf et al. (2005) an Artificial Neural Network (ANN) is chosen as a tool for detecting errors and noise in INS measurements, using a DGPS as a guide to the true location of the vehicle during a training phase. The work reported in Sharaf et al. (2005) is similar to that reported in Bouvet & Garcia (2000) in that preprocessing operations are performed on the measurements before they are fused. An assumption made in this method is that the DGPS data is either fully available or fully unavailable due to an outage in the satellite signal. However, in urban areas, satellite signals are often available but quite often contaminated by multipath noise, which affects the quality of the ANN learning.
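The KF-based fusion schemes summarized above share a common core: a prediction from the motion model (DR or INS) is corrected by a GPS fix, with each source weighted by its uncertainty. As an illustration only, and not the formulation of any of the cited works, a one-dimensional Kalman measurement update can be sketched as:

```python
def kf_fuse(x_pred, p_pred, z_gps, r_gps):
    """One scalar Kalman update: fuse a motion-model prediction
    (x_pred, with variance p_pred) with a GPS measurement
    (z_gps, with variance r_gps)."""
    k = p_pred / (p_pred + r_gps)      # Kalman gain: relative trust in GPS
    x = x_pred + k * (z_gps - x_pred)  # fused estimate
    p = (1.0 - k) * p_pred             # fused variance, always <= p_pred
    return x, p

# With equal variances the fused estimate is the midpoint and the
# uncertainty is halved:
x, p = kf_fuse(10.0, 4.0, 12.0, 4.0)   # x = 11.0, p = 2.0
```

The same weighting generalizes to the vector EKF case, where the error covariance matrix plays the role of `p_pred`.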

## **3. Fusion of landmark, INS, and GPS measurements**

Detecting and recognizing landmarks provide spatial information related to the local environment. It is therefore possible to integrate spatial information with localization measurements from DR and GPS in order to improve localization accuracy Fuerstenberg & Weiss (2005); Jabbour, Bonnifait & Cherfaoui (2006); Jabbour, Cherfaoui & Bonnifait (2006); Rae & Basir (2007); Weiss et al. (2005). Two approaches for detecting and augmenting landmarks to vehicle localization systems are presented next along with another localization technique that attempts to detect visible satellites for use in the positioning process.

## **3.1 Laser scanners, digital maps, and GPS/DR**

Due to the accumulated error caused by the long satellite outages in GPS/DR localization systems, digital maps are utilized to perform localization during such outages Weiss et al. (2005). A laser scanner mounted on a vehicle scans major objects in the vehicle environment. The system matches these landmarks with other landmarks in the digital map that represent the region of interest. If there is a match, the vehicle location is estimated by correlating the identified landmarks.

However, segmentation is not a trivial job, especially in situations where landmarks are merged with background objects. Moreover, the system must be trained by having it traverse the regions of interest Fuerstenberg & Weiss (2005) to extract landmarks (features such as traffic signs and the posts of traffic lights) that can later be used as reference points.

Emerging New Trends in Hybrid Vehicle Localization Systems 283

In Jabbour, Bonnifait & Cherfaoui (2006), a vehicle equipped with an autonomous navigation system and a laser scanner is reported. The laser scanner is used to detect the edges of sidewalks and estimate the distance between the edge of the sidewalk and the vehicle. Distance measurements are utilized to improve the accuracy of a localization system that comprises GPS, DR, and a Geographic Information System (GIS). The GIS data contains digitized information such as abstract road maps, road edges, and other landmarks. Landmark information is created through a learning stage. During the testing stage, the EKF fusion technique produces an innovation value from which the system determines whether to accept the fused location estimate. If the GPS data is corrupted by multipath signals or is unavailable, only the DR location estimate is utilized. The vehicle location estimate is used to select the region of interest from the GIS database that contains the landmark information. To improve the vehicle location estimate, a matching scheme is performed to compare the GIS-extracted landmarks (i.e., sidewalk edges) with those extracted by the laser scanner, and the estimated distances between the sidewalk edge and the vehicle are then used in fixing the vehicle location. Although the memory constraints are overcome by using the GIS, the accuracy of the estimate of the distances is not consistent due to occluding objects between the laser scanner and the edge of the sidewalk. The training phase required for any traversed region is also not insignificant.

## **3.2 Vision, digital maps, and GPS/DR**

Visual data is also utilized in localization techniques since digital images can provide a wide range of information about the surrounding environment. Due to the time required for image processing Jabbour, Cherfaoui & Bonnifait (2006), only key images are maintained and linked to the GIS database Jabbour, Bonnifait & Cherfaoui (2006). Again, both GPS/DR are used and the proximity of the vehicle location estimate to the roads in the GIS database is examined. The road segment closest to the location estimate is then selected, and key images of that road are extracted in order to compare their features with the features of the images taken during the navigation stage. The weakness of this strategy appears when the curvature of the vehicle's path is significant, especially when the vehicle turns in orthogonal intersections.

Visual features can, however, be blended with other location measurements, such as GPS and DR data, in the EKF formulation Rae & Basir (2007). The main advantage of this strategy is that the uncertainty of all the information sources is kept local to the EKF, namely, in the error covariance matrix, which guarantees a minimum mean-square-error estimate. In Rae & Basir (2007), the EKF structure is derived and validated with the curvature of the roads employed as a visual feature. It is shown that when the roads are curvy, the vehicle location estimate is dramatically improved. On the other hand, if the road traversed is not curved, then the accuracy of the location estimate remains the same as that produced by the GPS/DR fusion localization technique.

## **3.3 Satellite visibility and DGPS**

In urban areas, GPS multipath signals cause unpredictable localization errors due to NLOS satellite signals. Another approach is a localization system driven by tracking visible GPS satellites with an infrared camera. An omni-directional infrared camera mounted on the top of a vehicle is used to recognize obstacles and their heights, and to detect visible satellites by comparing the observed sky with satellite positions obtained from an orbit simulator Meguro et al. (2009). This allows the system to exclude any radio waves emitted by invisible satellites and thus improve the localization accuracy.

The vehicle localization system used in this approach has a high degree of accuracy since it employs a DGPS receiver. However, in areas with high-rise buildings, the availability of location estimates is low due to the lack of enough visible satellites, and even with enough visible satellites, the geometric configuration of the constellation may result in a high Dilution of Precision (DOP).

## **4. Cooperative localization**

Cooperative Localization is a recent location estimation approach that has been implemented in vehicular positioning and wireless communication systems. This localization scheme is suitable for scenarios which involve the coexistence of several entities that independently provide location information. The goal is to localize a mobile node or to enhance its location estimate given that it shares relative spatial information with nearby nodes (e.g., other vehicles or mobile network towers).

#### **4.1 Radio signal measurement data fusion**


Radio localization methods have been studied extensively for cellular networks in a wide range of applications (e.g., for CDMA networks see Al-Jazzar & Caffery (2004); Caffery & Stuber (1994; 1998); Caffery (2000); Le et al. (2003); McGuire et al. (2003); Porretta et al. (2008); Sayed et al. (2005); Venkatraman et al. (2002); Wang et al. (2003); Wylie & Holtzman (1996) and for GSM networks see Chen et al. (2006)). An example of these systems is a localization system that estimates the locations of emergency calls initiated by cellular phones. The system operates on the principle that measurements from different Base Stations (BS's) are combined in order to compute the location of a Mobile Station (MS). The BS's typically have different levels of uncertainty in their measurements, which are minimized as a result of the fusion process. The relative spatial information in this system is based on measurements from radio signals, such as Time of Arrival (TOA), Time Difference of Arrival (TDOA), Angle of Arrival (AOA), and Received Signal Strength (RSS). In some of these GPS-less approaches, a mix of two or more different types of radio signal measurements is utilized in order to relax constraints such as the synchronization of the BS's.

In the following subsections detailed models for some of these techniques are given. (*xm*, *ym*) signifies the MS location. The locations of *n* base stations: (BS1, BS2, BS3, . . . , BS*n*) are denoted by {(*x*1, *y*1),(*x*2, *y*2),(*x*3, *y*3),...,(*xn*, *yn*)}, respectively. For simplicity and without loss of generality, locations are represented by two coordinates, *x* and *y*, in the Cartesian coordinate system.

#### **4.1.1 TOA data fusion**

Time of arrival measurements are based on the time of flight of a signal as it travels between a source and a destination. Since the signal travels at the speed of light (*c*), it is possible to compute the distance between the two points as follows:

$$d\_i = (t\_i - t\_m)c\tag{1}$$

where *tm* signifies the signal sending time from the MS, *ti* signifies the signal arrival time at the BS*i*, and *i* signifies the BS's index (i.e., *i* = {1, 2, . . . , *n*}).

According to Caffery & Stuber (1998), the TOA technique can be employed using three BS's, the minimum number of reference points in two dimensions (Figure 1), in order to estimate the MS location by computing the distances between each BS and the MS (i.e., *d*1, *d*2, *d*3), as per Equation 1, and then formulating the following optimization problem:

$$\hat{x}\_m, \hat{y}\_m = \arg\min\_{x\_m, y\_m} \sum\_{i=1}^{3} \left( d\_i - \sqrt{(x\_i - x\_m)^2 + (y\_i - y\_m)^2} \right)^2 \tag{2}$$

Fig. 1. The TOA localization method.
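The nonlinear least-squares problem in Equation 2 has no closed-form solution and is typically solved iteratively. A minimal Gauss-Newton sketch follows; the function name and the synthetic geometry in the usage example are ours, not from the cited works:

```python
import numpy as np

def toa_solve(bs, d, x0, iters=50):
    """Solve the TOA problem of Equation 2 by Gauss-Newton iteration.

    bs : (n, 2) array of base-station coordinates
    d  : (n,) measured ranges d_i = (t_i - t_m) * c  (Equation 1)
    x0 : initial guess for the MS location (x_m, y_m)
    """
    x = np.asarray(x0, dtype=float)
    bs = np.asarray(bs, dtype=float)
    d = np.asarray(d, dtype=float)
    for _ in range(iters):
        diff = x - bs                       # (n, 2) offsets to each BS
        r = np.linalg.norm(diff, axis=1)    # predicted ranges ||x - bs_i||
        g = r - d                           # range residuals
        J = diff / r[:, None]               # Jacobian d(r_i)/d(x)
        dx, *_ = np.linalg.lstsq(J, -g, rcond=None)  # Gauss-Newton step
        x = x + dx
        if np.linalg.norm(dx) < 1e-12:
            break
    return x

# Synthetic example: three BS's and noiseless ranges to an MS at (3, 4).
bs = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
d = np.linalg.norm(bs - np.array([3.0, 4.0]), axis=1)
est = toa_solve(bs, d, x0=[1.0, 1.0])
```

With NLOS bias the measured ranges exceed the true ones, which is exactly the situation described next.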

Nevertheless, due to possible NLOS propagation conditions, the actual Euclidean distance between the MS and BS*<sup>i</sup>* is less than or equal to (*ti* − *tm*)*c*. This inequality creates more than one solution for the optimization problem in Equation 2, all of which reside in a bounded area, as shown in Figure 1. A constrained version of the optimization problem in Equation 2 is proposed in Caffery (1999); Porretta et al. (2008) in order to increase the localization accuracy; however, the


geometric arrangement of the BS's may produce poor location estimates due to the shape of the bounded area that contains the MS. This shortcoming might be avoided by using more BS's. The method described next utilizes more than three BS's in estimating the MS location so that ambiguity in the distance computation is reduced.

In Caffery (2000); Sayed et al. (2005) the Cartesian coordinate system is set up as follows. The location of one of the base stations is taken as the origin (e.g., BS1 is the origin: (*x*1, *y*1)=(0, 0)), and the locations of the other objects in the network are computed with respect to it. Hence, the distances (*d*1, *d*2, . . . , *dn*) can be used to estimate the location of the MS by solving the following set of equations:

$$\begin{array}{l} d\_1^2 = x\_m^2 + y\_m^2 \\ d\_2^2 = (x\_2 - x\_m)^2 + (y\_2 - y\_m)^2 \\ d\_3^2 = (x\_3 - x\_m)^2 + (y\_3 - y\_m)^2 \\ \vdots \\ d\_n^2 = (x\_n - x\_m)^2 + (y\_n - y\_m)^2 \end{array} \tag{3}$$

After rearranging terms, the above equations can be written as follows:

$$
\begin{bmatrix} x\_2 & y\_2 \\ x\_3 & y\_3 \\ \vdots & \vdots \\ x\_n & y\_n \end{bmatrix} \begin{bmatrix} x\_m \\ y\_m \end{bmatrix} = \frac{1}{2} \begin{bmatrix} k\_2^2 - d\_2^2 + d\_1^2 \\ k\_3^2 - d\_3^2 + d\_1^2 \\ \vdots \\ k\_n^2 - d\_n^2 + d\_1^2 \end{bmatrix} \tag{4}
$$

where $k\_i^2 = x\_i^2 + y\_i^2$. Equation 4 can be expressed in matrix form

$$\mathbf{H}\mathbf{x} = \mathbf{b} \tag{5}$$


$$\text{where } \mathbf{H} = \begin{bmatrix} x\_2 & y\_2 \\ x\_3 & y\_3 \\ \vdots & \vdots \\ x\_n & y\_n \end{bmatrix}, \mathbf{x} = \begin{bmatrix} x\_m \\ y\_m \end{bmatrix}, \text{and } \mathbf{b} = \frac{1}{2} \begin{bmatrix} k\_2^2 - d\_2^2 + d\_1^2 \\ k\_3^2 - d\_3^2 + d\_1^2 \\ \vdots \\ k\_n^2 - d\_n^2 + d\_1^2 \end{bmatrix}.$$

Equation 5 represents an overdetermined system (i.e., *n* > 2). Practically, such a system has no exact solution; therefore, a linear least-squares method is used to estimate the location of the MS as follows:

$$\hat{\mathbf{x}} = \left(\mathbf{H}^T \mathbf{H}\right)^{-1} \mathbf{H}^T \mathbf{b} \tag{6}$$

where $(\cdot)^T$ signifies the matrix transpose and $(\cdot)^{-1}$ the matrix inverse.

Alternative techniques, such as the maximum likelihood are reported in McGuire et al. (2003); Wang et al. (2003).
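Equations 3-6 can be exercised directly in a few lines; a numpy sketch, with BS1 at the origin as the derivation requires (the synthetic coordinates in the usage example are assumed, not from the chapter):

```python
import numpy as np

def toa_linear_ls(bs, d):
    """Linearized TOA solution (Equations 3-6).

    bs : (n, 2) BS coordinates; bs[0] must be BS1 at the origin (0, 0)
    d  : (n,) measured ranges, d[0] being d_1
    """
    H = bs[1:]                           # rows [x_i, y_i], i = 2..n
    k2 = np.sum(H**2, axis=1)            # k_i^2 = x_i^2 + y_i^2
    b = 0.5 * (k2 - d[1:]**2 + d[0]**2)  # right-hand side of Equation 4
    x_hat, *_ = np.linalg.lstsq(H, b, rcond=None)  # Equation 6
    return x_hat

# Four BS's (n > 2 as required) and noiseless ranges to an MS at (3, 4).
bs = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
d = np.linalg.norm(bs - np.array([3.0, 4.0]), axis=1)
est = toa_linear_ls(bs, d)
```

`np.linalg.lstsq` computes the same estimate as the normal-equations form in Equation 6, but more stably.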

#### **4.1.2 TDOA data fusion**


Fig. 2. The TDOA localization method.

TDOA is preferable to TOA because TDOA does not require synchronization between the MS and the BS's (Figure 2). Instead, it takes advantage of the synchronization of the CDMA cellular network BS's to compute the difference between the times of arrival of the MS signal at BS*<sup>i</sup>* and BS1, where *i* ∈ {2, 3, . . . , *n*}. The difference in distance is therefore defined as follows:

$$\begin{array}{l} d\_{i1} \equiv d\_i - d\_1\\ \quad = (t\_i - t\_m)c - (t\_1 - t\_m)c\\ \quad = (t\_i - t\_1)c \end{array} \tag{7}$$

It can be seen that the difference is not affected by errors in the MS clock time *tm*. Substituting Equation 7 into Equation 3, and then expanding and rearranging the terms, produces the following set of equations:

$$
\begin{bmatrix} x\_2 & y\_2 \\ x\_3 & y\_3 \\ \vdots & \vdots \\ x\_n & y\_n \end{bmatrix} \begin{bmatrix} x\_m \\ y\_m \end{bmatrix} = d\_1 \begin{bmatrix} -d\_{21} \\ -d\_{31} \\ \vdots \\ -d\_{n1} \end{bmatrix} + \frac{1}{2} \begin{bmatrix} k\_2^2 - d\_{21}^2 \\ k\_3^2 - d\_{31}^2 \\ \vdots \\ k\_n^2 - d\_{n1}^2 \end{bmatrix} \tag{8}
$$


which can be expressed in a matrix form as follows:

$$\mathbf{H}\mathbf{x} = d\_1 \mathbf{c} + \mathbf{r} \tag{9}$$

$$\text{where } \mathbf{H} = \begin{bmatrix} x\_2 & y\_2 \\ x\_3 & y\_3 \\ \vdots & \vdots \\ x\_n & y\_n \end{bmatrix}, \mathbf{c} = \begin{bmatrix} -d\_{21} \\ -d\_{31} \\ \vdots \\ -d\_{n1} \end{bmatrix}, \text{and } \mathbf{r} = \frac{1}{2} \begin{bmatrix} k\_2^2 - d\_{21}^2 \\ k\_3^2 - d\_{31}^2 \\ \vdots \\ k\_n^2 - d\_{n1}^2 \end{bmatrix}.$$

Similarly, Equation 9 can be solved using the following linear least squares formulation:

$$\hat{\mathbf{x}} = \left(\mathbf{H}^T \mathbf{H}\right)^{-1} \mathbf{H}^T \left(d\_1 \mathbf{c} + \mathbf{r}\right) \tag{10}$$

The solution of Equation 10 is determined in two steps. First, the MS estimate is expressed in terms of *d*1 and substituted into the quadratic constraint $d\_1^2 = x\_m^2 + y\_m^2$, which is solved for *d*1. Second, the value of *d*1 is substituted back into Equation 10 to obtain **x**ˆ Sayed et al. (2005).
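The two-step procedure can be sketched as follows. The root-selection heuristic at the end (keeping the positive root whose solution best reproduces the measured range differences) is our addition for disambiguation, not part of Sayed et al. (2005):

```python
import numpy as np

def tdoa_solve(bs, d_i1):
    """Two-step TDOA solution (Equations 8-10).

    bs   : (n, 2) BS coordinates; bs[0] is BS1 at the origin (0, 0)
    d_i1 : (n-1,) range differences d_i - d_1 for i = 2..n
    """
    H = bs[1:]
    k2 = np.sum(H**2, axis=1)                 # k_i^2 = x_i^2 + y_i^2
    c_vec = -d_i1                             # vector c of Equation 9
    r_vec = 0.5 * (k2 - d_i1**2)              # vector r of Equation 9
    A = np.linalg.pinv(H)                     # (H^T H)^{-1} H^T
    p, q = A @ c_vec, A @ r_vec               # step 1: x_hat = d_1 * p + q
    # Substitute into d_1^2 = x_m^2 + y_m^2 -> quadratic in d_1.
    coeffs = [p @ p - 1.0, 2.0 * (p @ q), q @ q]
    best, best_err = None, np.inf
    for root in np.roots(coeffs):
        if abs(root.imag) > 1e-9 or root.real <= 0:
            continue                          # d_1 must be real and positive
        x = root.real * p + q                 # step 2: back-substitute d_1
        dists = np.linalg.norm(bs - x, axis=1)
        err = np.linalg.norm((dists[1:] - dists[0]) - d_i1)
        if err < best_err:
            best, best_err = x, err
    return best

# Synthetic example: four BS's, MS at (3, 4), noiseless differences.
bs = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
d = np.linalg.norm(bs - np.array([3.0, 4.0]), axis=1)
est = tdoa_solve(bs, d[1:] - d[0])
```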

#### **4.1.3 AOA data fusion**

AOA techniques estimate the location of an MS by measuring the angle of signal arrival from the MS at several BS's by means of an antenna array. The MS location is then estimated through the intersection of the straight paths leaving from at least two BS's, as depicted in Figure 3. However, combining only two AOA measurements may introduce a large amount of uncertainty with respect to the MS location estimate, especially when the MS is close to the line connecting the two BS's. Moreover, this localization method requires the MS to be in LOS with the participating BS's, since reflected or diffracted signals result in misleading information. For this reason, it is preferable for the AOA to be combined with another localization method, such as TOA or TDOA.

Fig. 3. The AOA localization method.
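The line-intersection step reduces to a small linear system. A sketch, assuming the bearings are measured from the x-axis at each BS (the two-BS geometry in the usage example is illustrative):

```python
import numpy as np

def aoa_solve(bs, theta):
    """Estimate the MS location from bearings measured at each BS.

    bs    : (n, 2) BS coordinates
    theta : (n,) bearing of the MS at each BS, measured from the x-axis

    Each bearing constrains the MS to the line
        -sin(theta_i) * (x - x_i) + cos(theta_i) * (y - y_i) = 0,
    giving one linear equation per BS; two or more BS's fix the position.
    """
    A = np.column_stack([-np.sin(theta), np.cos(theta)])
    b = np.sum(A * bs, axis=1)                       # move BS terms to RHS
    x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)    # intersect the lines
    return x_hat

# Two BS's sighting an MS at (3, 4).
bs = np.array([[0.0, 0.0], [10.0, 0.0]])
theta = np.array([np.arctan2(4.0, 3.0), np.arctan2(4.0, -7.0)])
est = aoa_solve(bs, theta)
```

With more than two BS's the least-squares form averages the (generally non-concurrent) noisy bearing lines.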

#### **4.1.4 RSS data fusion**

RSS based localization is a method that employs mathematical models that describe the path loss as a function of distance. Since these models translate the received signal power into a


distance between an MS and a BS, the MS must lie on a circle centered at the BS. Employing three or more BS's provides an estimate for the MS location.
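As an illustration, an assumed log-distance path-loss model (the reference loss, exponent, and transmit power below are invented parameters, not values from this chapter) maps RSS readings to ranges, and the resulting circles can be intersected by linear least squares:

```python
import numpy as np

def rss_to_distance(rss_dbm, tx_power_dbm=0.0, pl_d0_db=40.0, n=3.0, d0=1.0):
    """Invert an assumed log-distance path-loss model: RSS reading -> range.
    pl_d0_db is the loss at reference distance d0; n is the path-loss exponent."""
    path_loss_db = tx_power_dbm - rss_dbm
    return d0 * 10.0 ** ((path_loss_db - pl_d0_db) / (10.0 * n))

def trilaterate(bs, d):
    """Least squares intersection of the circles centred at the BS's."""
    bs, d = np.asarray(bs, dtype=float), np.asarray(d, dtype=float)
    A = 2.0 * (bs[1:] - bs[0])        # linearize by subtracting the first
    k = np.sum(bs**2, axis=1)         # circle equation from the others
    b = (k[1:] - k[0]) - (d[1:]**2 - d[0]**2)
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

In practice the ranges inferred from RSS carry large errors, which is exactly the multipath and shadowing problem discussed next.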

RSS is well known for being drastically affected by multipath fading and shadowing (multipath signals). The error caused by multipath signals can be reduced by using prior knowledge available on the contours of the signal strength centered at the BS's Smith (1991). However, such knowledge assumes a specific surrounding environment that can change due to changes in weather, moving objects such as trucks, as well as new buildings and other barriers.

#### **4.1.5 Fingerprinting**

This localization method is a pattern recognition, or pattern matching, technique. The underlying concept of fingerprinting is that the radio signal propagation characteristics of an MS are unique in terms of TOA, AOA, and RSS when captured at different BS's Chen et al. (2006); Porretta et al. (2008). These characteristics can therefore be used as a signature to indicate the location of an MS. The fingerprinting method has two phases: a training phase and a localization phase. In the training phase, a database is created to index the different patterns in the characteristics of the radio signal propagation. In the localization phase, the signature of the MS is matched against the patterns in the database. The challenging aspect of this method is ensuring that the system can distinguish between similar patterns that represent different locations.
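The two phases can be sketched with a toy nearest-neighbour matcher; the database entries below are invented RSS signatures from three BS's, not real measurements:

```python
import numpy as np

# Training phase: a database mapping surveyed locations to RSS signatures.
TRAINING_DB = {
    (0, 0): (-40.0, -70.0, -80.0),
    (5, 0): (-55.0, -60.0, -75.0),
    (5, 5): (-70.0, -55.0, -55.0),
}

def locate(signature, db=TRAINING_DB):
    """Localization phase: return the stored location whose signature is
    closest (in Euclidean distance) to the measured one."""
    return min(db, key=lambda loc: np.linalg.norm(
        np.asarray(db[loc]) - np.asarray(signature)))
```

Real systems use denser databases and more robust matching, but the pattern-matching principle is the same.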

Of course, the more exhaustive the training phase is (i.e., recording a signature for every small area in the environment), the more accurate the MS location estimate is. The main drawback of this method is the requirement to continually update the database as the configuration of the BS's changes when BS's are removed or new ones are added. Nevertheless, this method is becoming more attractive for indoor applications because the database creation can be more comprehensive and manageable.

Emerging New Trends in Hybrid Vehicle Localization Systems 289

#### **4.2 VANET localization using relative distance measurements**

This approach takes advantage of the emerging VANET environments. The distances between VANET nodes are estimated and exchanged among vehicles along with preliminary estimates of the vehicles' locations. Vehicles can then use this information to construct local relative position maps that contain the vehicles and their neighbours. This strategy initially emerged in Wireless Sensor Networks (WSN), but recently a number of solutions have been proposed for use in VANET Benslimane (2005); Drawil & Basir (2008); Parker & Valaee (2007).

## **4.2.1 Vehicle localization in VANET**

A VANET based localization method was introduced in Benslimane (2005) for localizing vehicles with no GPS receivers, or those whose location cannot be determined because satellite signals have been lost, for instance, in a tunnel. With this method, vehicles that are not equipped with GPS determine their own locations by relying on information they receive from vehicles that are equipped with GPS. Vehicles within transmission range can measure the distances between each other using one of the radio-location methods presented in Caffery & Stuber (1998). By finding its three closest neighbours, the unequipped vehicle can compute its position using trilateration.

## **4.2.2 Cooperative vehicle position estimation**

The work reported in Parker & Valaee (2006) presents a method of distributed vehicle localization in VANET. The method utilizes RSS measurements to estimate the distances between one vehicle and others in its coverage area. It is assumed that vehicles initially estimate their own locations using a GPS receiver and then exchange their location information so that they can perform an optimization technique in order to improve their location estimates.

This technique demonstrates robustness of location estimates. However, it lacks the ability to detect and avoid the effect of multipath signals in the GPS measurements, which drastically degrades the localization accuracy in multipath environments (e.g., urban canyons).

In Drawil & Basir (2010) an algorithm called InterVehicle-Communication-Assisted Localization (IVCAL) is proposed to mitigate the multipath effect in the location estimates of vehicles in VANET. A KF and an inter-vehicle-communication system collaborate in order to increase the robustness and accuracy of the localization of every vehicle in the network. The two main components that allow the inter-vehicle-communication system and the KF to interact are the Multipath Detection Unit (MDU), which detects the existence of a multipath effect in the output of the KF, and the Localization Enhancement Unit (LEU), which obtains the neighbours' information from the inter-vehicle-communication system and feeds an optimized location estimate back to the KF (Figure 4). As in Jabbour, Cherfaoui & Bonnifait (2006) and Jabbour, Bonnifait & Cherfaoui (2006), KF innovation is used as an indication of the contamination of the GPS measurement, and it has therefore been used as a learning pattern for the MDU in IVCAL. An uncertainty measure is also utilized in order to specify a subset of the most accurate network neighbours that can be used as anchors to enable vehicles to improve their location estimates.
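The innovation-gating idea behind the MDU can be sketched as follows; the matrices, the gate value, and the function name are illustrative, not the IVCAL implementation:

```python
import numpy as np

def multipath_flag(z, H, x_pred, P_pred, R, gate=9.21):
    """Flag a GPS fix whose normalized innovation squared (NIS) exceeds a
    chi-square gate (9.21 is roughly the 99% point for 2 degrees of freedom),
    as an indicator of possible multipath contamination."""
    v = z - H @ x_pred                    # Kalman filter innovation
    S = H @ P_pred @ H.T + R              # innovation covariance
    nis = float(v @ np.linalg.solve(S, v))
    return nis > gate
```

Flagged measurements can then be down-weighted or replaced by neighbour-assisted estimates instead of being fed to the filter unchecked.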

Lack of adequate location anchors and/or prolonged multipath conditions remain unsolved problems that continue to degrade localization accuracy.

## **5. Multi-level fusion approaches**


As has been reported above, a variety of multi-modality localization methods have evolved in recent years. Typical modalities include satellite signals, VANET communication, vision features, laser rays, etc. This variety of information has motivated the concept of multi-level fusion.

For instance, in Boukerche et al. (2008), a data-fusion model is proposed in the form of a three-level fusion localization system. In the first level, a variety of location information is gathered as raw data and processed separately using local filters that are suitable for each type of location information. As with the system in Skog & Handel (2009), the second level combines the output of the first level and produces a better location estimate. In Boukerche et al. (2008), the results are then fused in the third level based on contextual information (e.g., digital maps and traffic information). In this scheme, the final location estimate is fed back to the second level in order to improve future estimations.

Multi-level fusion aims to tackle data fusion as a hierarchical process so as to allow for combining measurements at various levels of abstraction in a simple manner. Nevertheless, if the estimates in the lowest-level filters are evaluated for reliability, the fusion of these estimates in higher-level filters will then be more robust.

## **6. Integrity of localization systems**

Due to the inherent errors in the positioning information, a level of uncertainty in location estimates is inevitable. Therefore, it is essential to measure the reliability of the positioning information in order to identify any hidden anomalies. To achieve this task, a level of trust, integrity, in every estimate must be determined.

Fig. 4. Block diagram of the inter-vehicle-communication-assisted localization technique.

| Modality(ies) | Best Case Accuracy (m) | Availability | Synch. | Infr.str. |
|---|---|---|---|---|
| GPS | 10-20 Hoshen (1996); Leva (1996) | Out Door-Open Sky | Yes | No |
| Dead Reckoning (DR) | Worsens with time Kao (1991) | Anywhere | No | No |
| DGPS with Visible Satellites | 0.01-7.6 Meguro et al. (2009) | Suburban-Open Sky | Yes | Yes |
| DGPS+DR+Map Matching | 0.5-5 Lahrech et al. (2005) | Out Door-Open Sky | N/A | Yes |
| GPS+Vision+Map Matching | 0.5-1 Chausse et al. (2005); Jabbour, Bonnifait & Cherfaoui (2006) | Out Door-Open Sky | No | Yes |
| Cellular Localization | 90-250 Chen et al. (2006); 25-69 Porretta et al. (2008) | Under Network Coverage | Yes | Yes |
| Location Services | Submeter Zhang et al. (2008) | In Door | N/A | Yes |
| Relative Ad hoc Localization | 2-7 (simulation, Drawil & Basir (2008); Parker & Valaee (2006)) | Suburban | Yes | No |

Table 1. Specifications of Localization Techniques.

In the last two decades, a significant effort has been made in aviation to develop integrity monitoring systems Hewitson (2003); Walter & Enge (1995). Integrity is defined as a measure of the trust which can be placed in the correctness of the information supplied by the total system; integrity includes the ability of a system to provide timely and valid measurements to users ESA (n.d.). Three key components have been proposed for integrity monitoring: 1) fault detection, 2) fault isolation, and 3) removal of faulty measurement sources from the estimates Hewitson et al. (2004). The European Geostationary Navigation Overlay Service (EGNOS) and the Wide Area Augmentation System (WAAS), Hewitson (2003), are developed to form a redundant source of information for the Global Navigation Satellite Systems (GNSS) in order to perform integrity monitoring by providing correction information.

During the last decade, monitoring the integrity of land-vehicles' localization has attracted attention due to the increasing demand for highly reliable accurate location data. Since roving in dense urban environments may limit access to the signals from augmentation systems such as EGNOS or WAAS, other means of measuring integrity have been proposed Schlingelhof et al. (2008).

For instance, Toledo-Moreo et al. (2006) presents a localization solution based on the fusion of GNSS and INS sensors. In this fusion process an interactive multimodel method is used. Different covariance matrices are used as a response to change in the noise behaviour. The proposed integrity measure is based on the covariance matrix of the EKF estimation error.
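A minimal sketch of a covariance-based integrity measure of this kind follows; the scale factor `k` is an assumed parameter, not the authors' choice:

```python
import numpy as np

def covariance_bound(P_xy, k=3.0):
    """Bound the horizontal position error by k times the largest standard
    deviation of the 2x2 position block of the EKF error covariance."""
    return k * float(np.sqrt(np.linalg.eigvalsh(np.asarray(P_xy)).max()))
```

With `k = 3` and diagonal covariance `diag(4, 1)` m², the bound is three times the 2 m standard deviation of the dominant axis.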

Relying on the error covariance matrix can be misleading especially when experiencing unmodeled environment noise. In other words, it is not possible in many cases to detect, isolate, and remove the estimation faults, let alone the unavoidable false alarms.

Also, in Jabbour et al. (2008) a binary integrity decision-maker is proposed for a map-matching localization technique in which multihypothesis road-tracking method combines proprioceptive sensors (odometers and gyrometers) with GPS and map information. In this work, the integrity represents high or low confidence of the location estimate. The candidate tracks or roads are associated with a probability that is computed using the multihypothesis road-tracking method. If one credible road exists and the normalized innovation is below a prespecified threshold, the technique declares high confidence location estimate. However, the lack of granularity in the integrity measure limits the range of the integrity-level based application that can use this method.

Integrity monitoring of map-matching localization has also been proposed and tested in Quddus (2006). In this work, three indicators have been monitored to achieve this task: distance residuals, heading residuals, and an indicator related to the uncertainty of the map matched position. Due to the linguistic nature of these indicators, they have been combined using a fuzzy inference model to produce a value between 0 and 100 to indicate the integrity of the system. The integrity threshold has been determined experimentally to be 70, where the type of environment experienced during the experiment was not specified. The value of the threshold thus can be considered specific to the environment of the experiment. Therefore, the approach might not guarantee robust integrity monitoring. In other words, it is possible to come across an environment that influences the system to produce both an integrity value above the threshold and a location estimate mismatch.
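A toy stand-in for this indicator-combination step is sketched below; note that Quddus (2006) uses a fuzzy inference model, not this simplified weighted score, and the normalization constants are invented:

```python
def integrity_score(distance_res, heading_res, map_uncert,
                    scales=(10.0, 30.0, 5.0)):
    """Combine the three indicators into a 0-100 integrity value; 100 means
    full confidence, and values above a threshold (70 in Quddus (2006))
    would be declared trustworthy. The scales are illustrative constants."""
    residuals = (distance_res, heading_res, map_uncert)
    penalty = sum(min(r / s, 1.0) for r, s in zip(residuals, scales)) / 3.0
    return 100.0 * (1.0 - penalty)
```

The environment-specific threshold problem noted above applies equally here: any fixed scales or cut-off tuned in one environment may misclassify another.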

#### **7. Performance criteria and benchmarking**

From the discussion above it is clear that vehicle localization is an increasingly growing area of research. Nevertheless, there are a number of outstanding issues that still need to be addressed.


In order to put these outstanding issues in practical context the following performance criteria are proposed.

**7.1.** *Accuracy: Accuracy of a vehicle location estimate is defined as the degree of closeness of a vehicle's location estimate to its actual (true) location.*

**7.2.** *Availability: Availability of a vehicle location estimate is defined as the ratio of the number of estimates produced to the number of estimates expected per one unit of time.*

**7.3.** *Response Time: Response time is the time required by a localization technique to produce a location estimate.*

**7.4.** *Integrity: Integrity is defined as the level of confidence that can be placed in the correctness of the location estimate* Bakhache & Nikiforov (2000); ESA (n.d.); Quddus (2006).
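The first two criteria are directly computable quantities; a small sketch with invented sample values:

```python
import math

def accuracy(estimate, truth):
    """Degree of closeness of a location estimate to the true location (m)."""
    return math.dist(estimate, truth)

def availability(produced, expected):
    """Ratio of estimates produced to estimates expected per unit of time."""
    return produced / expected
```

Response time is a simple latency measurement, while integrity requires a model of estimate confidence such as those discussed in the previous section.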

Based on the above performance criteria, a benchmark can be established in order to compare the performance of different localization techniques based on reported best achievable localization accuracy. Localization performance is compared with respect to reliability as well. Table 1 provides a summary of the comparison in terms of modality used, best case accuracy, environmental constraints, synchronization requirements, and dependency on infrastructure. Table 2 reports emerging applications and their requirements with respect to localization accuracy. It is evident from Tables 1 and 2 that current localization techniques do not live up to the required integrity and availability performance. In other words, the delivered performance of the localization techniques listed in Table 1 is not always above the target performance specified by the applications, and that is due to the unavailability of their measurements or the decrease in their accuracy in some environments, such as urban canyons, foggy weather, and dark areas.

Hence, the performance needed by applications can constitute a challenging issue in the fusion process of a multi-sensory system. Therefore, task-driven integrity issues relevant to vehicle localization are highlighted next.

#### **8. Task driven localization integrity**

From the above discussion it is obvious that for localization systems to meet the expectations of emerging applications it is imperative that they employ diverse location measurement sources and effective strategies to fuse these sources so as to achieve the Quality of Service expected of them. Of course this Quality of Service is multi-dimensional as it pertains to expected accuracy, availability, response time and integrity. The Quality of Service as a function of these performance criteria is application and task dependent. The more stringent is the required Quality of Service with respect to a given performance criterion, the more resources are needed and the higher is the computational cost. This presents a challenge for

**Primary Localization Layer**

**GPS INS MAPs**

Fig. 5. The structure of the proposed framework.

**9.1 Primary localization layer**

` **Calibration command from the estimate fusion and management layer**

Fig. 6. Primary localization layer.

**9.2 Integrity Monitoring layer**

further description of the framework layers functionality.

**Integrity Monitoring Layer Estimate** 

Emerging New Trends in Hybrid Vehicle Localization Systems 293

accuracy and integrity are achieved by executing a proper fusion scheme. In what follows a

The primary localization layer comprises of the system's localization techniques which are partitioned in the form of a set of Primary Localization Units (PLUs), as can be seen in Figure 6. Any localization technique, such as those mentioned above, can be used in any given PLU. These primary localization units receive localization requests from the Estimate Fusion and Management layer. Each PLU is constructed from techniques that are based on different phenomena/algorithms to ensure minimum correlation. A primary localization unit can share its information sources with other units; it can constitute a single modality or multiple-modalities. An example of a single modality PLU is one that estimates the vehicle

**PLU1 PLU2 PLUn**

location from a GPS information source. IVCAL is an example of a PLU that utilizes three

Central to the proposed framework is the integrity monitoring layer. Here, an Integrity Monitoring unit (IMU) is used to monitor the performance of a primary localization unit (Figure 7). The monitoring process takes in consideration the impact of the measurement conditions on the PLU. For example, to indicate the reliability of an estimate DOP measure

**GPS INS MAPs**

**The integrity monitoring layer**

modalities: GPS, INS, and Inter-Vehicle-Communication.

**Fusion and Managemen Layer**

**Application**

**Loc-Req**

**Localization command from the estimate fusion and management Layer**

**Preliminary location estimates**


Table 2. Applications Requirement for Location Estimates Boukerche et al. (2008).

the system as calls for effective use of resources to achieve the target Quality of Service. For example, there are applications where accuracy can be traded for faster response time. On the other hand, there are applications where response time is not as important as accuracy (offline vehicle track mapping). There are also applications where both requirements, accuracy and response time, can not be compromised for any other gain.

Indeed, task or goal driven localization is about effective allocating system resources and planning of localization tasks such that the system mission is achieved with maximum integrity possible. This strategy to performance is a key issue to the new trends of hybrid localization systems. In order for this strategy to work it is imperative that the impact of the environment is not ignored. Without modeling the impact of the environment on the system, the system can not be guaranteed to achieve its target performance, and even worst as it may falsely determine its task is accomplished. Thus, modeling the impact of the environmental conditions on the system is a central issue to the following proposed framework.

#### **9. Task-driven localization through integrity assessment and control**

It is well understood that the reported techniques can estimate the location of vehicles relatively accurately in some situations if they are given adequate time to perform the task. However, they may not perform as well in other situations. The deficiencies of these localization techniques are uncorrelated as they are expected to be of diverse phenomena, and/or utilize different algorithmic paradigms. This motivates the development of systems that can take advantage of this diversity to achieve a reliable and accurate performance.

In this section, a high-level concept of a novel framework for fusing different localization techniques is proposed (Figure 5). What distinguishes this framework from existing ones is its ability to take into account the impact of the measurement conditions on the individual techniques. Thus, it is able to optimize the fusion process so as to maximize the accuracy and integrity of the localization estimates. The framework consists of three logical layers: (1) the Primary Localization layer, which provides preliminary location estimates using the available localization techniques; (2) the Integrity Monitoring layer, which computes the reliability of the vehicle's location estimates produced by the Primary Localization layer, a process that captures the impact of measurement conditions; and (3) the Estimate Fusion and Management layer, which interacts with the application task to ensure that the task's expected localization accuracy and integrity are achieved by executing a proper fusion scheme. In what follows, the functionality of each framework layer is described further.

Fig. 5. The structure of the proposed framework.

Table 2. Application requirements for location estimates, Boukerche et al. (2008). The required accuracy is classed as Low (10-20 m), Medium (1-5 m) or High (less than 1 m); the applications considered are Collision Warning Systems, Vision Enhancement, Automatic Parking, Road Pricing, Cooperative Cruise Control, Cooperative Intersection Safety, Blind Crossing, Platooning, Message Routing (VANET), Data Dissemination and Map Localization.

These varying requirements place different demands on the system and call for effective use of resources to achieve the target Quality of Service. For example, there are applications where accuracy can be traded for faster response time. On the other hand, there are applications where response time is not as important as accuracy (e.g., offline vehicle track mapping). There are also applications where both requirements, accuracy and response time, cannot be compromised for any other gain.

#### **9.1 Primary localization layer**

The primary localization layer comprises the system's localization techniques, which are partitioned into a set of Primary Localization Units (PLUs), as can be seen in Figure 6. Any localization technique, such as those mentioned above, can be used in any given PLU. These primary localization units receive localization requests from the Estimate Fusion and Management layer. Each PLU is constructed from techniques that are based on different phenomena/algorithms to ensure minimum correlation. A primary localization unit can share its information sources with other units; it can constitute a single modality or multiple modalities. An example of a single-modality PLU is one that estimates the vehicle location from a GPS information source. IVCAL is an example of a PLU that utilizes three modalities: GPS, INS and Inter-Vehicle Communication.

Fig. 6. Primary localization layer.

#### **9.2 Integrity Monitoring layer**

Central to the proposed framework is the integrity monitoring layer. Here, an Integrity Monitoring unit (IMU) is used to monitor the performance of a primary localization unit (Figure 7). The monitoring process takes into consideration the impact of the measurement conditions on the PLU. For example, to indicate the reliability of an estimate, the DOP (Dilution of Precision) measure and/or the signal-to-noise ratio can be utilized when a GPS receiver is used, the light intensity can be utilized when vision features are used, and the KF (Kalman filter) innovation can be utilized when IVCAL is used. Various tools can be employed in this layer based on the type of the localization technique. Fuzzy inference systems and probabilistic models for reliability are two examples of these tools.

Fig. 7. Integrity monitoring layer.
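As a concrete illustration of what an Integrity Monitoring unit might compute for a GPS-based PLU, the sketch below maps two quality indicators (DOP and carrier-to-noise ratio) to a single reliability score in [0, 1] with a fuzzy-minimum aggregation. The thresholds are illustrative assumptions, not values prescribed by the chapter.

```python
def reliability_from_gps(dop, cn0_dbhz):
    """Map GPS quality indicators to a [0, 1] reliability score.

    Illustrative piecewise-linear memberships (assumed thresholds):
    DOP <= 2 is ideal and DOP >= 10 unusable; C/N0 >= 45 dB-Hz is
    ideal and C/N0 <= 25 dB-Hz unusable.
    """
    dop_score = min(1.0, max(0.0, (10.0 - dop) / 8.0))
    cn0_score = min(1.0, max(0.0, (cn0_dbhz - 25.0) / 20.0))
    # Conservative fuzzy AND: the estimate is only as reliable as
    # its weakest indicator.
    return min(dop_score, cn0_score)

# A mid-quality fix: DOP halfway between thresholds, moderate C/N0.
score = reliability_from_gps(6.0, 35.0)  # 0.5
```

A fuzzy inference system or a calibrated probabilistic model, as mentioned above, would replace this hand-tuned mapping in a full implementation.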

Emerging New Trends in Hybrid Vehicle Localization Systems 295


#### **9.3 The Estimate Fusion and Management Layer**

The Estimate Fusion and Management (EFM) layer is responsible for determining an effective integration (meta-fusion) strategy for fusing the estimates produced by the different primary localization units, so as to achieve the required localization accuracy and integrity (Figure 8). The estimate fusion and management layer processes the location estimates produced by the different primary localization units in conjunction with their integrity assessments. Since the vehicle is expected to be performing localization while moving, it is imperative for the fusion process to perform spatial and temporal alignment of the estimates produced by the different PLUs. Therefore, this layer employs a synchronization handler to manage timing issues among the different PLUs. Given the task's target accuracy and integrity, as well as that of the various PLUs, a management and fusion scheme is computed such that the scheme produces a location estimate that meets the task requirements.

Fig. 8. Estimate fusion and management layer.
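The chapter leaves the concrete meta-fusion scheme open; one simple candidate consistent with this layer is inverse-variance weighting, in which each PLU estimate is weighted by the trust its integrity assessment expresses. A minimal one-dimensional sketch with invented numbers:

```python
def fuse_estimates(estimates):
    """Fuse 1-D position estimates by inverse-variance weighting.

    `estimates` is a list of (position, variance) pairs, one per PLU,
    where the variance stands in for the IMU's integrity assessment
    (smaller variance = more trusted). Returns (fused, fused_variance).
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused = sum(w * x for (x, _), w in zip(estimates, weights)) / total
    return fused, 1.0 / total

# A GPS PLU (10 m^2 error variance) and a map-matching PLU (2.5 m^2)
# reporting slightly different along-track positions.
fused, var = fuse_estimates([(100.0, 10.0), (104.0, 2.5)])  # 103.2, 2.0
```

The fused variance is smaller than either input variance, which is exactly the benefit the framework seeks from diversity; checking the fused result against the task's target accuracy is then the EFM layer's job.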


To address the challenges of this layer, first of all, PLU estimates should be time-stamped as close as possible to a common time base. Of course, the allowable synchronization error depends on factors such as the speed of the vehicle relative to the PLU response time. It is also affected by the system's desired spatial precision and detection frequency. The tighter the time synchronization achieved with respect to the common time base, the greater the precision possible in tracking the vehicle.
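The synchronization handler's core operation can be sketched as resampling each PLU's time-stamped track onto the common time base, here with plain linear interpolation (an illustrative simplification; a vehicle motion model could be used instead):

```python
def align_to_common_time(track, t_query):
    """Linearly interpolate a time-stamped 1-D track at time t_query.

    `track` is a list of (timestamp, position) pairs sorted by time.
    Estimates from different PLUs are resampled this way onto one
    common time base before they are fused.
    """
    for (t0, x0), (t1, x1) in zip(track, track[1:]):
        if t0 <= t_query <= t1:
            a = (t_query - t0) / (t1 - t0)
            return (1 - a) * x0 + a * x1
    raise ValueError("t_query outside track time span")

# A vehicle moving at 20 m/s: GPS fixes at t = 0 s and t = 1 s,
# resampled to a common epoch at t = 0.25 s.
pos = align_to_common_time([(0.0, 0.0), (1.0, 20.0)], 0.25)  # 5.0
```

The interpolation error grows with vehicle speed and fix spacing, which is why the allowable synchronization error depends on the speed relative to the PLU response time, as noted above.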

Second of all, since the fusion process is task driven, an optimal fusion strategy is one that achieves the target accuracy and integrity within the constraints of the task deadline. This gives rise to the challenge of optimal estimate fusion and reliability aggregation. Both fuzzy reasoning and evidential reasoning are tentative tools to be investigated as bases for constructing the meta-fusion model. Fuzzy reasoning can be used for representing uncertainty in the estimates as well as for representing linguistic task requirements. Since some PLUs may employ probabilistic (Bayesian) estimators, it will be interesting to study how probabilistic estimates and fuzzy estimates can be represented in a unified uncertainty framework.
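For instance, the linguistic accuracy classes of Table 2 can be represented as fuzzy sets over the position error; the piecewise-linear membership shapes below are illustrative assumptions, not taken from the chapter:

```python
def accuracy_memberships(error_m):
    """Fuzzy memberships of a position error (in metres) in the
    linguistic accuracy classes of Table 2: High (less than 1 m),
    Medium (1-5 m) and Low (10-20 m).

    The transition regions (1-2 m, 5-8 m and 8-10 m) are assumed
    for the illustration.
    """
    high = max(0.0, min(1.0, 2.0 - error_m))
    medium = max(0.0, min(error_m - 1.0, 1.0, (8.0 - error_m) / 3.0))
    low = max(0.0, min((error_m - 8.0) / 2.0, 1.0))
    return {"High": high, "Medium": medium, "Low": low}

grades = accuracy_memberships(3.0)  # fully "Medium"
```

A task query such as "at least Medium accuracy" then becomes a membership threshold, which is one way linguistic task requirements and numeric estimate uncertainty can meet in a unified framework.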

Bayesian-theory-based fusion techniques have been evolving in fields such as process control, target tracking and object recognition. Nonetheless, effective fusion performance can only be achieved if adequate and appropriate prior and conditional probabilities are available. Although, at least in some situations, assumptions can be made with respect to prior and posterior probabilities, these assumptions can turn out to be unreasonable in many other situations, especially if we are to allow for non-probabilistic estimators in the PLU layer. One possible solution is using Dempster-Shafer (DS) evidence theory as an extension of Bayes theory. DS belief and plausibility functions can be used to quantify evidence and unify the uncertainty of the PLU estimates. DS evidence theory can also model how the uncertainty of a given location estimate diminishes as pieces of evidence accumulate during the localization process. One important aspect of this theory is that reasoning or decision making can be carried out with incomplete or conflicting pieces of evidence, a reality that is quite common in localization problems.
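A minimal sketch of Dempster's rule of combination, with two hypothetical PLUs assigning belief mass to the vehicle being in zone A or zone B of a road segment (the mass values are invented for the example):

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Masses are dicts mapping frozenset hypotheses to belief mass.
    Mass falling on an empty intersection is conflict; it is removed
    and the remaining masses are renormalized.
    """
    combined, conflict = {}, 0.0
    for (h1, w1), (h2, w2) in product(m1.items(), m2.items()):
        inter = h1 & h2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    return {h: w / (1.0 - conflict) for h, w in combined.items()}

A, B = frozenset("A"), frozenset("B")
theta = A | B  # the whole frame: mass here expresses ignorance
m_gps = {A: 0.6, theta: 0.4}          # GPS PLU leans towards zone A
m_map = {A: 0.7, B: 0.1, theta: 0.2}  # map-matching PLU mostly agrees
m = combine(m_gps, m_map)  # belief in A rises to about 0.87
```

Note how mass kept on `theta` lets a source remain partly uncommitted, and how the combination still yields a decision even though the two sources conflict on zone B, which is exactly the incomplete/conflicting-evidence behaviour described above.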

## **10. Conclusions**

In this chapter, a variety of reported localization techniques were presented and classified based on the type of location measurement used.

Although techniques that incorporate fusion of motion sensory data with GPS localization have demonstrated improved performance, there are still situations that can negatively affect their localization accuracy. Incremental localization errors in motion-sensor data and the multipath effect in urban canyon environments contribute significantly to such location estimate errors, which necessitates augmenting the initial location data with other sources of location information in order to overcome these shortcomings.

Digital maps and visual features enhance GPS-DR localization by recognizing landmarks in the surrounding environment and matching them with others in a reference GIS map. A key problem associated with this scheme is that the landmark segmentation process is a complex and ill-conditioned process.

Multi-level fusion schemes are promising as they employ multiple location measurement phenomena. However, these schemes have given birth to new challenges in the localization problem in terms of resource synchronization, resource management and task-driven performance.

A novel framework for vehicle localization is presented. The aim is to develop a vehicle localization system that can optimize and plan the use of its resources so as to achieve the performance requirements of the localization task or application. The main components of the proposed framework are key research issues.

## **11. References**

Al-Bayari, O. & Sadoun, B. (2005). New Centralized Automatic Vehicle Location Communications Software System Under GIS Environment, *International Journal of Communication Systems* 18(9): 833.

Al-Jazzar, S. & Caffery, J., Jr. (2004). NLOS Mitigation Method for Urban Environments, *IEEE 60th Vehicular Technology Conference* 7: 5112–5115.

Aono, T., Fujii, K., Hatsumoto, S. & Kamiya, T. (1998). Positioning of Vehicle on Undulating Ground using GPS and Dead Reckoning, *IEEE International Conference on Robotics and Automation* 4: 3443–3448.

Bakhache, B. & Nikiforov, I. (2000). Reliable detection of faults in measurement systems, *International Journal of Adaptive Control and Signal Processing* 14(7): 683–700.

Benslimane, A. (2005). Localization in Vehicular Ad hoc Networks, *Systems Communications, Proceedings* pp. 19–25.

Bouju, A., Stockus, A., Bertrand, R. & Boursier, P. (2002). Location-Based Spatial Data Management in Navigation Systems, *IEEE Intelligent Vehicle Symposium* 1: 172–177.

Boukerche, A., Oliveira, H., Nakamura, E. & Loureiro, A. (2008). Vehicular Ad hoc Networks: A New Challenge for Localization-Based Systems, *Computer Communications* 31(12): 2838–2849.

Bouvet, D. & Garcia, G. (2000). Improving the Accuracy of Dynamic Localization Systems using RTK GPS by Identifying the GPS Latency, *IEEE International Conference on Robotics and Automation* 3: 2525–2530.

Caffery, J. (1999). *Wireless Location in CDMA Cellular Radio Systems*, Kluwer Academic Pub.

Caffery, J. & Stuber, G. (1994). Vehicle Location and Tracking for IVHS in CDMA Microcells, *5th IEEE International Symposium on Personal, Indoor and Mobile Radio Communications* 4: 1227–1231.

Caffery, J. & Stuber, G. (1998). Overview of Radiolocation in CDMA Cellular Systems, *IEEE Communications Magazine* 36(4): 38–45.

Caffery, J., Jr. (2000). A New Approach to the Geometry of TOA Location, *IEEE 52nd Vehicular Technology Conference* 4: 1943–1949.

Chausse, F., Laneurit, J. & Chapuis, R. (2005). Vehicle Localization on a Digital Map using Particles Filtering, pp. 243–248.

Chen, M., Sohn, T., Chmelev, D., Haehnel, D., Hightower, J., Hughes, J., LaMarca, A., Potter, F., Smith, I. & Varshavsky, A. (2006). Practical Metropolitan-scale Positioning for GSM Phones, *Lecture Notes in Computer Science* 4206: 225.

Cramer, M. (1997). GPS/INS Integration.

Dao, D., Rizos, C. & Wang, J. (2002). Location-Based Services: Technical and Business Issues, *GPS Solutions* 6(3): 169–178.

Drawil, N. & Basir, O. (2008). Vehicular Collaborative Technique for Location Estimate Correction, *IEEE 68th Vehicular Technology Conference* pp. 1–5.

Drawil, N. & Basir, O. (2010). Intervehicle-communication-assisted localization, *IEEE Transactions on Intelligent Transportation Systems* 11(3): 678–691.

ESA (n.d.). Making EGNOS Work for You, CD-ROM.

Fuerstenberg, K. & Weiss, T. (2005). Feature-Level Map Building and Object Recognition for Intersection Safety Applications, *Proceedings of IEEE Intelligent Vehicles Symposium* pp. 490–495.

Hewitson, S. (2003). GNSS receiver autonomous integrity monitoring: A separability analysis, *Proc. ION GPS* pp. 1502–1509.

Hewitson, S., Kyu Lee, H. & Wang, J. (2004). Localizability Analysis for GPS/Galileo Receiver Autonomous Integrity Monitoring, *The Journal of Navigation* 57(2): 245–259.

Honghui, Q. & Moore, J. B. (2002). Direct Kalman Filtering Approach for GPS/INS Integration, *IEEE Transactions on Aerospace and Electronic Systems* 38(2): 687–693.

Hoshen, J. (1996). The GPS Equations and the Problem of Apollonius, *IEEE Transactions on Aerospace and Electronic Systems* 32(3): 1116–1124.

Jabbour, M., Bonnifait, P. & Cherfaoui, V. (2006). Enhanced Local Maps in a GIS for a Precise Localisation in Urban Areas, *IEEE Intelligent Transportation Systems Conference* pp. 468–473.

Jabbour, M., Bonnifait, P. & Cherfaoui, V. (2008). Map-Matching Integrity using Multi-Sensor Fusion and Multi-Hypothesis Road Tracking, *Journal of Intelligent Transportation Systems: Technology, Planning, and Operations* 12(4): 189–201.

Jabbour, M., Cherfaoui, V. & Bonnifait, P. (2006). Management of Landmarks in a GIS for an Enhanced Localisation in Urban Areas, *IEEE Intelligent Vehicles Symposium* pp. 50–57.

Kao, W. (1991). Integration of GPS and Dead-Reckoning Navigation Systems, *Vehicle Navigation and Information Systems Conference* 2: 635–643.

Lahrech, A., Boucher, C. & Noyer, J.-C. (2005). Accurate Vehicle Positioning in Urban Areas, *31st Annual Conference of IEEE Industrial Electronics Society*.

Lai, C.-C. & Tsai, W.-H. (2003). Location Estimation and Trajectory Prediction of Moving Lateral Vehicle using Two Wheel Shapes Information in 2-D Lateral Vehicle Images by 3-D Computer Vision Techniques, *IEEE International Conference on Robotics and Automation* 1: 881–886.

Le, B. L., Ahmed, K. & Tsuji, H. (2003). Mobile Location Estimator With NLOS Mitigation using Kalman Filtering, *IEEE Wireless Communications and Networking Conference* 3: 1969–1973.

Leva, J. L. (1996). An Alternative Closed-Form Solution to the GPS Pseudo-Range Equations, *IEEE Transactions on Aerospace and Electronic Systems* 32(4): 1430–1439.

McGuire, M., Plataniotis, K. & Venetsanopoulos, A. (2003). Location of Mobile Terminals using Time Measurements and Survey Points, *IEEE Transactions on Vehicular Technology* 52(4): 999–1011.

Meguro, J.-i., Murata, T., Takiguchi, J.-i., Amano, Y. & Hashizume, T. (2009). GPS Multipath Mitigation for Urban Area Using Omnidirectional Infrared Camera, *IEEE Transactions on Intelligent Transportation Systems* 10(1): 22–30.

Nishimura, Y., Tanahashi, I., Taniguchi, S., Matsumoto, N. & Nakamura, K. (1996). A New Concept for Vehicle Localization of Road Debiting System, *Proceedings of the IEEE Intelligent Vehicles Symposium* pp. 93–98.

Parker, R. & Valaee, S. (2006). Vehicle Localization in Vehicular Networks, *IEEE 64th Vehicular Technology Conference* pp. 1–5.

Parker, R. & Valaee, S. (2007). Cooperative Vehicle Position Estimation, *IEEE International Conference on Communications* pp. 5837–5842.

Porretta, M., Nepa, P., Manara, G. & Giannetti, F. (2008). Location, Location, Location, *IEEE Vehicular Technology Magazine* 3(2): 20–29.

Quddus, M. (2006). *High integrity map matching algorithms for advanced transport telematics applications*, PhD thesis.

Rae, A. & Basir, O. (2007). A Framework for Visual Position Estimation for Motor Vehicles, *4th Workshop on Positioning, Navigation and Communication* pp. 223–228.

Rezaei, S. & Sengupta, R. (2005). Kalman Filter Based Integration of DGPS and Vehicle Sensors for Localization, *IEEE International Conference on Mechatronics and Automation* 1: 455–460.

Sayed, A., Tarighat, A. & Khajehnouri, N. (2005). Network-Based Wireless Location: Challenges Faced in Developing Techniques for Accurate Wireless Location Information, *IEEE Signal Processing Magazine* 22(4): 24–40.

Schlingelhof, M., Betaille, D., Bonnifait, P. & Demaseure, K. (2008). Advanced Positioning Technologies for Co-operative Systems, *Intelligent Transport Systems, IET* 2(2): 81–91.

Sharaf, R., Noureldin, A., Osman, A. & El-Sheimy, N. (2005). Online INS/GPS Integration with A Radial Basis Function Neural Network, *IEEE Aerospace and Electronic Systems Magazine* 20(3): 8–14.

Skog, I. & Handel, P. (2009). In-Car Positioning and Navigation Technologies – A Survey, *IEEE Transactions on Intelligent Transportation Systems* 10(1): 4–21.

Sliety, M. (2007). Impact of Vehicle Platform on Global Positioning System Performance in Intelligent Transportation, *Intelligent Transport Systems, IET* 1(4): 241–248.

Smith, W. W., Jr. (1991). Passive Location of Mobile Cellular Telephone Terminals, *IEEE International Carnahan Conference on Security Technology* pp. 221–225.

Stockus, A., Bouju, A., Bertrand, F. & Boursier, P. (2000). Web-Based Vehicle Localization, *Proceedings of the IEEE Intelligent Vehicles Symposium* pp. 436–441.

Toledo-Moreo, R., Zamora-Izquierdo, M. & Gomez-Skarmeta, A. (2006). A Novel Design of a High Integrity Low Cost Navigation Unit for Road Vehicle Applications, pp. 577–582.

Venkatraman, S., Caffery, J., Jr. & You, H.-R. (2002). Location using LOS Range Estimation in NLOS Environments, *IEEE 55th Vehicular Technology Conference* 2: 856–860.

Walter, T. & Enge, P. (1995). Weighted RAIM for precision approach, *Proceedings of ION GPS* pp. 1995–2004.

Wang, X., Wang, Z. & O'Dea, B. (2003). A TOA-Based Location Algorithm Reducing the Errors due to Non-Line-of-Sight (NLOS) Propagation, *IEEE Transactions on Vehicular Technology* 52(1): 112–116.

Weiss, T., Kaempchen, N. & Dietmayer, K. (2005). Precise ego-Localization in Urban Areas using Laserscanner and High Accuracy Feature Maps, *Proceedings of IEEE Intelligent Vehicles Symposium* pp. 284–289.

Wylie, M. & Holtzman, J. (1996). The Non-Line of Sight Problem in Mobile Location Estimation, *5th IEEE International Conference on Universal Personal Communications* 2: 827–831.

Zhang, G., Krishnan, S., Chin, F. & Ko, C. C. (2008). UWB Multicell Indoor Localization Experiment System with Adaptive TDOA Combination, pp. 1–5.

**13**

## **Indoor Positioning with GNSS-Like Local Signal Transmitters**

Nel Samama

*Institut Telecom / Telecom SudParis, France*

## **1. Introduction**

After more than ten years of research into indoor positioning and localisation techniques, whose aim has been to provide real continuity of service, as with GNSS outdoors, one has to conclude that no solution has yet been found.

## **1.1 A very brief history**

The real story started a little more than ten years ago, in the context of the Galileo project, with the very interesting idea of the so-called "*local elements*". The question was how to do better than the future competitor, GPS, in designing a real positioning service for the twenty-first century: technology transparency for the end user, simple and intuitive operation, good performance and, of course, continuity of positioning in all possible environments that the modern citizen will face with his/her mobile phone.

One technology followed another: Ultra Wide band (UWB) was the first candidate, at the end of the 20th century. But, facing the problem of considering the proposed approaches as a real "*indoor GPS*", Assisted-GPS (A-GPS, shortly followed by the Assisted-GNSS) was the next one, typically between 2003 and 2007. It was at that time that the positioning community seemed to realise that the problem was really hard and that a huge research effort would be necessary. For instance, this was the time that ubiquitous positioning was no longer described as imminent and works being carried out in many directions now had a chance to be heard. A few industrial partners, often small organisations, proposed various technical solutions, from the well-known WiFi to the use of TV (television) signals for example.

On the other hand, the market of "*Location Based Services*" has developed very slowly, probably due to the complexity and diversity of the environments to be addressed: "one" is still waiting for THE FREE technological solution (as in the case of GPS: this system is one example of the numerous modern "costly free" services) (Kupper 2005). Current techniques proposed to provide this continuity of service are, for commercially available solutions, mainly oriented towards WiFi. Some R&D partners also propose inertial sensors or vision-based approaches.


## **1.3 The main radio based approaches**

In terms of technologies for indoor positioning¹, numerous candidates are almost available, some of them being proposed as commercial products and solutions. A fundamental point to understand is that one is always looking for a positioning system that is globally the continuation of GPS in all environments, i.e. a few meters of accuracy, free for the users and with no specific infrastructure to be deployed by any commercial operator. Hence the various directions of work carried out in recent years: indoor GNSS through Assisted-GNSS, although this is not a solution to the problem (see the first lines of this section); WiFi, because one considers that the required infrastructure will be deployed anyway for telecommunication purposes²; and inertial approaches, which really do not need any specific infrastructure. The accuracy being sought eliminates candidates such as GSM (Global System for Mobile communications) or UMTS (Universal Mobile Telecommunications System), whatever the technique envisaged.

Among a few others, it is possible to list the following global categories:

• Wireless Local Area Networks (WLANs, such as WiFi) or Wireless Personal Area Networks (WPANs, such as UWB or Bluetooth) based: the main idea is to use these telecommunication networks for positioning purposes. The main problems in translating the GNSS time-of-flight measurements lie in the non-synchronised nature of these networks and the complexity of the indoor propagation environment. Thus, the usually implemented technique is based on so-called fingerprinting, described in the next section. An exception to this rule is Ultra Wide Band, which fundamentally works in the time domain and thus could potentially allow us to carry out time measurements. Technological developments are still on-going and initial promises have not yet been met.

• Wireless Mobile Networks (such as GSM or UMTS): the use of mobile networks leads to the same basic difficulty as WLAN or WPAN. Although non-synchronisation is a problem, propagation characteristics are probably the largest difficulty. Performances are not at a sufficient level to allow a real continuity with outdoor GNSS. Nevertheless, some services are available which implement the so-called Cell-Id (identification of the telecom cell the mobile is associated with). This technique allows a mobile terminal to know the area it is in by analysing the base station it is associated with. The accuracy is rather poor, ranging from a few hundreds of meters in densely populated areas to several kilometres.

• Inertial systems have typically three problems: time-related drift of the accuracy, distance-related drift of the accuracy and the cost of the terminal. Recent smart phones have embedded inertial sensors but positioning remains a challenge. Nowadays, techniques are mainly oriented in two directions: integration of the measurements provided by the sensors (accelerometers, gyrometers and magnetometers) or modelling

¹ Note that indoor positioning is seen as the ultimate difficulty in order to cope with ubiquity, since it seems to include all the most difficult phenomena. This is of course not the only environment where GNSS are not very efficient: so-called urban canyons are also important to deal with. Nevertheless, the topic of this chapter is clearly limited to indoor techniques.

² This assertion is not 100% right with current proposed solutions, since it is almost always necessary to distribute additional access points to existing networks in order to create the required redundancy.

## **1.2 Applications and services**

The potential applications and services likely to use such ubiquitous positioning systems are numerous. Of course, the first kind is clearly related to guidance and navigation, as currently for outdoors and GNSS related services, which is the natural extension of the most popular applications. But now that the citizen is considered, through his/her mobile phone, the new services are not only individual (same as the car navigation system, designed for a single user), but also for the community with, for example, the "group" approaches developed by so-called social networks. There is probably a historical parallel that can be drawn between the introduction of the portable clock, about two hundred and fifty years ago, and the development of the navigation capabilities: from individual to collective and from collective to individual. Maybe the advent of these ubiquitous positioning devices will lead to social transformations similar to those induced by the portable clock … but this is another story. Note also that for these collective approaches, telecommunications systems are required (and in that way, this is now probably the "right time"): this is evidence that the two domains, telecommunication and positioning, are so closely linked. Another very important point to consider, when addressing the mobile phone of a user, is that there are then no constraints on the displacements of the citizen (as was the case for a car for instance) and that current positioning devices, namely mainly GNSS ones, are placed in far more difficult environments and uses (this latter point is the most important for the discussion): thus, new techniques, new devices and new services must be imagined and designed.

It is also possible to cite the classical asset management and various surveillance applications, but which must now work in many different environmental conditions. Once again, the individual and collective approaches are one of the important new features. Multimodal transportation, a desire not yet realised, of a world that would like to be able to reduce its energy consumption, clearly needs the ability to position in real-time all the actors and the various components: pedestrians are indoors more than seventy percent of a typical day and are in constant mobility (and in addition have a potential problem of energy), when vehicles will have to be precisely monitored in order to manage not only their locations, but also their energy, their availability, their reservation, to check the payments, etc. Self-service car locations or co-driving applications fit naturally in this same category.

In a totally different domain, certification and security applications can be envisaged on a geographical basis but ubiquity must be reached (current performance of GNSS are not enough). Following the privacy issues, the conditional liberty of prisoners could be largely extended: currently, due to the limitations of positioning systems (coverage indoors), the prisoners are not allowed to take the underground for example (at least in France). The large scale deployment of ubiquitous systems could allow substantial improvements of the capabilities.

The next generation of applications could be in the domain of social networks. The developments of these networks have been huge and the permanent exchanges between people and connected groups are enhanced when geographical data are associated. Note that our imagination could easily apply this approach to objects, of course.

## **1.3 The main radio based approaches**

300 Global Navigation Satellite Systems – Signal, Theory and Applications

The potential applications and services likely to use such ubiquitous positioning systems are numerous. Of course, the first kind is clearly related to guidance and navigation, as currently for outdoors and GNSS related services, which is the natural extension of the most popular applications. But now that the citizen is considered, through his/her mobile phone, the new services are not only individual (same as the car navigation system, designed for a single user), but also for the community with, for example, the "group" approaches developed by so-called social networks. There is probably a historical parallel that can be drawn between the introduction of the portable clock, about two hundred and fifty years ago, and the development of the navigation capabilities: from individual to collective and from collective to individual. Maybe the advent of these ubiquitous positioning devices will lead to social transformations similar to those induced by the portable clock … but this is another story. Note also that for these collective approaches, telecommunications systems are required (and in that way, this is now probably the "right time"): this is evidence that the two domains, telecommunication and positioning, are so closely linked. Another very important point to consider, when addressing the mobile phone of a user, is that there are then no constraints on the displacements of the citizen (as was the case for a car for instance) and that current positioning devices, namely mainly GNSS ones, are placed in far more difficult environments and uses (this latter point is the most important for the discussion): thus, new techniques, new devices and new services

It is also possible to cite the classical asset management and various surveillance applications, but which must now work in many different environmental conditions. Once again, the individual and collective approaches are one of the important new features. Multimodal transportation, a desire not yet realised, of a world that would like to be able to reduce its energy consumption, clearly needs the ability to position in real-time all the actors and the various components: pedestrians are indoors more than seventy percent of a typical day and are in constant mobility (and in addition have a potential problem of energy), when vehicles will have to be precisely monitored in order to manage not only their locations, but also their energy, their availability, their reservation, to check the payments, etc. Self-service car locations or co-driving applications fit naturally in this same

In a totally different domain, certification and security applications can be envisaged on a geographical basis but ubiquity must be reached (current performance of GNSS are not enough). Following the privacy issues, the conditional liberty of prisoners could be largely extended: currently, due to the limitations of positioning systems (coverage indoors), the prisoners are not allowed to take the underground for example (at least in France). The large scale deployment of ubiquitous systems could allow substantial improvements of the

The next generation of applications could be in the domain of social networks. The developments of these networks have been huge and the permanent exchanges between people and connected groups are enhanced when geographical data are associated. Note

that our imagination could easily apply this approach to objects, of course.

**1.2 Applications and services** 

must be imagined and designed.

category.

capabilities.

In terms of technologies for indoor positioning1, numerous candidates are almost available, some of them being proposed as commercial products and solutions. A fundamental point to understand is that one is always looking for a positioning system that is globally the continuation of GPS in all environments, i.e. a few meters of accuracy, free for the users and with no specific infrastructure to be deployed by any commercial operator. Hence the various directions of works carried out in recent years: indoor GNSS through Assisted-GNSS, although this is not a solution to the problem (see the first lines of this sections), WiFi because one considers that the required infrastructure will be deployed anyway for telecommunication purposes2 and inertial approaches that really don't need any specific infrastructure. The accuracy being sought eliminates candidates such as the GSM (Global System for Mobile) or UMTS (Universal Mobile Telecommunications System), whatever the technique envisaged.

Among a few others, it is possible to list the following global categories:


 1 Note that indoor positioning is seen as the ultimate difficulty in order to cope with ubiquity since this seems to include all the most difficult phenomena. This is of course not the only environment where GNSS are not very efficient: so-called urban canyons are also important to be dealt with. Nevertheless,

the topic of this chapter is clearly limited to indoor techniques. 2 This assertion is not 100% right with current proposed solutions since it is almost always necessary to distribute additional access points to existing networks in order to create the required redundancy.

Indoor Positioning with GNSS-Like Local Signal Transmitters 303

measurements or vision based (camera) scene analysis systems present some real advantages in terms of measurement accuracy (a few millimetres for the former) or orientation determination (very useful for any guidance system, available for the latter). Unfortunately, the foreseen use of positioning devices being mainly dedicated to pedestrians in urban environments, optical obstacles are numerous. These latter techniques are then considered as potential hybridisation3 candidates. Many types of sensors have also been studied for positioning, such as infrared or ultrasound. Once again, although accuracy can reach centimetre values, the environmental constraints are not compatible with the ubiquitous systems being sought. Another category is, of course, inertial systems which could be a valuable alternative to radio systems: time and distance associated position drifts are not yet sufficiently mastered and the given positioning is relative4, which means the need for "something else" in order to provide the user with an absolute location. The object

There are mainly four techniques that are used for radio positioning. In fact they come from the history of mathematics and have been improved over the centuries, thanks to the development of instrumentation (Samama 2008). In chronological order there are *angle* 

*Angle measurement* is the basis of triangulation used by geodesists for measuring the earth. For positioning purposes, the technique is a little bit different and is illustrated in figure 1. The main idea is to measure the absolute direction of a signal received from a transmitter (at the mobile terminal). The reference usually used is the magnetic north which can be obtained from a compass. Thus, with a single measurement, the terminal knows that it is somewhere on the line L1 (see figure 1). Of course, this is not accurate enough, so it is necessary to carry out a second measurement from another transmitter, say T2. This second measurement allows the terminal to know that it is somewhere on line L2. The combination of both measurements gives the location of the terminal, at the intersection of lines L1 and L2. This kind of approach, combining multiple measurements in order to find the location geometrically, is often applied. Two measurements give a location in two dimensions.

This technique can be applied in 3D but requires a 3D angle measurement, hence two angles (azimuth and elevation): this is possible with 2D receiving antennas. Two 3D angle measurements, hence four angles, lead to a location in 3D. Note that when, in 2D, three measurements are available, there is the need for an additional method in order to determine the location considered, as can be seen in figure 1 (right). In the present case, it is often chosen to consider the centre of the inner circle of the triangle that is formed by the

This technique can be quite efficient since angle of arrival measurements are usually based on phase differences which can be measured with rather high precision. Unfortunately, in

3 Hybridization is the approach that consists in coupling two or more techniques in order to provide the device with improved performance, either in terms of accuracy or in coverage or availability. 4 Relative positioning refers here to a position that is given with reference to the previous one. Thus, there is the need to know the first position in order to be able to give an absolute positioning (given in a

of this section is to focus on radio based approaches.

*measurements*, *fingerprinting, time of flight measurements* and *cell-id*.

**2.1 Measurement techniques** 

intersections of the three lines.

known reference frame).

the walking of an individual based on the detection of some very specific instances, such as the precise time the foot touches the ground. Then, the method consists in counting the number of footsteps. These approaches are not yet mature for mass-market applications but research is still being carried out.

• GNSS based systems. In addition to Assisted-GNSS, which is once again not a solution for ubiquitous positioning, the following sections will deal specifically with this problem. Various approaches have been proposed with rather good accuracy results: the remaining problem is clearly the need for an additional infrastructure that needs to be deployed locally. Operators are not ready for this and although very good results are reported, very few systems are really available.

The last category is related to sensor networks. Many systems have been proposed in the last fifteen years, but the lack of standardisation and the high number of sensors that need to be deployed are currently a real drawback.
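The footstep-counting approach mentioned above for inertial sensors (detecting the precise instants when the foot touches the ground, then counting them) can be sketched as a simple threshold detector on the accelerometer magnitude. This is only an illustrative sketch under assumptions not taken from the chapter: the function name `count_steps`, the 11 m/s² threshold and the debouncing window are invented for the example.

```python
import math

def count_steps(accel, threshold=11.0, min_gap=20):
    """Count footsteps from 3-axis accelerometer samples.

    accel: list of (ax, ay, az) tuples in m/s^2. A step is declared when
    the acceleration magnitude exceeds `threshold` (a heel strike spikes
    well above the ~9.8 m/s^2 gravity baseline), with at least `min_gap`
    samples between two detections (debouncing).
    """
    steps = 0
    last_step = -min_gap
    for i, (ax, ay, az) in enumerate(accel):
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag > threshold and i - last_step >= min_gap:
            steps += 1
            last_step = i
    return steps

# Example: 200 samples at rest (~1 g) with three simulated heel strikes.
walk = [(0.0, 0.0, 9.8)] * 200
for k in (30, 80, 130):
    walk[k] = (0.0, 0.0, 15.0)
print(count_steps(walk))  # -> 3
```

Real pedestrian dead-reckoning uses band-pass filtering and adaptive thresholds rather than a fixed level, but the counting principle is the one described above.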

## **1.4 The perceived and real needs**

If we take a little break to try to analyze the needs (i.e. requirements) for the continuity of service definition, it will quickly become apparent that it greatly depends on the targeted applications and services. But if you ask anybody, the answer will very often be given in terms of positioning accuracy, availability and latency: it should be accurate to better than one meter, available everywhere and instantaneously in real-time. Curiously, the fact that it should be available in three dimensions will almost never be mentioned. Although it really depends on the application (the requirement is not the same for the guidance of a robot in a nuclear reactor and for finding the nearest restaurant), one should be able to distinguish between the positioning "engine" and the resulting services. For instance, GPS does not provide a one meter positioning everywhere, even outdoors, but car navigation systems are very accurate for the delivered service, thanks to map matching and Kalman filtering. The same should apply to ubiquitous positioning. Nevertheless, a good rule of thumb could be to consider that the major difference, in terms of environments, between outdoors and indoors is that indoors is typically a 3D environment, thus requires full 3D positioning capabilities. In that sense, the accuracy should probably be enough to allow the floor level to be determined, i.e. an accuracy of typically half the height of a given floor. In most buildings this means roughly one meter.

Following this general presentation of the indoor field, this chapter is going to focus on radio positioning solutions, and more specifically on GNSS-based radio approaches. The second paragraph is dedicated to an introduction to radio positioning. It is followed by three paragraphs dedicated to GNSS-based architectures: pseudolites, repeaters and repealites. The chapter ends with a synthesis and some hints for the possible future, as seen by the author.

## **2. The concepts of indoor positioning using radio transmitters**

Not all the techniques proposed have, of course, been based on radio techniques, but they are the most important ones for two main reasons: their level of development and maturity on the one hand and their ability to "cross" or to "get around" obstacles such as walls, furniture or people on the other hand. Optical based techniques, like laser based distance measurements or vision based (camera) scene analysis systems present some real advantages in terms of measurement accuracy (a few millimetres for the former) or orientation determination (very useful for any guidance system, available for the latter). Unfortunately, the foreseen use of positioning devices being mainly dedicated to pedestrians in urban environments, optical obstacles are numerous. These latter techniques are then considered as potential hybridisation3 candidates. Many types of sensors have also been studied for positioning, such as infrared or ultrasound. Once again, although accuracy can reach centimetre values, the environmental constraints are not compatible with the ubiquitous systems being sought. Another category is, of course, inertial systems which could be a valuable alternative to radio systems: time and distance associated position drifts are not yet sufficiently mastered and the given positioning is relative4, which means the need for "something else" in order to provide the user with an absolute location. The object of this section is to focus on radio based approaches.

#### **2.1 Measurement techniques**


There are mainly four techniques that are used for radio positioning. In fact they come from the history of mathematics and have been improved over the centuries, thanks to the development of instrumentation (Samama 2008). In chronological order these are *angle measurements*, *fingerprinting*, *time of flight measurements* and *cell-id*.

*Angle measurement* is the basis of triangulation used by geodesists for measuring the earth. For positioning purposes, the technique is a little bit different and is illustrated in figure 1. The main idea is to measure the absolute direction of a signal received from a transmitter (at the mobile terminal). The reference usually used is magnetic north, which can be obtained from a compass. Thus, with a single measurement, the terminal knows that it is somewhere on the line L1 (see figure 1). Of course, this is not accurate enough, so it is necessary to carry out a second measurement from another transmitter, say T2. This second measurement allows the terminal to know that it is somewhere on line L2. The combination of both measurements gives the location of the terminal, at the intersection of lines L1 and L2. This kind of approach, combining multiple measurements in order to find the location geometrically, is often applied. Two measurements give a location in two dimensions.

This technique can be applied in 3D but requires a 3D angle measurement, hence two angles (azimuth and elevation): this is possible with 2D receiving antennas. Two 3D angle measurements, hence four angles, lead to a location in 3D. Note that when, in 2D, three measurements are available, there is the need for an additional method in order to determine the location considered, as can be seen in figure 1 (right). In the present case, it is often chosen to consider the centre of the inner circle of the triangle that is formed by the intersections of the three lines.

This technique can be quite efficient since angle of arrival measurements are usually based on phase differences which can be measured with rather high precision. Unfortunately, in indoor environments, the difficulty comes from the fact that the propagation is characterised by a large number of reflected paths (from walls and all reflective objects), called multipath. These multipath components are sometimes even more powerful than the direct signal, which can in turn also be absent. Thus, even if the angle measurement is accurate, the environmental conditions are likely to mislead the positioning algorithms.

3 Hybridization is the approach that consists in coupling two or more techniques in order to provide the device with improved performance, either in terms of accuracy or in coverage or availability.
4 Relative positioning refers here to a position that is given with reference to the previous one. Thus, there is the need to know the first position in order to be able to give an absolute positioning (given in a known reference frame).

Fig. 1. Angle measurements
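The two-bearing fix of figure 1 reduces to a small linear system: each bearing constrains the terminal to a line through the corresponding transmitter, and the fix is the intersection of lines L1 and L2. The sketch below is illustrative only; the helper name `triangulate` is invented, bearings are assumed in radians clockwise from north, and coordinates are (east, north).

```python
import math

def triangulate(t1, b1, t2, b2):
    """2-D fix from two bearings measured at the unknown terminal toward
    transmitters t1 and t2 (positions as (east, north) tuples).

    Writing u(b) = (sin b, cos b) for the unit vector along bearing b,
    the terminal P satisfies P + r_i * u(b_i) = t_i, so the ranges solve
    r1*u1 - r2*u2 = t1 - t2 (a 2x2 system, here via Cramer's rule).
    """
    u1 = (math.sin(b1), math.cos(b1))
    u2 = (math.sin(b2), math.cos(b2))
    det = -u1[0] * u2[1] + u1[1] * u2[0]
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; no unique intersection")
    dx, dy = t1[0] - t2[0], t1[1] - t2[1]
    r1 = (-dx * u2[1] + dy * u2[0]) / det
    return (t1[0] - r1 * u1[0], t1[1] - r1 * u1[1])

# Terminal at the origin: T1 due north (bearing 0), T2 due east (bearing 90 deg).
fix = triangulate((0.0, 10.0), 0.0, (10.0, 0.0), math.pi / 2)
print(fix)  # approximately (0.0, 0.0)
```

With a third bearing the three lines generally form a small triangle, and, as noted above, an extra rule (e.g. the incentre of that triangle) is needed to pick one point.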

The second technique is called *fingerprinting*. The first idea of this method was reported around the sixteenth century when a solution to the longitude problem was being sought. Some scientists had the idea to make a complete geographical cartography of the magnetic field of the earth: if there is a unique link between the location on earth and the value of the magnetic field, then one can consider that the magnetic field value is a perfect indicator for finding a location. Unfortunately, the magnetic field is not a good candidate for such a purpose. This idea came back to engineers with the development of wireless networks: the complexity of the indoor environment for propagation led to the revival of the fingerprinting approach: the received power of the radio signal is now the physical value that is measured. The indoor environment is then cut into squares and the fingerprints (the received power) measured at each intersection of the grid (see figure 2): the "map" associated with transmitter #1 (in fact a data base) is created. The problem is that the same fingerprint value can occur at many different locations. The method of multiple measurements is once again implemented: in this case, a second (and more, if required) transmitter is added and a second map is filled in. The location is no longer characterised by a single value but by a pair of values. In the case of n transmitters, all calibrated locations are characterised by a vector of length n.

The way in which positioning is then achieved in real-time is quite simple: the mobile terminal carries out received power measurements from all the "radio visible" transmitters in its environment and fills in its own vector. The location is obtained by finding the nearest neighbour in the complete set of maps (data bases) available. The need for this "calibration" phase is clearly a drawback of the method because it is time consuming and, moreover, because it is not a stable operating mode, since the power received is bound to be modified by any movement of any obstacle (including people for instance). Thus, techniques have been proposed in order to manage in real-time (or for longer periods of time) the variation of the maps in comparison with the reference maps. Note also that more measurements should lead to a more accurate positioning.

Fig. 2. The fingerprinting approach
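The real-time step described above, matching the terminal's measured power vector against the calibrated maps, is a plain nearest-neighbour search. A minimal sketch follows; the function name `nearest_fingerprint` and the RSS values (in dBm) are invented for the example.

```python
def nearest_fingerprint(radio_map, observed):
    """Return the calibrated grid location whose stored fingerprint is
    closest to the observed one.

    radio_map: dict mapping a grid location to its calibrated vector of
    received powers (length n, one entry per transmitter).
    observed: the length-n vector measured by the terminal.
    Closest means minimum (squared) Euclidean distance.
    """
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(radio_map, key=lambda loc: dist2(radio_map[loc], observed))

# Three calibrated grid points, fingerprints from three transmitters (dBm).
radio_map = {
    (0, 0): [-40.0, -70.0, -60.0],
    (0, 1): [-55.0, -65.0, -50.0],
    (1, 0): [-48.0, -80.0, -66.0],
}
print(nearest_fingerprint(radio_map, [-43.0, -68.0, -62.0]))  # -> (0, 0)
```

Practical systems refine this with k-nearest-neighbour averaging or probabilistic matching, precisely because the maps drift when obstacles (including people) move, as noted above.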

304 Global Navigation Satellite Systems – Signal, Theory and Applications

indoor environments, the difficulty comes from the fact that the propagation is characterised by a large number of reflected path (from walls and all reflective objects), called multipath. Those multipath are even sometimes more powerful than the direct signal, which can in turn also be absent. Thus, even if the angle measurement is accurate, the environmental

Transmitter #1

Line L1

The second technique is called *fingerprinting*. The first idea of this method was reported around the sixteenth century when a solution to the longitude problem was being sought. Some scientists had the idea to make a complete geographical cartography of the magnetic field of the earth: if there is a unique link between the location on earth and the value of the magnetic field, then one can consider that the magnetic field value is a perfect indicator for finding a location. Unfortunately, the magnetic field is not a good candidate for such a purpose. This idea came back to engineers with the development of wireless networks: the complexity of the indoor environment for propagation led to the revival of the fingerprinting approach: the received power of the radio signal is now the physical value that is measured. The indoor environment is then cut into squares and the fingerprints (the received power) measured at each intersection of the grid (see figure 2): the "map" associated with transmitter #1 (a data base indeed) is created. The problem is now that many different fingerprints are identical for different locations. The method of multiple measurements is once again implemented: in this case, a second (and more, if required) transmitter is added and a second map is filled in. The location is no longer characterised by a single value but now by a couple of values. In the case of n transmitters, then all calibrated

The way in which positioning is then achieved in real-time is quite simple: the mobile terminal carries out received power measurements from all the "radio visible" transmitters in its environment and fills in its own vector. The location is obtained by finding the nearest neighbour in the complete set of maps (data bases) available. The need for this "calibration" phase is clearly a drawback of the method because it is time consuming and, moreover, because it is not a stable operating mode, since the power received is bound to be modified by any movement of any obstacle (including people for instance). Thus, techniques have been proposed in order to manage in real-time (or for longer periods of time) the variation of the maps in comparison with the reference maps. Note also that more measurements

Fig. 1. Angle measurements

Fig. 2. The fingerprinting grid (transmitters #1 to #3, grid lines L1 to L3, north reference)

*Time of flight measurements* are quite simple in principle but require acceptable propagation models (Kaplan 2006), (Parkinson 1996). The basic idea, shown in figure 3, is to measure the time required by a signal to propagate from a transmitter to a receiver. Once obtained, this time is usually converted into a distance; in the case of radio signals, this simply consists in multiplying the time by the speed of light, typically 3×10<sup>8</sup> m/s. Of course, this model is too simple in real cases, so the modelling of the propagation is an essential step. Knowing the distance between the transmitter and the receiver means that the receiver is somewhere on the surface of a sphere whose centre is the transmitter and whose radius is the above-mentioned distance. This is clearly not enough for positioning, so additional measurements are used to reduce, geometrically, the uncertainty. A second measurement from a second transmitter (see figure 3) reduces the set of possible locations to a circle, a third one reduces it to two points, and a fourth measurement finally leads to a unique location<sup>5</sup>. In the case of more than four measurements, techniques such as least squares are usually applied in order to find the optimal location from an overdetermined set of equations.

Fig. 3. Time of flight positioning

<sup>5</sup> Note that here we are dealing with the real world and we know that the location exists. Thus, even if four spheres do not have an intersection in mathematics, we are sure that in the present case they do have one, the location of the receiver. The positioning algorithms must implement mechanisms that are able to obtain such a location even when considering unavoidable measurement errors.
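The least squares resolution mentioned above can be sketched with a small Gauss-Newton iteration on the four unknowns (three coordinates plus the receiver clock bias). All positions and the 100 ns clock offset below are invented illustration values, not data from the chapter.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

# Hypothetical indoor layout (metres): four transmitters and a receiver whose
# clock is offset by 100 ns.
tx = np.array([[0.0, 0.0, 10.0],
               [50.0, 0.0, 12.0],
               [0.0, 40.0, 11.0],
               [50.0, 40.0, 3.0]])
true_pos = np.array([20.0, 15.0, 1.5])
bias = 100e-9 * C                       # clock bias expressed as a distance
pseudoranges = np.linalg.norm(tx - true_pos, axis=1) + bias

def solve(pr, tx, x0):
    """Gauss-Newton least squares on the unknowns (x, y, z, clock bias)."""
    x = np.array(x0, dtype=float)
    for _ in range(20):
        d = np.linalg.norm(x[:3] - tx, axis=1)          # geometric distances
        residual = pr - (d + x[3])                      # measured minus predicted
        J = np.hstack([(x[:3] - tx) / d[:, None],       # d(range)/d(position)
                       np.ones((len(tx), 1))])          # d(range)/d(bias)
        dx = np.linalg.lstsq(J, residual, rcond=None)[0]
        x += dx
        if np.linalg.norm(dx) < 1e-8:
            break
    return x

est = solve(pseudoranges, tx, [25.0, 25.0, 0.0, 0.0])
print(np.round(est[:3], 3))   # position estimate, metres
```

With more than four pseudoranges the same loop applies unchanged: `lstsq` then finds the optimal correction in the least squares sense.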

Indoor Positioning with GNSS-Like Local Signal Transmitters 307


There is a really difficult problem in these time of flight measurements: the synchronisation between transmitters and receivers. There are indeed two different synchronisation problems: the first concerns the synchronisation between transmitters (since multiple measurements are carried out from different transmitters) and the second concerns the receiver with respect to the various transmitters. The two problems are not equivalent: while it is possible (although not necessarily simple) to "wire" the various transmitters together, it is often not possible to have a link from the transmitters to the receiver other than the radio link. Radio synchronisation is possible but requires a bandwidth proportional to the accuracy needed; in practice, synchronisation to the nanosecond<sup>6</sup> is not achieved through radio links. In the case of GNSS, this synchronisation is achieved by adding an additional measurement, from an additional satellite, in order to solve for this new unknown variable. In earlier systems, such as Decca<sup>7</sup>, the synchronisation between transmitters and the receiver was not carried out: instead, differences of time measurements from two transmitters were used. In such a case, the synchronisation unknown disappears (because of the difference) and the positions of the receiver, characterised by a given difference of flight times, are located on a hyperboloid whose foci are the transmitters. Once again, multiple difference measurements are needed for positioning.
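The cancellation of the receiver clock unknown by differencing, as used in Decca-like systems, can be checked in a few lines. The 2-D transmitter positions and the clock offset are invented for the illustration.

```python
import math

# Invented 2-D example: two transmitters and a receiver whose clock is offset
# (the offset is expressed directly in metres).
t1, t2 = (0.0, 0.0), (100.0, 0.0)
rx = (30.0, 40.0)
clock_bias_m = 12.5

def pseudorange(tx):
    # measured range = true range + receiver clock offset
    return math.dist(tx, rx) + clock_bias_m

p1, p2 = pseudorange(t1), pseudorange(t2)

# The individual pseudoranges are corrupted by the offset, but their
# difference is not: it equals the true range difference, which places the
# receiver on a hyperbola (a hyperboloid in 3-D) whose foci are the
# two transmitters.
true_difference = math.dist(t1, rx) - math.dist(t2, rx)
print(abs((p1 - p2) - true_difference) < 1e-9)   # -> True
```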

Note that the complexity of synchronisation of radio systems comes from the speed of light. Ultrasound based approaches do not have the same problem since the speed of the signal is reduced by a factor of nearly one million. In such a case, synchronisation to the millisecond is comparable to the nanosecond requirement of the radio system.

For the inter-transmitter synchronisation, two generic approaches have been implemented. The first one uses cables in order to create a real physical link between transmitters: a simple calibration phase, carried out once, then determines the exact synchronisation. The second one, implemented in GPS for instance, is to use clocks with a very slow drift<sup>8</sup> and to carry out a multitude of measurements from known locations in order to invert the positioning problem and determine the non-synchronisation variables (one for each transmitter). Of course, this approach is expensive and cannot be followed when designing low cost indoor positioning solutions.

The *Cell-id* approach is the simplest one and does not need any modelling (see figure 4). As a matter of fact, a coverage area is associated with the transmitter, whose shape is usually considered to be a hexagon (of course the actual shape depends highly on the radio environment). When the receiver is "simply" able to connect to the transmitter, one considers that it is within the coverage area. This is a simple way to provide a location. This is not very accurate for high power transmitters that have a wide radio range, but can be very good for very low range devices. Of course, in this latter case, the number of transmitters should be high if one wants a wide coverage. As usual, compromises have to be made.
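A cell-id fix reduces to a table lookup. The sketch below, with invented beacon identifiers, coverage centres and power levels, assigns the receiver the coverage centre of the strongest beacon it can hear.

```python
# Hypothetical cell-id positioning: each short-range transmitter is mapped to
# the centre of its (assumed) coverage area.
cells = {
    "beacon-A": (2.0, 3.0),   # known coverage centres, metres
    "beacon-B": (8.0, 3.0),
    "beacon-C": (5.0, 9.0),
}

def cell_id_fix(heard):
    """heard: dict beacon id -> received power (dBm). Returns the coverage
    centre of the strongest heard beacon, or None if nothing is heard."""
    if not heard:
        return None
    strongest = max(heard, key=heard.get)
    return cells.get(strongest)

print(cell_id_fix({"beacon-A": -70.0, "beacon-C": -55.0}))  # -> (5.0, 9.0)
```

The accuracy is simply the size of the coverage area, which is why the approach works best with very short-range devices.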

<sup>6</sup> One nanosecond at the speed of light is equivalent to 30cm. When a typical positioning accuracy of one meter is wanted, such a synchronization precision is needed.

<sup>7</sup> Decca was a terrestrial positioning system. Propagation models were developed and it appeared that a better performance was obtained over sea rather than over land.

<sup>8</sup> Please note that using atomic clocks is not enough for synchronization purposes. These clocks are used for the low rate of their drift, hence the larger time interval required between synchronization updates.

Fig. 4. The cell-id approach


## **2.2 Main differences with outdoor techniques**

Let us come back to the specific case of indoors: some major differences have to be kept in mind in comparison with outdoors. Let us also discuss the case of GNSS, since this chapter is dedicated to indoor GNSS-based solutions. First of all, the various techniques are based on time of flight measurements, the same as outdoors, but consider the following parameters for discussion.

• *Propagation environments*: indoors is a very difficult environment and acceptable models are not available. This means that signal processing must solve problems that are either not present, or less difficult to solve, outdoors, the most challenging being multipath. Another problem is related to the possibility of Non Line of Sight (NLOS) paths from transmitters to receiver, which happens more often than outdoors. The same kind of techniques could be envisaged, but outdoors they are usually based on a certain redundancy of available signals, which is not the usual case indoors.

• *Dilution Of Precision (DOP)*: the geometrical distribution<sup>9</sup> of the transmitters is a very important point to consider when dealing with positioning systems that use distances in order to carry out the calculation. Outdoors, for a location on earth with GNSS for instance, there is a disequilibrium between the horizontal DOP (HDOP), calculated in the horizontal plane, and the vertical DOP (VDOP), calculated in the vertical plane. This discrepancy is due to the fact that while the distribution can be really uniform horizontally (all the satellites being uniformly distributed around the receiver), leading to a good HDOP, the distribution cannot be so good vertically, since only satellites above the radio horizon (which is quite similar to the geometrical horizon in the present case) are visible. Thus, the HDOP is usually better than the VDOP. Indoors, things are quite different since one can decide the location of the transmitters: it is very important to locate at least one transmitter below the receiver in order to reduce the VDOP (Vervisch-Picois and Samama 2006). Evaluations have shown a dramatic improvement in the VDOP values, leading to a much better estimation of the user location accuracy.

• *Distances*: indoors, the distances are much smaller than outdoors and new problems arise, such as the so-called near-far effect. Depending on the codes that are used (as in the case of GPS), there is a limit to the detection of two signals with too high a power difference: the weaker one becomes undetectable because it is impossible to extract from the noise. This situation is almost impossible outdoors, since the transmitters (the satellites in the case of GPS) are very far from the receiver and the difference in distances to two satellites leads to a received power difference of a few decibels only. Indoors, this difference can reach a few tens of decibels: specific signal processing techniques are then required.

• *Initial point in the calculations*: classical algorithms for the calculation of location are based on iterative techniques that require an initial estimation of the user position. In the case of GNSS, only three measurements are necessary for geometrical purposes<sup>10</sup>, since the intersection of the surfaces of three spheres gives two points, one of which is above the plane that includes the three centres (the satellites indeed) of the three spheres, the other one being below. When the receiver is on the surface of the earth, only the location that is below the plane is possible: thus, only three satellites are required from a geometrical point of view. Consequently, the initial location estimation is usually taken somewhere on the earth's surface, and this is sufficient. Indoors, the situation is a little different, since the two resulting locations (above and below the plane) are rather close to each other and the choice of the initial estimation is fundamental to the convergence of the algorithms. Thus, one either chooses to use five transmitters (instead of four satellites) or keeps four transmitters and chooses an initial location of the user that is inside the building (which is not such an easy task).

• *Immobility of the transmitters*: in the sky, GNSS satellites are non-stationary. This feature causes some trouble in that their locations need to be calculated each time one wants to carry out positioning, but it offers some interesting features that are no longer available indoors, where transmitters are stationary. The Doppler shifts are only due to the displacements of the mobile terminal, and in the case of multipath it is not possible to wait in the same place for a while in order to average the results, considering that only the multipath will be varying: if nothing moves around, the propagation conditions have no reason to change. Thus, static positioning is much more difficult indoors.


<sup>9</sup> This DOP allows the receiver to give a real-time estimation of the accuracy provided to the user (the User Estimated Range Error in GPS, for example): it is of utmost importance for any application or service.
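The influence of the transmitter geometry on the vertical DOP can be checked numerically. The sketch below builds the classical geometry matrix (unit line-of-sight vectors plus a clock column) and compares an all-ceiling layout with the same layout where one transmitter is moved below the receiver; all room coordinates are invented for the illustration.

```python
import numpy as np

def dops(rx, txs):
    """HDOP and VDOP from the unit line-of-sight vectors (geometry matrix)."""
    los = np.asarray(txs, dtype=float) - np.asarray(rx, dtype=float)
    unit = los / np.linalg.norm(los, axis=1, keepdims=True)
    G = np.hstack([unit, np.ones((len(txs), 1))])   # last column: clock term
    Q = np.linalg.inv(G.T @ G)
    return np.sqrt(Q[0, 0] + Q[1, 1]), np.sqrt(Q[2, 2])

rx = (6.0, 4.0, 1.5)
# Invented layouts: four ceiling transmitters versus the same set with the
# last transmitter moved below the receiver.
ceiling = [(0, 0, 3.0), (20, 0, 3.0), (0, 20, 3.0), (20, 20, 3.0)]
mixed = [(0, 0, 3.0), (20, 0, 3.0), (0, 20, 3.0), (20, 20, 0.2)]

hdop_c, vdop_c = dops(rx, ceiling)
hdop_m, vdop_m = dops(rx, mixed)
print(vdop_m < vdop_c)   # -> True: the low transmitter improves the VDOP
```

With all transmitters above the receiver, the vertical components of the line-of-sight vectors are nearly proportional to the clock column, which is exactly what inflates the VDOP.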

<sup>10</sup> A fourth measurement is required for "synchronization" purposes as long as the receiver is on the earth's surface.

#### **2.3 Main existing approaches**

Many positioning systems have been proposed with radio transmitters. All the above mentioned techniques have been implemented and this paragraph proposes a sort of classification depending on the technique. Table 1 provides a non-exhaustive summary of them. A few references are provided concerning UWB (Fontana 2004), Bluetooth (Takada et al. 2003), WiFi (Wang et al. 2004) or TV (Martone and Metzler 2005) signals.

| System | Angle measurements | Fingerprinting | Time of flight measurements | Cell-Id |
|---|---|---|---|---|
| GNSS |  |  | ✓ |  |
| WiFi |  | ✓ | ✓<sup>i</sup> |  |
| UWB |  |  | ✓ |  |
| TV |  |  | ✓<sup>vi</sup> |  |
| GSM/UMTS | ✓<sup>ii</sup> |  | ✓<sup>iii</sup> | ✓<sup>iv</sup> |
| RFID |  |  |  | ✓<sup>v</sup> |
| Bluetooth |  |  |  | ✓ |

Table 1. Summary of a few radio based positioning systems

i. Since wireless local area networks are not synchronised, distance (and not time) measurements are used. The distance is estimated through a power level measurement and a propagation model (typically a modified Friis formula, where the exponent applied to the distance is between 2.5 and 4 depending on the environment). This is not really accurate and is too dependent on the fluctuations of the environment.

ii. Angle measurements are already carried out at base stations in order to allow the use of the same frequency channel for transmissions in different directions. Thus, those measurements are available, but the limitations discussed in previous sections still apply.

iii. These networks are not synchronised and mainly differences of time of flight have been proposed (but direct times of flight have also been proposed). Unfortunately, the propagation models are not well suited and the best reported performance is around one hundred metres outdoors, and can rise to a few hundred metres indoors.

iv. Cell-Id is used by networks in order to route communications: once again, this is already implemented in mobile networks because it is needed. The accuracy is typically a few hundred metres, but it is completely free and available. Many telecom operators propose services based on GSM/UMTS cell-id positioning.

v. Many definitions of RFID (Radio Frequency IDentification) have been proposed: let us consider it a short range technology that allows two radio transmitters to exchange data, an identification for instance. A simple way to carry out positioning (but not the only one) is to consider the cell-id model. The coverage area (or range) of a given transmitter is approximately known: when a second transmitter can connect to it, then it is located within the coverage area. In the case of a very short range (say one metre or less), the accuracy of the positioning is thus better than one metre. The consequence is that the positioning is no longer a continuous process in space and time (as for GNSS, for example), but becomes typically discrete.

vi. Television signals are available almost everywhere in modern countries: why not use them in order to position a receiver? This idea was developed a few years ago and an accuracy of around ten metres has been reported through time of flight measurements, even indoors.








## **3. The first GNSS signal approach using pseudolites**

Although this chapter is dedicated to infrastructure based GNSS systems, other solutions have been investigated by the GNSS community. For instance, High Sensitivity GNSS, HS-GNSS, had the objective of providing continuity of service with no additional infrastructure. The simple underlying idea is that the signals are still present indoors, but buried even deeper in the noise than outdoors. Thus, if one is able to design a very highly sensitive receiver, it should be possible to obtain a location indoors. A similar, but not identical, idea led to the design of the so-called Assisted-GNSS (Duffett-Smith and Rowe 2006). The initial goal was also to provide indoor positioning by "aiding" the receiver to find the signals in difficult environments. In such situations, one major problem with stand-alone receivers is the impossibility of decoding the navigation message (it is too long to envisage having good radio conditions for such a duration). Thus, a solution could be to send the navigation message through telecommunication networks that are widely available indoors. Knowing the message, the receiver can use its high sensitivity to acquire the GNSS signals and is then able to calculate a position, since all the parameters needed (from the navigation message) are available. High sensitivity and assisted approaches are thus quite complementary.

Unfortunately, with a higher sensitivity, the receiver also picks up reflected signals in such large numbers that positioning, although possible, is really poor because there is too much interference. Thus, even if real improvements have been demonstrated in environments where the signals were just at the detection limit, these approaches are clearly not the ultimate solution for indoor positioning and continuity of service. One has to move to infrastructure-based techniques.

## **3.1 Technical historical introduction**

In the early 1980s, the first ideas of GPS-like signal transmitters arose from consideration of the obvious limitations of the original system. How can a GPS receiver be used when fewer than three or four satellites are available? What kind of approach could be imagined to position a Mars rover? How can the VDOP of the constellation be improved when a good vertical accuracy is needed? Etc.

One answer could be to increase the number of satellites by a factor of two or three, but the associated cost for the relatively reduced increment in performance was judged to be non-viable: another way had to be found. The idea of implementing GPS-like signal generators that could be locally deployed emerged: the pseudolites were born.

#### **3.2 The concept of pseudo-satellites**

A pseudolite (the word is a contraction of pseudo and satellite) is a generator that transmits GPS signals but which is not a satellite. Such a generator can easily be deployed on earth in places where the number of visible satellites is too low to allow standard positioning (Klein and Parkinson 1986). The first applications were thus naturally oriented towards open cast mines. Indeed, as the mine is dug, the view of the sky is reduced and the number of usable satellites decreases. Adding a pseudolite allows continuity of the positioning service to the mine to be provided.

A similar idea was developed in the context of the so-called Local Area Augmentation Systems (LAAS), where the problem was to provide a good vertical accuracy to landing planes, for example. We know that this vertical accuracy is linked to the VDOP and that locating a satellite below the plane would greatly improve the VDOP. Since this is not possible, the use of a pseudolite once again seems a good idea (Bartone and Van Graas 2000).


Similar to the open cast mine, the modern so-called "urban canyons" are complex environments for GNSS signals (see figure 5). A receiver located between large buildings has difficulty acquiring a sufficient number of satellites. With additional signals from judiciously located pseudolites, a normal situation can be restored, allowing the positioning of the receiver in these kinds of environments.

In the previous three examples, the pseudolite is used in order to "augment" the GPS system, its coverage or its accuracy. But one can push forward the concept towards a completely new system: this was imagined for positioning the Mars rover. A complete set of several pseudolites was deployed on the surface of the planet and the signals used for positioning, the same way it is achieved with GPS signals from space. Based on this idea, it was thought that an indoor positioning system could be designed.

Fig. 5. The urban canyon configuration

## **3.3 The system for indoor positioning**

The basic idea is indeed very simple and is based on the construction of a local terrestrial constellation of GNSS-like signal generators (Kee et al. 2003). They are located at the corners of the building in order to simulate satellites. Figure 6 is a typical distribution although not optimal since the DOP is not very good (please refer to the discussion in previous sections). This is nevertheless a good basis for understanding the concept.

Some major differences apply in comparison with satellites, the most important ones being the immobility of the pseudolites and the shorter distances between the pseudolites and the receiver (leading to an unambiguous code for instance, as will be discussed in the next section).

As discussed in previous sections, one has to take care with the initial location considered in the computations of the receiver location, since the two possible solutions<sup>11</sup> are not so far

<sup>11</sup> Remember that four transmitters are used for geometrical (three) and synchronization (one) purposes. The intersection of the surfaces of three spheres gives two points located symmetrically on either side of a plane that includes the three transmitters (this comes from the form of the equations, which are nonlinear). Thus, in the case of local transmitters, the final location obtained depends on the initial guess: if it is above the final location it will be the point above the plane; if it is below, the final location will be the point below the plane.
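The dependence on the initial guess can be demonstrated with a plain Gauss-Newton solver. In this invented layout the three transmitters lie in the plane z = 3 (the clock bias is assumed already solved, so three ranges determine the three unknowns), and the iteration converges to whichever of the two mirror solutions lies on the same side of the plane as the starting point.

```python
import numpy as np

# Hypothetical layout: three transmitters in the plane z = 3.
tx = np.array([[0.0, 0.0, 3.0], [10.0, 0.0, 3.0], [0.0, 10.0, 3.0]])
true_pos = np.array([4.0, 3.0, 1.5])
ranges = np.linalg.norm(tx - true_pos, axis=1)

def solve(x0):
    """Plain Gauss-Newton on (x, y, z) from the three measured ranges."""
    x = np.array(x0, dtype=float)
    for _ in range(50):
        d = np.linalg.norm(x - tx, axis=1)
        J = (x - tx) / d[:, None]                # Jacobian of the ranges
        dx = np.linalg.solve(J, ranges - d)
        x += dx
        if np.linalg.norm(dx) < 1e-9:
            break
    return x

# The sphere intersection has two mirror solutions, z = 1.5 and z = 4.5;
# which one the iteration reaches depends on the initial guess.
print(np.round(solve([4.0, 3.0, 0.0]), 3))   # starts below the plane -> z = 1.5
print(np.round(solve([4.0, 3.0, 6.0]), 3))   # starts above the plane -> z = 4.5
```

This is exactly why an indoor system either uses an extra transmitter or forces the initial estimate to lie inside the building.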

Indoor Positioning with GNSS-Like Local Signal Transmitters 313

Calculations are then carried out at the master receiver and synchronisation values are sent back, through wires, to the pseudolites. Another way consists in transmitting these synchronisation data through a wireless link, leading this time to latency problems and potential interference (but this is an interesting approach). In addition to this concept, one imagined working the other way round by placing the receiver (which listens to the signals) in the same place as the pseudolites. By considering that one (or several) pseudolites are "pilot(s)", the receiver can synchronise its own pseudolite if it knows the distance(s) that separate(s) it from the pilot(s) pseudolite(s). The difference between received times for two different pseudolites (indeed the associated receivers) allows the synchronisation of the pseudolites. This once again requires data links. Of course, these solutions are clearly

Another simple approach consisted in locating pseudolites in places where the GNSS signals are available, namely outdoors, and to use the constellation time to synchronise the

*Code and carrier phase measurements* are possible. In the first case, code phase measurements are carried out: the positioning accuracy of the pseudolites needs to be in the range of a few decimetres. The resulting positioning is intended to reach a few meters, as outdoors. Note that multipath are bound to largely degrade this very optimistic goal (discussion follows). The other approach described is based on carrier phase measurements (Kee et al. 2001, Rizos et al. 2003). We know that this kind of measurement is much more accurate but suffers from the ambiguity resolution problem. Nevertheless performances reported are in the range of a few centimetres13: the requirement in terms of pseudolite location accuracy is also increased

*Ambiguity* is no longer such a difficult problem. In the case of code phase measurements, ambiguity is totally suppressed since indoor distances are much smaller than three hundred kilometres. In the case of carrier phase, ambiguity is still present but is not so high: typically fifty metres for indoor distances, the carrier phase ambiguity for frequency L1 is around 260. Current works are evaluating the possibility to use classical code phase ambiguity

A *potential accuracy of a few centimetres* is achievable with the carrier phase approach, even if these measurements are probably not the most important ones for the foreseen applications looking forward to the continuity of the positioning for mobile phones. Nevertheless this is

*Near-Far effect* is a new propagation concern (Madhani et al. 2003). Since the deployment complexity of the pseudolites must be reduced, their number should be reduced to a minimum. As a corollary, the distance between pseudolites should be increased to a maximum. Unfortunately, the Pseudo Random Noise (PRN) codes used in the case of GPS, for instance, have auto correlation functions that present some secondary peaks. These secondary peaks can have amplitudes of about -24 decibels (dB) in comparison to the main peak. This is very good for outdoors where the difference of distances from various satellites

13 Techniques similar to high accuracy methods for outdoors are used together with the associated

adding cost and complexity to the system.

to typically one centimetre (this task is not so easy to carry out).

resolution methods for the carrier phase resolution indoors.

significant of the capabilities of the principles.

problems such as the determination of the initial location.

transmitters.

away from each other if one uses the optimal number of transmitters (i.e. four in a 3D positioning system).
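The two mirror solutions described in footnote 11 can be made concrete with the closed-form intersection of three spheres (all coordinates below are invented). An iterative solver started above or below the transmitters' plane would converge to one or the other of the two points this routine returns:

```python
import numpy as np

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Closed-form intersection of three spheres: returns both mirror solutions."""
    ex = (p2 - p1) / np.linalg.norm(p2 - p1)
    i = ex @ (p3 - p1)
    ey = p3 - p1 - i * ex
    ey = ey / np.linalg.norm(ey)
    ez = np.cross(ex, ey)              # normal to the transmitters' plane
    d = np.linalg.norm(p2 - p1)
    j = ey @ (p3 - p1)
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = (r1**2 - r3**2 + i**2 + j**2) / (2 * j) - (i / j) * x
    z = np.sqrt(max(r1**2 - x**2 - y**2, 0.0))
    base = p1 + x * ex + y * ey
    return base + z * ez, base - z * ez

# Three transmitters on the ceiling (invented) and a receiver below them:
# the two solutions are mirror images across the transmitters' plane.
p1, p2, p3 = np.array([0., 0., 3.]), np.array([10., 0., 3.]), np.array([0., 10., 3.])
true_rx = np.array([3., 4., 1.])
r = [np.linalg.norm(true_rx - p) for p in (p1, p2, p3)]
above, below = trilaterate(p1, p2, p3, *r)
print(np.round(above, 6), np.round(below, 6))
```

One of the two printed points is the true position, the other its reflection through the plane z = 3; only the initial guess (or a fourth, non-coplanar transmitter) disambiguates them.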

Fig. 6. Pseudolite indoor positioning system

## **3.4 Advantages and main drawbacks**

Such an indoor positioning system is not widely deployed because of numerous major drawbacks, despite some fundamental advantages. Let us list the most important features and comment on whether they are an advantage or a drawback (Kanli 2004).

*Continuity with outdoor GNSS*: this is obviously a major advantage of the proposed system. Moreover, the continuity is obtained by using the same hardware as outdoors (GNSS is clearly a very good candidate when the satellites are visible, and it is almost free12). Note that using GNSS-like signals means that current receivers are already capable, with a software update, of processing them. This fact constitutes a second major advantage. The first drawback is the need for a local infrastructure.

*Synchronisation between pseudolites* is required. Satellites include atomic clocks or masers in order to reduce significantly the time drift but require a terrestrial infrastructure for synchronisation purposes. In the case of pseudolites, two approaches have been proposed: synchronous and asynchronous systems. In the latter case, the pseudolites are not synchronised and the measurement technique must carry out a sort of synchronisation: the method used is the double differencing that allows us to get rid of the synchronisation of the transmitters. The major drawback is then the need for a reference receiver that should be in radio visibility of the transmitters. Apart from the deployment complexity that this adds, a data link has to exist between the two receivers. This first approach is not intended to be selected for indoor positioning purposes. The other approach uses synchronous pseudolites. Several methods have been proposed: the simplest one in theory, but not in practice, is to link the various transmitters by wire. In such a case a sort of calibration phase is required in order to know precisely the delay between pseudolites. An implementation of this approach used a master receiver located in a known location with respect to all the pseudolites.

<sup>12</sup> A GNSS receiver integrated into a modern device is estimated to cost a few dollars.
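The double differencing used by asynchronous systems can be sketched in a few lines. Geometry and clock offsets below are invented and measurement noise is omitted; the only point is that the transmitter clock cancels in the across-receiver difference and the receiver clocks cancel in the across-transmitter difference:

```python
import numpy as np

c = 299_792_458.0  # speed of light, m/s

# Invented geometry: two pseudolites, a rover and a reference receiver.
pl = np.array([[0.0, 0.0, 3.0], [25.0, 0.0, 3.0]])
rover = np.array([10.0, 6.0, 1.0])
ref = np.array([2.0, 1.0, 1.0])        # reference receiver, known position

def pseudorange(rx, tx, dt_rx, dt_tx):
    """Geometric range plus receiver and transmitter clock offsets (noise-free)."""
    return np.linalg.norm(rx - tx) + c * (dt_rx - dt_tx)

# Unsynchronised pseudolite clocks, independent receiver clocks (seconds).
dt_pl = [3.2e-6, -1.7e-6]
dt_rov, dt_ref = 5.0e-7, -2.0e-7

p = np.array([[pseudorange(rx, pl[j], dt, dt_pl[j]) for j in range(2)]
              for rx, dt in [(rover, dt_rov), (ref, dt_ref)]])

# Single difference (rover - reference) cancels each pseudolite clock;
# double difference (pseudolite 1 - pseudolite 0) cancels both receiver clocks.
sd = p[0] - p[1]
dd = sd[1] - sd[0]

geometric_dd = (np.linalg.norm(rover - pl[1]) - np.linalg.norm(rover - pl[0])
                - np.linalg.norm(ref - pl[1]) + np.linalg.norm(ref - pl[0]))
print(dd - geometric_dd)   # ~0: every clock term has been eliminated
```

The price of this cancellation is exactly the drawback noted in the text: the reference receiver must see the same transmitters, and its measurements must be sent to the rover over a data link.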


Calculations are then carried out at the master receiver and synchronisation values are sent back, through wires, to the pseudolites. Another way consists in transmitting these synchronisation data through a wireless link, leading this time to latency problems and potential interference (but this is an interesting approach). In addition to this concept, one imagined working the other way round by placing the receiver (which listens to the signals) in the same place as the pseudolites. By considering that one (or several) pseudolites are "pilots", the receiver can synchronise its own pseudolite if it knows the distance(s) that separate(s) it from the pilot pseudolite(s). The difference between the reception times for two different pseudolites (indeed the associated receivers) allows the synchronisation of the pseudolites. This once again requires data links. Of course, these solutions clearly add cost and complexity to the system.

Another simple approach consists in locating pseudolites in places where the GNSS signals are available, namely outdoors, and in using the constellation time to synchronise the transmitters.

*Code and carrier phase measurements* are possible. In the first case, code phase measurements are carried out: the positioning accuracy of the pseudolites needs to be in the range of a few decimetres. The resulting positioning is intended to reach a few metres, as outdoors. Note that multipath is bound to largely degrade this rather optimistic goal (a discussion follows). The other approach is based on carrier phase measurements (Kee et al. 2001, Rizos et al. 2003). We know that this kind of measurement is much more accurate but suffers from the ambiguity resolution problem. Nevertheless, the performances reported are in the range of a few centimetres13: the requirement on the pseudolite location accuracy is then also tightened, to typically one centimetre (a task that is not so easy to carry out).

*Ambiguity* is no longer such a difficult problem. In the case of code phase measurements, ambiguity is totally suppressed since indoor distances are much smaller than three hundred kilometres. In the case of carrier phase, ambiguity is still present but is not so high: for typical indoor distances of fifty metres, the carrier phase ambiguity for frequency L1 is around 260 cycles (fifty metres divided by the 19 cm wavelength). Current works are evaluating the possibility of using classical code phase ambiguity resolution methods for the carrier phase resolution indoors.
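These orders of magnitude are easy to verify from the C/A code period (1 ms) and the L1 carrier frequency:

```python
C = 299_792_458.0               # speed of light, m/s

code_period_m = C * 1e-3        # C/A code repeats every 1 ms: ~300 km
wavelength_m = C / 1575.42e6    # L1 carrier wavelength: ~19 cm

indoor_m = 50.0                 # a typical indoor distance
code_ambiguities = indoor_m / code_period_m    # << 1: no code ambiguity
carrier_ambiguities = indoor_m / wavelength_m  # ~260 candidate cycles
print(f"{code_period_m / 1e3:.0f} km  {wavelength_m * 100:.1f} cm  "
      f"{carrier_ambiguities:.0f}")
# -> 300 km  19.0 cm  263
```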

A *potential accuracy of a few centimetres* is achievable with the carrier phase approach, even if these measurements are probably not the most important ones for the foreseen applications, which look forward to the continuity of the positioning for mobile phones. Nevertheless, this is indicative of the capabilities of the principle.

*Near-far effect* is a new propagation concern (Madhani et al. 2003). Since the deployment complexity of the pseudolites must be reduced, their number should be kept to a minimum. As a corollary, the distance between pseudolites should be increased to a maximum. Unfortunately, the Pseudo Random Noise (PRN) codes used in the case of GPS, for instance, have autocorrelation functions that present some secondary peaks. These secondary peaks can have amplitudes of about -24 decibels (dB) in comparison to the main peak. This is very good for outdoors, where the difference of distances from various satellites

 13 Techniques similar to high accuracy methods for outdoors are used together with the associated problems such as the determination of the initial location.

can reach a maximum corresponding to a power difference of less than 2 dB, but it is a real problem indoors. In terms of phenomenon, the problem is related to the fact that if the secondary peaks of one transmitter are greater than the main peak of another, then this second one will appear as noise and will not be detectable. This 24 dB margin in power is reached as soon as the ratio between the distances to the closer and the farther transmitters reaches four: this is not an unusual situation indoors. Thus, a few solutions have been proposed, among which: 1/ pulsed transmissions, consisting in allocating between 10 and 20 percent of the time to a particular pseudolite (this has been shown to provide an additional margin of about 10 dB, corresponding to nearly an additional factor of two in distance); 2/ frequency shifts, in order to almost eliminate the near-far effect, but at the cost of a substantial increase in the terminal complexity; or 3/ sophisticated mitigation algorithms that successively suppress the more powerful signals to finally extract the lowest one.

*Interferences* with outdoor signals. Another advantage of pseudolites is the ability to decide the power level to be transmitted, depending on the required coverage and performance, and of course on the environment. This advantage becomes a major drawback when thinking in terms of cohabitation with the outdoor world (Glennon et al. 2007, Yang and Morton 2009). If one takes the case of GPS (but this is true whatever the system considered), using GPS-like signals for indoor transmission is liable to create interference with the signals received by an outdoor receiver from the satellites. As a matter of fact, the same phenomenon as described indoors for the near-far effect may occur. Thanks to GPS project management, some specific PRN codes were reserved for pseudolite operation14 at the early stages, and this interference problem is slightly relaxed, but it is still a real concern for GPS authorities. A specific section, at the end of this chapter, is dedicated to the regulations restricting the power levels allowed to be transmitted for indoor operations.

14 PRN 1 to 32 are reserved for the so-called space vehicles, the satellites, and PRN 33, 34, 35, 36 and 37 are reserved for terrestrial transmitters.

Finally, *multipath* is a major issue. Mitigation techniques must be found in order to allow a proper operation of the code phase pseudolite system. This topic is such a challenge that the next section is dedicated to it.

## **3.5 The specific problem of multipath in indoor environments**

As already discussed in previous sections, indoor environments are characterised by the presence of many reflectors in the path from the transmitter to the receiver. All these reflected signals combine at the receiver end and produce the signal really received at the receiver antenna. This signal is the one that the receiver is going to deal with, since it is the real physical received signal. As it is not only the direct signal from the transmitter, and depending on the signal processing techniques used, the distance finally measured can be erroneous (remember that, as a matter of fact, it is a time that is measured and not a distance).

From a physical point of view, the situation can be seen as follows: the physical quantity that is transmitted is indeed an electric field, given in V/m. It is furthermore characterised by a frequency, an amplitude, a phase and a delay, in comparison, say, with the first arriving signal. Let us consider that the frequencies of all the reflected signals are identical15. Then, the physical phenomenon that occurs is simply an addition, in amplitude, phase and delay, of all the reflected signals (and the first one, which should be the direct path16). The problem is now to be able to get rid of all contributions except the first one (which should be the direct path under our assumption). Such a time discrimination is somehow equivalent to the synchronisation problem and theoretically requires a radio bandwidth inversely proportional to the time discrimination interval wanted: in our case, where nanoseconds are sought, this bandwidth is too large and other approaches must be found.

15 This might not be true in the case of reflection on moving objects, such as cars for example. But in our indoor case, we are going to consider this hypothesis as correct.
16 The direct path might not be present, in which case the first received signal would also be a reflected path. This situation is one that we are not going to deal with in this chapter.

Let us now come back to the specific problem of multipath in GNSS, and to GPS for illustration. The way time separation is obtained, from the transmitter to the receiver, is based on the famous autocorrelation function (ACF) of the codes. A typical such function is given in figure 7.

Fig. 7. Typical autocorrelation function of a GPS code (ACF amplitude, normalised to 1, versus time, with the early-late discriminator indicated)

In the case of multipath, we are interested in the main lobe of the ACF. Let us consider only one reflected path (in addition to the direct path) for simplicity of explanation, knowing that this is clearly not a realistic situation. If the reflected path is delayed by more than one and a half chips, the ACF of the incident signal (which is composed of the direct and reflected paths) with the receiver-generated replica has the shape given in figure 8. Remember that a GPS chip length is given by 1/1023 milliseconds, hence 977.5 nanoseconds, which in turn corresponds to 293 metres. Thus, figure 8 is characteristic of a reflected path delayed by more than 440 metres. The receiver will be able without any problem to find the direct path, considering (this is the assumption that is classically made) that the first peak of the ACF is the value being sought.
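The chip-length figures quoted here follow directly from the C/A chipping rate:

```python
C = 299_792_458.0          # speed of light, m/s
CHIP_RATE = 1.023e6        # GPS C/A chipping rate, chips/s

chip_s = 1.0 / CHIP_RATE   # 1/1023 ms, i.e. ~977.5 ns
chip_m = C * chip_s        # ~293 m per chip
multipath_free_m = 1.5 * chip_m   # reflections delayed beyond this leave
                                  # the main correlation peak undistorted
print(f"{chip_s * 1e9:.1f} ns  {chip_m:.0f} m  {multipath_free_m:.0f} m")
# -> 977.5 ns  293 m  440 m
```

Indoors, excess path delays of hundreds of metres simply do not occur, which is why the short-delay case of figure 9 is the relevant one.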


Fig. 8. Typical autocorrelation function for a large delayed multipath

Indoors, reflected paths have delays that are indeed much smaller. In such a case, the ACF is completely disturbed (see figure 9) and can take many different shapes. The problem is now that the receiver will be fooled when detecting the maximum of the ACF, which is no longer at the time of arrival of the direct path. Note also that this maximum now depends on the relative phases, delays and amplitudes of the direct and reflected paths.

Fig. 9. Typical autocorrelation function for a small delayed multipath

The classical way this multipath effect is characterised is given in figure 10. This curve allows the comparison of various multipath mitigation techniques, as illustrated in figure 10 for the Standard Digital Locked Loop (SDLL) and the so-called Narrow Correlator (NC). Note the reading of the figure: considering a direct path and a single reflected path of amplitude half that of the direct path (only suitable for comparison purposes and certainly not for evaluation purposes, since this situation is clearly not representative of reality), the envelope of the resulting error in the pseudo-range measurement is drawn. The upper curve corresponds to a reflected path in phase with the direct signal, while the lower curve is related to the case where the reflected path is out of phase with the direct path. Note that the SDLL is absolutely not suitable for indoor environments, since errors as high as 60 metres are possible17. Almost the same applies to the Narrow Correlator18, since errors of 10 metres are still possible: if the goal is an accuracy in the range of a few metres, this approach is also not viable.

Fig. 10. Multipath effect on the pseudo-range measurement (envelope of the pseudo-range error in metres, from 0 to 80, versus the delay of the reflected path in metres, from 0 to 500, for the SDLL and the NC)

17 Here one has the explanation why no commercial solutions are available in the field of code phase pseudolites!
18 Another constraint of the Narrow Correlator is the need for a receiving bandwidth of at least 8 MHz, which is not the current standard. Nevertheless, the standard is bound to evolve with the advent of Galileo in the frequency band L1/E1, since the current 2 MHz are too narrow for an acceptable detection of their signals.

Fig. 11. Typical multipath environment indoors
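The qualitative behaviour behind figure 10 can be reproduced with an idealised triangular ACF and an early-late discriminator. This is a sketch only: infinite front-end bandwidth is assumed, a single half-amplitude reflection is modelled, and the 1.0-chip and 0.1-chip correlator spacings stand in for the SDLL and the Narrow Correlator respectively:

```python
import numpy as np

def tri(t):
    """Idealised C/A-code autocorrelation: unit triangle, one chip wide each side."""
    return np.maximum(0.0, 1.0 - np.abs(t))

CHIP_M = 293.0  # metres per C/A chip (approximately)

def dll_error_m(alpha, delay_chips, spacing_chips):
    """Pseudo-range error of an early-late DLL for a direct path + one reflection.

    The loop locks where the early and late correlator outputs balance;
    the composite ACF is distorted by the reflection, so the lock point
    (and hence the measured pseudo-range) is biased.
    """
    x = np.linspace(-0.6, 0.9, 150001)
    f = lambda t: tri(t) + alpha * tri(t - delay_chips)
    imbalance = np.abs(f(x - spacing_chips / 2) - f(x + spacing_chips / 2))
    return x[np.argmin(imbalance)] * CHIP_M

# Half-amplitude reflection delayed by 0.3 chip (~88 m), as in the
# figure-10 scenario, in phase (+0.5) and out of phase (-0.5).
for spacing, name in [(1.0, "SDLL"), (0.1, "NC  ")]:
    e_in = dll_error_m(+0.5, 0.3, spacing)
    e_out = dll_error_m(-0.5, 0.3, spacing)
    print(f"{name}: in phase {e_in:+6.1f} m, out of phase {e_out:+6.1f} m")
```

With the wide spacing the errors are tens of metres, while narrowing the spacing shrinks them by roughly a factor of four in this toy model, consistent with the relative ordering of the SDLL and NC curves in figure 10 (a real receiver's filtered correlators behave differently in detail).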


Indoor Positioning with GNSS-Like Local Signal Transmitters 319

The first simplification concerns the synchronisation. In a similar way that outdoor pseudolites can synchronise themselves using GNSS signals, the idea here is to put an outdoor antenna on the roof of the building in order to obtain the constellation signals. Note, that in this case the antenna is probably (certainly indeed) receiving several satellite signals. Here is taken into account the second new idea that consists in forwarding this signal to the transmitters: the innovation lies in the fact that the same signal (which is probably made up of many satellite signals, as mentioned) will then be transmitted from the various transmitters, now called repeaters20. In this way, an obvious problem appears: if the repeaters are transmitting simultaneously, the same signals will be transmitted from different locations and, once received by the terminal, will certainly be considered as reflected paths. Since the principle is to carry out time measurements, and thus distance measurements, it is clearly not acceptable. Thus, the transmission is now achieved in a sequential manner with always only one repeater transmitting at a given time. This presents


Let us now come back to real situations where several (indeed many) multipath signals are present. To give an idea of such configurations, we consider the environment described in figure 11, a large car park. The structure is made of metallic beams and concrete walls; cars are modelled as red parallelepipeds. Figure 11 also shows the paths from the transmitters to a receiver located in the centre of the building. The black paths are direct ones, while the green ones are reflected paths: the conclusion is quite clear! In such cases, one can easily imagine that the ACF is even more disturbed than the ones shown in figures 8 and 9.

## **3.6 The performances attainable**

The preceding pages have shown that the multipath problem alone is enough to disqualify the pseudolite approach based on code phase measurements. A few other multipath mitigation techniques are potentially available, such as the Strobe correlator or the Double Delta correlator, but both require a rather high signal-to-noise ratio (SNR) in order to function properly19. This is not easy to obtain indoors, since reflected paths are bound to reduce the signal-to-noise ratio significantly (by destructively combining the electric fields). Thus, this kind of system is not yet available with acceptable performance.

The other possibility is to use carrier phase measurements, which are known to be less sensitive to multipath (because the ambiguity is reduced to nineteen centimetres instead of three hundred metres for code phase). Unfortunately, carrier phase based systems are more complex to use in practice because they require both an initial location accurate to a few decimetres and continuous carrier phase tracking, which is much more difficult than following the code phase. Such systems exist but are not widely deployed for these additional reasons (in conjunction with the need for infrastructure).

## **3.7 Short synthesis**

Pseudolite systems require infrastructure deployment and synchronisation, and have to cope with near-far and multipath effects, but they provide full continuity with the technical approach used outdoors, GNSS, with only minor modifications to the receiver. This is a good candidate if no infrastructure-free solution can be found, but the community of service and application providers is not yet ready to accept such a solution, except in situations where the installation cost is counterbalanced by already well identified revenues.

## **4. The first step in overcoming some pseudolite linked problems: The repeaters**

Following pseudolites, improvements to their main drawbacks have been proposed (Im et al. 2006, Jee et al. 2004). Since the approach is still based on transmitters, the infrastructure remains, but some proposals reduce its complexity by introducing the concept of a signal "common" to all the transmitters (Caratori et al. 2002).

 19 The Narrow Correlator is the only one that does not degrade the SNR while improving the multipath behaviour.

#### **4.1 Introduction to the basic idea**


The first simplification concerns the synchronisation. In a similar way to outdoor pseudolites, which can synchronise themselves using GNSS signals, the idea here is to put an outdoor antenna on the roof of the building in order to obtain the constellation signals. Note that in this case the antenna is probably (indeed certainly) receiving several satellite signals. Here the second new idea comes into play, which consists in forwarding this signal to the transmitters: the innovation lies in the fact that the same signal (probably made up of many satellite signals, as mentioned) is then transmitted from the various transmitters, now called repeaters20. An obvious problem then appears: if the repeaters transmit simultaneously, the same signals will be transmitted from different locations and, once received by the terminal, will certainly be treated as reflected paths. Since the principle is to carry out time measurements, and thus distance measurements, this is clearly not acceptable. Thus, the transmission is achieved in a sequential manner, with only one repeater transmitting at any given time. This presents another interesting advantage: the near-far effect is now removed21.
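To fix ideas, the sequential transmission can be sketched as a simple scheduler: a synchronised receiver maps its own time to the index of the currently active repeater. This is only an illustrative sketch, not the chapter's implementation; the cycle dwell time and the wire-delay calibration values are pure assumptions, and the calibration step anticipates the wire-delay calibration mentioned later in this section.

```python
# Minimal sketch of the sequential ("cycled") repeater transmission scheme.
# All numeric values (dwell time, cable delays) are illustrative assumptions.
n_repeaters = 4
dwell_s = 0.1                                  # time each repeater transmits (assumed)
wire_delay_s = [21e-9, 35e-9, 28e-9, 42e-9]    # calibrated cable delays (assumed)

def active_repeater(t_s):
    """Index of the repeater transmitting at (synchronised) receiver time t_s."""
    return int(t_s // dwell_s) % n_repeaters

def corrected_pseudorange(pr_m, t_s, c=299792458.0):
    """Remove the calibrated wire delay of the currently active repeater."""
    return pr_m - c * wire_delay_s[active_repeater(t_s)]

print([active_repeater(t) for t in (0.05, 0.15, 0.25, 0.35, 0.45)])  # [0, 1, 2, 3, 0]
```

In a real deployment the receiver time would of course first have to be aligned with the cycling of the electronic box, which is possible precisely because the signals transmitted by all repeaters are identical.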

#### **4.2 The systems proposed**

Two measurement systems are then possible.

• The first one computes the location, at the receiver's end, for each transmitter successively. At each corresponding time, the fourth coordinate (the so-called clock bias22) of the navigation solution vector is recorded. As soon as four successive computations have been obtained, it is possible to compute the indoor distances through the differences between the fourth coordinates considered at different times. These differences give a new system of three independent equations that can be solved classically. The resolution gives the indoor location of the receiver. A short demonstration of this principle is given below.

• The second one carries out differences of pseudo-range measurements at the precise instants of the transitions from one repeater to the next (Fluerasu et al. 2009, Fluerasu and Samama 2009). At these instants, the difference between the pseudo-ranges measured just before and just after the transition gives the difference of the distances between the two repeaters and the receiver. In order to obtain the indoor distances, a second difference is needed, as briefly explained below. Note that this approach also removes all the effects whose second derivative is zero, including atmosphere propagation and the major part of clock drifts. This new differential mode is also capable of increasing the positioning accuracy.


<sup>20</sup> Please note that while the term is appropriate, since the transmitters are just "repeating" the outdoor received signal, it should not be confused with the classical repeater used for demonstration purposes or just for making outdoor signals available indoors. Here repeaters represent a new approach for indoor positioning and should be seen more as a means of improving some aspects of pseudolites rather than of just forwarding signals. This is so true that all the sections could have been written with a signal generator instead of the outdoor antenna.

<sup>21</sup> Only the dynamic range is now a limitation when the receiver is processing signals from two successive repeaters.

<sup>22</sup> This is clearly not the clock bias alone, but the sum of all contributions that are common to all the satellites considered for the resolution: thus, it includes the free space indoor distance that we want to obtain.

Indoor Positioning with GNSS-Like Local Signal Transmitters 321


Let us deal briefly with the mathematics of the first approach, based on clock bias analysis. The method relies on the clock bias coordinates. As described above, once four receiver location computations have been carried out (one for each repeater), a new vector is available, where the ct_i are the calculated fourth coordinates, the ct_r(t_i) the real clock biases of the receiver at the respective transmission times, and the d_i the distances separating the repeaters from the receiver.

$$\begin{bmatrix} ct_1 \\ ct_2 \\ ct_3 \\ ct_4 \end{bmatrix} = \begin{bmatrix} ct_r(t_1) + d_1 \\ ct_r(t_2) + d_2 \\ ct_r(t_3) + d_3 \\ ct_r(t_4) + d_4 \end{bmatrix} \tag{1}$$

The unknown variables are now the d_i, but a problem appears: the real clock bias of the receiver, which is naturally not constant. Thus, in (1), one has not only the four d_i unknowns, but also the four clock biases. The technique consists in estimating the clock bias difference between instant t_2 and instant t_1 by means of the clock drift computed by the receiver through Doppler measurements. Thus, the idea is to consider that:

$$ct_r(t_j) = ct_r(t_i) + \sum_{k=i+1}^{j} cdt_k \tag{2}$$

where cdt_k is the clock bias rate (called the clock drift) at time t_k. The various ct_r(t_i) of (1) are now reduced to a single unknown, ct_r(t_1). In addition, one knows that the four distances d_i are characterised by only three spatial coordinates, x, y and z of the receiver, once the coordinates of the repeaters are known: it is a system requirement to provide the receiver with these coordinates. The indoor location computation is then typically carried out through hyperboloid intersection, as soon as the receiver is able to determine which repeater is transmitting at any given time. This is achieved through synchronisation, which is made possible since the signals transmitted by all the repeaters are identical (thus, there is just the need for an initial calibration of the wire delays between the signal generator, or the outdoor antenna, and the repeaters).
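The steps above can be sketched numerically. The following is only an illustration of equations (1) and (2), not the chapter's implementation: the repeater coordinates, receiver position, clock drift values and the Gauss-Newton iteration (standing in here for the hyperboloid intersection) are all assumptions. The clock is removed by differencing the fourth coordinates and subtracting the accumulated drift, which leaves the range differences d_i − d_1.

```python
import numpy as np

# Hypothetical 2D geometry (metres); illustrative values only.
reps = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 8.0], [0.0, 8.0]])
rx_true = np.array([4.0, 3.0])
d_true = np.linalg.norm(reps - rx_true, axis=1)

# Simulated fourth coordinates: ct_i = ct_r(t_i) + d_i, with a drifting clock.
cdt = np.array([0.0, 0.12, 0.11, 0.13])   # clock drift per interval (m), eq. (2)
ct_r = 5.0 + np.cumsum(cdt)               # ct_r(t_i)
ct = ct_r + d_true                        # eq. (1)

# Remove the clock using eq. (2): d_i - d_1 = (ct_i - ct_1) - sum_{k=2..i} cdt_k
dd = (ct - ct[0]) - (np.cumsum(cdt) - cdt[0])

# Range-difference (hyperbolic) fix by Gauss-Newton, as a stand-in solver.
x = np.array([5.0, 4.0])                  # initial guess
for _ in range(20):
    r = np.linalg.norm(reps - x, axis=1)
    f = (r - r[0]) - dd                   # residuals of the d_i - d_1 model
    u = (x - reps) / r[:, None]           # unit vectors (gradient of each r_i)
    J = u - u[0]
    x = x - np.linalg.lstsq(J[1:], f[1:], rcond=None)[0]

print(np.round(x, 3))                     # close to rx_true
```

With exact measurements the iteration converges to the true position; in practice the quality of the drift estimate (from Doppler) limits how well the clock is removed.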

On the other hand, the need to estimate the clock drift is somehow a constraint since the final performances will greatly depend on the quality of the receiver clock. Thus, another approach was proposed, based simply on classical measurements carried out by all current receivers: the raw pseudo-ranges. When one draws the difference of pseudo-ranges from one instant to the next in a repeater like system, the curve of figure 12 is obtained (note that in this example only three repeaters are deployed, leading to a 2D positioning).

Clear skips, called "transitions" in the figure, can be seen: they correspond to the differences of distances, d_j − d_i, that characterise the increase or decrease of the distance from the repeaters to the receiver when the transmitted signal switches from repeater i to repeater j. The skip is positive when the distance increases, and negative otherwise. Note also that two additional phenomena are present: 1/ a slow constant increase in the equilibrium value (which
represents the remaining contributions whose first derivatives are not zero, for example the clock acceleration) and 2/ a characteristic shape of the curve just after the skips, which is due to the receiver's loop that tends to come back to the equilibrium after this destabilisation. Note that the sum of the transitions, for a complete cycle should be zero. Of course, due to measurement errors, this is usually not the case, and the choice of the best transitions to be considered for positioning has to be carried out.

Fig. 12. Typical response of a receiver

The curve of figure 12 is a single difference of raw measurements. In order to extract the differences of distances mentioned above, there is the need to carry out, at the precise instant of transition, a second difference between two successive single differences. Thus, a process of double differencing is the basis of this proposed approach to repeater positioning. Following these measurement steps, the computations are similar to those described for the clock bias based approach.
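The single- and double-differencing steps just described can be sketched as follows. This is an illustrative simulation, not the chapter's data: the three distances, the dwell time, the drift rate and the detection threshold are invented values. The second difference at each transition recovers d_j − d_i while cancelling the clock drift, and the recovered transitions sum to zero over a complete cycle, as stated above.

```python
import numpy as np

# Assumed repeater-receiver distances (m) and cycling; illustrative only.
d = [5.0, 9.0, 7.0]                    # d_1, d_2, d_3 (2D case, three repeaters)
dwell = 10                             # epochs each repeater transmits

# Simulated raw pseudo-ranges: active repeater distance + drifting clock bias.
n = 6 * dwell
clock = 100.0 + 0.05 * np.arange(n)    # slow clock drift (m per epoch)
active = (np.arange(n) // dwell) % 3   # repeater index at each epoch
pr = clock + np.take(d, active)

# Single differences between successive epochs (the curve of figure 12).
s = np.diff(pr)

# Transitions stand out as skips far above the drift level; the second
# difference at those instants recovers d_j - d_i with the drift removed.
idx = np.where(np.abs(s) > 1.0)[0]
dd = s[idx] - s[idx - 1]

print(np.round(dd, 2))   # d2-d1, d3-d2, d1-d3, repeating: 4, -2, -2, 4, -2
# Sum over one full cycle: 4 - 2 - 2 = 0, as expected.
```

Real data would require choosing the best transitions, since measurement errors and the loop's re-stabilisation after each skip perturb the values.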

## **4.3 The performance achieved**

The most often implemented approach is the second one, because it is simply based on the classical measurements of GNSS receivers and no additional computation errors affect the positioning. Tests have been carried out in various environments: each time, the system was deployed and positioning carried out with different receivers. Note that the receivers used are so-called software defined radio (SDR) receivers, since the method is affected by multipath in a similar way to pseudolite based systems. Thus, a specific mitigation technique was implemented (described in a following section), which required the tracking loops to be slightly modified. Since proprietary receivers do not allow such modifications, an SDR receiver was required. It should be pointed out that the transmitters are located in such a way that walls are included in the propagation path from the transmitters to the receiver. These environments, together with their "Ergospace23" representations, are given in figures 13 to 16 below.

 23 Ergospace is the electromagnetic propagation software used for the deployment phase. The main goal is to evaluate the multipath related effects.


Fig. 13. A car park

Fig. 14. An entrance hall

Fig. 15. Classrooms

Fig. 16. An amphitheatre

The system used for these experiments consists of a few (typically four) transmitters located indoors, which transmit a signal provided by a GNSS-like signal generator (we used both an AeroFlex GPS-101 and a Spirent GSS6560). Note that only one such signal is required, since the proposed approach is based on the transmission of the same signal through the various transmitters deployed. Note also that, in order to satisfy the various ongoing regulations (both in the US and in Europe, briefly described in a following section), the transmitted power is limited (from -80dBm to -65dBm). The principle of the approach is given in figure 17. The transmitting antennas had a radiating pattern with a maximal gain of around 3dBi.

#### Fig. 17. The system as it was deployed: a signal generator feeds an electronic box (which carries out the amplification, the cycling and the splitting of the signals towards the transmitters); the box drives Transmitters 1 to 4, while a GPS receiver, connected to an indoor receiving antenna, performs the pseudo-range measurements at the instant of the transition from one repeater to the next

A summary of the results obtained, all environments included, is given in figures 18 and 19. The first figure shows the results obtained in classrooms, an amphitheatre and an entrance hall. About 20 different locations have been tested in these environments. The various curves represent different ways to filter the resulting fixes obtained. The "unfiltered" curve takes into account all the fixes, with no filtering at all. The other three curves, named "-xm", give the resulting fixes obtained once we remove the ones that are outside the largest rectangle defined by the locations of the transmitters by more than x metres. Note that this is achieved for two main reasons: outside this rectangle, the DOP values increase very rapidly and the positioning algorithms sometimes do not converge.
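The "-x m" filtering rule described above can be sketched as follows; the transmitter layout and the example fixes are arbitrary illustrative values, not the experimental data. A fix is kept only if it lies within the largest rectangle defined by the transmitter locations, expanded by x metres on each side.

```python
import numpy as np

# Assumed transmitter layout (metres); illustrative values only.
tx = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 8.0], [0.0, 8.0]])

def keep_fix(p, x):
    """Keep a fix only if it is at most x metres outside the transmitter
    rectangle, where the DOP is still reasonable."""
    lo, hi = tx.min(axis=0) - x, tx.max(axis=0) + x
    return bool(np.all(p >= lo) and np.all(p <= hi))

# Example fixes: one inside, two slightly outside, one far outside.
fixes = np.array([[4.0, 3.0], [12.5, 4.0], [-0.5, 9.5], [30.0, 3.0]])
for x in (5.0, 2.0, 1.0):
    print(x, [keep_fix(p, x) for p in fixes])
```

Tightening x discards more of the marginal fixes, which mirrors the "-5m", "-2m" and "-1m" curves: outside the rectangle the DOP grows quickly and the solution may not converge at all.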

Figure 19 is a summary of the results obtained in all experiments and with various receivers. In red in the figure are the results obtained in the car park, and the two blue curves are the results obtained in the other environments described. The two curves have been obtained with -80dBm and -65dBm respectively.

The main conclusion is that the current performances are roughly in the range of 3 to 4 metres for 80% of the fixes. It is of utmost importance to understand that these can be considered as really raw fixes, since the calculations are carried out totally independently from one fix to the next. It is highly probable that basic smoothing or filtering (applied to pseudo-ranges or locations) would lead to a significant improvement. In addition, complete continuity with outdoor GNSS is achieved, since velocity computations are also possible (Samama and Vervisch-Picois 2005).


Fig. 18. Results obtained in classrooms, amphitheatre and hall (real accuracy, % of total fixes)

Fig. 19. Summary of all the results (real accuracy, % of total fixes)

#### **4.4 The main limitations**

Synchronisation, the absence of the near-far effect and the implementation of a differential approach are the main competitive advantages of repeaters over pseudolites. Unfortunately, they come with a few disadvantages, described below.

• Carrier phase measurements are no longer possible (or in reality certainly very complex to carry out), since the skips that are the basis of the method mean that the phase is lost at each transition, leading to the need for a new search for the integer ambiguity at each transition. Thus, a few metres of accuracy is the goal of this system: enough for continuity of service, but improvement directions will not be easy to find.

• The sequential scheme is a problem when one wants to address dynamic positioning, since the time the cycle takes has to be taken into account in the displacement. This is quite complex to implement and only slow movements can be dealt with. This is acceptable for pedestrians in a commercial mall, but not for a car in a tunnel. The sequential technique is very interesting for time based double differencing, but not for dynamics where additional errors are present.


In addition, the multipath problem (Kaplan 2006) is not solved by the repeater concept and since code phase measurements are typically carried out, it has to be solved: this is the topic dealt with in the next section.

#### **4.5 The multipath mitigation technique developed**

This paragraph addresses a "short multipath insensitive code loop" (SMICL) mitigation technique, developed in the context of the repeater based positioning system: the goal is to mitigate multipath (Jardak and Samama 2010). For this, a new discriminator function has been proposed, which is insensitive to multipath signals having relative delays of less than 146.5 m, equivalent to half a chip length. The standard discriminator used by the Standard DLL (SDLL, the DLL having a correlator spacing of 1 chip) has a non-zero steady-state error in the presence of multipath signals. This is due to the non-symmetrical behaviour of the composite ACF. As a result, when the early autocorrelation value equals the late autocorrelation value, the prompt replica is not synchronized with the direct signal, but rather with the composite signal. Consequently, another discriminator function was found: the proposed code discriminator compares the early correlation value to an adjusted version of the prompt one. The result is that the new discriminator expression yields zero when the prompt reaches the delay of the direct signal component, even in the presence of multipath rays with relative delays of less than half a chip.

The proposed new expression of the discriminator is given by:

$$D = \left(IE^2 + QE^2\right) - \left(IP'^2 + QP'^2\right) \tag{3}$$

Where


$$\begin{cases} IP' = IP - \frac{\Delta}{2} \frac{IE + IL}{2 - \Delta} \\ QP' = QP - \frac{\Delta}{2} \frac{QE + QL}{2 - \Delta} \end{cases} \tag{4}$$

Δ is the correlator spacing and IE, QE, IL, QL, IP and QP are respectively the in-phase and quadrature outputs of the Early, Late and Prompt classic correlators. Note that modified prompt correlators, IP' and QP', are introduced, as described. Expression (3) is based on the fact that the left part of the ACF is the one least modified by multipath, but also on the fact that the prompt replica itself is modified by the presence of multipath. Thus, the new discriminator uses the Early correlator, which is less affected, and a modified form of the Prompt correlator. Expressions (4) represent the way the prompt correlator is modified and are in fact obtained from the analysis of the general form of the multipath contribution to the discriminator. Indeed, for multipath of less than half a chip, one can show that this contribution cancels out of expression (3).
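As a sanity check of expressions (3) and (4), the sketch below evaluates the SMICL discriminator on an ideal (unlimited-bandwidth) triangular code ACF with a single multipath ray. The multipath delay, amplitude and carrier phase are arbitrary illustrative values, the correlator spacing is taken as 1 chip, and a classic early-minus-late power discriminator is used as the comparison (one common SDLL form, chosen here as an assumption since the chapter does not write the SDLL discriminator explicitly).

```python
import numpy as np

def R(x):
    """Ideal (infinite-bandwidth) triangular code autocorrelation, in chips."""
    return np.maximum(0.0, 1.0 - np.abs(x))

delta = 1.0                                       # correlator spacing (1 chip)
tau1, a1, ph = 0.3, 0.5, np.deg2rad(60.0)         # multipath delay/amplitude/phase (assumed)

def correlators(eps):
    """I/Q Early, Late, Prompt for a prompt offset eps from the direct path."""
    out = []
    for off in (-delta / 2, delta / 2, 0.0):      # early, late, prompt replicas
        i = R(eps + off) + a1 * np.cos(ph) * R(eps + off - tau1)
        q = a1 * np.sin(ph) * R(eps + off - tau1)
        out += [i, q]
    return out                                    # IE, QE, IL, QL, IP, QP

def smicl(eps):
    IE, QE, IL, QL, IP, QP = correlators(eps)
    IPp = IP - (delta / 2) * (IE + IL) / (2 - delta)   # eq. (4)
    QPp = QP - (delta / 2) * (QE + QL) / (2 - delta)
    return (IE**2 + QE**2) - (IPp**2 + QPp**2)         # eq. (3)

def elp(eps):
    IE, QE, IL, QL, _, _ = correlators(eps)
    return (IE**2 + QE**2) - (IL**2 + QL**2)           # early-late power form

print(abs(smicl(0.0)) < 1e-9)   # SMICL is unbiased at the direct-path delay
print(abs(elp(0.0)) > 0.1)      # the early-late discriminator is biased
```

Under these assumptions the multipath terms of the early and adjusted-prompt powers cancel exactly for any multipath phase, provided the relative delay stays below half a chip, which is precisely the insensitivity claimed above.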

Indoor Positioning with GNSS-Like Local Signal Transmitters 327

*With a 2 MHz front-end bandwidth receiver*, the current standard for GPS receivers, things are a little bit different25. The performances are in this case reduced (the mitigation efficiency is not as good), and a typical result is an equivalence between the SMICL (at 2 MHz) and the NC (at 8 MHz). Thus, the SMICL allows one to obtain the performances of the NC with the currently available bandwidth. This is a nice result, but it is not sufficient, since we showed that 10 to 12 metres of accuracy is not enough indoors. Thus, a 2 MHz bandwidth is not sufficient.

*With an 8 MHz front-end bandwidth receiver*, which is a plausible intermediate value for future GNSS receivers (including Galileo), the ACF is very close to that obtained with the theoretical unlimited bandwidth. The performance of the SMICL is then acceptable, as shown in figure 21, which compares the NC and the SMICL. Note that the vertical axis is now given in "chip" (0.01 chip is equivalent to approximately 3 metres). Based on this figure, multipath errors are reduced with the SMICL to three metres in the worst case (very short out-of-phase multipath) and to 0.7 m when the relative delay is between 0.1 and 0.5 chip.

Fig. 21. Comparison of SMICL and NC for an 8 MHz bandwidth (tracking error in chips versus multipath relative delay in chips, for in-phase and out-of-phase multipath)

Please keep in mind the fact that these results are obtained with only one reflected path. Some other simulations were carried out in the case of a typical environment involving several multipath rays and showed that the code measurement error due to multipath is also significantly reduced when the SMICL is considered.

 25 Once again, there is a direct link between multipath mitigation efficiency and bandwidth.

#### **4.6 Discussion**

If one combines all the advantages of both pseudolites and repeaters, only the need for a local infrastructure and the multipath effects are not dealt with. The repeater based

Expression (3) is based on the fact that the left part of the ACF is the one least modified by multipath, but also on the fact that the prompt replica itself is affected by multipath. Thus, the new discriminator uses the Early correlator, which is less affected, together with a modified form of the prompt correlator. Expressions (4) represent the way the prompt correlator is modified; they are obtained from the analysis of the general form of the multipath contribution to the discriminator. Indeed, for multipath of less than half a chip, one can show that

$$\begin{cases} IE + IL = \left( 2 - \Delta \right) \sum\_{0 \le k \le N} A\_k \cos \left( \theta\_k - \hat{\theta} \right) \\ QE + QL = \left( 2 - \Delta \right) \sum\_{0 \le k \le N} A\_k \sin \left( \theta\_k - \hat{\theta} \right) \end{cases} \tag{5}$$

The sums in (5) are the multipath contributions, considering N reflected paths of amplitudes Ak and phase offsets (θk − θ̂). The method is only effective for reflected paths delayed by less than half a chip because that is the validity domain of these approximations.
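As a numerical illustration of this property, the sketch below (our own illustration, not the authors' implementation) evaluates expressions (3) and (4), assuming an ideal infinite-bandwidth triangular ACF, a noiseless signal and a one-chip correlator spacing; all function names are ours:

```python
# Sketch of the SMICL discriminator of expressions (3)-(4), assuming an
# ideal (infinite-bandwidth) triangular ACF R(t) = max(0, 1 - |t|) and a
# noiseless composite signal; paths = [(amplitude, delay_chips, phase_rad)].
import math

def correlate(tau, paths):
    """In-phase/quadrature correlation of a replica at offset tau (chips)."""
    i = sum(a * max(0.0, 1.0 - abs(tau - d)) * math.cos(ph) for a, d, ph in paths)
    q = sum(a * max(0.0, 1.0 - abs(tau - d)) * math.sin(ph) for a, d, ph in paths)
    return i, q

def discriminators(tau_hat, paths, spacing=1.0):
    """Return (classic E-L power discriminator, SMICL) at replica offset tau_hat."""
    ie, qe = correlate(tau_hat - spacing / 2, paths)
    il, ql = correlate(tau_hat + spacing / 2, paths)
    ip, qp = correlate(tau_hat, paths)
    # Modified prompt correlators of expression (4)
    ipp = ip - (spacing / 2) * (ie + il) / (2 - spacing)
    qpp = qp - (spacing / 2) * (qe + ql) / (2 - spacing)
    sdll = (ie**2 + qe**2) - (il**2 + ql**2)        # classic E-L power
    smicl = (ie**2 + qe**2) - (ipp**2 + qpp**2)     # expression (3)
    return sdll, smicl

# Direct path plus one in-phase reflection delayed by 0.3 chip:
paths = [(1.0, 0.0, 0.0), (0.5, 0.3, 0.0)]
sdll, smicl = discriminators(0.0, paths)
# SMICL is zero when the prompt is aligned with the direct path,
# while the classic E-L discriminator is biased by the reflection.
```

With a reflection of half the direct amplitude delayed by 0.3 chip, the SMICL output at the direct-path delay is exactly zero while the classic early-minus-late power discriminator is not, which is the behaviour claimed above.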

Let us now give the main results obtained for multipath mitigation. The proposed code loop is compared to the standard code loop and to the Narrow Correlator (NC). The received signal is assumed to be the sum of a direct signal and a single reflected signal whose amplitude is half that of the direct signal. The following curves show the envelopes of the pseudo-range errors, in the same way as in figure 10.

*With an unlimited front-end bandwidth receiver*, the results are given in figure 20. The half-chip limit is quite clear for the SMICL. Nevertheless, its performance is better than that of the SDLL and the NC for short multipath<sup>24</sup>.

Fig. 20. Comparison of discriminators for an unlimited bandwidth

<sup>24</sup> Many simulations, carried out with Ergospace, have shown that this assumption concerning the delays of the reflected paths indoors is acceptable almost all the time.


*With a 2 MHz front-end bandwidth receiver*, the current standard for GPS receivers, things are a little different<sup>25</sup>. Performance is reduced (the mitigation efficiency is not as good), and a typical result is an equivalence between the SMICL at 2 MHz and the NC at 8 MHz. Thus, the SMICL achieves the performance of the NC within the currently available bandwidth. This is a nice result, but it is not sufficient, since we showed that 10 to 12 metres of accuracy is not enough indoors. A 2 MHz bandwidth is therefore not sufficient.

*With an 8 MHz front-end bandwidth receiver*, which is a plausible intermediate value for future GNSS receivers (including Galileo), the ACF is very close to that obtained with the theoretical unlimited bandwidth. The performance of the SMICL is then acceptable, as shown in figure 21, which compares the NC and the SMICL. Note that the vertical axis is now given in chips (0.01 chip is equivalent to approximately 3 metres). Based on this figure, multipath errors are reduced with the SMICL to 3 metres in the worst case (very short out-of-phase multipath) and to 0.7 m when the relative delay is between 0.1 and 0.5 chip.

Keep in mind that these results are obtained with only one reflected path. Other simulations, carried out for a typical environment involving several multipath rays, showed that the code measurement error due to multipath is also significantly reduced when the SMICL is used.

Fig. 21. Comparison of SMICL and NC for an 8 MHz bandwidth

### **4.6 Discussion**

If one combines all the advantages of both pseudolites and repeaters, only the need for a local infrastructure and the multipath effects remain unaddressed.

<sup>25</sup> Once again, there is a direct link between multipath mitigation efficiency and bandwidth.



The repeater-based infrastructure is still required, although using only one signal distributed to all the transmitters clearly constitutes a huge improvement (also in terms of synchronisation). On the other hand, multipath mitigation with the SMICL has shown impressive results that have been validated experimentally. But even with the SMICL, the repeater approach has two major limitations: the difficulty of carrying out carrier phase measurements, which limits the attainable accuracy (although it is sufficient for continuity with GNSS outdoors), and poorer performance in dynamic modes. The goal of the next step is to propose a synthesised approach that could overcome these last limitations.

## **5. The repealite concept: Mixing the advantages of both pseudolites and repeaters**

The cycling approach implemented until now has a great disadvantage: carrier phase measurements are almost impossible. In order to improve the indoor accuracy, a new approach is proposed, based on so-called "repealites"<sup>26</sup>, which tries to combine the advantages of both repeaters and pseudolites (i.e. carrier phase measurements and the same signal transmitted through all the transmitters). First theoretical works have shown a potential accuracy of less than one metre, obtained by implementing classical code-measurement smoothing techniques using carrier phase measurements. The remaining problem is that the repealites now transmit simultaneously, which leads to the near-far effect; work has therefore also been carried out on this effect.

#### **5.1 Introduction to the idea**

The idea is rather simple in principle: synchronisation is easily achieved when the same single signal is transmitted by all the repealites, and simultaneous transmissions allow carrier phase measurements to be implemented (Vervisch-Picois et al. 2010). Multipath remains a problem, but the SMICL, developed in the context of the repeater system, appears to be quite an efficient answer. The pseudolite double-differencing approach is probably a little too complex for mass-market devices (this could be discussed), so the goal is simply to smooth the code phase measurements with carrier phase measurements, in the classical way of many current GNSS receivers.

The only remaining difficulty is the near-far effect, and a solution to this problem is proposed below. Note that once both multipath and near-far effects have found a solution, one could consider that the pseudolite system is well suited, since two major problems are solved. As a matter of fact, this is quite true except for synchronisation purposes. Thus, the repealite approach seems to be an acceptable compromise.

#### **5.2 The proposed system architecture**

The proposed method derives from the transmitting approach of the repeater system but, instead of the sequential mode, the transmission on each antenna is delayed in such a way that the signals transmitted by the repealites do not interfere once they arrive at the receiver

<sup>26</sup> Repealite is a contraction of Repeater and Pseudolite.


antenna. A new problem then arises: a high level of interference can occur because of the simultaneous broadcasting of the delayed signals, and this can disrupt reception. If one observes the ACF at the receiver end (see figure 22), there is no longer one maximum peak per code length, but N peaks if N is the number of transmitting repealites (assuming that all the transmissions fall within one code length; note that this is necessary for the system).

Fig. 22. The resulting auto-correlation function at the receiver (repealites delayed by 20 chips)

The system shown in figure 23 uses a signal generator that ensures the synchronisation. A single signal is sufficient, as in the case of repeaters.

Fig. 23. The repealite system

With 4 delayed channels, the terminal is able to carry out 4 indoor pseudo-range measurements. These measurements lead to the following system of equations (the notations of figure 23 are used):


$$\begin{cases} \text{PR}\_1 = d\_1 + \Delta\_{\text{cable}} \\ \text{PR}\_2 = d\_2 + \Delta\_{\text{cable}} + \Delta\_{12} \\ \text{PR}\_3 = d\_3 + \Delta\_{\text{cable}} + \Delta\_{12} + \Delta\_{23} \\ \text{PR}\_4 = d\_4 + \Delta\_{\text{cable}} + \Delta\_{12} + \Delta\_{23} + \Delta\_{34} \end{cases} \tag{6}$$

where the PRk are the indoor pseudo-ranges measured by the receiver, Δcable is the common part of the delay in the cable between the generator and the first repealite (including the clock bias between the generator clock and the receiver clock), the Δuw are the delays between repealites Ru and Rw, and the dk are the indoor geometric distances between repealite Rk and the indoor receiver.

The locations of the transmitters have to be known<sup>27</sup>, as usual, and the indoor position is computed in a local reference frame with classical GNSS algorithms. Note that the velocity can also be calculated in the local frame, just as with GNSS outdoors, since the contribution of the generator clock drift to the Doppler is common to the 4 repealites, and the only remaining contribution to the Doppler is the relative velocity between the antenna of the indoor receiver and the antenna of repealite Rk.
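To make this concrete, here is a minimal sketch of how system (6) can be solved with a classical Gauss-Newton iteration, treating Δcable as a clock-bias-like fourth unknown. The repealite layout (a 20 m square with ±3 m heights, loosely inspired by figure 25), the delay values and all names are illustrative assumptions, not the authors' software:

```python
# Gauss-Newton solution of system (6): unknowns are the receiver
# position (x, y, z) and the common bias b = Delta_cable (metres).
# Repealite coordinates and inter-repealite delays are illustrative only.
import numpy as np

REPEALITES = np.array([[0.0, 0.0, 3.0],
                       [20.0, 0.0, -3.0],
                       [20.0, 20.0, 3.0],
                       [0.0, 20.0, -3.0]])
# Cumulative known delays (metres): 0, D12, D12+D23, D12+D23+D34
CUM_DELAYS = np.array([0.0, 5.0, 9.0, 14.0])

def solve_position(pr, x0=np.array([10.0, 10.0, 0.0, 0.0]), iters=30):
    """Estimate (x, y, z, Delta_cable) from the 4 indoor pseudo-ranges."""
    x = x0.astype(float)
    for _ in range(iters):
        rng = np.linalg.norm(REPEALITES - x[:3], axis=1)   # geometric d_k
        res = pr - (rng + x[3] + CUM_DELAYS)               # measured - modelled
        # Jacobian: d(PR_k)/d(x,y,z) = -(s_k - p)/d_k, d(PR_k)/db = 1
        J = np.hstack([-(REPEALITES - x[:3]) / rng[:, None],
                       np.ones((4, 1))])
        x += np.linalg.lstsq(J, res, rcond=None)[0]        # linearised update
    return x

# Synthetic check: build pseudo-ranges from a known position and bias
truth = np.array([7.0, 5.0, 0.0, 12.3])
pr = np.linalg.norm(REPEALITES - truth[:3], axis=1) + truth[3] + CUM_DELAYS
est = solve_position(pr)
```

With noiseless synthetic measurements the iteration recovers the position and the common bias; the alternating ±3 m heights are what give the geometry some vertical observability, echoing the VDOP remark made about figure 25.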

#### **5.3 The main advantages**

The fact that the repealites transmit continuously allows the receiver to follow the carrier phase of the signal, a source of potential improvements in positioning accuracy. This feature could lead to an operating mode similar to carrier-phase pseudolites, but that is not the main objective here. Another interesting improvement compared to repeaters is the ability to carry out dynamic positioning with no restriction, since instantaneous measurements and calculations are carried out. Dynamic positioning is also bound to be of better quality, since the receiver movement has a direct impact on the multipath distribution, leading to a more efficient averaging of multipath effects.

The continuity with outdoor GNSS is even simpler than with a repeater, where a switch between the outdoor mode and the indoor mode, with its cycling scheme, was required. With repealites, this switch only concerns the PRN number used, which should be characteristic of indoors; the same applies to pseudolites.

The last main advantage relates to synchronisation. Using a single signal is an advantage in comparison to pseudolites, but it does not completely remove the synchronisation problem, since the transmitters still have to be synchronised. This is currently achieved through wired connections, either coaxial cables or optical fibres<sup>28</sup>. The synchronisation of the system is obtained once several measurements have been carried out at known locations.

#### **5.4 The remaining limitations and the ways they are dealt with**

The two most important remaining limitations are the multipath and the near-far effects. Multipath effects are dealt with through the use of the SMICL. Note that good pseudo-range measurements are a must if one wants the smoothing of the code by the carrier to be efficient: thanks to the SMICL, this is possible.

<sup>27</sup> Some works are under consideration in order to propose methods for auto-positioning the transmitters.

<sup>28</sup> Optical fibres are also considered for the physical realization of the time delays between repealites.

We have seen that the ACFs of the various codes used in GNSS present secondary peaks, which are the origin of the near-far problem. In the case of the repealite-based system, this problem is amplified, since the same signal is repeated N times (for N repealites transmitting simultaneously). The interferences are therefore of utmost importance, in particular when defining the delays between repealites (superposing a repealite signal on a secondary peak of the preceding repealite would be a particularly bad idea). Thus, a proper choice of the delays has to be made in coordination with the code used and the size of the indoor environment (because the signals should not interfere at the receiver).

A few approaches have been proposed to reduce the near-far effect in repealite systems, depending on the codes used. For the GPS codes, it appears that appropriate delays are obtained where the ACF is close to zero (see figure 24): such "locations" are numerous but depend on the chosen code (they are not identical for all codes). To reduce the near-far effect further, a double-transmission technique is proposed: it consists in modifying the shape of the transmitted signal so as to allow the receiver to compute differences that remove the most powerful signal, which is the cause of the near-far effect. The transmitted signal is composed of the initial code to which the same signal, delayed by half a chip, is added in opposite phase. Improvements of up to 30 dB in comparison with solutions where no near-far mitigation technique is implemented have been reported. Note that this still means 20 dB of improvement in the power that can be managed in comparison with a pulsed pseudolite system.
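To illustrate the first idea, the sketch below (our own illustration, not the authors' tool) generates GPS C/A code PRN 1 and lists the code-phase offsets where the circular ACF is near zero. For these 1023-chip Gold codes the off-peak unnormalized ACF only takes the values −65, −1 and 63, so "close to zero" means the (numerous) offsets where it equals −1:

```python
# Locate candidate repealite delays where the C/A-code ACF is near zero.
# Generates GPS C/A PRN 1 (G2 taps 2 and 6) and computes the circular ACF;
# offsets where the unnormalized ACF equals -1 are the "near-zero"
# locations suggested by figure 24.
def ca_code(tap1, tap2):
    g1 = [1] * 10                                    # G1 register, all ones
    g2 = [1] * 10                                    # G2 register, all ones
    chips = []
    for _ in range(1023):
        out = g1[9] ^ g2[tap1 - 1] ^ g2[tap2 - 1]    # G1 output xor G2 taps
        chips.append(1 - 2 * out)                    # map {0,1} -> {+1,-1}
        f1 = g1[2] ^ g1[9]                           # G1: 1 + x^3 + x^10
        f2 = g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]  # G2 feedback
        g1 = [f1] + g1[:9]
        g2 = [f2] + g2[:9]
    return chips

code = ca_code(2, 6)                                 # PRN 1 uses taps 2 and 6
acf = [sum(code[i] * code[(i + k) % 1023] for i in range(1023))
       for k in range(1023)]
near_zero = [k for k in range(1, 1023) if acf[k] == -1]
# acf[0] == 1023 (main peak); roughly three quarters of the remaining
# offsets sit at -1 and are candidate delay "locations".
```

The actual delay choice would also have to account for the size of the indoor environment, as noted above, since the signals must not interfere at the receiver.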

Fig. 24. Optimal determination of the delays between repealites

The drawback of this approach is that it requires a specific signal to be sent, and in turn a modification of the receiver software, which has to be aware of this specific mode. Nevertheless, the efficiency theoretically demonstrated may be worth the implementation.

Indoor Positioning with GNSS-Like Local Signal Transmitters 333

the corners of a square that includes the complete trajectory. Their exact locations are given in the figure (note that the altitude of the repealites are also given and allow for quite a nice

The speed of the receiver is set at 1m/s and the multipath delay appeared to vary between 0.07 and 0.17 chips, equivalent to between 20 and 50 metres roughly. Note that since the SMICL is more sensitive to noise than the SDLL (or the NC), the simulations were carried out using 50dB-Hz for the C/N0 value. This is rather a high value for outdoors, but not impossible indoors since one decides the indoor power transmitted (except that regulations

The results are given in figure 26 for 2D and 3D positioning. These simulations show a few decimeters accuracy range for the whole trajectory, and results a little bit better for 2D than 3D. Some skips can be seen in figure 26 which are the ambiguity skips: thus, these skips are typically a multiple of nineteen centimeters. This allows us to evaluate the efficiency of the estimation of this ambiguity. Note that it is calculated every second and is based on the SMICL assisted measurement of the code phase. This once again confirms the very good

<sup>0</sup> 0 5 10 15 20 25 30

The problem of using the same frequency band as the outdoor GNSS is that interference may occur. Of course, when a single system is deployed, these interferences should be very limited and only disturb locally the outdoor receivers. Nevertheless, if no regulations exist, there is a potential danger for GNSS. Thus, some countries have worked towards the

Error on 2D Position Error on 3D Position

Time (s)

indoor VDOP). The receiver is considered to be at an altitude of zero meters.

are limiting the maximum allowed).

performance of the SMICL approach.

0.7

0.6

0.5

0.4

0.3

0.2

0.1

**6. Regulatory issues for L1/E1** 

Positioning error (m)

Mean Error 2D = 0.15 Mean Error 3D = 0.21

Fig. 26. Positioning accuracy obtained with a repealite system

development of constraints on the power allowed to be transmitted.

Another interesting proposition concerns the potential use of maximal sequences that have the advantage of providing us with a unique value of auto-correlation outside the main peak (Vervisch-Picois and Samama 2009). Thus, it is possible to carry out differences without the need for a half chip delay for the additional signal. The implementation is then quite easy and can be applied to an almost unlimited number of repealites.

Another interesting proposition concerns the potential use of maximal-length sequences, which have the advantage of providing a unique auto-correlation value outside the main peak (Vervisch-Picois and Samama 2009). It is thus possible to carry out differences without the need for a half-chip delay for the additional signal. The implementation is then quite easy and can be applied to an almost unlimited number of repealites.

In these cases, interference with outdoor receivers is an interesting and challenging topic, since regulations are appearing in order to "preserve" the GNSS bands. The use of codes similar to those used outdoors is a real concern, which could find an elegant solution through originally designed sequences. Of course, a frequency-shifted approach would definitively solve the interference problem with outdoors, but would require new frequency resources in the case of the modern Code Division Multiple Access (CDMA) GNSS systems.

### **5.5 A few preliminary estimated performances**

Smoothing the code with the carrier phase is a classical operation in GNSS: the low-noise carrier phase measurements are used to smooth the pseudorange measurements. It is very efficient at reducing thermal noise, but not really at reducing multipath. The coupling of the SMICL with this smoothing technique is therefore a very nice combination, and the Kalman filter implemented is then nearly optimal. Note that indoors, the main error source is multipath, since no atmospheric contributions or transmitter clock bias errors (in this repealite-based configuration) are present. In the present case, the filter uses the carrier phase measurement in order to carry out its estimation of the future state.

Simulations have been carried out considering the circular displacement of a pedestrian in a place where a single severe multipath is present, sometimes of even greater amplitude than the direct path from a transmitter. This is achieved through a perfect reflector located in the close vicinity of the trajectory. As can be seen in figure 25, the repealites are located at the corners of a square that includes the complete trajectory. Their exact locations are given in the figure (note that the altitudes of the repealites are also given and allow for quite a nice indoor VDOP). The receiver is considered to be at an altitude of zero metres.

Fig. 25. Considered trajectory and repealite distribution (repealites R1 to R4 at altitudes h1 = +3 m, h2 = −3 m, h3 = +3 m, h4 = −3 m; 20 m receiver trajectory)
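The code smoothing with carrier phase described in section 5.5 is essentially the classical Hatch filter. The sketch below illustrates the idea on synthetic data; the window length, noise levels and trajectory are illustrative assumptions, not values from the simulations above:

```python
import random

def hatch_filter(code_ranges, carrier_phases, N=100):
    """Carrier-smoothed pseudoranges: the noisy code measurement is averaged,
    while the low-noise carrier phase propagates the range change in time."""
    smoothed = [code_ranges[0]]
    for k in range(1, len(code_ranges)):
        n = min(k + 1, N)  # window grows until it reaches N samples
        delta = carrier_phases[k] - carrier_phases[k - 1]  # range change from carrier
        smoothed.append(code_ranges[k] / n + (n - 1) / n * (smoothed[-1] + delta))
    return smoothed

# Synthetic example: true range ramps at 1 m/s; code noise 3 m, carrier noise 3 mm.
random.seed(1)
truth = [20.0 + 1.0 * t for t in range(600)]
code = [r + random.gauss(0.0, 3.0) for r in truth]
phase = [r + random.gauss(0.0, 0.003) for r in truth]

smooth = hatch_filter(code, phase)
raw_err = max(abs(c - r) for c, r in zip(code[300:], truth[300:]))
sm_err = max(abs(s - r) for s, r in zip(smooth[300:], truth[300:]))
print(f"max raw error {raw_err:.2f} m, max smoothed error {sm_err:.2f} m")
```

The thermal noise is reduced by roughly the square root of the window length, which is why the residual error budget is dominated by multipath, the error the SMICL addresses.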

The speed of the receiver is set at 1 m/s, and the multipath delay varies between 0.07 and 0.17 chips, equivalent to roughly 20 to 50 metres. Note that since the SMICL is more sensitive to noise than the SDLL (or the NC), the simulations were carried out with a C/N0 of 50dB-Hz. This is a rather high value outdoors, but not impossible indoors, since one decides the transmitted indoor power (within the maximum allowed by regulations).

The results are given in figure 26 for 2D and 3D positioning. These simulations show an accuracy of a few decimetres over the whole trajectory, with slightly better results in 2D than in 3D. Some skips can be seen in figure 26: these are ambiguity skips and are typically a multiple of nineteen centimetres. This allows us to evaluate the efficiency of the ambiguity estimation, which is computed every second on the basis of the SMICL-assisted code phase measurement. This once again confirms the very good performance of the SMICL approach.

Fig. 26. Positioning accuracy obtained with a repealite system
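The nineteen-centimetre spacing of the ambiguity skips is simply the L1/E1 carrier wavelength, as a quick check confirms:

```python
c = 299_792_458.0      # speed of light in vacuum, m/s
f_l1 = 1_575.42e6      # GPS L1 / Galileo E1 carrier frequency, Hz
wavelength = c / f_l1  # metres per carrier cycle
print(f"L1/E1 wavelength: {wavelength:.4f} m")  # ~0.1903 m, about nineteen centimetres
```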

## **6. Regulatory issues for L1/E1**

The problem of using the same frequency band as outdoor GNSS is that interference may occur. Of course, when a single system is deployed, this interference should be very limited and should only disturb outdoor receivers locally. Nevertheless, if no regulations exist, there is a potential danger for GNSS. Thus, some countries have worked towards developing constraints on the power allowed to be transmitted.

## **6.1 General introduction**

The problem is due to the inter-correlation functions (ICF) of the various code sequences that are used. As a matter of fact, these ICFs have small peaks, comparable to the secondary peaks of the ACF. If the number of ground-based transmitters is too high, or if the total power is too high, then the addition of these secondary peaks is likely to generate interference at a level that is unacceptable for outdoor receivers.

Two different cases have been considered by the regulatory authorities: repeaters and pseudolites. The repeater case corresponds to a transmitter which uses the available outdoor signals and, after amplification, retransmits them indoors. The ICFs between indoor and outdoor signals can then indeed be considered as ACFs, leading to potentially higher interference. Thus, the maximal acceptable power associated with repeaters is lower than for pseudolites<sup>29</sup>.

## **6.2 The case of the repeaters**

In the United States it is not legal to sell GPS repeaters, and only the Federal government or agencies operating under its direction, parties that have received either a Special Temporary Authority (STA) or an Experimental License, or parties operating in an anechoic chamber are authorised to use such devices.

In Europe, things are a little different and regulations are based on Electronic Communications Committee (ECC) report 145 (ECC report 145), dated May 2010. Studies were carried out on the basis of interference evaluations in the various GNSS-associated frequency bands. Let us concentrate on the L1 band (1559 to 1610 MHz). The global conclusions are as follows:

• The radiated power<sup>30</sup> should not exceed -77dBm.

• The maximum gain of the repeater, from outdoor antenna to indoor antenna, should be limited to 45dB.

• The maximum power re-radiated in signals that are not GNSS signals should be less than -20dBm.

• The repeater should include filtering.

Some experimental results presented in previous sections were carried out with -80dBm and have shown acceptable performance within a typical range of 20 metres.

In addition to the above technical recommendations, report 145 states that any authorisation should include guidance instructions in order to help the applicant in the deployment phase of the repeaters. Particular attention is also recommended for installations close to airports or military sites.

Finally, the report proposes that any use of repeaters should be subject to individual authorisation and that no mobile use should be permitted.

<sup>29</sup> Please note that the various indoor positioning systems proposed in this chapter have to be considered as "pseudolite based" for regulation purposes, although the so-called "repeater based" approach could also be implemented using repeaters (in the sense of the regulations), and would then fall under the corresponding regulation.

<sup>30</sup> The so-called EIRP (Equivalent Isotropically Radiated Power).

## **6.3 The case of indoor pseudolites**

GPS anticipated the need for terrestrial signal generators when reserving specific codes, PRN 33 through 37, for ground transmitters. Galileo also included the possibility of using such transmitters. Note that since these codes are different from the satellites' ones, the limitations are a little relaxed in comparison to repeaters.

The following lines are based on report 168 of the ECC (ECC report 168), dated May 2011, and relate to indoor pseudolites. As in the case of repeaters, computations were carried out on the basis of interference evaluations in the various GNSS-associated frequency bands. For the L1 band, the main conclusions are as follows:

• The radiated power should not exceed -50dBm.

• The antenna of the pseudolite should point at the ground and be directed towards the inside of the building.

• The radiated power for elevation angles above 0 degrees should be reduced by more than 6dB.

• The radiated power should be reduced to -59dBm in airport areas, and specific mitigation techniques should be implemented when aircraft are in their parking stands.

Note that this power level is rather high in comparison to repeaters, and largely sufficient for all the techniques described in this chapter to be implemented in real conditions with good performance. As a matter of fact, the estimated range with -60dBm is around one hundred metres in real environments, i.e. including walls and multiple floor levels (ceilings). The remaining 10dB margin could be used to provide the receiver with the high SNR required, for instance, by the SMICL. On the other hand, the interesting feature that consists in positioning a pseudolite on the ground pointing at the top of the building (in order to substantially improve the VDOP) will have to be implemented with a maximal power reduced by 6dB.

In addition, report 168 states the same as for repeaters concerning individual authorisations, the insertion of guidance instructions to help the applicant in the deployment, and the interdiction of mobile pseudolites. It is also proposed that some authorities (military, government and meteorological services) be allowed to apply for specific site limitations.

Moreover, the report mentions that longer codes could improve both the compatibility with non-participative receivers and the performance of participative ones. Note that research work is on-going in this direction.
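The order of magnitude of the inter-correlation peaks discussed in section 6.1 can be checked numerically. The sketch below generates two GPS C/A (Gold) codes from their shift registers and evaluates their circular cross-correlation; for the 1023-chip C/A family the cross-correlation is three-valued and bounded by 65/1023, i.e. roughly 24dB below the autocorrelation peak:

```python
def ca_code(prn):
    """GPS C/A code for PRN 1 or 2 (G2 phase-selector taps per IS-GPS-200)."""
    taps = {1: (2, 6), 2: (3, 7)}[prn]
    g1 = [1] * 10
    g2 = [1] * 10
    chips = []
    for _ in range(1023):
        chips.append(g1[9] ^ g2[taps[0] - 1] ^ g2[taps[1] - 1])
        g1 = [g1[2] ^ g1[9]] + g1[:9]                               # 1 + x^3 + x^10
        g2 = [g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]] + g2[:9]
    return [1 - 2 * c for c in chips]                               # {0,1} -> {+1,-1}

def circular_xcorr(a, b):
    """Circular correlation of two equal-length +/-1 sequences, all lags."""
    n = len(a)
    b2 = b + b
    return [sum(x * y for x, y in zip(a, b2[k:k + n])) for k in range(n)]

prn1, prn2 = ca_code(1), ca_code(2)
auto = circular_xcorr(prn1, prn1)
cross = circular_xcorr(prn1, prn2)
print("autocorrelation peak:", max(auto))                 # 1023
print("cross-correlation values:", sorted(set(cross)))    # within {-65, -1, 63}
```

Longer codes push this bound further down, which is why report 168 mentions them as a way to improve compatibility with non-participative receivers.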

## **7. Synthesis and future trends**

GNSS-like signal indoor positioning systems, whether based on pseudolites, repeaters or repealites, are a real alternative for providing users with a continuous service, at the cost of deploying a local infrastructure. This is now possible thanks in particular to multipath and near-far effect mitigation techniques. The attainable performance is in the metre range, obtained through rather good quality measurements and elementary computation algorithms. In comparison, solutions such as WiFi-based ones rely on low-quality measurements (typically power levels) and complex computation algorithms.

A classical way to cope with continuity of service is to use GNSS outdoors and another solution indoors, say WiFi, UWB or inertial systems. These types of approaches are called hybridisation. Another approach, currently being investigated, is to find a combination of techniques that complement each other depending on the type of environment, rather than relying on a dichotomy between indoors and outdoors. Indeed, a specificity of positioning is that environments, indoors as well as outdoors, are much more complex than imagined.

An example of this approach could be the coupling of repealites with an inertial system, deployed in a very large building such as a warehouse or office block. In such a way, the three techniques are used in turn where appropriate, and this does not simply mean indoors or outdoors. Outdoors, where the sky is clear, GNSS is used, but as soon as obstacles are present, in urban canyons for example, a coupling with the inertial system is carried out. In places where too few satellites are available, one or two additional repealites could be used. Indoors, the same applies: a repealite system is deployed in the rather large areas where one-metre accuracy is enough for direction determination and where the propagation environment makes good SNRs easy to obtain. When a user leaves these "great halls" and enters offices or corridors, the inertial system is once again activated. Such a system is efficient in all possible environments.

### **8. References**

Bartone C, Van Graas F., (2000), Ranging airport pseudolite for local area augmentation. *IEEE Trans Aerosp Electron Syst* 36(1), pp 278–286.

Caratori J., François M., Samama N., (2002), "Universal Positioning Theory Based on Global Positioning System – Upgrade", *InLoc2002*, Bonn, Germany.

Duffett-Smith P, Rowe R., (2006), Comparative A-GPS and 3G-MATRIX testing in a dense urban environment. *ION GNSS 2006*, Fort Worth (TX).

ECC report 145, (2010), Regulatory framework for GNSS repeaters, St. Petersburg.

ECC report 168, (2011), Regulatory framework for indoor GNSS pseudolites, Miesbach.

Fluerasu A., Jardak N., Vervisch-Picois A., Samama N., (2009), "GNSS Repeater Based Approach for Indoor Positioning: Current Status", *ENC-GNSS 2009*, Naples, Italy.

Fluerasu A., Samama N., (2009), "GNSS transmitter based indoor positioning systems – Deployment rules in real buildings", *13th IAIN World Congress*, Stockholm, Sweden.

Fontana RJ., (2004), Recent system applications of short-pulse ultra-wideband (UWB) technology. *IEEE Trans Microwave Theory Tech*, pp 2087–2104.

Glennon E. P., Bryant R. C., Dempster A. G., Mumford P. J., (2007), "Post Correlation CWI and Cross Correlation Mitigation Using Delayed PIC", *ION GNSS*, Fort Worth, USA.

Im S-H, Jee G-I, Cho YB., (2006), An indoor positioning system using time-delayed GPS repeater. *ION GNSS 2006*, Fort Worth (TX).

Jardak N., Samama N., (2010), "Short Multipath Insensitive Code Loop Discriminator", *IEEE Trans. on Aerospace and Electronic Systems*, Vol. 46, pp 278–295.

Jee GI, Choi JH, Bu SC., (2004), Indoor positioning using TDOA measurements from switched GPS repeater. *ION GNSS 2004*, Long Beach (CA).

Kanli M.O., (2004), "Limitations of Pseudolite Systems using off-the-shelf GPS receivers", *The International Symposium on GNSS/GPS*, Sydney, Australia.

Kaplan ED, Hegarty C., (2006), Understanding GPS: principles and applications. 2nd ed. *Artech House*, Norwood, MA, USA.

Kee C, Yun D, Jun H, Parkinson B, Pullen S, Lagenstein T., (2001), Centimeter-accuracy indoor navigation using GPS-like pseudolites. *GPS World*.

Kee C, Jun H, Yun D, (2003), "Indoor Navigation System using Asynchronous Pseudolites", *Journal of Navigation*, 56, pp 443–455.

Klein D. and Parkinson B. W., (1986), "The Use of Pseudolites for Improving GPS Performance", *Global Positioning System*, volume 3. Institute of Navigation, Washington, DC.

Kupper A., (2005), Location based services – fundamentals and operation. *John Wiley and Sons*.

Madhani P.H, Axelrad P., Krumvieda K., Thomas J., (2003), "Application of Successive Interference Cancellation to the GPS Pseudolite Near-Far Problem", *IEEE Transactions on Aerospace and Electronic Systems*, vol. 39, no 2, pp 481–487.

Martone M, Metzler J., (2005), Prime time positioning: using broadcast TV signals to fill GPS acquisition gaps. *GPS World 2005*, pp 52–59.

Parkinson BW, Spilker Jr. JJ., (1996), Global positioning system: theory and applications. American Institute of Aeronautics and Astronautics.

Rizos C., Barnes J., Wang J., Small D., Voigt G. and Gambale N., (2003), "LocataNet: Intelligent Time-Synchronised Pseudolite Transceivers for cm-Level Stand-Alone Positioning", *11th IAIN World Congress*, Berlin, Germany.

Samama N., Vervisch-Picois A., (2005), "3D Indoor Velocity Vector Determination Using GNSS Based Repeaters", *ION GNSS 2005*, Long Beach, USA.

Samama N, (2008), "Global Positioning – Technologies and Performance", *Wiley InterScience*, Hoboken, USA.

Takada Y, Kishimoto M, Kawamura N, Komoda N, Yamazaki T, Oiso H, Masanari T., (2003), An information service system using Bluetooth in an exhibition hall. *Annales des Télécommunications* 2003, 3/4, pp 507–530.

Vervisch-Picois A, Samama N, (2006), "Analysis of 3D Repeater Based Indoor Positioning System – Specific Case of Indoor DOP", *ENC-GNSS 2006*, Manchester, UK.

Vervisch-Picois A., Samama N., (2009), "Interference Mitigation In A Repeater And Pseudolite Indoor Positioning System", *IEEE Journal of Selected Topics in Signal Processing*, Vol. 3, No. 5, pp 810–820.

Vervisch-Picois A., Selmi I., Gottesman Y., Samama N., (2010), "Current Status of the Repealite Based Approach – A Sub-Meter Indoor Positioning System", *IEEE NAVITEC 2010*, Noordwijk, The Netherlands.

Wang Y, Jia X, Rizos C., (2004), Two new algorithms for indoor Wireless Positioning System (WPS). *ION GNSS 17th International Technical Meeting of the Satellite Division*, Long Beach (CA).


Yang C., Morton J., (2009), "Adaptive Replica Code Synthesis for Interference Suppression in GNSS Receivers", *ION ITM*, Anaheim, USA.

**14**

Masahiko Nagai

*Asian Institute of Technology*

*Thailand*

## **1. Introduction**

Utilization of a mobile platform is important for effectively acquiring spatial data over a wide area (Zhao & Shibasaki, 2000). Although mobile mapping technology was developed in the late 1980s, the more recent availability of Global Positioning Systems (GPSs) and inertial measurement units (IMUs), the latter being a combination of accelerometers and gyroscopes, has made mobile mapping systems possible, particularly for aerial surveys and ground vehicle surveys (Manandhar & Shibasaki, 2002). Remote sensors—such as image sensors or laser scanners—are instruments that gather information about an object or area from a distance. Using these sensors for surveying and collecting information from mobile platforms has become a valuable means of disaster mapping, environmental monitoring, and urban mapping, amongst others.

Trajectory tracking of a mobile platform is part of directing the movement of the platform from one place on Earth to another. Although GPS gives excellent trajectory tracking performance, it is not adequate for mobile mapping on its own, owing to its lack of attitude information and its low data acquisition frequency. An IMU, on the other hand, is a closed system that detects attitude and position at high frequency.

An IMU exhibits position errors, called drift errors, that tend to increase with time in an unrestrained manner. This degradation is due to errors in the initialization of the IMU and to inertial sensor imperfections such as accelerometer bias and gyroscope drift. To mitigate this growth and bound the errors, the inertial system is updated periodically against external reference sources. The combination of GPS and IMU has become increasingly common, as the characteristics of these two mobile positioning technologies complement each other: an IMU provides continuous positioning but drifts, whereas GPS measurements do not drift but are not continuously available. GPS, as external data, is used not only for position updates but also for error correction of inertial components such as attitude, heading, velocity, gyro bias, and accelerometer bias. However, the integration of IMU and GPS is restricted by the cost of high-quality inertial components.
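The complementary character of the two sensors can be sketched with a toy one-dimensional Kalman filter: a biased accelerometer is integrated at high rate, and occasional GPS position fixes bound the resulting drift. All numbers below are illustrative assumptions, not values from the system described in this chapter:

```python
import random

random.seed(0)
dt = 0.01                     # IMU sample interval (100 Hz)
gps_every = 100               # one GPS fix per second
accel_bias = 0.2              # m/s^2 sensor bias, unknown to the filter
q, r = 0.5, 4.0               # process noise strength, GPS variance (2 m sigma)

pos = vel = 0.0               # true trajectory (constant 0.1 m/s^2 acceleration)
x0 = x1 = 0.0                 # filter state: position, velocity
P = [[1.0, 0.0], [0.0, 1.0]]
dr_pos = dr_vel = 0.0         # IMU-only dead reckoning, for comparison

for k in range(1, 30001):     # five minutes of motion
    vel += 0.1 * dt
    pos += vel * dt
    acc_meas = 0.1 + accel_bias + random.gauss(0.0, 0.05)

    # IMU-only integration drifts quadratically because of the bias
    dr_vel += acc_meas * dt
    dr_pos += dr_vel * dt

    # Kalman prediction driven by the IMU measurement
    x0, x1 = x0 + x1 * dt, x1 + acc_meas * dt
    p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q * dt
    p01 = P[0][1] + dt * P[1][1]
    p10 = P[1][0] + dt * P[1][1]
    p11 = P[1][1] + q * dt
    P = [[p00, p01], [p10, p11]]

    # GPS position update bounds the drift
    if k % gps_every == 0:
        z = pos + random.gauss(0.0, 2.0)
        s = P[0][0] + r
        k0, k1 = P[0][0] / s, P[1][0] / s
        innov = z - x0
        x0 += k0 * innov
        x1 += k1 * innov
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]

print(f"dead-reckoning error: {abs(dr_pos - pos):8.1f} m")
print(f"GPS/IMU filter error: {abs(x0 - pos):8.2f} m")
```

Real integrations estimate the accelerometer and gyro biases as additional filter states, which is precisely the error correction role of GPS mentioned above.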

To obtain both the wide area coverage of remote sensors and the high levels of detail and accuracy of ground surveying at low costs, a mobile mapping system has been developed in this research. All the measurement tools are mounted on a mobile platform to acquire detailed information. This mobile platform integrates and combines equipment such as digital cameras, a small and cheap laser scanner, an inexpensive IMU, GPS, and VMS (Velocity Measurement System). These sensors are integrated by a high-precision

3,072×2,048 pixels Focal length: 24.0mm Price: \$1,500US Weight: 500g

2,048×1,536 pixels

Focal length: 10.0mm Price: \$6,000US Weight: 500g

Price: \$4,000US Weight: 4,000g

Fiber optic gyro Accuracy Angle: ±0.1°

Price: \$4,000US Weight: 150g

Initially, calibration of digital images is necessary due to the estimation of interior orientation parameters. Interior orientation is conducted to decide interior orientation parameters, principal point (x0, y0), focus length (f), and distortion coefficient (K1). Control points for camera calibration are taken as stereo images several times. Camera calibration is performed by the bundle adjustment using target control points. In order to estimate appropriate lens distortion for a digital camera, lens distortion mode is shown in Eq. (1) and Eq. (2). These equations consider only radial symmetric distortion (Kunii & Chikatsu, 2001). Image coordinate of (x, y) is corrected and transferred to new image coordinates of (xu, yu). Interior orientation parameters that are computed in this calibration for Canon EOS 10D are

Angular resolution: 0.25º Max. distance: 80m Accuracy (20m) : 10mm

Angle velocity: ±0.05°/s Acceleration: ±0.002G Price: \$20,000US Weight: 1,000g

Accuracy differential: 30cm Velocity accuracy: 0.1(95%)

Range: -120~+250 km/h Resolution: 10 mm/P Price: \$13,000US Weight: 1.7kg

and TM4.

Green, red, and NIR sensitivity with bands approximately equal to TM2, TM3,

Sensors Model Specifications

Canon EOS 10D

ADC3

SICK LMS-291

Seiki Co., Ltd. TA7544

IMU Tamagawa

GPS Ashtech

VMS Ono Sokki

Table 1. List of sensors on mobile platform

**2.2.1 Calibration of digital camera** 

shown in Table 2.

G12

Co., Ltd. LC-3110

IR Camera Tetracam

Digital Camera

Laser Scanner

positioning system designed for moving environments and they carry out a key role in hybrid positioning.

In this paper direct geo-referencing is achieved automatically from a mobile platform with hybrid positioning by multi-sensor integration. Here, direct geo-referencing means georeferencing that does not require that the ground control points accurately measure ground coordinate values. Data are acquired and digital surfaces are modeled using equipment which is mounted on a mobile platform. This allows objects to be automatically rendered in rich shapes and detailed textures.

## **2. System design for hybrid positioning and sensor integration**

The key attributes of the design of the system are low cost, ease of use, and mobility (Parra & Angel, 2005). Firstly, it utilizes a small laser scanner, commercially available digital cameras, and a relatively inexpensive IMU such as FOG (Fiber Optic Gyro), not a highperformance and expensive IMU like Ring Laser Gyro. The IMU and other measurement tools used are much cheaper than those in existing aerial measurement systems, such as Applanix's POS and Leica's ADS40 (Cramera, 2006). Moreover, these low-cost instruments are easily available on the market. Recent technological advances have also led to low-cost sensors such as micro electro mechanical system (MEMS) gyros. For example, it is considered that MEMS gyros will supplant FOG in the near future and that the price will be approximately one-tenth of that of FOG. For this reason, FOG was selected for this paper in an attempt to improve a low-cost system for the future. Secondly, "mobility" here means the item is lightweight and simple to modify. Such sensors allow the system to be borne by a variety of platforms: UAV (Unmanned Aerial Vehicle), ground vehicles, humans, and others. These sensors are generally low-performance, but they are light and low-cost while still meeting the specifications. These handy sensors are improved by integrating their data.

#### **2.1 Sensors**

In this paper a laser scanner, digital cameras, an IMU, a GPS, and a VMS are used to find the precise trajectory of the sensors and to construct a digital surface model as a mobile mapping system. To automatically construct such a model, it is necessary to develop a high-frequency positioning system to determine the movement of the sensors in detail. The integration of GPS and IMU data is effective for high-accuracy positioning of a mobile platform. A 3D shape is acquired by the laser scanner as point cloud data and texture information is acquired by the digital cameras, all from the same platform simultaneously. The sensors used in this paper are listed in Table 1.

#### **2.2 Sensors' calibration**

Calibration of sensors is necessary for two reasons. One is to estimate the interior orientation parameters, such as lens distortion and focal length, which are mechanical parameters. The other is to estimate the exterior orientation parameters, such as a transformation matrix that relates the relative position and attitude among sensors. All the sensors are tightly mounted on a platform, and they have constant calibration parameters during the measurement. The purpose of calibration is chiefly to integrate all the sensors and positioning devices into a single common coordinate system, so that captured data can be integrated and expressed in terms of a common world coordinate system.



Table 1. List of sensors on mobile platform

## **2.2.1 Calibration of digital camera**

Initially, calibration of the digital images is necessary to estimate the interior orientation parameters. Interior orientation is conducted to decide the interior orientation parameters: principal point (x0, y0), focal length (f), and distortion coefficient (K1). Control points for camera calibration are captured as stereo images several times. Camera calibration is performed by bundle adjustment using target control points. In order to estimate the appropriate lens distortion for a digital camera, the lens distortion model is shown in Eq. (1) and Eq. (2). These equations consider only radial symmetric distortion (Kunii & Chikatsu, 2001). The image coordinates (x, y) are corrected and transformed to new image coordinates (xu, yu). The interior orientation parameters computed in this calibration for the Canon EOS 10D are shown in Table 2.

$$x_u = x' + x'\,(K_1 r^2)\tag{1}$$


$$y_u = y' + y'\,(K_1 r^2)\tag{2}$$

where $x' = x - x_0$; $y' = -(y - y_0)$; $r^2 = x'^2 + y'^2$; $(x, y)$: image coordinate

| Parameter | Value | Parameter | Value |
|---|---|---|---|
| x0 | 1,532.9966 pixels | f | 24.6906 mm |
| y0 | 1,037.3240 pixels | K1 | 1.5574e-008 |

Table 2. Interior orientation parameters of Canon EOS 10D
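As an illustration, the correction of Eqs. (1)-(2) can be sketched as follows. The helper name `undistort_point` is hypothetical; the parameter values are the ones reported in Table 2.

```python
def undistort_point(x, y, x0, y0, k1):
    """Correct radial symmetric lens distortion per Eqs. (1)-(2)."""
    xp = x - x0              # x' = x - x0
    yp = -(y - y0)           # y' = -(y - y0): image y-axis is flipped
    r2 = xp * xp + yp * yp   # r^2 = x'^2 + y'^2
    xu = xp + xp * k1 * r2   # Eq. (1)
    yu = yp + yp * k1 * r2   # Eq. (2)
    return xu, yu

# Interior orientation of the Canon EOS 10D from Table 2
X0, Y0, K1 = 1532.9966, 1037.3240, 1.5574e-8
xu, yu = undistort_point(2000.0, 500.0, X0, Y0, K1)
```

The correction is purely radial: the further a pixel lies from the principal point, the larger the shift.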

#### **2.2.2 Calibration of laser scanner**

Calibration of a laser scanner is not easy because the laser beam is invisible; its wavelength is approximately 905 nm ± 10 nm. Thus, in this research, a solar cell is utilized for laser beam detection. External parameters of the laser scanner are estimated by computing the scale factor, rotation matrix and shift vector that convert the laser scanner coordinates to the fiducial coordinates, which can serve as a common coordinate system for the whole system.

The 3D Helmert transformation, Equation (3), is used to estimate the laser scanner's external parameters. The laser scanner coordinates (Xl, Yl, Zl) are converted to the fiducial coordinates (Xt, Yt, Zt). The scale factor (s), rotation matrix (R), and translation vector (Tx, Ty, Tz) are estimated by the least squares method (Shapiro, 1978). With this calibration methodology, the external parameters of a laser scanner can be determined accurately, which helps to combine the laser scanner with other sensors such as a digital camera or an IMU.

$$
\begin{pmatrix} X_t \\ Y_t \\ Z_t \end{pmatrix} = s\,R \begin{pmatrix} X_l \\ Y_l \\ Z_l \end{pmatrix} + \begin{pmatrix} T_x \\ T_y \\ T_z \end{pmatrix} \tag{3}
$$
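The least-squares estimation of s, R, and (Tx, Ty, Tz) from point correspondences can be sketched as below. This is a minimal SVD-based (Procrustes-style) solution; the function name `helmert_3d` and the use of NumPy are assumptions for illustration, not the original implementation.

```python
import numpy as np

def helmert_3d(src, dst):
    """Estimate s, R, T so that dst ~ s * R @ src + T (Eq. 3),
    via the SVD-based least-squares solution on centered points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d            # centered point sets (n x 3)
    U, S, Vt = np.linalg.svd(B.T @ A)        # cross-covariance factorization
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt)) # guard against reflections
    R = U @ D @ Vt                           # optimal rotation
    s = (S * np.diag(D)).sum() / (A ** 2).sum()  # optimal scale
    T = mu_d - s * R @ mu_s                  # translation from the centroids
    return s, R, T
```

At least three non-collinear correspondences are needed; in practice the solar-cell targets provide redundant points and the residuals indicate calibration quality.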

#### **2.2.3 Boresight offset measurement**

The boresight offset must be estimated between the GPS and the IMU. In the hybrid positioning circulation, differences in position and velocity between the IMU, the GPS, and the other sensors are used to estimate the severity of errors. If the vehicle only goes straight, this error amount is not affected because the relative movement is constant. However, if the vehicle turns, the error amount is not constant: the position and velocity errors near the axis of gyration are small, while those far from the axis of gyration are large. In this paper, the boresight offset from the GPS to the IMU in the vehicle is obtained through direct measurement by using a total station.

The transformation matrix, which includes a rotation matrix and a translation matrix from the vehicle coordinate system to a world coordinate system, is calculated from positioning data, where the origin of the common coordinate system is taken as the center of the IMU. The rotation matrix and the translation matrix depend on the instantaneous posture and position of the vehicle when the vehicle is moving. On the other hand, the transformation matrix from the local coordinate system to the vehicle coordinate system is calculated from the external calibration parameters measured physically as a boresight offset. This is a physical measurement, so it includes some measurement errors. However, this is initial information and the errors can be removed by further filtering.
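A minimal sketch of this chain of transformations, with hypothetical names: `R_vl`, `t_vl` stand for the fixed boresight calibration (local sensor frame to vehicle frame), while `R_wv`, `t_wv` stand for the time-varying vehicle pose delivered by hybrid positioning.

```python
import numpy as np

def to_world(p_local, R_vl, t_vl, R_wv, t_wv):
    """Chain local (sensor) -> vehicle -> world coordinates.
    R_vl, t_vl: constant calibration parameters (boresight offset);
    R_wv, t_wv: instantaneous vehicle rotation and translation."""
    p_vehicle = R_vl @ p_local + t_vl   # fixed mounting transform
    return R_wv @ p_vehicle + t_wv      # time-varying pose transform

# Example: sensor offset of 0.5 m in z, vehicle 10 m along world x
p = to_world(np.array([1., 0., 0.]),
             np.eye(3), np.array([0., 0., 0.5]),
             np.eye(3), np.array([10., 0., 0.]))
```

Because the boresight transform is constant, any error in it maps into a systematic bias that the later filtering stage can estimate and remove.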

## **3. Multi sensor integration**


Navigation is continuous positioning: the process of monitoring and controlling the movement of a vehicle from one place to another. Inertial navigation is the self-determination of the instantaneous position and other parameters of motion of a vehicle by measuring specific force, angular velocity, and time in a previously selected coordinate system. The basic concept is to determine the vehicle velocity and position by real-time integration of the governing differential equations.
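The basic concept can be illustrated with a deliberately simplified sketch: Euler integration of specific force in a fixed navigation frame, with gravity and Earth rotation assumed already compensated. The real strapdown algorithm of Figure 3 is far more involved; this is only the core double integration.

```python
import numpy as np

def dead_reckon(accels, dt, v0, p0):
    """Naive strapdown integration: specific force -> velocity -> position,
    with simple Euler steps of length dt (gravity already removed)."""
    v, p = np.array(v0, float), np.array(p0, float)
    for a in accels:
        v = v + np.asarray(a, float) * dt   # integrate acceleration
        p = p + v * dt                      # integrate velocity
    return v, p

# One second of constant 1 m/s^2 acceleration along x, sampled at 10 Hz
v, p = dead_reckon([[1., 0., 0.]] * 10, 0.1, [0., 0., 0.], [0., 0., 0.])
```

Note how any constant bias in the accelerometer input grows quadratically in position, which is exactly why the unbounded drift discussed in section 4 arises.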

Figure 1 shows an overview of the data processing of sensor integration, from navigation as hybrid positioning to mapping by direct geo-referencing with a laser scanner. In this paper, the following data are acquired and integrated: base station GPS data, remote station GPS data, IMU data, digital images, and laser range data. Although the data are acquired at different frequencies, they are synchronized with each other by GPS time.

First, differential or kinematic GPS post-processing is conducted. Second, the processed GPS data and the IMU data are integrated by a Kalman filter to estimate the sensor trajectory. The bundle block adjustment (BBA) of the digital images is then made to acquire geo-referenced images and exterior orientations with the support of GPS and IMU data, which provide the sensor position and attitude, as an external aid. Also, VMS and other sensors can be considered as external aids if GPS accuracy is not enough to support the IMU in urban areas. Then, GPS data, IMU data, and the external aids are combined to regenerate high-precision, time-series sensor position and attitude. Finally, these hybrid positioning data are used for the geo-referencing of the laser range data and for the construction of a digital surface model as 3D point cloud data.

Fig. 1. Overview of data processing for multi sensor integration

## **4. Hybrid positioning**

In general, IMUs exhibit position errors that tend to increase with time in an unbounded manner. This degradation occurs due to errors of initialization of IMUs and inertial sensor imperfections such as accelerometer bias and gyroscope drift. Hybrid positioning can mitigate the error by being updated periodically with external fixes, such as GPS, VMS, images, radio aids, or Doppler radar. Hybrid positioning is for finding the location of a mobile platform using or combining several different positioning technologies. The effect of fixing positions is that it allows for the reset or the correction of the position errors of the inertial system to the same level of accuracy inherent in the position fixing technology. The inertial system error grows at a rate equal to the velocity error. Therefore, external data is used not only for the position update but also for the error correction of inertial components such as attitude, heading, velocity, gyro bias, and accelerometer bias. Furthermore, the error of the external data, such as misalignment error, boresight error, and scale factor error, is corrected in the same manner. Typical hybrid strapdown navigation is shown in Figure 2.

Fig. 2. Typical hybrid strapdown navigation configuration

## **4.1 GPS and IMU data are integrated by Kalman filter**

The Kalman filter can be used to optimally estimate the system states. One of the distinct advantages of the Kalman filter is that time-varying coefficients are permitted in the model. With this filter, the final estimation is based on a combination of prediction and actual measurement. Figure 3 shows the pure navigation algorithm for deciding IMU attitude and IMU velocity step by step (Kumagai et al., 2002). Inertial navigation starts by defining the initial attitude and heading based on the alignment of the system. It is processed and then it changes to the navigation mode. Over the years, the quality of IMUs has risen, but they are still affected by systematic errors. In this research, a GPS measurement is applied as an actual measurement to aid the IMU by correcting this large drift error. With Kalman filtering, the sensor position and attitude are determined at 200 Hz.

Figure 4 shows the Kalman filter circulation diagram for the integration of the GPS and IMU data (Kumagai et al., 2000). Individual measurement equations and transition equations are selected, and the covariance must be initialized in order to continue the Kalman filtering circulation in response to the GPS data validation. The accuracy of the integration depends on the accuracy of the referenced GPS; in this case, it is approximately 30 cm.

Fig. 3. Pure navigation block diagram expressed roughly step by step

Fig. 4. Kalman filter circulation

This research adopts the Kalman filter, and the following steps are included, as shown in Figure 4:

1. Initialization of the covariance value and first estimation of each variable
2. Inspection of GPS validity and selection of the measurement equation
3. Calculation of measurements
4. Calculation of the Kalman gain
5. Calculation of estimations
6. Calculation of the next covariance
7. Updating of the covariance
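A minimal sketch of this circulation for a generic linear system follows. The helper name `kalman_circulation` is hypothetical; a `None` measurement models an epoch where the GPS data fails the validity inspection (step 2), so the filter only projects ahead.

```python
import numpy as np

def kalman_circulation(z_seq, F, H, Q, R, x0, P0):
    """Kalman filter loop in the spirit of Figure 4: project ahead,
    compute the gain, update the estimate and its covariance."""
    x, P = np.array(x0, float), np.array(P0, float)   # step 1: initialization
    out = []
    for z in z_seq:
        x, P = F @ x, F @ P @ F.T + Q                 # project state/covariance ahead
        if z is not None:                             # step 2: GPS validity check
            S = H @ P @ H.T + R                       # step 3: measurement calculation
            K = P @ H.T @ np.linalg.inv(S)            # step 4: Kalman gain
            x = x + K @ (np.asarray(z) - H @ x)       # step 5: update estimate
            P = (np.eye(len(x)) - K @ H) @ P          # steps 6-7: update covariance
        out.append(x.copy())
    return out

# Toy 1-D constant-position model; R reflects ~30 cm GPS accuracy (0.3^2)
F, H = np.array([[1.]]), np.array([[1.]])
Q, R = np.array([[1e-4]]), np.array([[0.09]])
est = kalman_circulation([[5.1], [4.9], [5.0], None, [5.05]],
                         F, H, Q, R, [0.], [[100.]])
```

The large initial covariance makes the first valid measurement dominate, after which the estimate settles near the true value even across the skipped epoch.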



## **4.2 Bundle block adjustment (BBA) of digital images**

The image exterior orientation is determined by the BBA for the mosaicked digital images, where the BBA is a nonlinear least squares optimization method using the tie points of the inside block (Takagi & Shimoda, 2004). Bundle block configuration increases both the reliability and accuracy of object reconstruction. An object point is determined by the intersection of more than two images, which provides local redundancy for gross error detection and consequently forms a better intersection geometry (Chen et al., 2003). Therefore, in this paper, the digital images are set to overlap by more than 50% in the forward direction, and by more than 30% on each side. The GPS and IMU data obtained in a previous step allow the automatic setting of tie points in overlapped images and reduce the time spent searching for tie points by limiting the search area based on the epipolar line. The epipolar line is the straight line of intersection between the epipolar plane and the image plane, and it is estimated from the sensor position and attitude derived from GPS/IMU integration. It connects an image point in one image with the corresponding point in the next image. Figure 5 shows an image orientation series with tie points that overlap each other. The image resolution is extremely high (approximately 1.5 cm), so it is easy to detect small gaps or cracks.
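The epipolar-line restriction of the tie-point search can be sketched as follows. It assumes a fundamental matrix `F` has already been derived from the GPS/IMU pose estimate; the band width and the function names are illustrative, not from the original system.

```python
import numpy as np

def epipolar_band(F, pt, width=2.0):
    """For a point pt in image 1, return the epipolar line l = F @ pt_h
    in image 2 and a predicate restricting the tie-point search to a
    narrow band (width in pixels) around that line."""
    a, b, c = F @ np.array([pt[0], pt[1], 1.0])   # line a*u + b*v + c = 0
    norm = np.hypot(a, b)
    def near(q):
        # perpendicular point-to-line distance against the band width
        return abs(a * q[0] + b * q[1] + c) / norm <= width
    return (a, b, c), near

# Toy F for a purely horizontal stereo baseline: epipolar lines are v = const
F = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
line, near = epipolar_band(F, (10., 20.))
```

Only candidate matches inside the band need to be correlated, which is the time saving the text describes.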

The accuracy of the image orientation (ba) is estimated by comparison with 20 control points (cp), as shown in Table 3. The average error of the plane is approximately 3 to 6 cm. The average error of the height is approximately 10 cm. That is, although the BBA is done automatically, the result is very accurate compared with the differential GPS or the GPS/IMU integration data, whose average error is based on GPS accuracy. Moreover, the processing time is very short. Thus, the BBA's results aid Kalman filtering by initializing the position and attitude in the next step to acquire a more accurate trajectory.

Fig. 5. Image orientation with tie points


Table 3. Accuracy of the image orientation

#### **4.3 Hybrid positioning by multi sensor integration**

The position and attitude of the sensors are determined by the integration of the GPS and IMU data, as well as by the image orientations that are acquired from digital cameras or digital video cameras. One of the main objectives of this paper is to integrate inexpensive sensors into a high-precision positioning system. Integration of the GPS (which operates at 1 Hz) with the IMU (200 Hz) is performed with Kalman filtering for the geo-referencing of the laser range data, which has a frequency of 18 Hz. The positioning accuracy of the GPS/IMU integration data is based on GPS accuracy. On the other hand, both position and attitude can be estimated with very high accuracy using the BBA as image orientations. However, the images are not taken frequently; in this case, every 10 seconds.
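Bringing the different rates together (1 Hz GPS, 200 Hz IMU, 18 Hz laser) requires resampling the fused pose to each laser timestamp. A minimal sketch under the assumption of simple linear interpolation on the shared GPS time base (the function name `pose_at` is hypothetical):

```python
import numpy as np

def pose_at(t_query, t_pose, xyz):
    """Linearly interpolate the 200 Hz GPS/IMU position (t_pose, xyz)
    to the 18 Hz laser timestamps t_query; all times are GPS time."""
    return np.column_stack(
        [np.interp(t_query, t_pose, xyz[:, k]) for k in range(xyz.shape[1])]
    )

# Three 200 Hz pose samples (5 ms apart), queried between the samples
t_pose = np.array([0., 0.005, 0.01])
xyz = np.array([[0., 0., 0.], [1., 1., 1.], [2., 2., 2.]])
out = pose_at(np.array([0.0025, 0.0075]), t_pose, xyz)
```

At 200 Hz the pose samples are only 5 ms apart, so linear interpolation between them introduces negligible error compared with the ~30 cm GPS accuracy; attitude would need an angle-aware interpolation instead.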

Therefore, the combination of the BBA and Kalman filtering is conducted to increase accuracy, as shown in Figure 6. The BBA results are assumed to be true position values. They provide the initial attitude and heading without any IMU alignment. The IMU is initialized by Kalman filtering using the BBA result every 10 seconds to avoid an accumulation of errors. That is, after every computation of the BBA, the IMU data and its errors are corrected. Figure 6 shows the strapdown navigation algorithm for the GPS/IMU integration and the BBA result. The combination of GPS, IMU, and images can be a hybrid positioning.

As a result of the multi sensor integration, the trajectory of the hybrid positioning can assure sufficient geo-referencing accuracy for the images. The trajectory of the digital camera can be representative of the trajectory of the platform because the GPS and IMU data are initialized by camera orientation. Their coordinate is fitted to the digital camera coordinate. Figure 7 shows the hybrid position as trajectories of GPS/IMU/images and GPS/IMU. The coordinate system is JGD2000 (Japan Geodetic Datum 2000). The black solid line is the combination of GPS/IMU/images, and the red solid line is the ordinary combination of


GPS/IMU. With an ordinary GPS/IMU, the trajectory becomes notched because the position is forcibly revised by the GPS due to the drift error. The platform changes its attitude rapidly, especially in the corners, so the notched trajectory is very obvious. The drift error remains in the calculation until the alignment of the IMU is complete. With GPS/IMU/images, on the other hand, the drift error of the IMU is corrected by the initialization from the bundle block adjustment, and the trajectory is very smooth in the corners.

Fig. 6. Strapdown navigation diagram with images

Fig. 7. Hybrid position

#### **4.4 Evaluation of hybrid positioning**

Trajectory tracking by ordinary GPS/IMU integration is compared with the combination of GPS/IMU and continuous digital images, in order to validate this combination. Figure 8 shows


the yaw angles of these two methods; the black solid line is the combination of GPS/IMU/images, and the red solid line is the ordinary integration of GPS/IMU. In the case of the GPS/IMU/images combination, an accurate azimuth angle, as a yaw angle, is recorded from the BBA. Thus, the yaw angle is then accurate from the beginning of the measurement. On the other hand, in the ordinary combination of GPS/IMU, the Kalman filter gradually estimates the state of a system from measurements which contain random

Fig. 8. Comparison of Yaw angle

Fig. 9. Comparison of Roll and Pitch angle

| Item | Specification |
|------|---------------|
| Main rotor | 2 rotors, diameter 4.8 m |
| Tail rotor | 2 rotors, diameter 0.8 m |
| Operational range | 3 km or more |
| Flight time | 1 hour |
| Ceiling | 2,000 m |
| Weight | 330 kg |
| Payload | 100 kg |
| Motor | 83.5 hp |

Table 4. Specification of RPH2

errors. That is, the error estimation is insufficient at the beginning, and the yaw angle only gradually improves. For that reason, in the case of ordinary GPS/IMU, it is necessary to perform a system alignment before the measurement in order to estimate an accurate azimuth angle. In the proposed method, by contrast, the system alignment is not required. Figure 9 shows the roll and the pitch angles of the two methods. The error behaviour for the roll and pitch angles is the same as for the yaw angle.

## **5. Experiment**

In order to evaluate the characteristics and performance of the proposed algorithm, two experiments are conducted using a UAV (Unmanned Aerial Vehicle) and a ground vehicle (a car) as platforms. In the case of the UAV, images are used as the external aid, whereas a VMS is used as the external aid in the case of the ground vehicle.

## **5.1 UAV (Unmanned Aerial Vehicle) based mapping system**

A UAV based mapping system is developed to obtain both the wide-area coverage of remote sensors and the high levels of detail and accuracy of ground surveying, at low cost. All the measurement tools are mounted under the UAV, which resembles a helicopter, to acquire detailed information from low altitudes, unlike high altitude systems in satellites or airplanes. The survey is conducted from the sky, but the resolution and accuracy are equal to those of ground surveying. Moreover, the UAV can acquire data easily as well as safely.

In this paper, all of the measurement tools are mounted under the UAV, which is a helicopter-like model RPH2 made by Fuji Heavy Industries, Ltd., and shown in Figure 10. All the sensors are mounted tightly to the bottom of the fuselage. The RPH2 is 4.1m long, 1.3m wide and 1.8m high. Table 4 shows its main specifications.

Fig. 10. UAV, model RPH2 made by Fuji Heavy Industries, Ltd.

As shown in Table 4, the RPH2 is a large UAV; however, it is considered a platform for the experimental development of a multi-sensor integration algorithm. The RPH2 has a large payload capacity; thus, it can carry large numbers of sensors, control PCs, and a large battery. After the algorithm is developed by a large platform, a small UAV system is implemented using selected inexpensive sensors for certain observation targets.

In Figure 10 the onboard sensors are labelled: GPS (Ashtech G12), IMU (Tamagawa Seiki TA7544), digital camera (Canon EOS-10D) and laser scanner (SICK LMS-291), together with the control PC.

There are several advantages to utilizing a UAV. One of the most important advantages is that it is unmanned and therefore can fly over dangerous zones. This advantage suits the purpose of direct geo-referencing in this study. Direct geo-referencing does not require that ground control points have accurately measured ground coordinate values. In dangerous zones, it is impossible to set control points, unlike the case in normal aerial surveys. The addition of this direct geo-referencing method from a UAV could be an ideal tool for monitoring dangerous situations. Therefore, this UAV-based mapping system is perfectly suited for disaster areas such as landslides and floods, and for other applications such as river monitoring.



## **5.1.1 UAV based system**

All the sensors are tightly mounted under the UAV to ensure that they maintain a constant geometric relationship during the measurement. The digital cameras and the laser scanner are calibrated to estimate their relative position and attitude. Moreover, all sensors are controlled by a laptop PC and are synchronized by GPS time, one pulse per second. Finally, the sensors are set up as shown in Figure 10.

## **5.1.2 Digital 3D modeling**

During the measurement, the platform, including all of the sensors, is continuously changing its position and attitude with respect to time. For direct geo-referencing of the laser range data, the hybrid positioning data are used. There are two coordinate systems: that of the laser scanner, and that of the hybrid positioning, WGS84 (World Geodetic System 1984), based on the GPS and the BBA data. It is necessary to transform the laser scanner coordinates into WGS84 coordinates by geo-referencing. Geo-referencing of the laser range data is determined by the 3D Helmert transformation, which is computed from the rotation and translation matrices given by the hybrid positioning data, with the calibration parameters as offset values, as shown in Equation (3). The offset values from the laser scanner to the digital cameras in the body frame have already been obtained by the sensor calibration. Geo-referencing of the laser range data and images is thus done directly.
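Equation (3) is not reproduced in this extract, but the geo-referencing step it describes can be sketched generically: each laser point is shifted by the calibrated lever arm, rotated into the world frame using the hybrid attitude, and translated by the hybrid position. This sketch assumes a unit scale factor (a rigid-body Helmert transformation) and that the laser frame is axis-aligned with the body frame; the function and parameter names are illustrative:

```python
import numpy as np

def georeference(points_laser, R_body_to_world, platform_pos, lever_arm):
    """Transform (N, 3) laser-frame points into world coordinates:
    apply the calibrated body-frame offset, rotate with the hybrid
    attitude, then translate by the hybrid position."""
    return (points_laser + lever_arm) @ R_body_to_world.T + platform_pos

# example: platform yawed 90 degrees about the vertical axis
yaw = np.pi / 2.0
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
pts = np.array([[1.0, 0.0, 0.0]])   # one point, 1 m ahead of the scanner
world = georeference(pts, R, np.array([100.0, 200.0, 50.0]), np.zeros(3))
```

A full 7-parameter Helmert transformation would add a scale factor; for a calibrated metric laser scanner the scale is normally fixed to 1, which is why the rigid-body form suffices here.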

Figure 11 shows the 3D point cloud data that are directly geo-referenced by the hybrid positioning data. In this research, WGS84 is used as the base coordinate system for the 3D point cloud data. The UAV-based system in this research was utilized in a landslide survey by


reconstructing a digital surface model. A digital camera and a laser scanner were mounted on the UAV to acquire detailed information from low altitude. The surveying is carried out from the sky, but the resolution and accuracy are at the same level as those of a ground survey. Because of the utilization of a UAV, the data of the landslide site can be acquired easily and collectively, with safety and mobility. This new survey can be an intermediate method between aerial surveys and ground surveys.

Fig. 11. 3D point cloud model

## **5.2 Ground vehicle based mapping system**

Understanding road environments has become increasingly important in recent years due to a wide range of applications, such as intelligent vehicles, driving assistance and sign inventory systems, or route guidance systems for navigation assistance. For drivers, traffic signs/signals provide crucial information for safety and smooth driving; thus, they play an important role in all kinds of driver support systems. Much work on traffic signal/sign detection and recognition has been done in recent years, and sensor systems now consist of three different types of sensors: laser scanners for measuring object geometry, digital cameras for capturing scene texture, and a moving platform equipped with a GPS/IMU/VMS-based hybrid positioning system.

### **5.2.1 Ground vehicle based system**

Figure 12 shows the ground vehicle based system where all sensors are mounted on the roof of the vehicle. Two of the laser range scanners are mounted on the back and scan the horizontal plane. VMS is also mounted and is used to assist the navigation unit to locate vehicle positions when the GPS signal is unavailable. The other two laser scanners are placed on the front and rear of the vehicle's roof. The front one scans with an elevation of about 30 degrees to capture the front scene, especially the important urban spatial objects that assist navigation. For data measurement, all the sensors are under control of the vehicle-borne computers and synchronized by a GPS clock.


The navigation units used in the sensor system are composed of a DGPS, an IMU (FOG) and a VMS (Velocity Measurement System). The DGPS is responsible for measuring the vehicle's position using the satellite signals. The IMU, consisting of accelerometers and gyroscopes, measures the acceleration and direction changes of the vehicle, while the VMS is in charge of measuring the vehicle's velocity with high accuracy. The combination of GPS/IMU/VMS is complementary, as the velocity from the VMS and the acceleration and direction changes from the IMU can be used to locate the vehicle's position when the GPS signal is unavailable. Moreover, the GPS can be used to rectify the output of the IMU. VMS data are more accurate than DGPS data; thus, the estimation of VMS errors becomes possible when the DGPS is valid. Therefore, it is possible to acquire more precise positioning in poor PDOP conditions, such as urban operation, by blending in the VMS data. The strapdown navigation diagram for the ground vehicle mapping system is shown in Figure 13. The Kalman filter is processed every 10 seconds.
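The role of the VMS during a GPS outage can be illustrated with a planar dead-reckoning sketch: VMS speed and IMU heading propagate the last valid position until the GPS returns. This is a simplified stand-in for the full Kalman blending (function name, 10 Hz rate assumption and bias-free inputs are illustrative assumptions):

```python
import numpy as np

def dead_reckon(p0, heading, speed, dt):
    """Propagate a 2-D position during a GPS outage from VMS speed and
    IMU heading (yaw, radians), both assumed already bias-corrected.
    p0 is the last valid GPS/IMU position; dt the sample interval (s)."""
    steps = np.column_stack([np.cos(heading), np.sin(heading)]) * speed[:, None] * dt
    return p0 + np.cumsum(steps, axis=0)

# 2 s outage sampled at 10 Hz: constant 5 m/s along the x axis
heading = np.zeros(20)
speed = np.full(20, 5.0)
track = dead_reckon(np.array([0.0, 0.0]), heading, speed, 0.1)
```

Because the VMS speed is far more accurate than differentiated DGPS positions, the drift of such a track grows mainly with heading error, which is why the text pairs the VMS with the gyroscopes.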

Fig. 12. Ground vehicle based system

Fig. 13. Strapdown navigation diagram with VMS

| No. | Ground control point X | Ground control point Y | Ground control point Z | DSM X | DSM Y | DSM Z | Error X | Error Y | Error Z |
|-----|-----------|-----------|--------|-----------|-----------|--------|-------|-------|-------|
| 1 | -11184.877 | -25630.253 | 42.755 | -11184.696 | -25630.836 | 42.915 | 0.181 | 0.583 | 0.160 |
| 2 | -11185.471 | -25622.727 | 42.952 | -11185.557 | -25622.789 | 42.971 | 0.086 | 0.062 | 0.019 |
| 3 | -11167.603 | -25670.474 | 42.391 | -11168.282 | -25670.312 | 42.406 | 0.679 | 0.162 | 0.015 |
| 4 | -11177.107 | -25634.721 | 42.704 | -11177.262 | -25634.918 | 42.523 | 0.155 | 0.197 | 0.181 |
| 5 | -11152.866 | -25641.753 | 42.029 | -11152.172 | -25641.036 | 42.071 | 0.694 | 0.717 | 0.042 |
| 6 | -11176.511 | -25625.571 | 42.824 | -11176.467 | -25625.426 | 42.767 | 0.044 | 0.145 | 0.057 |
| 7 | -11153.911 | -25643.823 | 42.534 | -11154.375 | -25643.041 | 42.075 | 0.464 | 0.782 | 0.459 |
| 8 | -11150.564 | -25631.724 | 42.340 | -11150.887 | -25631.869 | 42.296 | 0.323 | 0.145 | 0.044 |
| 9 | -11176.771 | -25635.344 | 43.992 | -11176.394 | -25635.308 | 44.082 | 0.377 | 0.036 | 0.090 |
| 10 | -11186.666 | -25631.657 | 44.289 | -11186.417 | -25631.888 | 44.202 | 0.249 | 0.231 | 0.087 |
| Ave. error | | | | | | | 0.325 | 0.306 | 0.115 |

Unit: m

Table 5. Positioning accuracy assessment from DSM

## **5.2.2 Object extraction**

Figure 14 shows the point cloud acquired by the laser scanners, which is geo-referenced with the hybrid positioning data and contains not only traffic signs/signals but also the surroundings (vegetation and buildings) beyond the road. Object segmentation and feature extraction of the important objects, such as traffic signs/signals, are then conducted, especially after most of the redundant range points have been removed from the point cloud by boundary extraction. The range points represent the geographic information of all the objects in the form of 3D discrete coordinates, which carry no attribute description, and there is no topological relation among the data points. However, spatial features exist which can be used for object segmentation. It is clear that a traffic sign/signal has a strong linear feature after being projected onto the horizontal plane, while points belonging to other spatial objects (such as trees) are scattered on the horizontal plane without a dominant direction. This spatial feature can be utilized for segmentation.
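One common way to quantify the "strong linear feature versus scattered points" criterion is the eigenvalue ratio of the 2-D covariance of the horizontally projected cluster: a sign/signal yields one dominant eigenvalue, vegetation yields two comparable ones. The source does not specify its classifier, so the threshold and function below are illustrative assumptions:

```python
import numpy as np

def is_linear_cluster(points_xy, threshold=0.95):
    """Classify a horizontally projected (N, 2) point cluster as
    'linear' (sign/signal-like) when the dominant eigenvalue of its
    covariance captures at least `threshold` of the total variance."""
    cov = np.cov(points_xy.T)
    ev = np.sort(np.linalg.eigvalsh(cov))[::-1]     # descending eigenvalues
    return ev[0] / (ev[0] + ev[1] + 1e-12) >= threshold

# a sign-like cluster: points along a line in the horizontal plane
x = np.linspace(0.0, 5.0, 50)
line = np.column_stack([x, 2.0 * x])

# a tree-like cluster: points scattered over a square patch
xx, yy = np.meshgrid(np.linspace(0.0, 1.0, 10), np.linspace(0.0, 1.0, 10))
scattered = np.column_stack([xx.ravel(), yy.ravel()])
```

The same idea generalizes to 3-D (linearity/planarity/scatter from three eigenvalues), which is standard practice in mobile laser-scanning segmentation.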

Fig. 14. 3D point cloud data from vehicle based system

## **6. Validation**

In this mobile mapping system, multiple sensors are integrated, making it difficult to pinpoint the origins of the positioning errors. Therefore, the positioning accuracy is assessed by an ordinary survey method, the result of which is compared with the digital surface model and with control points from the oriented images, whose accuracy is 3 to 10 cm, as shown in Table 3. These control points are treated as true values; they are selected feature points, such as object corners, which can be recognized in both the images and the digital surface model (DSM). As a result, the average error of the digital surface model is approximately 10 to 30 cm, as shown in Table 5.


For this validation, the DSM has been reconstructed and geo-referenced by using the hybrid position. The laser range data are acquired 50 m away from the object, and the scan angle


resolution is 0.25°; that is, the spacing of the 3D laser points is approximately 20 cm per point. After comparing the mapping accuracy with the laser point density, it was found that the accuracy is good enough for mapping the DSM. Therefore, the accuracy of the hybrid positioning, including the attitude, is considered to be approximately 10 to 30 cm.
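The accuracy figures in Table 5 are per-axis mean absolute errors between surveyed control points and the corresponding points picked from the geo-referenced DSM. A minimal sketch of that computation, exercised on two of the checkpoints quoted in Table 5 (the function name is an illustrative assumption):

```python
import numpy as np

def dsm_accuracy(control_pts, dsm_pts):
    """Per-axis mean absolute error (m) between surveyed control points
    and the corresponding points measured on the geo-referenced DSM."""
    return np.mean(np.abs(np.asarray(control_pts) - np.asarray(dsm_pts)), axis=0)

# two of the ground control point / DSM pairs from Table 5 (units: m)
control = [[-11184.877, -25630.253, 42.755],
           [-11185.471, -25622.727, 42.952]]
dsm = [[-11184.696, -25630.836, 42.915],
       [-11185.557, -25622.789, 42.971]]
err = dsm_accuracy(control, dsm)
```

Run over all ten checkpoints, this statistic reproduces the reported averages of 0.325 m, 0.306 m and 0.115 m in X, Y and Z.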


## **7. Conclusion**

In this paper, robust trajectory tracking by hybrid positioning was developed, and a digital surface model was reconstructed with multi-sensor integration using entirely inexpensive sensors, such as a small laser scanner, digital cameras, an inexpensive IMU, a GPS, and a VMS. A new method of direct geo-referencing was proposed for laser range data and images, combining a Kalman filter with the BBA or the VMS. Because the BBA result prevents the accumulation of drift errors in the Kalman filtering, the geo-referenced laser range data and the images were automatically overlapped properly in a common world coordinate system. Hybrid positioning data are acquired by using or combining several different positioning technologies. Since this paper focused on how to integrate the sensors into a mobile platform, all the sensors and instruments were assembled and mounted on a mobile platform, such as a UAV or a ground vehicle, in the experiments. Finally, the precise trajectory, including the attitude of the sensors, was computed as the hybrid positioning for direct geo-referencing of the laser scanner. The hybrid positioning data are used to reconstruct digital surface models.

## **8. References**

Zhao, H. & Shibasaki, R. (2000). *Reconstruction of Textured Urban 3D Model by Ground-Based Laser Range and CCD Images*, IEICE Trans. Inf.&Syst., vol.E83-D, No.7

Manandhar, D. & Shibasaki, R. (2002). *Auto-Extraction of Urban Features from Vehicle-Borne Laser Data*, ISPRS, GeoSpatial Theory, Processing and Application, Ottawa

**Part 3** 

**GNSS Errors Mitigation and Modelling** 

