**Meet the editor**

Ahmed H. Mohamed is Assistant Professor of Geomatics at the University of Florida. He teaches courses and conducts research in satellite and inertial navigation, geodetic science, surveying and mapping, geospatial informatics, and remote sensing as applied to engineering and resource applications. He is the managing director of the NaGeM lab, which specializes in satellite and inertial navigation hardware and software systems. Prof. Mohamed has international experience in both academia and industry: he directed the Geomatics Engineering Technology program of the Royal Commission, Saudi Arabia; between 1999 and 2004 he led a team of scientists at the Alberta Research Council, Canada, developing Geomatics technologies; he worked as a research and teaching assistant at the University of Calgary between 1994 and 1999; and he worked for Qatar University and Ain-Shams University in Egypt. Prof. Mohamed has over 30 technical publications on navigation, georeferencing, and mapping in peer-reviewed Earth science and engineering journals and proceedings.

Contents

**Preface VII**

**Section 1 Fundamentals of GNSS 1**

Chapter 1 **Relativity Laws for the Variation of Rates of Clocks Moving in Free Space and GPS Positioning Errors Caused by Space-Weather Events 3**
Robert J. Buenker, Gennady Golubkov, Maxim Golubkov, Ivan Karpov and Mikhail Manzheliy

Chapter 2 **Receiver Biases in Global Positioning Satellite Ranging 49**
Pierre-Richard Cornely

Chapter 3 **Stand-Alone Satellite-Based Global Positioning 67**
Alberto Cina and Marco Piras

Chapter 4 **Analysis of Ambiguity False Fixing within a GBAS 87**
Paolo Dabove and Ambrogio Manzino

**Section 2 Applications 115**

Chapter 5 **L1 GPS as Tool for Monitoring the Dynamic Behavior of Large Bridges 117**
Ana Paula Camargo Larocca, Ricardo Ernesto Schaal and Marcelo Carvalho dos Santos

Chapter 6 **Intelligent Traffic System: Implementing Road Networks with Time-Weighted Graphs 145**
Hatem F. Halaoui




### Preface

The advent of satellites, and the advances in satellite navigation from the early 1960s until now, changed the face of our life on Earth; Google Maps and GPS car navigation systems are just a testament to its indispensability. And yet, men still look to the sky for help. In the past, it was stellar navigation that guided men in their journeys and voyages around the globe. Radio navigation aids appeared in the late 1930s and matured in the 1940s. Early studies exploring the idea of navigation by satellites were carried out by the US Air and Naval Forces in the early 1960s. The Navy Navigation Satellite System (NNSS, or Transit), based on measuring the Doppler frequency of the received radio signal, was the first all-weather, continuous, global navigation satellite system to emerge. The DoD NAVSTAR Global Positioning System program was initiated almost a decade later, through the merger of US Naval and Air Force projects. The first experimental GPS satellite was launched in 1977, while the first GPS receivers were introduced during the 1980s; they were big, cumbersome, and expensive, and used a single-channel receiver which consecutively locked on to each visible satellite and performed the pseudo-range measurement. By the 1990s, receivers were multi-channel, more compact, and cheaper, thanks to the emergence of the microprocessor. This was the period when GPS started to migrate from aircraft and ships – large and expensive platforms – to become a ubiquitous navigation system in military and civilian applications.

Several other nations joined the satellite navigation race. The Soviet Union launched its Global Navigation Satellite System (Globalnaya Navigatsionnaya Sputnikovaya Sistema - GLONASS) between 1967 and 1978. The fall of the Soviet Union in the 1990s left the GLONASS constellation only partially populated in orbit for several years. By 2010, GLONASS had again achieved 100% coverage, and it is fully operational as a global system operated and maintained by the Russian Federal Space Agency. The European Union and China have more recently entered the race, with the emerging Galileo and BeiDou satellite systems. October 2011 saw the launch of the first two of four operational Galileo satellites to validate the system. Full completion of the 30-satellite Galileo system is expected by 2019. The BeiDou system (BDS), on the other hand, consists of two separate satellite constellations – a limited test system, BeiDou-1, that has been regionally operational over China since 2000, and a full-scale global navigation system, BeiDou-2, that is currently under construction. Full completion of the 35-satellite system is expected in 2020. Nevertheless, BeiDou-2 has been partially operational in China since December 2011, with 10 satellites in use, and began offering services to customers in the Asia-Pacific region in December 2012.

Satellite navigation using microwave radio signals offers two advantages over stellar navigation: the first is genuine all-weather, day-and-night operation; the other is the absence of expensive-to-build-and-maintain optical tracking systems. The big cost difference in satellite navigation systems arises from the use of precision range-finding techniques as the primary measurement, rather than precision angular measurement. Whether an angular measurement is performed in the radio or optical bands, it is always more expensive to perform than a range-finding measurement. The fundamental idea behind all current satellite navigation schemes is that of measuring the 'pseudo-range' to four or more satellites whose orbital positions are known to high precision. This information permits the solution of a system of algebraic equations that yields the geographical position of the receiver. The range measurement is achieved by synchronizing the navigation receiver with a master clock broadcast by the satellites, and then measuring the time it takes for a coded navigation signal to travel from the satellite to the receiver.
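The pseudo-range scheme described above can be illustrated numerically. The following Python sketch (satellite coordinates, the synthetic clock bias, and the Gauss-Newton solver are all illustrative assumptions, not taken from the book) solves the system of pseudo-range equations for the receiver position and clock bias:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sat_pos, pseudoranges, iters=20):
    """Gauss-Newton least-squares solution for receiver position and
    clock bias from pseudo-ranges to four or more satellites."""
    x = np.zeros(4)  # [X, Y, Z, c*dt], initial guess at Earth's centre
    for _ in range(iters):
        d = np.linalg.norm(sat_pos - x[:3], axis=1)   # geometric ranges
        residual = pseudoranges - (d + x[3])          # observed - modelled
        # Jacobian rows: negated unit line-of-sight vectors, plus clock column
        H = np.hstack([-(sat_pos - x[:3]) / d[:, None], np.ones((len(d), 1))])
        x += np.linalg.lstsq(H, residual, rcond=None)[0]
    return x[:3], x[3] / C  # position (m), receiver clock bias (s)

# Synthetic check: a receiver at a known point with a 0.1 ms clock bias
sats = np.array([[20e6, 0, 10e6], [0, 20e6, 10e6],
                 [-20e6, 0, 10e6], [0, -20e6, 15e6]], float)
true_pos = np.array([1e6, 2e6, 3e6])
bias = 1e-4
pr = np.linalg.norm(sats - true_pos, axis=1) + C * bias
est_pos, est_bias = solve_position(sats, pr)
```

With four satellites the four unknowns (three coordinates plus the receiver clock offset) are exactly determined; with more satellites the same least-squares step gives an over-determined solution.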

Time measurement, and the effect of the relativity laws on the variation of clock rates, are presented in the first paper of this book by Prof. Robert Buenker of the Bergische Universität Wuppertal in Germany and his colleagues from the Russian Academy of Sciences and the I. Kant Baltic Federal University of Russia. They discuss several aspects of timekeeping in light of the relativity laws pertaining to satellite navigation. As the radio signal travels from the satellite to the receiver, it passes through different layers of the atmosphere, ionized and neutral, which delay the signal. The accuracy and integrity of the pseudo-range measurement are affected, a topic discussed by Professor Cornely in the second chapter of the book, where those effects and other sources of error pertaining to satellite navigation are presented. Geometry is a major player in global positioning: the best accuracy is always achieved when satellites are widely spaced within the receiver's field of view. The distribution of the satellites around the receiver is measured by the Geometric Dilution of Precision (GDOP) factor. Alberto Cina and Marco Piras of the Department of Environment in Turin, Italy, present a thorough discussion of stand-alone positioning and the effect of satellite-receiver geometry in the third paper.
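The GDOP factor mentioned above can be computed directly from the satellite-receiver geometry. A minimal sketch (coordinates are invented for illustration; the standard textbook formula GDOP = sqrt(trace((HᵀH)⁻¹)) is assumed, with H built from unit line-of-sight vectors plus a clock column):

```python
import numpy as np

def gdop(sat_pos, rcv_pos):
    """Geometric Dilution of Precision; smaller values mean better geometry."""
    los = sat_pos - rcv_pos
    u = los / np.linalg.norm(los, axis=1, keepdims=True)  # unit line-of-sight
    H = np.hstack([-u, np.ones((len(u), 1))])             # geometry matrix
    return float(np.sqrt(np.trace(np.linalg.inv(H.T @ H))))

rcv = np.array([0.0, 0.0, 6.37e6])   # receiver near the Earth's surface
# Widely spaced satellites (good geometry)
spread = np.array([[15e6, 0, 2.2e7], [-15e6, 0, 2.2e7],
                   [0, 15e6, 2.2e7], [0, -15e6, 2.2e7], [0, 0, 2.66e7]])
# The same satellites squeezed toward the zenith (poor geometry)
clustered = spread * np.array([0.3, 0.3, 1.0])

g_good = gdop(spread, rcv)
g_bad = gdop(clustered, rcv)
```

Clustering the satellites makes the line-of-sight vectors nearly parallel, so (HᵀH) approaches singularity and the GDOP grows, exactly the effect described in the text.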

The whole domain of differential GPS techniques evolved from the 1980s, using a supplementary channel to carry additional correction data that is applied to the GPS pseudo-range measurement to correct for atmospheric delays, orbital drift, and other sources of error. The most accurate differential GPS systems today have positioning errors at the sub-decimeter level. Augmented systems are a reality nowadays, and several of them help achieve better accuracy in differential GPS mode. Paolo Dabove and Ambrogio Manzino of Turin, Italy, provide a detailed discussion of positioning quality control within a ground-augmented satellite system. They used weights within a neural network scheme to provide an optimized network solution in real time, and showed that a survey-grade solution can be achieved in NRTK (network real-time kinematic) mode with their methodology.
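The differential-correction principle described above can be shown in a few lines. This sketch assumes a single satellite and a purely common-mode error shared by the base station and a nearby rover (all numbers are invented for illustration):

```python
import numpy as np

# A base station at a precisely surveyed position measures the pseudo-range
# error per satellite and broadcasts it; a nearby rover subtracts that
# correction from its own measurement, cancelling the shared error.
sat = np.array([15e6, 10e6, 22e6])     # one satellite position (m)
base = np.array([0.0, 0.0, 6.37e6])    # surveyed base-station position
rover = np.array([5e3, 2e3, 6.37e6])   # rover a few km from the base

delay = 8.5                            # common atmospheric delay (m)
pr_base = np.linalg.norm(sat - base) + delay
pr_rover = np.linalg.norm(sat - rover) + delay

# Correction = measured pseudo-range minus the known geometric range
correction = pr_base - np.linalg.norm(sat - base)
err_raw = pr_rover - np.linalg.norm(sat - rover)                # ~8.5 m
err_cor = (pr_rover - correction) - np.linalg.norm(sat - rover) # ~0 m
```

The cancellation is exact here because the error is perfectly common-mode; in practice residual, distance-dependent errors remain, which is what network techniques such as NRTK address.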

The second part of this book is dedicated to applications. Profs. Larocca and Schaal from the University of São Paulo, along with Prof. Santos from the University of New Brunswick, present a case study of monitoring the dynamic behavior of bridges using GPS. They conducted field experiments and present analyses of the results that show the effectiveness of GPS in their applications, adding another valuable testing tool for the engineering community. Lastly, Prof. Hatem F. Halaoui from Haigazian University shows the application of geospatial technology in intelligent traffic systems and its implementation in road networks using time-weighted graphs. The application shows the effectiveness of marrying geographic information systems with satellite-based navigation systems to optimize traffic, reducing cost and time.
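The time-weighted-graph idea mentioned in the last chapter summary can be sketched as a shortest-path search in which each edge's travel time depends on the clock time at which it is entered (a hypothetical toy network, not Prof. Halaoui's implementation; edges are assumed to be FIFO so Dijkstra's algorithm remains valid):

```python
import heapq

def fastest_route(graph, start, goal, depart):
    """Dijkstra on a time-weighted graph: each edge carries a function
    mapping entry time (minutes) to travel time, e.g. rush-hour congestion."""
    best = {start: depart}
    pq = [(depart, start)]
    while pq:
        t, node = heapq.heappop(pq)
        if node == goal:
            return t                      # earliest arrival time
        if t > best.get(node, float("inf")):
            continue                      # stale queue entry
        for nxt, cost_fn in graph.get(node, []):
            arrival = t + cost_fn(t)
            if arrival < best.get(nxt, float("inf")):
                best[nxt] = arrival
                heapq.heappush(pq, (arrival, nxt))
    return float("inf")

rush = lambda t: 10 if 480 <= t < 540 else 4   # congested 8:00-9:00
flat = lambda t: 6
graph = {"A": [("B", rush), ("C", flat)],
         "B": [("D", flat)], "C": [("D", flat)]}
```

Departing at 8:00 (480 min) the detour A-C-D arrives first; departing at 10:00 (600 min) the direct A-B-D route wins, which is the behavior a time-weighted road network is meant to capture.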

> **Prof. Ahmed H. Mohamed,** University of Florida, USA

**Section 1**

**Fundamentals of GNSS**



**Chapter 1**


## **Relativity Laws for the Variation of Rates of Clocks Moving in Free Space and GPS Positioning Errors Caused by Space-Weather Events**

Robert J. Buenker, Gennady Golubkov, Maxim Golubkov, Ivan Karpov and Mikhail Manzheliy

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/55792

**1. Introduction**

Relativity theory plays a key role in the operation of the Global Positioning System (GPS). It relies on Einstein's second postulate of the special theory of relativity (STR) [1], which states that the speed of light in free space is a constant (c), independent of the state of motion of both the source and the observer. As a result, it becomes possible to measure distances using atomic clocks. Indeed, the modern-day definition of the meter is the distance travelled by a light pulse in c<sup>−1</sup> s [2]. The value of the speed of light in free space is simply defined to have a fixed value of c = 2.99792458 × 10<sup>8</sup> m s<sup>−1</sup>. As a consequence, the distance Δr between two fixed points is obtained by measuring the elapsed time Δt for light to traverse it and multiplying this value by c, i.e., Δr = c Δt.
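The relation Δr = c Δt is direct to evaluate. A minimal sketch (the elapsed time is an invented but representative satellite-to-receiver value):

```python
# Distance from light travel time, dr = c * dt, with c fixed by definition.
C = 299_792_458.0   # m/s, exact by the SI definition of the metre
dt = 0.07           # s, illustrative satellite-to-receiver travel time
dr = C * dt         # ~2.1e7 m, the scale of a GPS satellite range
```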

A key objective in the GPS methodology is to accurately measure the distance between an orbiting satellite and a given object. This type of basic information for a series of satellites can then be used to accurately determine the location of the object [3]. The measurement of the elapsed time for light to pass between two points requires communication between atomic clocks located at each position. The problem that needs to be overcome in the GPS methodology is that the rates of the clocks carried onboard satellites are not the same as those that are stationary on the Earth's surface. For this reason it is imperative that we have a quantitative understanding of how the rates of clocks vary with their state of motion and position in a gravitational field. This topic will be covered in the next section of this chapter.

© 2013 Buenker et al.; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

It also must be recognized that the speed of light only remains constant as it moves through free space and also remains at the same gravitational potential. This is an important distinction that has significant practical consequences when the light pulses must pass through regions in which space-weather events are occurring.

( ) ( )

*t tx x xt y y z z*


ge

ge

¢ = -

e

¢ = ¢ =

speed v, and that their coordinate systems are coincident for *t* =*t* ′

ing experimental observations. Unlike the simple *t* =*t* ′

version of the Lorentz transformation.

The space-time coordinates (*x*, *y*, *z*, *t*) and (*x* ′

value for *ε* is *γ* <sup>−</sup><sup>1</sup>

theory given below.

e

<sup>2</sup> vc a v b

> c d

Relativity Laws for the Variation of Rates of Clocks Moving in Free Space and GPS Positioning Errors Caused

. However, Lorentz pointed out than any value for ε in the general Lorentz

, *y* ′ , *z* ′ , *t* ′

observers who are respectively at rest in two inertial systems S and are compared in these equations. It is assumed that is moving along the positive x axis relative to S with constant

transformation leaves Maxwell's equations invariant and thus there is a degree of freedom that needs to be eliminated on some other basis before a specific version can be unambiguously determined. This degree of freedom will be important in the ensuing discussion of relativity

The form of the above equations suggested that the Newtonian concept of absolute time needed to be altered in order to be consistent with Maxwell's electromagnetic theory and correspond‐

transformation, it would appear from eq. (1a) that the space and time coordinates are mixed in a fundamental way. This meant that observers in relative motion to one another might not generally agree on the time a given event occurred. Poincaré [7] argued in 1898, for example, that the lengths of time intervals might be different for the two observers, i.e. *Δt* ≠*Δt* ′

therefore that two events that occur simultaneously in S might not occur at the same time based on clocks that are at rest in . This eventuality has since come to be known as *remote nonsimultaneity*. The space-time mixing concept also predicts that the rates of moving clocks are slower than those of identical counterparts employed by the observer in his own rest frame (time dilation). Larmor [8] published a different version of eqs. (1a-d) (with *ε* =1) in 1900, as well as a proof that one obtains the same prediction for relativistic length variations (hereafter referred to as FitzGerald-Lorentz length contraction or FLC) on this basis that both FitzGerald [9] and Lorentz [10] had derived independently in 1889 and 1892, respectively, to second order in vc−<sup>1</sup> [5]. Time dilation and remote non-simultaneity also follow directly from Larmor's

In 1905 Einstein's paper [1] on what he later referred to as the special theory of relativity (STR) became the focus of attention, although at least a decade passed before his ideas gained wide acceptance among his fellow physicists. He came out strongly against the necessity of there being an aether which serves as the medium in which light is carried. He argued instead that such a concept is superfluous, and likewise that there is no space in absolute rest and no "velocity vector associated with a point of empty space in which electromagnetic processes take place" [11]. He formulated his version of electromagnetic theory on the basis of two welldefined postulates, the RP and the constancy of the speed of light in free space independent of the state of motion of the detector/observer or the source of the light. Starting from this basis,

(1)

5

) for the same object measured by

relation assumed for the Galilean

c-2)<sup>−</sup>0.5

http://dx.doi.org/10.5772/55792

]. Voigt's

, and

=0 [*γ* =(1−v<sup>2</sup>

Experimental and theoretical studies of the state of the ionosphere and physicochemical processes that occur in it are caused by the necessity of reliable functioning of communica‐ tion channels in different frequency bands. In recent years, most attention is paid to the improvement of satellite communication and navigation systems that use transionospher‐ ic communication channels. The reliability of communication and navigation systems using ionospheric communication channels depends mainly on the knowledge of the iono‐ sphere behavior both in quiet and perturbed conditions. This determines the situation when not only ionospheric plasma perturbations related to the dynamics of the atmosphere but also processes related to electromagnetic wave interaction with neutral atoms and molecules of the medium in which the wave propagates should be treated as inhomogene‐ ities. On the other hand, an analysis of disturbances and failures of operation of space communication systems that use the decimeter wave range and the development of theoretical concepts of physical processes responsible for these phenomena gives new information about the state of the medium and provides the opportunities for further improvement of communication systems. This led to the necessity of the development of special experimental techniques for studying the ionosphere in order to determine the physical reasons for GPS signal delays. In the second part of this Chapter the role of Rydberg atoms and molecules of neutral ionospheric plasma components excited in collisions with electrons in the formation of UHF and infrared radiations of the E – and D– layers of Earth's ionosphere is discussed. A new physical mechanism of satellite signal delay due to a cascade of re-emissions on Rydberg states in the decimeter range is suggested.

#### **2. Time dilation and length contraction**

One of the key results of Maxwell's theory of electromagnetism introduced in 1864 was the finding that the speed of light waves has a constant value that is given by the relation: c=(*ε*0*μ*0)<sup>−</sup>0.5, where *ε*<sup>0</sup> is the electric permittivity and *μ*<sup>0</sup> is the magnetic permeability in free space. This result appeared quite strange to physicists of the time because it immediately raised questions about how the speed of anything could be the same in all inertial systems. It clashed with the idea that there was an aether in which light waves are permanently at rest.

Maxwell's equations are not invariant to the Galilean transformation and therefore seemed to be inconsistent with the relativity principle (RP). Voigt [4] was the first [5] to give a different space-time transformation that did accomplish this objective. It differed by a constant factor in all of its equations from what is now known as the Lorentz transformation (LT). In 1899 Lorentz wrote down a general version of this transformation [6]:

#### Relativity Laws for the Variation of Rates of Clocks Moving in Free Space and GPS Positioning Errors Caused http://dx.doi.org/10.5772/55792 5

$$\begin{aligned} \mathbf{t'} &= \mathbf{\dot{\nu}c} \left( t - \mathbf{v} \mathbf{x} \mathbf{c}^{-2} \right) & \mathbf{a} \\ \mathbf{x'} &= \mathbf{\dot{\nu}c} \left( \mathbf{x} - \mathbf{v}t \right) \\ \mathbf{y'} &= \mathbf{\dot{\varepsilon}} \mathbf{y} & \mathbf{c} \\ \mathbf{z'} &= \boldsymbol{\varepsilon} \mathbf{z} & \mathbf{d} \end{aligned} \tag{1}$$

The space-time coordinates (*x*, *y*, *z*, *t*) and (*x* ′ , *y* ′ , *z* ′ , *t* ′ ) for the same object measured by observers who are respectively at rest in two inertial systems S and are compared in these equations. It is assumed that is moving along the positive x axis relative to S with constant speed v, and that their coordinate systems are coincident for *t* =*t* ′ =0 [*γ* =(1−v<sup>2</sup> c-2)<sup>−</sup>0.5 ]. Voigt's value for *ε* is *γ* <sup>−</sup><sup>1</sup> . However, Lorentz pointed out than any value for ε in the general Lorentz transformation leaves Maxwell's equations invariant and thus there is a degree of freedom that needs to be eliminated on some other basis before a specific version can be unambiguously determined. This degree of freedom will be important in the ensuing discussion of relativity theory given below.

It also must be recognized that the speed of light only remains constant as it moves through free space and also remains at the same gravitational potential. This is an important distinction that has significant practical consequences when the light pulses must pass through regions

Experimental and theoretical studies of the state of the ionosphere, and of the physicochemical processes that occur in it, are motivated by the need for reliable functioning of communication channels in different frequency bands. In recent years, most attention has been paid to the improvement of satellite communication and navigation systems that use transionospheric communication channels. The reliability of communication and navigation systems using ionospheric channels depends mainly on knowledge of the behavior of the ionosphere under both quiet and perturbed conditions. Consequently, not only ionospheric plasma perturbations related to the dynamics of the atmosphere, but also processes related to the interaction of electromagnetic waves with the neutral atoms and molecules of the medium in which the waves propagate, should be treated as inhomogeneities. On the other hand, an analysis of disturbances and failures in the operation of space communication systems that use the decimeter wave range, together with the development of theoretical concepts of the physical processes responsible for these phenomena, gives new information about the state of the medium in which space-weather events are occurring and provides opportunities for further improvement of communication systems. This led to the development of special experimental techniques for studying the ionosphere in order to determine the physical reasons for GPS signal delays. In the second part of this chapter, the role of Rydberg atoms and molecules of neutral ionospheric plasma components, excited in collisions with electrons, in the formation of the UHF and infrared radiation of the E- and D-layers of Earth's ionosphere is discussed. A new physical mechanism of satellite signal delay, caused by a cascade of re-emissions on Rydberg states in the decimeter range, is suggested.

4 Global Navigation Satellite Systems - From Stellar to Satellite Navigation

One of the key results of Maxwell's theory of electromagnetism, introduced in 1864, was the finding that the speed of light waves has a constant value given by the relation *c* = (*ε*<sub>0</sub>*μ*<sub>0</sub>)<sup>−0.5</sup>, where *ε*<sub>0</sub> is the electric permittivity and *μ*<sub>0</sub> is the magnetic permeability of free space. This result appeared quite strange to physicists of the time because it immediately raised the question of how the speed of anything could be the same in all inertial systems. It clashed with the idea that there was an aether in which light waves are permanently at rest.

Maxwell's equations are not invariant under the Galilean transformation and therefore seemed to be inconsistent with the relativity principle (RP). Voigt [4] was the first [5] to give a different space-time transformation that did accomplish this objective. It differed by a constant factor in all of its equations from what is now known as the Lorentz transformation (LT). In 1899 Lorentz wrote down a general version of this transformation [6]:

$$
t' = \gamma \varepsilon \left(t - v x c^{-2}\right) \tag{1a}
$$

$$
x' = \gamma \varepsilon \left(x - v t\right) \tag{1b}
$$

$$
y' = \varepsilon y \tag{1c}
$$

$$
z' = \varepsilon z \tag{1d}
$$

where *γ* = (1 − v<sup>2</sup>c<sup>−2</sup>)<sup>−0.5</sup> and *ε* is a normalization function of v that is left undetermined at this stage.

#### **2. Time dilation and length contraction**

The form of the above equations suggested that the Newtonian concept of absolute time needed to be altered in order to be consistent with Maxwell's electromagnetic theory and the corresponding experimental observations. Unlike the simple *t* = *t*′ relation assumed for the Galilean transformation, it would appear from eq. (1a) that the space and time coordinates are mixed in a fundamental way. This meant that observers in relative motion to one another might not generally agree on the time a given event occurred. Poincaré [7] argued in 1898, for example, that the lengths of time intervals might be different for the two observers, i.e. *Δt* ≠ *Δt*′, and therefore that two events that occur simultaneously in S might not occur at the same time based on clocks that are at rest in S′. This eventuality has since come to be known as *remote non-simultaneity*. The space-time mixing concept also predicts that the rates of moving clocks are slower than those of identical counterparts employed by the observer in his own rest frame (time dilation). Larmor [8] published a different version of eqs. (1a-d) (with *ε* = 1) in 1900, as well as a proof that one obtains on this basis the same prediction for relativistic length variations (hereafter referred to as FitzGerald-Lorentz length contraction or FLC) that FitzGerald [9] and Lorentz [10] had derived independently, to second order in vc<sup>−1</sup>, in 1889 and 1892, respectively [5]. Time dilation and remote non-simultaneity also follow directly from Larmor's version of the Lorentz transformation.

In 1905 Einstein's paper [1] on what he later referred to as the special theory of relativity (STR) became the focus of attention, although at least a decade passed before his ideas gained wide acceptance among his fellow physicists. He came out strongly against the necessity of there being an aether which serves as the medium in which light is carried. He argued instead that such a concept is superfluous, and likewise that there is no space in absolute rest and no "velocity vector associated with a point of empty space in which electromagnetic processes take place" [11]. He formulated his version of electromagnetic theory on the basis of two well-defined postulates, the RP and the constancy of the speed of light in free space independent of the state of motion of the detector/observer or the source of the light. Starting from this basis, Einstein went on to derive a version of the relativistic space-time transformation that leaves Maxwell's equations invariant. It is exactly the same transformation that Larmor [8] had reported five years earlier and has ever since been referred to as the Lorentz transformation (LT), i.e. with a value of *ε* = 1 (Einstein refers to this function as *φ* in Ref. 1) in Lorentz's general eqs. (1a-d). Consequently, Larmor's previous conclusion about the FLC also became an integral part of STR, as well as Poincaré's conjecture [7] of remote non-simultaneity.

One of the most interesting features of Einstein's derivation of the LT is that it contains a different justification for choosing a value of unity for *ε* / *φ* than Larmor gave. On p. 900 of Ref. 1, he states without further discussion that "*φ* is a temporarily unknown function of v," the relative speed of the two rest frames S and S′ occurring in the LT. He then goes on to show that considerations of symmetry exclude any other possible value for this function than unity.

Einstein proposed a number of intriguing experiments to test STR, many of which involved the phenomenon of time dilation. The relationship between the time intervals *Δt* and *Δt*′ between the same two events based on stationary clocks in S and S′, respectively, can be obtained directly from eq. (1a) as:

$$
\Delta t' = \gamma \varepsilon \left(\Delta t - v \Delta x c^{-2}\right) \tag{2}
$$


Relativity Laws for the Variation of Rates of Clocks Moving in Free Space and GPS Positioning Errors Caused


http://dx.doi.org/10.5772/55792


whereby Δx is the distance between the locations of the clock in S at the times the measurements are made there. If one has the normal situation in which the two measurements are made at exactly the same location in S, i.e. with *Δx* =0, it follows from STR (with *ε* =1) that

$$
\Delta t' = \gamma \Delta t.\tag{3}
$$

This equation corresponds to time dilation in the rest frame of the observer in S. It states that the observer in S′ must obtain a larger value for the elapsed time since *γ* > 1. One can alter the procedure, however, so that the roles of the two observers are reversed. The inverse relation to eq. (1a) in the LT, i.e. with *ε* = 1, is:

$$t = \gamma \left( t' + v x' c^{-2} \right). \tag{4}$$

The corresponding relation between time intervals thus becomes

$$
\Delta t = \gamma \Delta t'\tag{5}
$$

when one makes the standard assumption that the measurements are carried out at the same location in S′ (*Δx*′ = 0). When one compares eq. (3) with eq. (5), it is evident that there is something paradoxical about Einstein's result. This comparison seems to be telling us that each clock can be running slower than the other at the same time. It is always the "moving" clock that is running slower than its identical stationary counterpart, although it is not self-evident how this distinction can be made unequivocally based on the above derivations. Many authors [12-14] have taken this result to be a clear indication that something is fundamentally wrong with Einstein's STR. The majority view among relativistic physicists nonetheless holds that such a situation is a direct consequence of the RP [15], which holds that all inertial systems are equivalent.
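As a numerical illustration of eqs. (3) and (5), the short Python sketch below evaluates *γ* for a speed typical of a GPS satellite; the 3.874 km/s figure is an assumed round value for a GPS orbit, not one taken from the text:

```python
import math

C = 299_792_458.0  # speed of light in free space, m/s

def lorentz_gamma(v: float) -> float:
    """gamma(v) = (1 - v^2 c^-2)^-0.5, the factor appearing in eqs. (3) and (5)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# Assumed orbital speed of a GPS satellite relative to the geocentric frame:
v_sat = 3874.0  # m/s

# Per eq. (3), the moving clock accumulates less time; over one day (86400 s):
kinematic_lag = (lorentz_gamma(v_sat) - 1.0) * 86400.0
print(f"gamma - 1         = {lorentz_gamma(v_sat) - 1.0:.3e}")
print(f"clock lag per day = {kinematic_lag * 1e6:.2f} microseconds")
```

With these numbers the lag comes out near 7.2 microseconds per day, the kinematic part of the well-known GPS satellite clock correction.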


The same type of symmetry arises for length measurements, in which case analogous manipulations of eq. (1b) and its inverse lead to both:

$$
\Delta x' = \gamma \Delta x, \tag{6}
$$

$$
\Delta x = \gamma \Delta x', \tag{7}
$$

which are the FLC predictions in the direction parallel to the relative velocity of the two observers. The orthodox interpretation is that length contraction and time dilation occur together in both rest frames, i.e. eq. (5) goes with eq. (6), whereas eq. (3) goes with eq. (7). In other words, lengths on the moving object contract in the parallel direction at the same time (and by the same factor) that the rates of its clocks slow down. There is no difference in the length measurements in directions that are perpendicular to the relative velocity of the two rest frames according to STR [1], as is easily seen from eqs. (1c-d) using the value of *ε* =1 which defines Einstein's (and Larmor's) LT. The resulting anisotropic character of the FLC is another of the STR predictions that has evoked much discussion and controversy over the years.

It is important to see that the above symmetry in the predicted results of the two observers represents a stark departure from the long-held belief in the *objectivity of measurement*. Since time immemorial it had previously been assumed that different observers should always agree in principle as to which piece of cloth is longer or which bag of flour weighs more. The rational system of units is clearly based on this principle. Accordingly, one assumes that while individual observers can disagree on the numerical value of a given physical measurement because they use different units to express their results, they should nonetheless agree completely on the *ratio* of any two such values of the same type. In what follows, this state of affairs will be referred to as the Principle of Rational Measurement (PRM). Einstein's theory [1] claims instead that observers in relative motion may not only disagree on the ratio of two such measured values but even on their numerical order. It is not feasible to simply state that, on the basis of eq. (6), the reason the observer in S obtains a smaller value for the length of a given object than his counterpart in S′ is because he employs a unit of distance which is *γ* times larger. One could state with equal justification based on eq. (7) that the unit of distance in S is *γ* times smaller than in S′. In short, the whole concept of rational units is destroyed by the predicted symmetric character of measurement in Einstein's STR.

Einstein's belief in the above symmetry principle was by no means absolute, however. In the same paper [1], he includes a discussion of a clock that moves from its original rest position and returns via a closed path at some later time. The conclusion is that the rate of the moving clock is decreased during its journey, so that upon return to the point of origin it shows a smaller value for the elapsed time than an identical clock that remained stationary there. The symmetry is broken by the fact that it is possible to distinguish which clock remained at rest in the original inertial system and which did not [1,16]. Einstein ends the discussion by concluding that a clock located at the Equator should run slower by a factor of *γ*(R<sub>E</sub>*ω*) than its identical counterpart at one of the Poles (R<sub>E</sub> is the Earth's radius and *ω* is the circular frequency of rotation about the polar axis). This explanation raises a number of questions of its own, however. For example, how can we be sure that a moving clock did not undergo acceleration before attaining its current state of uniform translation? If it did, then its clock rate should be determined independently by its current speed relative to the inertial system from which it was accelerated, and should not depend, at least directly, on its speed relative to a given observer in another inertial system.
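Einstein's Equator-versus-Pole prediction can be quantified with modern values for the Earth's equatorial radius and rotation rate (assumed here; the text gives no numbers). This is only the purely kinematic 1905 estimate:

```python
import math

C = 299_792_458.0   # speed of light, m/s
R_E = 6.378e6       # equatorial radius of the Earth, m (assumed value)
OMEGA = 7.2921e-5   # Earth's rotation rate, rad/s (assumed value)

v_eq = R_E * OMEGA  # speed of an equatorial clock relative to the Pole
gamma = 1.0 / math.sqrt(1.0 - (v_eq / C) ** 2)

# Per Einstein's argument, the equatorial clock runs slow by gamma(R_E * omega):
lag_per_day = (gamma - 1.0) * 86400.0
print(f"equatorial speed  = {v_eq:.1f} m/s")
print(f"clock lag per day = {lag_per_day * 1e9:.0f} ns")
```

The predicted deficit is on the order of 100 ns per day. In practice, clocks at rest on the rotating geoid all run at the same rate because the gravitational potential difference of the equatorial bulge compensates this kinematic term, a point that goes beyond Einstein's 1905 argument.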


The answers to the above questions could only be determined by experiments which were not yet available in 1905. For example, Einstein pointed out that time dilation should produce a transverse (second-order) Doppler effect not expected from the classical theory of light propagation. Eqs. (3) and (5) indicate that a moving light source should emit a lower frequency (greater period) of radiation for a given observer than that of an identical stationary source in his laboratory. The symmetry principle derived from the LT indicates that two observers exchanging light signals should each measure a *red shift*, i.e. a lowering in frequency and corresponding increase in wavelength, from the other's source of radiation when the line joining them is perpendicular to the direction of their relative velocity.

Two years later [17], Einstein used the results of his 1905 paper to derive the Equivalence Principle (EP) between kinematic acceleration and gravity. His analysis of electromagnetic interactions at different gravitational potentials led him to make several more revolutionary predictions on the basis of the EP. First of all, he concluded that the light-speed postulate did not have universal validity. He claimed instead that it should increase with the altitude of the light source. At the same time, he determined that the frequency of light waves should increase as the source moves to a higher gravitational potential. Consequently, light emanating from the Sun should be detected on the Earth's surface with a lower frequency than for an identical source located there. This effect has since been verified many times and in general is referred to as the *gravitational red shift*.
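In the weak-field limit this gravitational shift is Δ*ν*/*ν* ≈ ΔΦ/c<sup>2</sup>, which is the form relevant to GPS. The sketch below uses assumed values for the Earth's gravitational parameter and the GPS orbital radius (neither is given in the text) to estimate how much a satellite clock gains per day relative to a ground clock:

```python
C2 = 299_792_458.0 ** 2  # c^2, m^2/s^2
GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2 (assumed)
R_EARTH = 6.371e6        # mean Earth radius, m (assumed)
R_GPS = 2.6561e7         # GPS orbital radius, ~26,561 km (assumed)

# A clock higher in the gravitational potential runs faster as seen from the
# ground: delta_nu / nu = [Phi(orbit) - Phi(ground)] / c^2.
frac_shift = GM * (1.0 / R_EARTH - 1.0 / R_GPS) / C2
gain_per_day = frac_shift * 86400.0
print(f"fractional shift   = {frac_shift:.3e}")
print(f"clock gain per day = {gain_per_day * 1e6:.1f} microseconds")
```

This gives roughly 46 microseconds per day; combined with the opposing kinematic dilation of about 7 microseconds per day, the net offset of roughly 38 microseconds per day is the correction actually built into GPS satellite clock rates.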

#### **3. Six experiments that led to GPS**

#### **3.1. Transverse Doppler effect**

The first significant test of Einstein's time-dilation theory was carried out by Ives and Stilwell in 1938 [18]. The object of this experiment is to measure the wavelength *λ* of light emitted from a moving source at nearly right angles to the observer in the laboratory. In addition to the ordinary first-order Doppler effect, it was expected from STR [1] that a quadratic shift from the normal wavelength (*λ*0) should be observed by virtue of time dilation at the source. Because of the constancy of the speed of light and the predicted decrease in frequency emanating from the source, an increase in wavelength should be found in accordance with *the transverse Doppler effect* according to the general relation:


$$\lambda = \gamma\left(v\right)\lambda_0\left(1 - v c^{-1}\cos\Theta\right),\tag{8}$$

(*Θ* is the angle from which the light is observed). Of the many difficulties inherent in the experiment, the main one was the requirement of measuring the wavelength of light emitted in a perpendicular direction (*Θ* = π/2) relative to the observer. The ingenious solution proposed by the authors was to average the wavelengths obtained for light emitted in both the forward and backward directions, thereby removing the first-order dependence on v in eq. (8). The test was performed with hydrogen canal rays for velocities in the neighborhood of v = 0.005 c. The two shifted H*β* lines in the corresponding spectra were recorded on the same photographic plate as the un-shifted line at 4861 Å. The predicted shift in the experiment, i.e. *λ* − *λ*<sub>0</sub> with v = 0.00441 c, was 0.0472 Å. Six different plates were considered and the average shift observed was +0.0468 Å. The sign is also in agreement with expectations based on eq. (8), that is, a shift to the red was measured in each case.
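The quoted prediction can be reproduced directly from eq. (8): averaging the wavelengths for cos *Θ* = +1 and cos *Θ* = −1 cancels the first-order term and leaves *λ* = *γ*(v)*λ*<sub>0</sub>, so the quadratic shift is (*γ*(v) − 1)*λ*<sub>0</sub>. A short numerical check:

```python
import math

def lorentz_gamma(beta: float) -> float:
    """gamma as a function of beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

LAMBDA_0 = 4861.0  # unshifted H-beta wavelength, Angstrom
BETA = 0.00441     # v/c quoted for the predicted shift

# Averaging eq. (8) over forward and backward emission removes the
# first-order v*cos(Theta)/c term, leaving the quadratic (time-dilation) shift:
shift = (lorentz_gamma(BETA) - 1.0) * LAMBDA_0
print(f"predicted quadratic red shift = {shift:.4f} Angstrom")
```

The result, about 0.0473 Å, matches the 0.0472 Å prediction quoted above to within rounding, and the measured average of +0.0468 Å lies within the stated experimental uncertainty.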

The accuracy of the Ives-Stilwell experiment was gradually improved. It was later estimated that the actual experimental uncertainty in the original investigation lay in the 10-15% range, although the experimental points could be fitted to a curve with as little as 2-3% deviation. Mandelberg and Witten [19] made substantial improvements in the overall procedure, using hydrogen ion velocities of up to 0.0093 c. Their results indicate that the exponent in the *γ*(v) factor in eq. (8) for the quadratic shift has a value of 0.498±0.025, within about 5% of Einstein's predicted value of 0.5.

In spite of the generally good agreement between theory and experiment with regard to the time-dilation effect, there is still a loose end that has been almost universally ignored in the ensuing discussion. It is clear that the predicted slowing down of clocks in the transverse Doppler investigations is accompanied by an increase in wavelength, not the *decrease* that must be expected on the basis of the FLC (see Sect. II). According to the RP, the wavelength of the radiation that would be measured by an observer co-moving with the light source is just the normal value obtained when the identical source is at rest in the laboratory. The only way to explain this deduction is to assume that the diffraction grating that is used in the rest frame of the moving source has *increased in length* by the same factor *γ*(v) as the clock rates have decreased. Moreover, the amount of the increase must be the same in all directions because the clock-rate decrease is clearly independent of the orientation of the radiation to the laboratory observer. That is the only way to explain how the speed of light can be the same in all directions, at least if one believes in an objective theory of measurement. This conclusion is quite different from that for the anisotropic length contraction phenomenon predicted by the FLC. *It indicates instead that time dilation in a given rest frame is accompanied by isotropic length expansion.* We will return to this point in the discussion of the next series of time-dilation tests involving the decay of meta-stable particles.

**3.3. High-speed rotor experiments**

moving faster in the laboratory.

where *v*<sup>r</sup>

to (R1 <sup>2</sup> –R2 2 )*ω* <sup>2</sup>

detect violations of STR.

faster in the laboratory than the object of the measurements


#### **3.2. Lifetimes of meta-stable particles**

Shortly after the first Ives-Stilwell investigation, Rossi and coworkers [20] reported on their measurements of the transition rate of cosmic-ray muons and other mesons traveling at different speeds relative to the Earth's atmosphere. The effect of time dilation is much greater in this experiment since the particles are observed to move with speeds exceeding v = 0.9c. The lifetime *τ*0 of muons at rest is 2.20x10<sup>-6</sup> s. The decay is exponential and so the transition probability in the laboratory is proportional to exp (−*t* / *τ*0). According to eq. (5) (with *t*′ = *τ*0), the lifetime *τ* of the accelerated muons in the Earth's atmosphere should be *γ*(v) times larger. It was found that the survival probability is proportional to exp (−*t* / *γτ*0) for all values of v, which is therefore consistent with the predicted increase in the muon lifetime. Subsequent improvements in the experiment [21] verified this result to within a few per cent.

In the original studies of meson decays in the atmosphere [20], the quantity which was determined experimentally is the average range before decay L. It was assumed that the value of L is proportional to both the speed v of the particles and their lifetime *τ* in a given rest frame. Consequently, the range is shorter from the vantage point of the particles' rest frame than for an observer on the Earth's surface, since the latter's measured lifetime is longer. Implicit in this conclusion is the assumption that the speed v of the particles relative to the Earth's surface is the same for observers in all rest frames, and thus that L / *τ* has a constant value for each of them. The results of the latter experiments have subsequently been used in textbooks [22, 23] to illustrate the relationship between time dilation and length contraction in STR [1]. Closer examination of the measured data shows that this conclusion is actually incorrect. The reason that the range of the particles is shorter for the observer moving with the meta-stable particles is clearly *because his unit of distance is longer* than for his counterpart O on the Earth's surface. The distance traveled is actually the same for both observers, just as is the amount of elapsed time for them. The reason that O measures a longer time is because his clock runs *γ*(v) times faster. Yet both agree on the speed v of the muons based on their respective units of time and distance. This means that the unit of time (his "second") for the moving observer is *γ*(v) s using the standard definition of 1.0 s employed by O in his rest frame [24]. Similarly, the unit of distance (his "meter") for the moving observer must also be *γ*(v) times larger than the standard unit of 1.0 m employed by O. As a result, the moving observer finds the distance between his position and the Earth's surface to be systematically smaller by a factor of *γ*(v) than does O.
As before with the Ives-Stilwell experiment, the conclusion is thus *not* that lengths in the rest frame of the accelerated system have contracted, but rather that they have expanded relative to those of identical objects in the Earth's rest frame. Moreover, the amount of the length expansion must be the same in all directions, since otherwise it is impossible to explain how the moving observer and O can still agree on the value of the speed of light independent of its orientation to each of them.
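The relations used in these two paragraphs, *τ* = *γ*(v)*τ*0 for the dilated lifetime and L = v*τ* for the mean range, are easy to check numerically. A minimal sketch (the exact speed of 0.9c and the 10 km path length are illustrative assumptions, not the measured cosmic-ray data of Refs. [20,21]):

```python
import math

C = 2.998e8     # speed of light, m/s
TAU0 = 2.20e-6  # muon rest lifetime, s (value quoted in the text)

def gamma(v: float) -> float:
    """Lorentz factor for speed v in m/s."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

v = 0.9 * C                # illustrative muon speed
tau_lab = gamma(v) * TAU0  # dilated lifetime measured in the laboratory
L = v * tau_lab            # mean range before decay, L = v * tau

print(f"gamma(0.9c)      = {gamma(v):.3f}")     # ~2.294
print(f"lab lifetime     = {tau_lab:.3e} s")    # ~5.05e-6 s
print(f"mean decay range = {L / 1000:.2f} km")  # ~1.36 km

# Survival probability over an assumed 10 km of atmosphere,
# proportional to exp(-t / (gamma * tau0)) as found experimentally:
t = 10e3 / v
print(f"P(survive 10 km) = {math.exp(-t / tau_lab):.4f}")
```

Without the factor *γ*(v) the same 10 km survival probability would come out roughly four orders of magnitude smaller, which is why the atmospheric muon flux is such a sensitive test of the lifetime increase.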

#### **3.3. High-speed rotor experiments**


The above two experiments were in quantitative agreement with Einstein's predictions about the amount of time dilation [1], but neither one of them provided a test for the truly revolutionary conclusion that measurement is *subjective*. As discussed in Sect. II, Einstein's LT leads to both eqs. (3) and (5) and is therefore completely ambiguous about which of two clocks in motion is running slower. Yet his argument about clocks located respectively on the Equator and at one of the Earth's Poles [1] suggests a completely different, far more traditional (objective) view of such relationships. Both theories indicate that accelerated clocks should run slower than those left behind at their original location. One needs a "two-way experiment" to actually differentiate between the two theoretical positions, one in which the "observer" is moving faster in the laboratory than the object of the measurements.

Hay et al. [25] carried out x-ray frequency measurements employing the Mössbauer effect that had the potential of resolving this question. Both the light source and the absorber/detector were mounted on an ultracentrifuge that attained rotational speeds of up to 500 revolutions per second. The authors thereby eliminated the angular dependence in eq. (8) by ensuring that

the relative motion of the source to the detector was almost perfectly transverse (*Θ* = π/2). Even

more important in the present context, the absorber was fastened near the rim of the rotor while the light source was located close to the rotor's axis [25], which means that the situation is opposite to that in the Ives-Stilwell investigation [18] and it is now the observer who is moving faster in the laboratory.

According to Einstein's LT [1], the above distinction should have no bearing on the outcome of the experiment. This point is borne out by Will's analysis of the transverse Doppler effect [26] in which he expresses the expected result as follows [see his eq. (6)]:

$$\frac{\nu\_{\rm r}}{\nu\_{\rm e}} = \left(1 - \mathrm{v}\_{\rm e}^{2}/\mathrm{c}^{2}\right)^{0.5},\tag{9}$$

where *ν*r and *ν*e are the observed frequencies at the absorber/receiver and at the emitter, respectively, and ve is the speed of the emitter relative to the receiver. The symmetry in this expression is obvious. The receiver should always measure a smaller frequency than that observed at the light source. Will's analysis is also consistent with the Mansouri-Sexl approach [27] used to detect violations of STR.

In describing their experimental results, Hay et al. [25] state that the "expected shift can be calculated in two ways." By this they mean either by treating the acceleration of the rotor as an effective gravitational field (Einstein's EP) or by "using the time dilatation of special relativity." However, the empirical formula they report for the expected fractional shift in both the energy and frequency of the gamma rays is not consistent with eq. (9) since it is proportional

to (R1<sup>2</sup>−R2<sup>2</sup>)*ω*<sup>2</sup>/2c<sup>2</sup>, where R1 and R2 are the respective distances of the absorber and x-ray source from the rotor axis [25]. In other words, *the sign of the shift changes* when the positions of the absorber and source are exchanged, in clear violation of Einstein's symmetry principle for time dilation. What the empirical formula actually shows is that the magnitude of the effect is correctly predicted by the LT, *but not the sign*.
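The size of the fractional shift (R1<sup>2</sup>−R2<sup>2</sup>)*ω*<sup>2</sup>/2c<sup>2</sup> is easily evaluated; the radii and rotation rate below are illustrative assumptions, not the actual dimensions of the Hay et al. rotor:

```python
import math

C = 2.998e8  # speed of light, m/s

def rotor_shift(r_abs: float, r_src: float, revs_per_s: float) -> float:
    """Fractional frequency shift (v_a**2 - v_s**2) / (2 c**2) for an
    absorber and a source mounted at radii r_abs, r_src (m) on one rotor."""
    omega = 2.0 * math.pi * revs_per_s
    v_a, v_s = omega * r_abs, omega * r_src
    return (v_a ** 2 - v_s ** 2) / (2.0 * C ** 2)

# Absorber near the rim, source near the axis (illustrative geometry):
shift = rotor_shift(r_abs=0.07, r_src=0.01, revs_per_s=500.0)
print(f"{shift:.2e}")  # ~2.6e-13 (positive: blue shift at the absorber)

# Exchanging absorber and source flips the sign, which is the point the
# empirical formula makes against the symmetric eq. (9):
print(rotor_shift(0.01, 0.07, 500.0) == -shift)  # True
```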

The rotor experiments were also carried out later by Kündig [28] and by Champeney et al. [29]. The latter authors report their results in terms of the speeds va and vs of the absorber and the x-ray source, respectively:

$$\frac{\Delta\nu}{\nu} = \frac{\mathbf{v}\_{\text{a}}^{2} - \mathbf{v}\_{\text{s}}^{2}}{2\mathbf{c}^{2}}.\tag{10}$$


This result makes several additional points clear: a) there is a shift to higher frequency (to the blue) if the absorber is rotating faster than the source and b) the magnitude of the shift is consistent with a higher-order formula:

$$\frac{\Delta\nu}{\nu} = \frac{\gamma\left(\mathrm{v}\_{\rm a}\right)}{\gamma\left(\mathrm{v}\_{\rm s}\right)}.\tag{11}$$

If the x-ray source is at rest in the laboratory (vs = 0), eq. (11) reduces to *Δν*/*ν* = *γ*(va). Kündig [28] summarizes this result by stating "that the clock which experiences acceleration is retarded compared to the clock at rest." The slower a clock, the more waves it counts per second [24], hence the observed *blue shift* when the absorber is located farther from the rotor axis than the source. The empirical results are thus seen to be consistent with Einstein's speculation [1] about the relative rates of clocks located at different latitudes on the Earth's surface. The ambiguity implied by eqs. (3) and (5), as derived from the LT, is replaced by a completely objective theory of time dilation satisfying the relation:

$$
\gamma \left( \mathbf{v}\_0 \right) \,\Delta t \,\, = \gamma \left( \mathbf{v}\_0' \right) \,\Delta t',\tag{12}
$$

where v0 and v0′ are the respective velocities of the clocks relative to a specific rest frame, which in the present case is the rotor axis. The above equation is also seen to be consistent with both the Ives-Stilwell [18,19] and meta-stable decay experiments [20,21], for which the observer is at rest in the laboratory (v0 = 0) and the object of the measurement has undergone acceleration to speed v0′ relative to him.
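Eq. (12) can be read as a simple computational rule relating the elapsed times on any two clocks referred to the same rest frame. A minimal sketch (the 0.6c speed is purely illustrative):

```python
import math

C = 2.998e8  # speed of light, m/s

def gamma(v: float) -> float:
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def elapsed_on_other_clock(dt: float, v0: float, v0p: float) -> float:
    """Eq. (12): gamma(v0)*dt = gamma(v0')*dt', solved for dt'.
    Both speeds are referred to the same rest frame (the rotor axis)."""
    return dt * gamma(v0) / gamma(v0p)

# Observer at rest in that frame (v0 = 0), object accelerated to 0.6c:
# the moving clock records fewer seconds, with no ambiguity about roles.
print(elapsed_on_other_clock(1.0, 0.0, 0.6 * C))  # ~0.8 s
```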

It is important to see that the alternative derivation [29] of the empirical formula for frequency shifts given in eq. (10) that makes use of Einstein's EP [17] also corresponds to an objective theory of measurement. Accordingly, one assumes the relationship for the gravitational red shift [30]:


$$\frac{\Delta\nu}{\nu} = \frac{-\Delta\varphi}{\mathrm{c}^2} = \frac{\omega^{2}}{2\mathrm{c}^{2}} \left(\mathrm{R}\_{\rm a}^{2} - \mathrm{R}\_{\rm s}^{2}\right) = \frac{\mathrm{v}\_{\rm a}^{2} - \mathrm{v}\_{\rm s}^{2}}{2\mathrm{c}^{2}} \tag{13}$$

where the centrifugal field of force corresponds to a radial gravitational field strength of R*ω*<sup>2</sup> in each case (*Δφ* is the difference in gravitational potential). There is clearly no ambiguity as to which clock lies higher in the gravitational field and thus there is no question as to which clock runs slower in this formulation. This point is missed in many theoretical descriptions of the frequency shifts in the rotor experiments. For example, Sard [30] claims that the gravitational red shift and the transverse Doppler effects are consistent, and that this "is not surprising, since both follow in a straightforward way from the interweaving of space and time in the Lorentz transformation." This view completely overlooks the simple fact that the transverse Doppler formula in eq. (9) derived from the LT does not allow for the occurrence of blue shifts, whereas the corresponding EP result in eq. (13) demands that they occur whenever the absorber in the rotor experiments moves faster than the x-ray source relative to the laboratory rest frame. Sherwin [31] attempts to clarify the situation by asserting that the ambiguity inherent in eq. (9) only holds for uniform translation, but he fails to give actual experimental results that verify this conclusion. Both Rindler [32] and Sard [30] try to justify the empirical result in terms of orthodox STR by considering the relative retardations experienced by the two clocks in the rotor experiments. This argument leads directly to eqs. (10-11), *but it does so only by eliminating the basic subjective character of the LT.* Once one assumes that two clock rates are strictly proportional, in accord with eq. (12), there is no longer any basis for claiming that the measurement process is subjective. More discussion of this general subject may be found in Ref. [33].
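The middle step of eq. (13) can be made explicit: a radial field strength R*ω*<sup>2</sup> corresponds to an effective potential *φ*(R) = −*ω*<sup>2</sup>R<sup>2</sup>/2, so that (with v = R*ω* for each element)

```latex
\Delta\varphi \;=\; \varphi_{\rm a}-\varphi_{\rm s}
  \;=\; -\tfrac{1}{2}\,\omega^{2}\left(\mathrm{R}_{\rm a}^{2}-\mathrm{R}_{\rm s}^{2}\right)
\;\;\Longrightarrow\;\;
\frac{-\Delta\varphi}{\mathrm{c}^{2}}
  \;=\; \frac{\omega^{2}\left(\mathrm{R}_{\rm a}^{2}-\mathrm{R}_{\rm s}^{2}\right)}{2\mathrm{c}^{2}}
  \;=\; \frac{\mathrm{v}_{\rm a}^{2}-\mathrm{v}_{\rm s}^{2}}{2\mathrm{c}^{2}},
```

in agreement with the quadratic terms of eq. (13): the absorber at the larger radius sits lower in the effective field, and its clock runs slower.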

#### **3.4. Terrestrial measurements of the gravitational red shift**

The gravitational red shift derived in Einstein's Jahrbuch review [17] is expressed in the equation below [cf. eq. (13)]:

$$
\nu\_{\rm D} = \nu\_{\rm X} \left( 1 + \Phi \mathrm{c}^{-2} \right). \tag{14}
$$

In this formula, νX is the frequency of light emitted from a source that is at rest at a given gravitational potential, whereas νD is the corresponding value of the frequency detected by an observer who is not in relative motion to the source but is located at some other position in the gravitational field. The potential energy difference *Φ* between the source and detector determines the fractional amount of the frequency shift. For an experiment near the Earth's surface, *Φ* =gh, where h is the difference in altitude between the source and detector and g is the acceleration due to gravity. The sign convention is such that *Φ* >0 when the source is located at a higher gravitational potential than the detector.

The same proportionality factor occurs [17,34] in the relationship between the light speeds measured at the same two positions in the gravitational field, where cD is the value measured at the detector and cX is the corresponding value measured at the light source:

$$\mathrm{c}\_{\rm D} = \mathrm{c}\_{\rm X} \left( 1 + \Phi \mathrm{c}^{-2} \right). \tag{15}$$


As a result, it is clear that according to the EP, both the frequency and the speed of light increase as the light source moves to a higher gravitational potential. Moreover, the fractional amount of the change is the same in both cases. As will be discussed in the following, there is ample experimental evidence to indicate that both of these relations are correct.

Terrestrial confirmation of the gravitational red shift first became possible with the advent of the Mössbauer technique for detecting changes in x-ray frequencies. Pound and Snider [35] placed a <sup>57</sup>Fe source at a distance h of 22.5 m above an absorber. According to eq. (14), the fractional change in the x-ray frequency (3.47x10<sup>18</sup> Hz) should have a value of ghc<sup>−2</sup> = 2.45x10<sup>−15</sup>. The authors employed the EP directly to obtain their results. By imparting a downward velocity of v = −ghc<sup>−1</sup> to the detector, the resulting first-order Doppler effect exactly balanced the gravitational red shift:

$$\nu\_{\rm D} = \nu\_{\rm X} \Big( 1 + \mathrm{v} \mathrm{c}^{-1} \Big). \tag{16}$$

The ratio of the observed to the predicted shift was found to be 0.9990±0.0076 [35].

The interpretation of the above experiment is nonetheless a matter of some confusion. Pound and Snider [35] basically avoided a theoretical discussion of the ramifications of their work vis-à-vis the general theory of relativity (GTR [36]), but they noted that "the description of the effect as an 'apparent weight' of photons is suggestive. The velocity difference predicted is identical to that which a material object would acquire in free fall for a time equal to the time of flight." The latter value is assumed to be t = hc<sup>−1</sup>, so that the expected velocity increase of the falling particle would indeed be *Δ*v = gt = ghc<sup>−1</sup>. However, it is important to see that the frequency of the emitted light *in absolute terms* is actually the same at the light source as it is on the ground below when it arrives there. As Einstein stated in his 1911 article [34], this is a clear physical requirement because the number of wave crests arriving at a given point over a specific time interval is a constant [37]. *The reason that the detector at the ground level records a higher frequency is because the unit of time there is greater than at the location of the light source.* Therefore, more wave crests are counted in 1 s on the ground clock than for its counterpart at a higher altitude. Comparison of the rates of atomic clocks located at different altitudes over long periods of time has confirmed Einstein's conclusion quantitatively on this point [38].
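The numbers quoted for the Pound-Snider experiment follow directly from *Φ* = gh. A quick check (taking g = 9.81 m/s<sup>2</sup>, which reproduces the quoted 2.45x10<sup>-15</sup> to within rounding):

```python
G = 9.81     # m/s^2, local gravitational acceleration (assumed value)
H = 22.5     # m, height of the 57Fe source above the absorber (from text)
C = 2.998e8  # m/s, speed of light

frac_shift = G * H / C ** 2  # predicted fractional red shift, gh/c^2
v_comp = G * H / C           # detector speed that cancels it, gh/c

print(f"gh/c^2 = {frac_shift:.3e}")  # ~2.46e-15
print(f"v      = {v_comp:.2e} m/s")  # ~7.4e-7 m/s downward drive speed
```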

The above discussion raises another question, however, namely what happens to the speed of light as it descends from the source? According to eq. (15) and Einstein's EP [17,34], the answer is clearly that it decreases, and by the same fractional amount as for the associated frequency red shift given in eq. (14). To be specific for the Pound-Snider example, the speed of light must have a value of c at ground level for an observer located there, whereas the same observer measures the value at the light source to be larger by the fraction ghc<sup>−2</sup>. In other words, the speed of light actually *decreases* as it passes downward from the source to the detector at ground level. This is clearly the opposite of the change expected from the "apparent weight" argument alluded to above. Again, there is ample experimental evidence that Einstein's eq. (15) is quantitatively accurate. One knows, for example, from the work of Shapiro et al. [39] that radio signals slow in the neighborhood of massive bodies such as Venus and Mercury. There are time delays between the emission of radio pulses towards either of these planets and the detection of their echoes when they return to the Earth's surface. This shows that the speed of light decreases under the influence of a gravitational field, i.e. with *Φ* < 0 in eq. (15).


There is another important question that needs to be considered in the present discussion, however, namely what happens to the *wavelength of light* as the source changes its position in a gravitational field? Einstein [17] concluded that "light coming from the solar surface…has a longer wavelength than the light generated terrestrially from the same material on Earth [40]." Since this statement comes directly after his proof of eq. (14) for frequencies, it seems quite plausible he was basing it on the assumption that the product of wavelength *λ* and frequency ν is equal to the speed of light. In order to obtain the quoted result on this basis, however, it was also necessary for him to assume that the speed of light is actually the same at the Sun's surface as on the Earth. This explanation is nonetheless in conflict with eq. (15), which it must be emphasized appears in the same publication [17]. Making the same assumption about the relationship between light speed, wavelength and frequency and comparing eq. (15) with eq. (14) leads unequivocally to the conclusion that the *wavelength of light for a given source is completely independent of the latter's position in a gravitational field*:

$$
\lambda_{D} = c_{D}\nu_{D}^{-1} = c_{X}\nu_{X}^{-1} = \lambda_{X}.\tag{17}
$$

It is important in general to recognize that in each of eqs. (14,15,17), the same quantity is the object of measurement for two different observers. The only reason why these observers do not obtain the same result in each case *is because they employ different units for their various measurements*. In Sect. II, this state of affairs has been referred to as the Principle of Rational Measurement (PRM). The quantity in parentheses in eqs. (14,15), (1 + *Φ*c<sup>−2</sup>), can be looked upon as a *conversion factor* between their respective units; it will be designated in the ensuing discussion simply as S. It will be seen that the conversion factors for other physical quantities can always be expressed as powers of S. For example, the conversion factor for wavelength, and therefore distances in general because *the scaling must be completely uniform in each rest frame*, is S<sup>0</sup>. These conversion factors satisfy the ordinary rules of algebra. For example, since frequency and time are inversely related, it follows that the conversion factor for times is S<sup>−1</sup>. The unit of velocity or speed is equal to the ratio of the distance travelled to elapsed time and hence the conversion factor in this case is equal to the ratio of the corresponding conversion factors, namely S<sup>0</sup>/S<sup>−1</sup>=S, consistent with the previous definition from eq. (15).
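The conversion-factor algebra described above can be sketched in a few lines. The exponent table simply transcribes the powers of S stated in the text; the potential difference used to build S is an arbitrary illustrative choice (h = 1000 m), and the helper names are assumptions of this sketch.

```python
# Conversion-factor algebra: the units of observers at different
# gravitational potentials differ by powers of S = 1 + Phi/c**2 (eqs. 14,15).

C = 2.99792458e8             # m/s
PHI = 9.81 * 1000.0          # J/kg, assumed potential difference (h = 1000 m)

S = 1.0 + PHI / C**2         # conversion factor between the two unit systems

EXPONENT = {                 # power of S carried by each quantity
    "frequency": 1,          # eq. (14)
    "time": -1,              # inverse of frequency
    "distance": 0,           # wavelength invariant, eq. (17)
    "speed": 1,              # eq. (15)
    "energy": 1,             # eq. (18)
    "inertial_mass": -1,     # m = E/c**2 scales as S**1 / S**2
}

def convert(quantity: str, value_at_source: float) -> float:
    """Express a value measured in source units in the detector's units."""
    return value_at_source * S ** EXPONENT[quantity]

# the factors obey ordinary algebra, as the text argues:
assert EXPONENT["speed"] == EXPONENT["distance"] - EXPONENT["time"]
assert EXPONENT["inertial_mass"] + EXPONENT["speed"] == 0   # p = m*v invariant
print(convert("frequency", 3.47e18))   # slightly larger number at the detector
```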


It is useful to employ the above concepts to follow the course of the light pulses in the Pound-Snider experiment [35]. The analysis of two events is sufficient to illustrate the main points in this discussion, namely the initial emission of the x-rays at the higher gravitational potential (I) and their subsequent arrival at the absorber (II). It is important to use the same set of units in comparing the corresponding measured results. In terms of the units at the x-ray source, the observer X there measures the following values for event I: *ν*X(I)=*ν*, *λ*X(I)=*λ* and cX(I)=c, where *ν* and *λ* are the standard frequency and wavelength of the light source. Upon arrival at the absorber, the same observer measures the corresponding values: *ν*X(II)=*ν*, *λ*X(II)=*λ*S<sup>−1</sup> and cX(II)=cS<sup>−1</sup>. In other words, the value of the frequency has not changed during the passage of the x-rays down to the absorber, *but the speed of light has decreased* in accordance with the EP (S>1). Consequently, the wavelength of the light must have decreased by the same factor in order to satisfy the general relation (phase velocity of light) in free space between frequency, wavelength and light speed (*λ*X*ν*X=cX).

The observer D located at the absorber measures generally different values for the same two events, not because he is considering fundamentally different processes, but rather because he bases his numerical results on a different system of physical units (PRM). He therefore finds for event II: *ν*D(II)=S*ν*, *λ*D(II)=*λ*S<sup>−1</sup> and cD(II)=c. His unit of distance is the same and so there is complete agreement on the value of the wavelength (*λ*S<sup>−1</sup>). However, his units of frequency and speed are S times *smaller* than for his counterpart at the higher gravitational potential, and therefore he measures values for these two quantities which are S times *greater* in each case. This set of results illustrates the very important general principle of measurement, namely *that the numerical value obtained is inversely proportional to the unit in which it is expressed.* The corresponding values for the initial emission process are accordingly: *ν*D(I)=S*ν*, *λ*D(I)=*λ* and cD(I)=Sc.

In absolute terms what has happened as a result of the downward passage of the x-rays between source and absorber/detector is that the light frequency has remained constant throughout the entire process, exactly as Einstein demanded in his Jahrbuch review [17]. On the other hand, both the light speed and the corresponding wavelength of the radiation have decreased by a factor of S. This is in agreement with Einstein's second postulate of STR [1], namely that the observer at the absorber must find that the speed of light when it arrives there has a value of c in his units because it is at the same gravitational potential as the observer at that point in time.
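The event-by-event bookkeeping above can be tabulated programmatically. The sketch below, with illustrative helper names, lists the values recorded by observers X and D for events I and II and checks that *λν*=c holds for every entry and that both observers agree on the wavelength at arrival.

```python
# Bookkeeping for the two events analyzed in the text: emission at the
# source (I) and arrival at the absorber (II), as recorded by observer X
# (at the source) and observer D (at the absorber). Each entry is
# (frequency, wavelength, speed) in that observer's own units.

def measurements(nu: float, lam: float, S: float):
    c = nu * lam                                  # phase-velocity relation
    X = {"I": (nu, lam, c),         "II": (nu, lam / S, c / S)}
    D = {"I": (S * nu, lam, S * c), "II": (S * nu, lam / S, c)}
    return X, D

# Pound-Snider numbers: 3.47e18 Hz x-rays, S - 1 = 2.45e-15
X, D = measurements(nu=3.47e18, lam=2.99792458e8 / 3.47e18, S=1.0 + 2.45e-15)

for obs in (X, D):
    for nu_i, lam_i, c_i in obs.values():
        assert abs(nu_i * lam_i - c_i) <= 1e-9 * c_i   # lam*nu = c everywhere
assert X["II"][1] == D["II"][1]   # both agree on the wavelength at arrival
```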

Before closing this section, it is worthwhile to mention how the units of other physical properties vary/scale with gravitational potential. To begin with, energy E satisfies the same proportionality relationship as for frequency and light speed [see eqs. (14,15)]:


$$E_{D} = E_{X}\left(1 + \Phi c^{-2}\right) = S E_{X}. \tag{18}$$

Indeed, one can derive this equation to a suitable approximation using Newton's gravitational theory and the definition of gravitational potential energy as mgh=m*Φ*. According to STR [1], the value of the energy EX at the light source is equal to mc<sup>2</sup> and thus the fractional increase is ghc<sup>−2</sup>=*Φ*c<sup>−2</sup>, in agreement with eq. (18) at least for infinitesimal values of h for which the difference between inertial and gravitational mass is negligible. *The fact that energy and frequency scale in the same way means that Planck's constant ħ does not vary with gravitational potential.* This in turn means that angular momentum in general scales as S<sup>0</sup>, the same as for distance.

Einstein also stated in his Jahrbuch article [17] that the above dependence of energy on gravitational potential implies that there is a position-dependent component corresponding to an inertial mass m equal to E/c<sup>2</sup> [40]. Because E and c [i.e. the generic speed of light of eq. (15)] both vary as S, it follows that the unit of inertial mass scales as S<sup>−1</sup>, the same as time. Values of linear momentum p=mv are the same for observers located at different gravitational potentials, since the corresponding scaling/conversion factors for inertial mass and velocity exactly cancel one another. With reference to the Pound-Snider experiment [35], this means that x-ray photons *increase in momentum* as they fall through the gravitational field. This follows from quantum mechanics and the fact that the corresponding wavelength of the radiation decreases in the process. Hence, the E=pc relation holds for observers located at all gravitational potentials since the light speed is decreasing at exactly the same rate as the momentum is increasing, while the energy of the photons is conserved throughout the entire process. This situation might seem counter-intuitive, but it emphasizes that photons are different from material particles with non-zero rest mass, for which momentum and velocity are strictly proportional (see also the related discussion of light refraction in Ref. 41). Note also that because of the governing quantum mechanical equations, this result is consistent with the *νλ*=c relation for light waves, also already alluded to above. More details about gravitational scaling may be found in a companion publication [42].
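The momentum argument can also be verified numerically. The sketch below, assuming the Pound-Snider value of S for the 22.5 m drop, confirms that p=h/*λ* grows by S while E=pc stays constant; the variable names are illustrative.

```python
# During the fall, both the wavelength and the light speed are reduced by S,
# so the photon momentum p = h/lam grows by S while E = p*c is conserved.

H = 6.62607015e-34          # J*s, Planck constant
C = 2.99792458e8            # m/s
S = 1.0 + 2.45e-15          # conversion factor for the 22.5 m drop

lam0 = C / 3.47e18          # emitted x-ray wavelength at the source
p0, E0 = H / lam0, (H / lam0) * C

lam1, c1 = lam0 / S, C / S  # values after the fall, in the source's units
p1, E1 = H / lam1, (H / lam1) * c1

assert p1 > p0                        # photon momentum has increased
assert abs(E1 - E0) <= 1e-12 * E0     # E = p*c unchanged
```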

#### **3.5. Clock rates on circumnavigating airplanes**


Despite the progress that had been made in carrying out experiments that verify various aspects of Einstein's STR [1], there was still considerable uncertainty as to how to predict the results of future investigations of the time-dilation effect by the time the next significant advance was made in 1971. At that time, Hafele and Keating [43-44] carried out tests of cesium atomic clocks located onboard circumnavigating airplanes that traveled in opposite directions around the globe. In their introductory remarks, the authors noted that the "relativistic clock paradox" was still a subject of "the most enduring scientific debates of this century." The first paper presents some predictions of the expected time differences for clocks located on each airplane as well as at the origin of their respective flights. Their calculations are clearly based on an *objective* theory of time dilation that is consistent with eq. (12). The symmetric relationship that is inherent in eqs. (3) and (5) derived from the LT is completely ignored. Gone is the idea that the observer at the airport should find that the clocks on the airplanes run slower while his counterpart on the airplane should find the opposite ordering in clock rates. To sustain this argument, Hafele and Keating argue [43] that standard clocks located on the Earth's surface are generally not suitable as reference in their calculations because of their acceleration around the Earth's polar axis.

All speeds to be inserted into eq. (12) must therefore be computed relative to the Earth's nonrotating polar axis, or more simply, relative to the gravitational midpoint of the Earth itself. The latter therefore serves the same purpose as the rotor axis in the Hay et al. experiments [25] discussed in Sect. III.C. In short, the rotational speed RΩ cos*χ* of the Earth at a given latitude *χ* must be taken into account for both objects on the ground and on the airplanes. If the airplane travels with ground speed v in an easterly direction, this means that the ratio of the elapsed times of an onboard clock *τ* and that of a clock on the ground *τ*0 is:

$$\frac{\tau}{\tau_0} = \frac{\gamma\left(R\Omega\cos\chi\right)}{\gamma\left(v + R\Omega\cos\chi\right)} \sim 1 - \frac{2vR\Omega\cos\chi + v^2}{2c^2}. \tag{19}$$


The interesting conclusion is that a clock traveling in the westerly direction with the same ground speed (v<0) should run faster than both its easterly counterpart and also the ground clock at the airport of departure.

The latter expression does not include the effect of the gravitational red shift discussed in Sect. III.D. If the altitude of the airplane relative to the ground is h, the ratio becomes:

$$\frac{\tau}{\tau_0} \sim 1 + \frac{gh}{c^2} - \frac{2vR\Omega\cos\chi + v^2}{2c^2}. \tag{20}$$

The gravitational increase in clock rate on the airplane was typically somewhat smaller than the time-dilation effect for the clock traveling eastward, so that the overall effect was still a decrease in its rate relative to the clock at the airport. The two effects reinforce each other for the airplane traveling in the westerly direction.

The predicted effects were mirrored in the experimental data [44]. The predicted time difference between the ground and easterly traveling clocks was -40 ± 23 ns, as compared to the observed value of -59 ± 10 ns. The corresponding values for the westward traveling clock were +275 ± 21 ns (predicted) vs. +273 ± 7 ns (observed). The main sources of error were due to instabilities in the cesium clock rates and uncertainties in the actual flight path of the airplanes.
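Eq. (20) can be applied directly to estimate the Hafele-Keating time differences. The flight parameters below (ground speed, altitude, latitude, duration) are illustrative assumptions rather than the actual flight logs, so only the signs and orders of magnitude should be compared with the quoted nanosecond values.

```python
# Rough reproduction of the Hafele-Keating prediction from eq. (20):
# gravitational blue shift g*h/c^2 minus the time dilation computed
# relative to the Earth's nonrotating polar axis.
import math

C = 2.99792458e8
G_ACC = 9.81
R_E = 6.371e6                  # m, Earth radius
OMEGA = 2 * math.pi / 86400    # rad/s, Earth rotation rate

def time_gain_ns(v: float, h: float, lat_deg: float, duration_s: float) -> float:
    """Elapsed-time difference (flying clock minus ground clock), eq. (20)."""
    surf = R_E * OMEGA * math.cos(math.radians(lat_deg))   # ground rotational speed
    rate = G_ACC * h / C**2 - (2 * v * surf + v**2) / (2 * C**2)
    return rate * duration_s * 1e9

# v > 0 eastward, v < 0 westward; all parameter values are assumed here
east = time_gain_ns(v=+230.0, h=9000.0, lat_deg=30.0, duration_s=41.2 * 3600)
west = time_gain_ns(v=-230.0, h=9000.0, lat_deg=30.0, duration_s=48.6 * 3600)
print(f"eastward: {east:.0f} ns, westward: {west:.0f} ns")
```

The eastward clock loses time (dilation dominates) while the westward clock gains on both counts, exactly the asymmetry described above.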

#### **3.6. Transverse Doppler effect on a rocket**

The transverse Doppler effect was again used to study the effects of motion and gravity on clock rates in an experiment carried out in 1976 by Vessot and Levine [45]. The frequency of a hydrogen-maser oscillator system carried onboard a rocket was measured as it moved up and down in the Earth's gravitational field. The first-order Doppler effect was eliminated by using a transponder system on the rocket, leaving only the second-order transverse Doppler effect and the gravitational red shifts as the cause of the observed frequency shifts. The flight lasted for nearly two hours and carried the rocket to a maximum altitude of 10000 km.

The speed of the rocket was monitored as a function of altitude, and the frequency shift was predicted on the basis of eq. (20), except that the variation of g with altitude was taken into account to obtain the desired accuracy, along with the effect of the centrifugal acceleration of the ground station [see eq. (1) of Ref. 45]. The position and velocity of both the rocket and the ground station were measured in an Earth-centered inertial frame, consistent with the assumptions of the Hafele-Keating analysis [43] of their data for circumnavigating airplanes. It was also necessary to take account of the refractive index of the ionosphere as the rocket passed through this region of space. The Doppler-canceling feature of the experiment was also successful in removing the gross effects of such variations.
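The correction for the variation of g with altitude is far from negligible at a 10 000 km apogee, as a comparison of the naive ghc<sup>−2</sup> estimate with the exact Newtonian potential difference shows. Standard values of GM and the Earth radius are assumed; the function names are illustrative.

```python
# Gravitational frequency shift at rocket apogee: constant-g estimate
# versus the exact Newtonian potential difference GM*(1/R - 1/(R+h))/c^2.

C = 2.99792458e8
GM = 3.986004418e14   # m^3/s^2, Earth's gravitational parameter
R_E = 6.371e6         # m, Earth radius

def shift_constant_g(h: float) -> float:
    """Naive g*h/c**2 with g fixed at its surface value."""
    g0 = GM / R_E**2
    return g0 * h / C**2

def shift_exact(h: float) -> float:
    """Shift from the exact Newtonian potential difference."""
    return GM * (1 / R_E - 1 / (R_E + h)) / C**2

h = 1.0e7   # ~10 000 km apogee
print(shift_constant_g(h), shift_exact(h))   # the constant-g value overshoots badly
```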

The authors concluded that the rocket experiment was consistent with Einstein's EP to a suitably high level of accuracy. By assuming the validity of the EP, they were also able to place an upper bound on the ratio of the one-way and two-way velocity of light. The corresponding Δc/c values ranged from 1.9x10<sup>−8</sup> to −5.6x10<sup>−8</sup> during various parts of the rocket's trajectory.

#### **4. Application of the experimental clock-rate results**


The experiments described in the preceding section provide the necessary empirical data to enable the accurate prediction of how the rates of clocks vary as they are carried onboard rockets and satellites. This information is critical for the success of the Global Positioning System because of the need to measure the elapsed times of light signals passing between its satellites and the Earth's surface.

Before considering how to make the desired predictions based on the above experience, it is interesting to examine how Einstein's original theory of clock rate variations [1] has held up in the light of actual experiments. There has been unanimity in the theoretical discussions accompanying these experiments in claiming that their results mirror perfectly Einstein's predictions. This is particularly true of the authors of the transverse Doppler experiments using high-speed rotors [25,28-29] as well as for Hafele and Keating in their investigations of the rates of atomic clocks onboard circumnavigating airplanes [43,44].

Such an evaluation overlooks the basic point discussed in Sect. II, however, namely that Einstein had two clearly distinguishable conclusions regarding the question of which of two identical clocks in relative motion runs slower in a given experiment [1]. One set of predictions was based on the LT and the conclusion that all inertial systems are equivalent by virtue of the RP. This is a thoroughly *subjective* theory of measurement which is characterized by the ambiguous, seemingly contradictory, results obtained in eqs. (3) and (5), both of which are derived in a straightforward manner from the LT. In this view, which clock runs slower is simply a matter of one's perspective. This conclusion is consistent with Will's equation [26] for the transverse Doppler effect (see eq. [9]), which states that the emitted frequency of a moving light source is necessarily lower than for the same source when it is at rest in the laboratory of the observer. By contrast, Einstein's alternative conclusion was that an accelerated clock is definitely slowed relative to one that remains at rest in its original location. That corresponds to a thoroughly objective theory of measurement consistent with eq. (12), which requires the assignment of a definite origin (objective rest system ORS [46]) from which to measure the speeds of the respective clocks being compared.

be moving relative to him? The latter question is particularly important for the GPS method‐ ology since it relies on the assumption that the time of emission of a light signal is the same on the ground as on an orbiting satellite as long as one corrects for differences in clock rates at the respective locations. We will return to these questions in Sect. V, but first attention will be focused on the way in which the available experimental data on clock rate variations can be

Relativity Laws for the Variation of Rates of Clocks Moving in Free Space and GPS Positioning Errors Caused

http://dx.doi.org/10.5772/55792

21


There is no doubt which of Einstein's two conclusions is meant by the various authors when they assert that their results are perfectly consistent with relativity theory. This point was made quite clear in Sherwin's analysis [31] of the rotor experiments, which he referred to as a demonstration of "the transverse Doppler effect for accelerating systems." It is the *unambiguous* nature of the result of the clock paradox that sets the rotor experiments apart from the classical theory of time dilation derived from the LT. Hafele and Keating [43] take the same position in presenting the underlying theory of their measurements of the rates of atomic clocks. In that case, the authors claim the determining factor for the choice of the Earth's nonrotating axis as the reference system for application of eq. (12) is that it is an inertial frame. However, this conclusion overlooks the fact that the Earth is constantly accelerating as it makes its way around the Sun, and thus it is at least quantitatively inaccurate to claim that a position at one of the Earth's Poles is in constant uniform translation [46].

A simpler alternative view is that the Earth's center of mass (ECM) plays the same role for clock rates as for falling bodies. The only way to change the direction in which free fall occurs is to somehow escape the gravitational pull of the Earth. This has been most notably accomplished with the NASA expeditions to the Moon, in which case there is a definite point in the journey where the rocket tends to fall away from planet Earth and towards its satellite instead. It also occurs in a more conventional fashion when a body undergoes sufficient centrifugal acceleration in a parallel direction to the Earth's surface. In this view, the ORS [46] for computing time dilation is always the same as the reference point for free fall, and thus is subject to change in exactly the same manner. The current speed of the Earth relative to the Sun is therefore irrelevant for the quantitative prediction of clock rates for the applications at hand.

There is a far less subtle point to be considered in the present discussion, however. The conclusion that an accelerated clock always runs slower than one that remains at its original rest location has very definite consequences as far as relativity theory is concerned. It shows unequivocally that Einstein's symmetry principle [1] is violated, and therefore that the LT is contradicted by the experimental findings. Discussions in the literature [30-32] always emphasize that the acceleration effect was predicted by Einstein in his original paper [1] and that it is also entirely consistent with his EP [17], but there is never any mention of the conflict with the LT, which is after all the essential cornerstone of STR from which many long-accepted conclusions follow. The latter include most especially the FLC and remote non-simultaneity. Showing that the LT fails in one of its most basic predictions, the ambiguity in the ordering of local clock rates for two moving observers [31], necessarily raises equally critical questions about the validity of all its other conclusions. For example, does Lorentz invariance hold for the space and time variables under these experimental conditions? Are two events that occur at exactly the same time for one observer not simultaneous for someone else who happens to be moving relative to him? The latter question is particularly important for the GPS methodology since it relies on the assumption that the time of emission of a light signal is the same on the ground as on an orbiting satellite as long as one corrects for differences in clock rates at the respective locations. We will return to these questions in Sect. V, but first attention will be focused on the way in which the available experimental data on clock rate variations can be applied in practice.

#### **4.1. Synchronization of world-wide clock networks**


Global Navigation Satellite Systems - From Stellar to Satellite Navigation


There are two causes of clock-rate variations in terrestrial experiments. Both of them involve the ECM, one depending exclusively on the speed of a given clock relative to that position and the other on the corresponding difference in gravitational potential. As a result, in comparing different clocks on the Earth's surface, it is necessary to know both the latitude *χ* of each clock as well as its altitude h relative to sea level. The time-dilation effect is governed exclusively by eq. (12), whereby the speed to be inserted in the respective *γ* factors is equal to R<sub>E</sub>*Ω*cos *χ*, where *Ω* is the Earth's rotational frequency (2π radians per 24 h = 86 400 s) and R<sub>E</sub> is the Earth's radius (or more accurately, the distance between the location of the clock and the ECM). A standard clock S needs to be designated. Theoretically, there is no restriction on its location. Its latitude *χ*<sub>S</sub> (as well as its radial distance r<sub>S</sub> from the ECM) is then an important parameter in computing the ratio of the rate of each clock in the network to that of the standard clock. It is helpful to define the ratio R as follows:

$$\mathbf{R} = \frac{\gamma\left(\mathrm{R}\_{\mathrm{E}}\Omega\cos\chi\right)}{\gamma\left(\mathrm{R}\_{\mathrm{E}}\Omega\cos\chi\_{\mathrm{S}}\right)}.\tag{21}$$

According to eq. (12), this ratio tells us how much slower (if R>1) or faster (if R<1) the given clock runs than the standard if both are located at the same gravitational potential. The gravitational red shift needs to be taken into account to obtain the actual clock-rate ratio. For this purpose, it is helpful to define a second ratio S for each clock:

$$\mathbf{S} = 1 + \frac{\mathbf{g}\left(\mathbf{r} - \mathbf{r}\_{\mathrm{S}}\right)}{\mathbf{c}^{2}},\tag{22}$$

where r is the distance of the clock to the ECM. This ratio tells us how much faster (S>1) the clock runs relative to the standard by virtue of their difference in gravitational potential. The elapsed time *Δt* on the clock for a given event can then be converted to the corresponding elapsed time *Δt*<sub>S</sub> on the standard clock by combining the two ratios as follows:

$$
\Delta t\_{\rm S} = \left(\frac{\rm R}{\rm S}\right) \Delta t.\tag{23}
$$
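As a minimal numerical sketch of eqs. (21)-(23): the constants below (a mean Earth radius, a round surface g) and the sample latitudes and altitudes are illustrative assumptions, not values from the text.

```python
import math

C = 2.99792458e8                 # speed of light (m/s)
R_E = 6.371e6                    # assumed mean Earth radius (m)
OMEGA = 2.0 * math.pi / 86400.0  # Earth's rotational frequency (rad/s)
G_SURF = 9.81                    # assumed surface gravitational acceleration (m/s^2)

def gamma(v):
    """Relativistic gamma factor for speed v."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def ratio_R(lat, lat_std):
    """Eq. (21): ratio of gamma factors for the rotational speeds at two latitudes (radians)."""
    return gamma(R_E * OMEGA * math.cos(lat)) / gamma(R_E * OMEGA * math.cos(lat_std))

def ratio_S(r, r_std, g=G_SURF):
    """Eq. (22): gravitational red-shift ratio for radial distances r, r_std from the ECM."""
    return 1.0 + g * (r - r_std) / C ** 2

def to_standard(dt, R, S):
    """Eq. (23): convert elapsed time dt on the local clock to the standard clock."""
    return (R / S) * dt

# Illustrative case: clock at the equator at sea level, standard clock at
# 45 deg latitude and 1 km altitude.  R > 1 (equatorial clock moves faster,
# so it runs slower) and S < 1 (it sits deeper in the potential well).
R = ratio_R(math.radians(0.0), math.radians(45.0))
S = ratio_S(R_E, R_E + 1000.0)
dt_std = to_standard(86400.0, R, S)  # one local day expressed on the standard clock
```

Both corrections are of order 10⁻¹³, so one local day maps to a standard-clock interval differing by only tens of nanoseconds.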

It is possible to obtain the above ratios without having any communication between the laboratories that house the respective clocks. The necessary synchronization can begin by sending a light signal directly from the position of the clock A lying closest to the standard clock S. The corresponding distance can be determined to as high an accuracy as possible using GPS. Division by c then gives the elapsed time for the one-way travel of the signal based on the standard clock S. The time of arrival on the standard clock is then adjusted backward by this amount to give the time of emission T<sub>S0</sub>(A) for the signal, again based on the standard clock. The corresponding time of the initial emission read from the local clock is also stored as the value T<sub>0</sub>(A). In principle, all subsequent timings can be determined by subtracting T<sub>0</sub>(A) from the current reading on clock A to obtain *Δt* to be inserted in eq. (23). The time T<sub>S</sub> of the event on the standard clock is then computed to be:

$$\mathbf{T}\_{\rm S} = \left(\frac{\mathbf{R}}{\mathbf{S}}\right)\Delta t + \mathbf{T}\_{\rm S0}\left(\mathbf{A}\right),\tag{24}$$


where R and S are the specific values of the ratios computed above for clock A.
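The bookkeeping of eq. (24) can be sketched as follows. All numbers here are hypothetical illustrations: the S-to-A distance, the arrival time, the stored emission readings, and the R and S ratios are invented test values, not data from the text.

```python
# Hypothetical synchronization record for clock A (all values illustrative).
c = 2.99792458e8
distance_m = 1.2e5                       # assumed S-to-A distance obtained from GPS (m)
arrival_on_S = 1000.0                    # assumed arrival time of the sync signal on clock S (s)
T_S0_A = arrival_on_S - distance_m / c   # emission time on S, adjusted backward by distance/c
T0_A = 999.9996                          # assumed emission time as read on local clock A (s)

# Illustrative ratios of the kind produced by eqs. (21)-(22).
R, S = 1.0 + 6.0e-13, 1.0 - 1.1e-13

def standard_time(reading_on_A):
    """Eq. (24): T_S = (R/S) * dt + T_S0(A), with dt elapsed on clock A since emission."""
    dt = reading_on_A - T0_A
    return (R / S) * dt + T_S0_A
```

Any later reading on clock A can then be mapped onto the standard clock without further signal exchanges, which is the point of the procedure described above.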

Once the above procedure has been applied to clock A, it attains status equivalent to a standard. The next step therefore can be applied to the clock which is nearest either to clock A or clock S. In this way the network of standard clocks can be extended indefinitely across the globe. Making use of the "secondary" standard (A) naturally implies that all timings there are based on its adjusted readings. It is important to understand that no physical adjustments need to be made to the secondary clock; rather, its direct readings are simply combined with the R and S factors in eq. (24) to obtain the timing results for a hypothetical standard. A discussion of this general point has been given earlier by Van Flandern [47]. The situation is entirely analogous to having a clock in one's household that runs systematically slower than the standard rate. One can nonetheless obtain accurate timings by multiplying the readings from the faulty clock by an appropriate factor and keeping track of the time that has elapsed since it was last set to the correct time. The key word in this discussion is "systematic": if the error is always of quantitatively reliable magnitude, the faulty clock can replace the standard without making any repairs.

#### **4.2. Adjustment of satellite clocks**

The same principles used to standardize clock rates on the Earth's surface can also be applied for adjusting satellite clocks. Assume that the clock is running at the standard rate prior to launch and is perfectly synchronized with the standard clock (i.e. as adjusted at the local position). In order to illustrate the principles involved, the gravitational effects of other objects in the neighborhood of the satellite are neglected in the following discussion, as well as inhomogeneous characteristics of the Earth's gravitational field.

The main difference relative to the previous example is that the R and S factors needed to make the adjustment from local to standard clock rate using eqs. (23-24) are no longer constant. Their computation requires a precise knowledge of the trajectory of the satellite, specifically the current values of its speed v and radial distance r relative to the ECM. The acceleration due to gravity changes in flight, and so the ratio S also has to be computed in a more fundamental way. For this purpose, it is helpful to define the following quantity connected with the gravitational potential, which is a generalization of the quantity in parentheses in eq. (18):

$$\mathbf{A(r)} = 1 + \frac{\mathbf{GM\_E}}{\mathbf{c^2r}},\tag{25}$$

where G is the universal gravitational constant (6.670×10<sup>−11</sup> N m<sup>2</sup>/kg<sup>2</sup>) and M<sub>E</sub> is the gravitational mass of the Earth (5.975×10<sup>24</sup> kg). The value of S is therefore given as the ratio of the A values for the standard clock and the satellite:

$$\mathbf{S} = \frac{\mathbf{A}\left(\mathbf{r\_S}\right)}{\mathbf{A}\left(\mathbf{r}\right)},\tag{26}$$

which simplifies to eq. (22) near the Earth's surface (with g = GM<sub>E</sub>/r<sub>S</sub><sup>2</sup>).
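This reduction can be checked numerically, using the constants quoted above. The standard-clock radial distance and the 1 km test altitude are arbitrary assumed values for the comparison.

```python
import math

G = 6.670e-11      # gravitational constant as quoted in the text (N m^2 / kg^2)
M_E = 5.975e24     # gravitational mass of the Earth as quoted in the text (kg)
C = 2.99792458e8   # speed of light (m/s)
R_S = 6.371e6      # assumed radial distance of the standard clock from the ECM (m)

def A(r):
    """Eq. (25): gravitational-potential factor A(r) = 1 + G*M_E / (c^2 * r)."""
    return 1.0 + G * M_E / (C ** 2 * r)

def S_exact(r):
    """Eq. (26): S as the ratio of A values for the standard clock and the satellite."""
    return A(R_S) / A(r)

def S_surface(h):
    """Eq. (22) with g = G*M_E / r_S^2, valid for small altitudes h above the standard clock."""
    g = G * M_E / R_S ** 2
    return 1.0 + g * h / C ** 2

# For a clock 1 km above the standard clock the two expressions agree closely.
h = 1000.0
exact, approx = S_exact(R_S + h), S_surface(h)
```

At this altitude both forms give S − 1 ≈ 10⁻¹³, and their difference is far below double precision at unity, confirming the stated small-altitude limit.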

The corresponding value of the R ratio is at least simple in form:


$$\mathbf{R} = \frac{\boldsymbol{\gamma}\left(\mathbf{v}\right)}{\boldsymbol{\gamma}\left(\mathbf{v}\_{\rm S}\right)} = \frac{\boldsymbol{\gamma}\left(\mathbf{v}\right)}{\boldsymbol{\gamma}\left(\mathbf{R}\_{\rm E}\boldsymbol{\Omega}\cos\chi\_{\rm S}\right)}.\tag{27}$$

Note that the latitude *χ*<sub>S</sub> is not that of the launch position relative to the ECM, but rather that of the original standard clock. The accuracy of the adjustment procedure depends primarily on the determination of the satellite speed v relative to the ECM at each instant.

In this application the underlying principle is to adjust the satellite clock rate to the corresponding standard value over the entire flight, including the period after orbit has been achieved. The correction is made continuously in small intervals by using eq. (23) and the current values of R and S in each step. The result is tantamount to having the standard clock running at its normal rate on the satellite. This procedure supersedes the "precorrection" technique commonly discussed in the literature [3], according to which the satellite clock is *physically* adjusted prior to launch. The latter's goal is to approximately correct for the estimated change in clock rate expected if the satellite ultimately travels in a constant circular trajectory once it achieves orbital speed. The present theoretical procedure has the advantage of being able to account for departures from a perfectly circular orbit and also for rate changes occurring during the launch phase.
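The stepwise correction described above can be sketched as follows. The standard clock's latitude and the sampled ascent profile (speeds and radial distances) are hypothetical values invented for illustration; the constants are those quoted in the text.

```python
import math

G, M_E = 6.670e-11, 5.975e24     # constants as quoted in the text
C = 2.99792458e8
R_E = 6.371e6                    # assumed mean Earth radius (m)
OMEGA = 2.0 * math.pi / 86400.0  # Earth's rotational frequency (rad/s)
LAT_S = math.radians(45.0)       # assumed latitude of the standard clock
V_S = R_E * OMEGA * math.cos(LAT_S)  # speed of the standard clock about the ECM
R_STD = R_E                      # assumed radial distance of the standard clock

def gamma(v):
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def A(r):
    """Eq. (25): gravitational-potential factor."""
    return 1.0 + G * M_E / (C ** 2 * r)

def corrected_elapsed(samples):
    """Accumulate eq. (23) over small intervals.  Each sample is (dt_local, v, r),
    with v and r the satellite's current speed and radial distance from the ECM."""
    total = 0.0
    for dt, v, r in samples:
        R = gamma(v) / gamma(V_S)  # eq. (27): current time-dilation ratio
        S = A(R_STD) / A(r)        # eq. (26): current gravitational ratio
        total += (R / S) * dt      # eq. (23), applied step by step
    return total

# Hypothetical ascent: speed climbs toward orbital velocity while altitude grows.
samples = [(1.0, 500.0 + 70.0 * i, R_E + 2.0e4 * i) for i in range(100)]
elapsed_std = corrected_elapsed(samples)
```

Because R and S are re-evaluated at every step, the sketch handles non-circular trajectories and the launch phase in exactly the manner argued for above.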

#### **5. The new Lorentz transformation**

Although the relativistic principles for adjusting the rates of moving clocks are well understood, there is still an open question about how this all fits in with Einstein's STR [1]. The standard argument, starting with the high-speed rotor experiments of Hay et al. [25], has been that one only needs to know the speed of the clock relative to some specific inertial system in order to compute the decrease in its rate using eq. (12). While this is true, the procedure itself cannot be said to be consistent with the basic subjectivity of Einstein's original theory. As discussed in the beginning of Sect. IV, one has to disregard the predictions of the LT in order to have a theory which unambiguously states which of two clocks is running slower, namely the one that is accelerated by the greater amount relative to the aforementioned inertial system [28,31]. The consequences of ignoring the predictions of the LT in this situation are generally avoided in theoretical discussions of clock rates, with authors [16, 30-32] preferring instead to explain how correct predictions can be obtained by assuming that the amount of time dilation is proportional to the *γ*(v) factor computed from the speed v of the accelerated clock. The argument is made that the LT can only be used for clocks in uniform translation [31], but the implication is that its many other predictions still retain their validity, such as remote non-simultaneity and Lorentz space-time invariance. However, the fact is that there is no basis for making the latter conclusion. The best that can be said is that there is no essential connection between the two types of predictions, and so failing in one of them does not necessarily rule out the validity of the others.

The standard procedure for addressing such problems is to find a way to amend the theory that removes the contradiction at hand, namely in the present case, the prediction of STR that two clocks can both be running slower than the other at the same time [1], while still retaining the capability of making all the other hitherto successful predictions of the theory without further modification. In previous work [48-50] it has been shown that the following space-time transformation accomplishes this objective by making a different choice for the function *ε* in eqs. (1a-d) than the value of unity assumed by Einstein in his original derivation of the LT [1]:

$$\begin{aligned} t' &= \mathbf{Q}^{-1} t & \mathbf{a} \\ \mathbf{x'} &= \eta \mathbf{Q}^{-1} \left(\mathbf{x} - \mathbf{v}t\right) & \mathbf{b} \\ y' &= \eta \left(\boldsymbol{\gamma} \mathbf{Q}\right)^{-1} y & \mathbf{c} \\ z' &= \eta \left(\boldsymbol{\gamma} \mathbf{Q}\right)^{-1} z & \mathbf{d} \end{aligned} \tag{28}$$

speed of the primed and unprimed coordinate systems related in the transformation. It leads

The alternative Lorentz transformation (ALT) in eqs. (28a-d) satisfies both of Einstein's postulates [1] because of its direct relation to the general equations in eqs. (1a-d), and it is also

consistent with the same velocity transformation (VT) that is an integral part of STR:

( ) ( ) ( )

1 – v c v v a

1– v c b

*xz z*

1–v c . c

It is more than an *alternative* to Einstein's LT, however, because it is possible to show that the LT is invalid even for uniformly translating systems [51]. This can be seen by considering the example of two observers measuring the length of a line that is oriented perpendicularly to their relative velocity. The FLC, which is derived from the LT, predicts [1] that they must agree

must agree on the speed of light, however, one concludes that the corresponding two distance

This contradiction has been referred to as the "clock riddle" in earlier work [50-51], as opposed

merely the result of Einstein's original (undeclared) assumption mentioned above. The contradiction is easily removed by making a different assumption for the value of *ε* in eqs. (1a-d). The ALT makes this choice by assuming that the clock rates of the two observers must be strictly proportional, as expressed by eq. (28a). The latter assumption is clearly consistent with the high-speed rotor [25,28-29] and Hafele-Keating [43-44] experiments, and leads to the result expected on the basis of time dilation and the light-speed constancy postulate. The ALT also leads directly to the key experimental results connected with the aberration of starlight at the zenith and the Fresnel light-drag experiment, since they can be derived quite simply from

The ALT is the centerpiece of a thoroughly objective version of relativity theory. It subscribes to the Principle of Rational Measurement (PRM), whereby two observers must always agree in principle on the ratio of physical quantities of the same type [42]. There is never any question about which clock is running slower or which distance is shorter, unlike the case for Einstein's STR. The new theory [48,49] does this by restating the RP: The laws of physics are the same in all inertial systems, *but the units on which they are expressed may vary systematically between rest frames.* The passengers locked below deck in Galileo's ship cannot notice differences between traveling on a calm sea and being tied up on the dock next to the shore, but this does not

to the "clock paradox" frequently discussed in the literature. In fact, the *Δy* =*Δy* ′

=*Δy* ′ .

hg

<sup>1</sup> 12 1


<sup>1</sup> 12 1

*y xy y*


*u uu u*

*u uu u*

of space and time variables in the LT instead of the simpler result in eq. (28a).

<sup>1</sup> <sup>2</sup>


*u uu u*

*x xx x*

¢ = -= -

( )

¢ = =

( )

¢ = =

z

measurements also differ, since *Δy* =c*Δt* ≠c*Δt* ′

on this value (*Δy* =*Δy* ′

the VT of eqs. (29a-c).

different elapsed times (*Δt* ≠*Δt* ′

g

g

and *z* = *z* ′ relations of the LT instead of eqs. (28c-d), but it also forces the mixing

Relativity Laws for the Variation of Rates of Clocks Moving in Free Space and GPS Positioning Errors Caused

h

). If their clocks run at different rates, however, they will measure

) for a light pulse to travel the same distance. Since they also

(29)

25

http://dx.doi.org/10.5772/55792

prediction is

hg

to the *y* = *y* ′

In this set of equations, *<sup>η</sup>* =(1–v*ux*c−<sup>2</sup> )−<sup>1</sup> =(1–v*xt* <sup>−</sup><sup>1</sup> c<sup>−</sup>2)<sup>−</sup><sup>1</sup> and Q is a proportionality factor that defines the ratio of the two clock rates in question. It is assumed that the rates of moving clocks are strictly proportional to one another, thereby eliminating the space-time mixing inherent in the LT. Reviewing the arguments that Einstein used in his derivation of the LT shows that he in fact made an undeclared assumption therein without any attempt to justify it on the basis of experimental observations. The assumption was that *ε* can only depend on v, the relative speed of the primed and unprimed coordinate systems related in the transformation. It leads to the *y* = *y* ′ and *z* = *z* ′ relations of the LT instead of eqs. (28c-d), but it also forces the mixing of space and time variables in the LT instead of the simpler result in eq. (28a).

**5. The new Lorentz transformation**

24 Global Navigation Satellite Systems - From Stellar to Satellite Navigation

out the validity of the others.

In this set of equations, *<sup>η</sup>* =(1–v*ux*c−<sup>2</sup>

Although the relativistic principles for adjusting the rates of moving clocks are well under‐ stood, there is still an open question about how this all fits in with Einstein's STR [1]. The standard argument, starting with the high-speed rotor experiments of Hay et al. [25], has been that one only needs to know the speed of the clock relative to some specific inertial system in order to compute the decrease in its rate using eq. (12). While this is true, the procedure itself cannot be said to be consistent with the basic subjectivity of Einstein's original theory. As discussed in the beginning of Sect. IV, one has to disregard the predictions of the LT in order to have a theory which unambiguously states which of two clocks is running slower, namely the one that is accelerated by the greater amount relative to the aforementioned inertial system [28,31]. The consequences of ignoring the predictions of the LT in this situation are generally avoided in theoretical discussions of clock rates, with authors [16, 30-32] preferring instead to explain how correct predictions can be obtained by assuming that the amount of time dilation is proportional to the *γ*(v) factor computed from the speed v of the accelerated clock. The argument is made that the LT can only be used for clocks in uniform translation [31], but the implication is that its many other predictions still retain their validity, such as remote nonsimultaneity and Lorentz space-time invariance. However, the fact is that there is no basis for making the latter conclusion. The best that can be said is that there is no essential connection between the two types of predictions, and so failing in one of them does not necessarily rule

The standard procedure for addressing such problems is to find a way to amend the theory that removes the contradiction at hand, namely in the present case, the prediction of STR that two clocks can both be running slower than the other at the same time [1], while still retaining the capability of making all the other hitherto successful predictions of the theory without further modification. In previous work [48-50] it has been shown that the following space-time transformation accomplishes this objective by making a different choice for the function *ε* in eqs. (1a-d) than the value of unity assumed by Einstein in his original derivation of the LT [1]:

$$t' = \frac{t}{Q} \qquad \text{a}$$

$$x' = \frac{\eta}{Q}\left(x - vt\right) \qquad \text{b} \tag{28}$$

$$y' = \frac{\eta}{\gamma Q}\,y \qquad \text{c}$$

$$z' = \frac{\eta}{\gamma Q}\,z, \qquad \text{d}$$

where *η* = (1 – v*xt*<sup>−1</sup>*c*<sup>−2</sup>)<sup>−1</sup> and Q is a proportionality factor that defines the ratio of the two clock rates in question. It is assumed that the rates of moving clocks are strictly proportional to one another, thereby eliminating the space-time mixing inherent in the LT. Reviewing the arguments that Einstein used in his derivation of the LT shows that he in fact made an undeclared assumption therein without any attempt to justify it on the basis of experimental observations. The assumption was that *ε* can only depend on v, the relative speed of the two rest frames.

The alternative Lorentz transformation (ALT) in eqs. (28a-d) satisfies both of Einstein's postulates [1] because of its direct relation to the general equations in eqs. (1a-d), and it is also consistent with the same velocity transformation (VT) that is an integral part of STR:

$$u\_x' = \left(1 - vu\_xc^{-2}\right)^{-1}\left(u\_x - v\right) = \eta\left(u\_x - v\right) \quad \text{a}$$

$$u\_y' = \gamma^{-1}\left(1 - vu\_xc^{-2}\right)^{-1}u\_y = \eta\gamma^{-1}u\_y \quad \text{b} \tag{29}$$

$$u\_z' = \gamma^{-1}\left(1 - vu\_xc^{-2}\right)^{-1}u\_z = \eta\gamma^{-1}u\_z. \quad \text{c}$$
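The consistency between the ALT and the VT can be verified numerically. The sketch below assumes the reconstructed form of eqs. (28a-d), t′ = t/Q, x′ = η(x − vt)/Q, y′ = ηy/(γQ), z′ = ηz/(γQ), with η = (1 − vu<sub>x</sub>/c²)⁻¹; the values chosen for v, Q and the particle velocity are arbitrary illustrative assumptions:

```python
import math

c = 299_792_458.0  # speed of light, m/s

def alt(x, y, z, t, v, Q):
    """Alternative Lorentz transformation, eqs. (28a-d) as reconstructed."""
    gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)
    eta = 1.0 / (1.0 - v * (x / t) / c**2)   # eta = (1 - v*u_x/c^2)^-1
    tp = t / Q                               # (28a): strictly proportional clock rates
    xp = eta * (x - v * t) / Q               # (28b)
    yp = eta * y / (gamma * Q)               # (28c)
    zp = eta * z / (gamma * Q)               # (28d)
    return xp, yp, zp, tp

# arbitrary event on a uniformly moving worldline with u = (0.5c, 0.2c, 0.1c)
v, Q = 0.6 * c, 1.7
ux, uy, uz = 0.5 * c, 0.2 * c, 0.1 * c
t = 2.0
x, y, z = ux * t, uy * t, uz * t

xp, yp, zp, tp = alt(x, y, z, t, v, Q)

# compare the primed velocities with the VT of eqs. (29a-c)
gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)
eta = 1.0 / (1.0 - v * ux / c**2)
assert math.isclose(xp / tp, eta * (ux - v), rel_tol=1e-12)      # (29a)
assert math.isclose(yp / tp, eta * uy / gamma, rel_tol=1e-12)    # (29b)
assert math.isclose(zp / tp, eta * uz / gamma, rel_tol=1e-12)    # (29c)
```

Note that Q cancels out of every ratio x′/t′, y′/t′, z′/t′: the measured velocities, and hence both of Einstein's postulates, are independent of the clock-rate ratio.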

It is more than an *alternative* to Einstein's LT, however, because it is possible to show that the LT is invalid even for uniformly translating systems [51]. This can be seen by considering the example of two observers measuring the length of a line that is oriented perpendicularly to their relative velocity. The FLC, which is derived from the LT, predicts [1] that they must agree on this value (*Δy* = *Δy* ′). If their clocks run at different rates, however, they will measure different elapsed times (*Δt* ≠ *Δt* ′) for a light pulse to travel the same distance. Since they must also agree on the speed of light, one concludes that the corresponding two distance measurements also differ, since *Δy* = c*Δt* ≠ c*Δt* ′ = *Δy* ′.
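The arithmetic of this argument can be made explicit. With strictly proportional clock rates, eq. (28a) gives *Δt* ′ = *Δt*/Q, so that

$$\Delta y' = c\,\Delta t' = \frac{c\,\Delta t}{Q} = \frac{\Delta y}{Q} \neq \Delta y \qquad (Q \neq 1),$$

in direct contradiction with the FLC prediction *Δy* = *Δy* ′.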

This contradiction has been referred to as the "clock riddle" in earlier work [50-51], as opposed to the "clock paradox" frequently discussed in the literature. In fact, the *Δy* =*Δy* ′ prediction is merely the result of Einstein's original (undeclared) assumption mentioned above. The contradiction is easily removed by making a different assumption for the value of *ε* in eqs. (1a-d). The ALT makes this choice by assuming that the clock rates of the two observers must be strictly proportional, as expressed by eq. (28a). The latter assumption is clearly consistent with the high-speed rotor [25,28-29] and Hafele-Keating [43-44] experiments, and leads to the result expected on the basis of time dilation and the light-speed constancy postulate. The ALT also leads directly to the key experimental results connected with the aberration of starlight at the zenith and the Fresnel light-drag experiment, since they can be derived quite simply from the VT of eqs. (29a-c).

The ALT is the centerpiece of a thoroughly objective version of relativity theory. It subscribes to the Principle of Rational Measurement (PRM), whereby two observers must always agree in principle on the ratio of physical quantities of the same type [42]. There is never any question about which clock is running slower or which distance is shorter, unlike the case for Einstein's STR. The new theory [48,49] does this by restating the RP: The laws of physics are the same in all inertial systems, *but the units in which they are expressed may vary systematically between rest frames.* The passengers locked below deck in Galileo's ship cannot notice differences between traveling on a calm sea and being tied up on the dock next to the shore, but this does not preclude the possibility that all their clocks run at a different rate in one state of constant motion than in the other. Such distinctions first become clear when they are able to look out of a window and compare their measurements of the same quantity with those of their counterpart in another rest frame.


Relativity Laws for the Variation of Rates of Clocks Moving in Free Space and GPS Positioning Errors Caused

http://dx.doi.org/10.5772/55792



At the same time, the ALT insists on remote simultaneity by virtue of eq. (28a), since it is no longer possible for *Δt* to vanish without *Δt* ′ doing so as well. It does away with space-time mixing, the FLC and Lorentz invariance, and it also removes any possibility of time reversal (*Δt* *Δt* ′ < 0), since the proportionality factor Q is necessarily greater than zero in eqs. (28a-d). Not one of the latter effects has ever been verified experimentally, although there has been endless speculation about each of them. The ALT might be more aptly referred to as the "GPS LT," because it conforms to the basic assumption of this technology that the rates of atomic clocks in relative motion are only affected by their current state of motion and position in a gravitational field. More details concerning these aspects of relativity theory may be found in a companion article [42].
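The "basic assumption of this technology" can be illustrated with the standard GPS clock-rate budget: the second-order Doppler (motion) effect slows the satellite clock by about 7 μs per day, while the higher gravitational potential speeds it up by about 45 μs per day, for a net gain near 38 μs per day. The orbital radius, GM and Earth-radius figures below are nominal textbook values supplied for illustration, not taken from this chapter:

```python
import math

c   = 299_792_458.0    # speed of light, m/s
GM  = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_E = 6.371e6          # mean Earth radius, m
r   = 2.6561e7         # nominal GPS orbital radius, m

v = math.sqrt(GM / r)                 # circular orbital speed, ~3.87 km/s
motion  = -v**2 / (2 * c**2)          # time dilation: orbiting clock runs slow
gravity = GM / c**2 * (1/R_E - 1/r)   # higher potential: orbiting clock runs fast
net     = motion + gravity            # fractional rate offset

day = 86_400e6  # microseconds per day
print(f"motion : {motion*day:+.1f} us/day")   # about -7.2
print(f"gravity: {gravity*day:+.1f} us/day")  # about +45.7
print(f"net    : {net*day:+.1f} us/day")      # about +38.5
```

Both contributions depend only on the clock's current speed and gravitational potential, exactly the proportionality-factor picture embodied in Q.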

#### **6. GPS positioning errors during strong solar activity**

The relativistic and gravitational effects discussed above operate over the large distance from the GPS satellites down to the E-layer of the atmosphere. In accordance with existing experimental data these effects lead to positioning errors which do not exceed 3 m on the Earth's surface. The next largest sources of error may be due to satellite geometry with respect to the GPS receiver, as well as orbit deformation due to the influence of gravity and the uneven distribution of this field.

We consider below another key aspect of the process of point-location determination by GPS. It rests on the fact that distance can be determined using information from an atomic clock, together with the assumption that light pulses always move through space at the same speed. To determine the corresponding distance it is then sufficient to multiply this speed by the transit time of the light pulse between two points in space. Use of this simple approach leads to problems in GPS positioning that arise from the possibility of light-pulse propagation at different speeds through free space. This becomes particularly evident during periods of solar activity and the occurrence of magnetic storms, when the positioning errors are quite large. If one takes the generally accepted view that associates the distortion of GPS signals with wave optics, the positioning errors that occur during these periods correspond to an increase in the optical length of the signal propagation path. Allowance for refraction in the scattering of waves on plasma-density irregularities such as blobs and bubbles with refractive index *nf* = 1.0000162 [52] in the F- and upper E-layers of the ionosphere (at altitudes of 100-400 km above the Earth's surface) leads to positioning errors which are close to the relativistic ones. The latter is in good agreement with experiment [53].
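The order of magnitude of this refraction error can be checked directly: the extra optical path through a layer of refractive index *nf* is (*nf* − 1)·L, where L is the path length through the layer. The 100-300 km path lengths below are illustrative assumptions consistent with the quoted layer altitudes, not values given in the text:

```python
n_f = 1.0000162  # refractive index of the plasma irregularities [52]

# extra optical path length accumulated over several assumed slant paths
for L_km in (100.0, 200.0, 300.0):
    excess_m = (n_f - 1.0) * L_km * 1e3  # metres
    print(f"L = {L_km:5.0f} km  ->  excess path = {excess_m:.2f} m")
```

A few hundred kilometres inside the layer thus yields an error of a few metres, which is indeed close to the ~3 m relativistic figure quoted above.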

With increasing solar activity, the transit time of the GPS signal from satellite to the Earth rises, which enlarges the positioning error. This can occur both over a short period (with duration 5-20 min) and over a long period (lasting several hours). In the first case, it occurs under the influence of the radiation coming from the solar flare. In the second case, it takes place 30-35 hours after the flare under the influence of the solar wind. A concrete example is the time dependence of the disruptions of the GPS satellite system during periods of solar activity which was published on the website of Cornell University [54]. According to measurements carried out daily at the real-time monitoring stations of Arecibo Observatory (Puerto Rico) from August 30 to September 02, 2011, between 03.00 and 04.00 Coordinated Universal Time (UTC) there was a 20-minute failure of GPS. The horizontal positioning error here reached 50 m and more.


26 Global Navigation Satellite Systems - From Stellar to Satellite Navigation


More powerful geomagnetic disturbances lead to the complete disappearance of the signal at the GPS receiver for a sufficiently long period of time [55]. Thus, the data obtained at the Sao Luis Observatory (Brazil) on September 15-16, 2011 showed that the loss of GPS signal occurred several times during the day. The signal at the receiver sporadically disappeared five times, for 5-30 minutes each time, between 16.00 UTC September 15 and 01.00 UTC September 16, 2011. Moreover, the horizontal positioning error during these days greatly exceeded 50 m.

It was shown in [56] that solar flares of different power produce a characteristic sequence of decreases in the carrier-to-noise ratio for the frequencies L1 = 1.57542 GHz and L2 = 1.22760 GHz. During the solar flare of X-1 level (22.15 UTC, December 14, 2006) the carrier-to-noise ratio for the L1 frequency deteriorated, while the carrier-to-noise ratio for the L2 frequency remained unchanged. The flare of X-3 level (02.40 UTC, December 13, 2006) led to a simultaneous deterioration of the carrier-to-noise ratio for both frequencies. The duration of the phenomena observed in both cases was about 30 minutes.

The next phenomenon that deserves attention is the increase in the power of the signal received by a GPS receiver during periods of strong solar activity. The time dependence of the power of the GPS signal, and the integrated number of failures at the receiver, during a geomagnetic disturbance on July 15, 2000 was published in [57]. The intensity of the signal at the receiver increased by about a factor of three relative to the transmitted satellite signal power. The integrated number of failures grew with increasing intensity of the received signal. The authors did not give an explanation for this growth in intensity.

The observations show that a significant increase of ultra-high-frequency (UHF) radiation from the upper atmosphere is a result of solar flares. The intensity of UHF radiation in such events is 40 or more times higher than typical levels of solar microwave bursts [58]. In particular, such events were observed during periods of geomagnetic disturbances, for example in observations at wavelengths in the 3-50 cm range [59]. The observations were made simultaneously at several points within the SETI project. The intensities were not given in that paper; the corresponding graphical dependences were normalized to the maximum value. Analysis of the different possibilities for generating the observed UHF radiation has shown that the largest contribution to the resulting spectrum is made by transitions between Rydberg levels of neutral components of the non-equilibrium two-temperature plasma. They are excited by the action of the solar radiation flux or by a stream of electrons emitted from the upper ionosphere in collisions with electrons [55].



**Figure 1.** Harpoon mechanism of the quenching scheme.

Highly excited states of atoms and molecules are called *Rydberg* states if they are located near the ionization limit and are characterized by the presence of an infinite sequence of energy levels converging to the ionization threshold. They represent an intermediate case between the low-lying excited states and the ionized states located in the continuous spectrum. Rydberg atoms and molecules have one excited, weakly bound electron, whose state is characterized by its energy level and angular momentum *l* with respect to the ion core. Energy levels with large angular momenta do not depend on *l*; these are the so-called orbitally degenerate states. Such Rydberg states are statistically the most stable, since the electron spends most of the time at large distances from the ion core. The process leading to the formation of these states is called *l*-mixing. In the upper atmosphere *l*-mixing proceeds rapidly and is irreversible, i.e. almost all Rydberg particles are in degenerate states. As a result, the differences between atoms and molecules are lost, and the range of UHF radiation is homogeneous (i.e. not dependent on the chemical composition of the excited particles) [60]. The *l*-mixing process takes place in a dense neutral gas medium with a concentration greater than 10<sup>12</sup> cm<sup>−3</sup>, which corresponds to the altitude *h* ≤ 110 km. The criterion for the efficiency of the process is the condition that the volume of the electron cloud of the Rydberg particle *A\*\** (with radius 2*n*<sup>2</sup>*a*<sub>0</sub>, where *a*<sub>0</sub> is the Bohr radius) contains at least one neutral molecule *M*. The interaction between them leads to the formation of quasimolecules *A\*\*M*. The potential energy curves of these molecules split off from the degenerate Coulomb levels and are classified according to the value of the angular momentum *L* of the weakly bound electron with respect to the molecule *M* [60]. The shape of the potential curve is determined by the characteristics of the elastic scattering of slow electrons on the molecule *M*.
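The efficiency criterion above, at least one neutral molecule inside the Rydberg electron cloud of radius 2*n*²*a*₀, implies a minimum neutral-gas density of roughly one molecule per cloud volume. The spherical-volume estimate below is my assumption of the intended geometry, offered only as an order-of-magnitude sketch:

```python
import math

a0 = 5.29177e-9  # Bohr radius, cm

def min_density(n):
    """Smallest neutral density (cm^-3) placing, on average, one
    molecule M inside the electron cloud of radius 2*n^2*a0."""
    r = 2 * n**2 * a0                       # cloud radius, cm
    return 1.0 / (4.0 / 3.0 * math.pi * r**3)

for n in (20, 40, 60):
    print(f"n = {n:2d}  ->  rho_min ~ {min_density(n):.1e} cm^-3")
```

For the upper end of the populated range (n ≈ 60) this gives a threshold of a few times 10¹² cm⁻³, consistent with the 10¹² cm⁻³ concentration and h ≤ 110 km altitude quoted in the text; smaller clouds (lower n) require correspondingly denser gas.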

The optical transitions between split and degenerate states of quasimolecule *A\*\*M* that occur without changing the principal quantum numbers (*Δ n* =0) correspond to radiation in the decimeter range [61].

Rydberg states of the particles *A*\*\* are not populated at altitudes *h* ≤ 50 km due to the quenching process. This takes place at the time of interaction of Rydberg particles with unexcited molecules of oxygen during the intermediate stage of ion-complex *A*<sup>+</sup>(*nL*)*O*<sub>2</sub><sup>−</sup>(*s*)

creation (harpoon mechanism), i.e.

$$A^{\ast\ast}(n\_iL\_i) + O\_2 \rightarrow A^{+}(nL)O\_2^{-}(s) \rightarrow A^{\ast\ast}(n\_fL\_f) + O\_2,$$

where *s* is the vibrational quantum number. It is due to the fact that the negative molecular ion *O*<sub>2</sub><sup>−</sup> has a series of resonance vibrationally excited autoionizing levels located against the background of the ionization continuum (see Figure 1). Taking into account these two factors, it can be assumed that the layer of the atmosphere emitting in the decimeter range is formed between 50 and 110 km.

The increased solar activity leads to the formation of two types of non-equilibrium plasma in the D- and E-layers of the ionosphere: *recombination plasma* and *photoionization plasma*. The first type corresponds to a non-equilibrium two-temperature plasma in which Rydberg states are populated by collisional transitions of free electrons into bound states of the discrete spectrum due to inelastic interaction with the neutral components of the environment [61]. The electron temperature *Te* here varies from 1000 to 3000 K [62], and the ambient temperature *Ta* of this layer varies from 200 to 300 K depending on altitude. Note that in the lower D-layer, the collisional mechanism of Rydberg-state population is dominant. Thermalization of electrons takes place mainly through the reaction of vibrational excitation of molecular nitrogen

$$e^{-} + N\_2(v=0) \rightarrow N\_2^{-} \rightarrow e^{-} + N\_2(v \geq 1),$$

leading to the formation of an intermediate negative ion.


The second type corresponds to a photoionization plasma in which the population of Rydberg states occurs under the action of the light flux coming from the solar flare. In this case, the quantity of the Rydberg states is determined by the intensity of incident radiation. Here, in contrast to the recombination plasma, the low-lying Rydberg states are also populated, which leads to production of the infrared radiation.

#### **7. The radiation of recombination plasma in the decimeter band**

Let us consider the effect of the molecules *N2* and *O2* on the UHF spectrum of spontaneous emission (absorption) in the decimeter band which appears in the D- and E-layers of the Earth's ionosphere during strong geomagnetic disturbances [63]. Since the concentration of free electrons *ne* is small compared with the concentration of atmospheric particles *ρa*, both in normal conditions and in the case of a magnetic storm, there is no noticeable change in the ambient temperature *Ta* [64,65]. For both night and day, it is of the order of the thermal temperature in the D- and E-layers [66]. This is because the high translational temperature of the particles coming from the ionosphere (F-layer and above), when they enter a denser medium, is spent on the vibrational and rotational excitation of atmospheric molecules. Further relaxation of the excitation occurs in the process of resonant transfer of internal energy, whose transport is carried out in subsequent collisions. The vibrational temperature *Tv* should relax faster than the rotational *TN*. The temperatures *Ta* and *TN* usually equalize fairly quickly because of the rapid exchange of energy between rotational and translational motions. The rate of transmission increases with increasing excitation energy of the medium. Thus, in the D-layer the electron temperature *Te* separates from the ambient temperature *Ta*, and a non-equilibrium quasistationary two-temperature recombination plasma is set up, with *Ta* << *Te*, which agrees well with direct measurements [66].

states where radiative transitions dominate. Thus, two effective regions of population can be


Relativity Laws for the Variation of Rates of Clocks Moving in Free Space and GPS Positioning Errors Caused


http://dx.doi.org/10.5772/55792


30 Global Navigation Satellite Systems - From Stellar to Satellite Navigation

vibrational and rotational excitation of atmospheric molecules. Further relaxation of the excitation occurs in the process of resonant transfer of internal energy. Its transport is carried out in subsequent collisions. The vibrational temperature *Tv* should relax faster than the rotational *TN*. The temperatures *Ta* and *TN* usually equalize fairly quickly because of the rapid exchange of energy between rotational and translational motions. The rate of transmission increases with increasing excitation energy of the medium. Thus, in the D-layer the electron temperature *Te* separates from the ambient temperature *Ta*, and a non-equilibrium quasistationary two-temperature recombination plasma is set, where *Ta* << *Te*, which agrees well with direct measurements [66].

Assuming that the electron flow is stationary and their concentration *ne* varies only slightly over time, the populations of the excited states of Rydberg atoms and molecules *mn*(*ne*, *Te*) can be determined from the balance equations taking into account the processes of recombination, ionization and radiation [67]. Since the frequency of collisions of electrons with medium particles is equal to 10<sup>12</sup>-10<sup>14</sup> s<sup>-1</sup>, two population distributions of atoms and molecules over the excitation energies of discrete states are formed in the plasma due to rapid energy exchange between bound and free electrons with medium particles and each other. The first distribution, with a temperature *Te*, corresponds to highly excited Rydberg states with binding energies *En* less than the characteristic energy *E*\*. Rydberg states are not populated close to this energy, in the *ΔE* << *E*\* interval that is called the bottleneck of the recombination flux or «neck of flow» (see Fig. 2). The second population distribution, with the ambient temperature *Ta*, is caused by collisions between particles of the medium and refers to the low-lying electronic states where radiative transitions dominate. Thus, two effective regions of population can be identified in the spectrum: the part with *En* < *E*\*, where the collision processes dominate, and the part *En* > *E*\*, where excitation of the states is due to radiative transitions. The population of the levels located between these regions is negligible since they are effectively quenched because of radiative transitions in the IR, visible, and UV range.

**Figure 2.** Energy scheme for neck-of-flow location.

Passage of the electrons through the neck of the flow on the energy scale is the slowest stage of the process and determines the kinetics of the non-equilibrium plasma. The energy *E*\* is found from the condition of the minimum of the quenching rate constant due to transitions to low-lying states and is defined as [67]:

$$E_* = \frac{Ry}{n_*^2} \approx 27.21\left[\frac{n_e}{4.5\cdot 10^{13}}\left(11600\big/T_e\right)^2\right]^{1/4}\tag{30}$$

(here the concentration of electrons *ne* is in cm<sup>-3</sup> and the temperature *Te* in K). The dependences of *n*\* on the concentration *ne* and temperature *Te* of free electrons calculated according to (30) are listed in Table 1.
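Combined with *E*\* = *Ry*/*n*\*<sup>2</sup>, the scaling in (30) implies *n*\* ∝ *ne*<sup>−1/8</sup>*Te*<sup>1/4</sup>. The following sketch is an illustrative check (not part of the original calculation) that the neck-of-flow positions listed in Table 1 follow these power laws:

```python
# Check the scalings n* ~ ne**(-1/8) and n* ~ Te**(1/4) implied by Eq. (30)
# against the neck-of-flow positions listed in Table 1.

# n* at Te = 1000 K for ne = 10**1 ... 10**12 cm**-3 (first column of Table 1)
n_star_1000K = [77, 58, 43, 33, 25, 19, 14, 11, 8, 6, 5, 4]

# Each decade increase in ne should reduce n* by a factor 10**(-1/8) ~ 0.75.
expected = 10 ** (-1 / 8)
ratios = [b / a for a, b in zip(n_star_1000K, n_star_1000K[1:])]
mean_ratio = sum(ratios) / len(ratios)
print(f"mean decade ratio: {mean_ratio:.3f} (expected {expected:.3f})")

# Temperature dependence at ne = 10**1 cm**-3: n*(2000 K)/n*(1000 K) ~ 2**(1/4).
print(f"91/77 = {91/77:.3f}, 2**0.25 = {2**0.25:.3f}")
```

The tabulated values track both exponents to within the rounding of the integer entries.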

The Rydberg state perturbations by the neutral particles are reduced to the splitting of degenerate levels of the highly excited *A*\*\*N2 and *A*\*\*O2 quasimolecules into groups of sublevels whose positions are determined by the characteristics of weakly bound electron scattering on the nitrogen and oxygen molecules [60]. Moreover, the transitions between the *L* = 0 - 3 terms, split off from the Coulomb levels, provide the greatest contribution to the UHF radiation, where *L* is the angular momentum of the weakly bound electron with respect to the perturbing particle of the medium [63]. A non-equilibrium two-temperature plasma is formed at densities 10<sup>12</sup> < *ρa* < 10<sup>16</sup> cm<sup>-3</sup>, when the detachment of the electron temperature *Te* from the medium temperature *Ta* occurs under the influence of the flux of ionospheric electrons. Populations of the Rydberg particles generated here are defined by the temperature *Te*, concentration *ne* and flow of free electrons, and the medium concentration *ρa*. An increase in concentration in the lower part of the D-layer leads to an increase in the collisional quenching rate of Rydberg states. At *ρa* ≥ 10<sup>16</sup> cm<sup>-3</sup> their population decreases sharply. The spatial region for the formation of excited particles concentrates in the lower part of the E-layer and within the D-layer closer to its upper boundary (this region is 25-30 km wide), where the states with principal quantum numbers *n* ≈ 20-60 will be populated most effectively [63].

The states with small momenta *l* ≤ *l*\* are perturbed strongly by the ionic core (for example, *l*\* = 3 in the nitrogen and oxygen molecules). The perturbing particle M field in turn only influences such superpositions of Coulomb center states that have low electron angular momenta *L* with respect to M. These two «different center» groups of terms will be called *Rydberg l* and *degenerate L* terms. The latter are defined as [60]

$$U_n^{(L L_Z)}(R) = -\frac{1}{2\left[n - \mu_{L L_Z}(R)\right]^2} - \frac{\beta}{2R^4},\tag{31}$$



where quantum defects induced by the field of the molecule M are equal to

$$\begin{split}
\mu_{L L_Z}(R) &= -\frac{1}{\pi}\arctan\left[\kappa(R)\,K_{L,L_Z}(R)\right],\\
K_{L,L_Z}(R) &= a\,\delta_{L0} - \frac{L(L+1)-3L_Z^2}{L(L+1)(2L-1)(2L+3)}\,Q\,(1-\delta_{L0}) - \frac{\pi\kappa(R)}{(2L-1)(2L+1)(2L+3)}\left[\beta + \frac{L(L-1)-3L_Z^2}{(2L-1)(2L+3)}\,\beta'\right].
\end{split}\tag{32}$$


| **Concentration of free electrons** *ne*, cm<sup>-3</sup> | *n*\* at *Te* = 1000 K | *n*\* at *Te* = 2000 K | *n*\* at *Te* = 3000 K |
|---|---|---|---|
| 10<sup>1</sup> | 77 | 91 | 101 |
| 10<sup>2</sup> | 58 | 69 | 76 |
| 10<sup>3</sup> | 43 | 52 | 57 |
| 10<sup>4</sup> | 33 | 39 | 43 |
| 10<sup>5</sup> | 25 | 29 | 32 |
| 10<sup>6</sup> | 19 | 22 | 24 |
| 10<sup>7</sup> | 14 | 17 | 18 |
| 10<sup>8</sup> | 11 | 13 | 14 |
| 10<sup>9</sup> | 8 | 10 | 11 |
| 10<sup>10</sup> | 6 | 7 | 8 |
| 10<sup>11</sup> | 5 | 6 | 6 |
| 10<sup>12</sup> | 4 | 4 | 5 |

**Table 1.** Dependency of the neck-of-the-flow *n*\* position on the concentration *ne* and temperature *Te* of free electrons.

**Figure 3.** Potential energy curves of the quasimolecule *A\*\*M.*

Here *a* is the length of electron scattering on the perturbing particle M, *Q* is its quadrupole moment, *β* and *β*′ are the isotropic and anisotropic polarizability components, *κ*(*R*) is the semiclassical momentum of a weakly bound electron in the Coulomb field, *R* is the interatomic distance of the A\*\*M quasimolecule, and the quantity *ν* = (−2*E*)<sup>−1/2</sup>, where *E* is the total energy of the Rydberg particle measured from the ground state of the ion *A*<sup>+</sup> (here and throughout, we use the atomic system of units with ℏ = *e* = *me* = 1).

Figure 3 shows the energy diagram of the potential energy curves of the quantum *L*-states depending on the interatomic coordinate *R*. For angular momenta *L* = 0 and *L* = 1 they are split from the degenerate Coulomb levels with principal quantum numbers *n*+1 and *n* at the classical turning points. Blue arrows indicate the optical transitions between the split and degenerate states of the quasimolecule A\*\*M occurring without a change of the principal quantum number (*Δn* = 0). This corresponds to UHF radiation in the decimeter range. The red arrows show similar transitions with a change in the principal quantum number (*Δn* = 1), which correspond to IR radiation.
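The decimeter-range claim can be illustrated by evaluating the split-off *L* = 0 term from (31)-(32) for A\*\*N2 with the Table 2 parameters (*a* = 0.75, *Q* = −1.30, *β* = 11.1) and estimating the *L* → *n* transition frequency. This is a rough sketch under stated assumptions - *κ*(*R* = *n*<sup>2</sup>) ≈ 1/*n* for the semiclassical momentum, and the small *β*′ term dropped (cf. the footnote to Table 2) - not a reproduction of the full calculation of [63]:

```python
import math

# N2 parameters from Table 2 (atomic units)
a, Q, beta = 0.75, -1.30, 11.1

n = 30            # principal quantum number in the populated range n ~ 20-60
R = n ** 2        # evaluate at the classical turning point R = n**2
kappa = 1.0 / n   # semiclassical momentum at R = n**2 (assumption)

# Eq. (32) for L = 0, L_Z = 0: the Q term vanishes (factor 1 - delta_L0),
# (2L-1)(2L+1)(2L+3) = -3, and the beta' term is dropped.
K = a - math.pi * kappa / (-3.0) * beta
mu = -math.atan(kappa * K) / math.pi           # quantum defect from Eq. (32)

# Eq. (31): split-off term vs. the unperturbed Coulomb level -1/(2 n**2)
U_split = -1.0 / (2.0 * (n - mu) ** 2) - beta / (2.0 * R ** 4)
dE = abs(-1.0 / (2.0 * n ** 2) - U_split)      # L -> n transition energy (a.u.)

f_GHz = dE * 6.5797e6                          # 1 hartree = 6.5797e6 GHz
print(f"mu = {mu:.4f}, L -> n transition at ~{f_GHz:.1f} GHz")
```

With these assumptions the *Δn* = 0 splitting indeed falls in the single-gigahertz (decimeter) band discussed below.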

The radiation intensities of photon energy emission per unit time for the *L* → *L*′ and *L* → *n* transitions without change of the principal quantum number (*Δn* = 0) are defined as

$$I_{LL'} = \overline{W_n(L \to L')} = \frac{16\,\omega^4(L \to L')}{9\,c^3}\,C_{\nu_L L, L'}^2,\tag{33}$$

$$I_{L,n} = \overline{W_n(L \to n)} = \frac{16\,\omega^4(L \to n)}{9\,c^3}\sum_{l \ge l^*}^{n-1} C_{\nu_L L, l}^2,\tag{34}$$

where *ω*(*L* → *L*′) and *ω*(*L* → *n*) are the appropriate transition frequencies, and *νL*(*n*) = (−2*Un*<sup>(*L*,0)</sup>(*R* = *n*<sup>2</sup>))<sup>−1/2</sup> is the effective principal quantum number of the *L* term. The procedure for calculating the coefficients *CνL L L*′ and *CνL L l* is described in detail in [63]. Averaging of the intensities (33) and (34) over all possible interatomic coordinates *R* is carried out as follows:

$$\overline{W}_{nL} = \int W_{nL}(R)\,\Psi_{nL}^2\left(\bar{r} = n^2, \mathbf{R}\right)\,d\mathbf{R}$$

(*ΨnL* is the electron wave function of the A\*\*M quasimolecule). This corresponds to a static consideration of the problem, which is valid if the average electron velocity 1/*n* is much higher than the characteristic velocity of atomic particles in the medium, i.e. when the inequality *n*(2*Ta*/*Ma*)<sup>1/2</sup> << 1 holds (where *Ma* is the reduced mass). Under our conditions, this situation is certainly satisfied. As the main distortion of the Coulomb wave function is in the vicinity of the perturbing particle M, we can put to the first approximation that *W̄nL* ≈ *WnL*(*R* = *n*<sup>2</sup>). Considering also that radiation from the states with a given value of *νL* is not coherent, we should add consistently the transition intensities (33) and (34) for the corresponding emission lines of the A\*\*N2 and A\*\*O2 quasimolecules separately in order to determine the total contribution of all possible transitions and multiply them by the effective population of the Rydberg *L* term *m*eff(*n*, *ω*, *Te*) = *Pn*(*ω*) *mn*(*ne*, *Te*). By analogy with [67] they are calculated as

$$m_{\rm eff}(n, \omega, T_e) = P_n(\omega)\left[\frac{n_e^2}{\Sigma_i}\right]\left(\frac{2\pi}{T_e}\right)^{3/2} n^2 \exp\left(1\big/2n^2 T_e\right)\tag{35}$$

(*Σi* is the statistical sum of positive molecular ions). The non-equilibrium plasma factor *Pn*(*ω*), taking into account the flux of precipitating electrons from the ionosphere and the quenching of Rydberg states of the particles, can be represented as

$$P_n(\omega) = F_n(T_e, \omega)\,\exp\left[-f_n\left(\rho_a, T_a\right)\right].\tag{36}$$

The coefficient *Fn* is given by the flow of electrons, and the value of *fn*(*ρa*, *Ta*) = 16*ρa* *n*<sup>2</sup>/*Ta* characterizes the decrease in the population *m*eff due to quenching of the Rydberg particles in a neutral medium [63]. It is defined as the ratio of the effective number *n*eff of electrons, moving in a flow with velocity (2*Te*)<sup>1/2</sup> and passing the distance ∼ (2*Te*)<sup>1/2</sup> *τ*eff through a unit area during the lifetime *τ*eff of the split-off *L* excited state, to the equilibrium concentration *ne* of electrons, i.e.


$$F_n(T_e, \omega) = n_{\rm eff}(T_e, \omega)\big/ n_e = \sqrt{2T_e}\;\tau_{\rm eff}(\omega),\tag{37}$$
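The factors entering (35)-(37) are straightforward to evaluate numerically. The sketch below is a hypothetical illustration, with *Pn* set to 1, the constant Saha prefactor dropped, and an assumed split-off-state lifetime of order a microsecond; it shows the characteristic dip of the distribution (35) at intermediate *n* and the resulting order of magnitude of the gain factor (37):

```python
import math

HARTREE_K = 11600 * 27.21      # 1 hartree expressed in kelvin
AU_TIME_S = 2.4189e-17         # atomic unit of time in seconds

def m_n(n, Te_K):
    """Relative population of level n from Eq. (35), with P_n = 1 and the
    constant prefactor (n_e**2/Sigma_i)(2*pi/T_e)**1.5 dropped."""
    Te = Te_K / HARTREE_K                      # T_e in atomic units
    return n ** 2 * math.exp(1.0 / (2.0 * n ** 2 * Te))

# At Te = 2000 K the distribution dips near n ~ 10 and grows again at large n.
vals = {n: m_n(n, 2000.0) for n in (5, 10, 40)}
print({n: round(v, 1) for n, v in vals.items()})

# Gain factor (37): F_n = sqrt(2 T_e) * tau_eff (atomic units).  An assumed
# radiative lifetime tau_eff ~ 1.6e-6 s at Te = 3500 K gives F_n of order 1e10.
Te = 3500.0 / HARTREE_K
tau_eff = 1.6e-6 / AU_TIME_S                   # assumed lifetime, in a.u.
F_n = math.sqrt(2.0 * Te) * tau_eff
print(f"F_n ~ {F_n:.2e}")
```

The assumed microsecond lifetime is chosen only to show that *F*max ∼ 10<sup>10</sup>, the value quoted below, corresponds to a physically plausible IR emission time.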

The lifetime of degenerate Rydberg states of the quasimolecule is characterized by the time of spontaneous emission in the infrared (i.e. *τ*eff ≃ *τ*R), which is defined as *τ*R<sup>−1</sup>(*ω*) = *W̄n*(*Δn* = 1) *n*<sup>3</sup>, where *W̄n*(*Δn* = 1) is the intensity of the energy emission in the IR region. Since the radiative lifetime decreases with increasing frequency as *τ*R ∼ *ω*<sup>−5/3</sup> [68], the gain factor can be expressed in the form

$$F_n(T_e, \omega) = \sqrt{T_e\big/T_{\max}}\left(n\big/n_{\max}\right)^{5} F_{\max} = \sqrt{T_e\big/T_{\max}}\left(\omega_{\min}/\omega\right)^{5/3} F_{\max}\tag{38}$$

where the maximum is achieved at the frequency *ω* = *n*max<sup>−3</sup> and electron temperature *T*max = 3500 K. Under these conditions, the value of *F*max ∼ 10<sup>10</sup>. The quantity *n*max is the position of the maximum population of Rydberg states, which is defined as [63]

$$n_{\max} = \left[\frac{2^{17}\sqrt{2}\,\pi^{7}}{675}\,T_e^{3/2}\rho_a^{2}\Bigl(\sum_{i}\alpha(i)B_iQ_i^{2}\Bigr)\Bigl(\sum_{i'}\alpha(i')Q_{i'}^{2}\Bigr)\right]^{-1/15}\tag{39}$$

Here *α*(*i*) denotes the weight factors of the *N*2 and *O*2 molecules, equal to *α*(*N*2) ≃ 0.79 and *α*(*O*2) ≃ 0.21, respectively. The values of *Qi* and *Bi* are their quadrupole moments and rotational constants, with the quadrupole moments expressed in *ea*0<sup>2</sup> units. It will be recalled that in the atomic system of units the concentration *ρa* of the medium, the rotational constant *Ba*, and the medium temperature *Ta* are determined as

$$\rho_a\,(\text{a.u.}) \simeq (0.52917)^{3}\cdot 10^{x-24},\quad B_a\,(\text{a.u.}) = B_a\,(\text{cm}^{-1})\big/(8066\cdot 27.21),\quad T_a\,(\text{a.u.}) = T_a(K)\big/(11600\cdot 27.21)$$

(*x* is the exponent that characterizes the medium concentration, which under these conditions varies from 12 to 16). Note that, according to (39), the position *n*max(*Te*, *ρa*) of the Rydberg state population maximum along the *n* axis decreases as ~*Te*<sup>−1/10</sup> and ~*ρa*<sup>−2/15</sup> as the temperature of electrons and the medium concentration increase. In the calculations performed below, the spectroscopic parameters of molecules and molecular ions of nitrogen and oxygen are taken from [69]. Scattering lengths, quadrupole moments, and static polarizabilities of the nitrogen and oxygen molecules, for which data from [70-74] were used, are given in Table 2. The corresponding dependences are shown in Table 3.
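Formula (39) can be checked directly against Table 3. The sketch below converts the Table 2 quadrupole moments and commonly quoted rotational constants (*B*(*N*2) ≈ 2.00 cm<sup>−1</sup>, *B*(*O*2) ≈ 1.44 cm<sup>−1</sup> - these *B* values are our assumption, not given in the text) to atomic units via the relations above and reproduces the tabulated *n*max to within about 5%:

```python
import math

# Unit conversions from the relations above
def rho_au(rho_cm3):   # number density, cm**-3 -> a.u.
    return rho_cm3 * (0.52917e-8) ** 3

def temp_au(T_K):      # temperature, K -> a.u.
    return T_K / (11600 * 27.21)

def B_au(B_cm1):       # rotational constant, cm**-1 -> a.u.
    return B_cm1 / (8066 * 27.21)

# Weight factors and Table 2 quadrupole moments; B values are assumed.
alpha = {"N2": 0.79, "O2": 0.21}
Q     = {"N2": -1.30, "O2": -1.04}
B     = {"N2": B_au(2.00), "O2": B_au(1.44)}

def n_max(Te_K, rho_cm3):
    """Position of the Rydberg population maximum, Eq. (39)."""
    s1 = sum(alpha[i] * B[i] * Q[i] ** 2 for i in alpha)   # sum of a(i) B_i Q_i**2
    s2 = sum(alpha[i] * Q[i] ** 2 for i in alpha)          # sum of a(i') Q_i'**2
    pref = 2 ** 17 * math.sqrt(2) * math.pi ** 7 / 675
    return (pref * temp_au(Te_K) ** 1.5 * rho_au(rho_cm3) ** 2 * s1 * s2) ** (-1 / 15)

print(round(n_max(1000, 1e12)))   # Table 3 entry: 79
print(round(n_max(3000, 1e16)))   # Table 3 entry: 20
```

The residual few-percent discrepancy is within the uncertainty of the assumed rotational constants; the *Te*<sup>−1/10</sup> and *ρa*<sup>−2/15</sup> scalings of the table are reproduced exactly.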


| **Molecule** | *a* | *Q* | **β** |
|---|---|---|---|
| *N*2 | 0.75 [71] | −1.30 [72] | 11.1 [74] |
| *O*2 | 1.60 [70] | −1.04 [72] | 10.6 [73] |

**Table 2.** Scattering lengths *a*, quadrupole moments *Q*, and static polarizabilities β of oxygen and nitrogen molecules.\*)

\*) Because the contribution of the second term in square brackets in (32) is small, we do not give β′ values for these molecules.

The intensity of the UHF radiation per unit volume is written as follows:

$$W_{\rm tot}(\omega) = \sum_{i=1,2}\alpha(i)\sum_{n,S} m_{nL}(i)\,\overline{W_n^{(i)}(L \to S)}.\tag{40}$$

| **ρ***a*, cm<sup>-3</sup> | *Te* = 1000 K | *Te* = 2000 K | *Te* = 3000 K | *Te* = 4000 K |
|---|---|---|---|---|
| 10<sup>12</sup> | 79 | 74 | 71 | 69 |
| 10<sup>13</sup> | 58 | 54 | 52 | 50 |
| 10<sup>14</sup> | 42 | 40 | 38 | 37 |
| 10<sup>15</sup> | 31 | 29 | 28 | 27 |
| 10<sup>16</sup> | 23 | 21 | 20 | 20 |

**Table 3.** Dependency of the position of maximum population *n*max of Rydberg states on the medium concentration *ρa* and temperature *Te* of free electrons.

The *S* index in (40) takes on various values (*L*′ > *L* or *L*′ = *n*) according to the sequence in which the radiation lines intersect the straight-line *ω* frequency. The distributions of *n*-dependent emission lines (corresponding to the *L* = 0 - 3 values) for the *N*2 and *O*2 molecules contain the series of four *L* → *L*′ transitions, which converge with increasing *L*′ to the limiting *L* → *n* transition [63].

The shift in frequency of these limits for the *NLn* and *OLn* series leads to the heterogeneity of the UHF spectrum, in which there are three spectral ranges of frequencies. The first includes the frequencies *ω* = 1.17 - 1.706 GHz, the second corresponds to the *ω* = 4.31 - 6.094 GHz range, and the third contains the *ω* = 7.27 - 10.00 GHz one.

The optical thickness of the radiating layer *H*R(*ω*) is defined by the limiting value of the medium concentration *ρa*(*H*R(*ω*)) at which the value of *W̄n*(*i*) = Σ*S* *W̄n*<sup>(*i*)</sup>(*L* → *S*) vanishes. This situation occurs when the concentration of electrons *ne* ≤ 10<sup>6</sup> cm<sup>-3</sup> for the medium concentration *ρa* = 10<sup>15</sup> cm<sup>-3</sup>. On the other hand, the dependence *mnL*(*ρa*(*z*)) is determined by the quenching process of Rydberg particles and drops sharply when *ρa* ≥ 10<sup>16</sup> cm<sup>-3</sup> [63]. In these circumstances, quenching is accompanied by the transfer of electronic excitation energy into translational motion of the heavy medium components. Since the concentration of excited particles is small compared with the medium concentration, the cooling of the flow of free electrons in the rotational transitions of molecules should proceed efficiently at the bottom of the D-layer. The Rydberg states are eliminated when the *Te* and *Ta* temperatures are equalized. This condition is the criterion for the lower boundary *H*p of the plasma layer, which does not depend on the radiation frequency and corresponds to *ρa* ∼ 10<sup>16</sup> cm<sup>-3</sup>.

Figure 4 shows the dependence of the frequency of the emission lines of the A\*\*N2 and A\*\*O2 quasimolecules on the principal quantum number *n* for the transitions *L* → *n* between the split-off and degenerate Coulomb levels and the transitions *L* → *L*′ between the split-off levels, which are calculated according to formulas (31) and (32). In the 0.8 - 10 GHz frequency range the distributions with respect to *L* of the radiation lines depending on *n* for each quasimolecule contain the series of four lines, corresponding to the *L* → *L*′ transitions, which converge in the limit to the *L* → *n* transitions, where *L* = 0, 1, 2, 3. These lines are marked with the symbols *NLL*′, *NLn* and *OLL*′, *OLn* for the A\*\*N2 and A\*\*O2 quasimolecules, respectively.

**Figure 4.** UHF radiation lines of the A\*\*N2 and A\*\*O2 quasimolecules.
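The three frequency windows quoted above fall in the decimeter-to-centimeter wavelength band, consistent with the decimeter-range character of the *Δn* = 0 transitions discussed for Figure 3. A trivial illustrative check:

```python
C = 2.9979e8  # speed of light, m/s

# Boundaries of the three UHF windows (GHz) quoted above
edges_ghz = [1.17, 1.706, 4.31, 6.094, 7.27, 10.0]
wavelengths_cm = [C / (f * 1e9) * 100 for f in edges_ghz]
for f, w in zip(edges_ghz, wavelengths_cm):
    print(f"{f:6.3f} GHz -> {w:5.1f} cm")
```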


The intensity of the UHF radiation per unit volume is written as follows:

$$
W_{\mathrm{tot}}^{(i)}(\omega) = \sum_{nL} m_{nL}(\rho_a) \sum_{S} W_n^{(i)}(L \to S), \qquad i = 1, 2.
\tag{40}
$$

36 Global Navigation Satellite Systems - From Stellar to Satellite Navigation


| Molecule | *a* | *Q* | β |
|----------|-----|-----|---|
| *O*<sub>2</sub> | 1.60 [70] | −1.04 [72] | 10.6 [73] |
| *N*<sub>2</sub> | 0.75 [71] | −1.30 [72] | 11.1 [74] |

**Table 2.** Scattering lengths *a*, quadrupole moments *Q*, and static polarizabilities β of oxygen and nitrogen molecules \*)

\*) Because the contribution of the second term in square brackets in (3) is small, we do not give β′ values for these molecules.


The *S* index in (40) takes on various values (*L*′ > *L* or *L*′ = *n*) according to the sequence in which the radiation lines intersect the straight line of the *ω* frequency. The distributions of the *n*-dependent emission lines (corresponding to the *L* = 0 − 3 values) for the *N*<sub>2</sub> and *O*<sub>2</sub> molecules contain the series of four *L* → *L*′ transitions, which converge with increasing *L*′ to the limiting *L* → *n* transition [63].




**Table 3.** Dependence of the position of maximum population *n*<sub>max</sub> of Rydberg states on medium concentration ρ*a* and temperature *T*<sub>e</sub> of free electrons

It is seen that the frequency shift of the *NL*<sub>*n*</sub> and *OL*<sub>*n*</sub> limits relative to each other leads to the formation of three spectral ranges in which transitions are suppressed for small values of *n*. This difference is caused by the characteristics of slow-electron scattering by the nitrogen and oxygen molecules.

The intensities of incoherent UHF radiation for the excited medium, calculated by formula (11) in the 0.8-1.8 GHz range, are presented in Figs. 5 and 6 as functions of frequency for quiet and disturbed ionosphere states. The profile of the UHF radiation is a nonmonotonic function of the radiation frequency and increases sharply near the right edge of the range.


**Figure 5.** The spectrum of UHF radiation for a quiet ionosphere. Curve (1) corresponds to electron temperature *T*<sub>e</sub>=1000 K and medium concentration ρ*a*=10<sup>12</sup>−10<sup>13</sup> cm<sup>−3</sup>; (2) *T*<sub>e</sub>=1000 K, ρ*a*=10<sup>14</sup> cm<sup>−3</sup>; (3) *T*<sub>e</sub>=2000 K, ρ*a*=10<sup>12</sup>−10<sup>13</sup> cm<sup>−3</sup>.


**Figure 6.** The spectrum of UHF radiation for a disturbed ionosphere. Curve (1) corresponds to electron temperature *T*<sub>e</sub>=2000 K and neutral medium concentration ρ*a*=10<sup>12</sup>−10<sup>13</sup> cm<sup>−3</sup>; (2) *T*<sub>e</sub>=2000 K, ρ*a*=10<sup>14</sup> cm<sup>−3</sup>; (3) *T*<sub>e</sub>=3000 K, ρ*a*=10<sup>12</sup>−10<sup>13</sup> cm<sup>−3</sup>; (4) *T*<sub>e</sub>=3000 K, ρ*a*=10<sup>14</sup> cm<sup>−3</sup>.

With an increase in the electron concentration *n*<sub>e</sub> by two orders of magnitude, the relative intensity *W* increases by about four orders of magnitude. This is likely directly related to the observed effect of the sequential disappearance of the *L*1 and *L*2 frequencies of the GPS signal as the power of the solar storm increases [56], because the position of the first radiation-intensity attenuation region (1.17 − 1.71 GHz) and the «transparency window» of the propagation of satellite signals nearly coincide.
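The claimed overlap can be checked directly. The band edges below are the ones quoted in the text; the two legacy GPS carrier frequencies are the standard published values (1575.42 MHz for L1 and 1227.60 MHz for L2), not figures from this chapter. A minimal sketch:

```python
# Check whether the GPS carriers fall inside the first radiation-intensity
# attenuation region (1.17-1.71 GHz) identified in the text.
ATTENUATION_BAND_GHZ = (1.17, 1.71)   # first spectral range from the text

# Standard GPS carrier frequencies (published values, not from this chapter)
CARRIERS_GHZ = {"L1": 1.57542, "L2": 1.22760}

def in_band(f_ghz, band=ATTENUATION_BAND_GHZ):
    """Return True if a frequency (GHz) lies inside the attenuation band."""
    lo, hi = band
    return lo <= f_ghz <= hi

for name, f in CARRIERS_GHZ.items():
    print(f"{name} = {f:.5f} GHz -> inside attenuation band: {in_band(f)}")
```

Both carriers indeed lie inside the 1.17-1.71 GHz range, consistent with the observed fading of both signals during strong solar storms [56].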

#### **8. Radiation of photoionization plasma**


The photoionization plasma is formed over 20-30 minutes under the influence of the wideband radiation coming into the atmosphere after a solar flare has occurred. This process is caused by multi-quantum excitation of the electronic states of nitrogen and oxygen atoms and molecules. In this case the spin-forbidden character of the corresponding radiative transitions is removed owing to the interaction of excited particles with ambient molecules *M* and the creation of the A\*\*N2 and A\*\*O2 quasimolecules. The distribution of Rydberg-state populations for large values of the principal quantum number *n* > 30 is similar to that discussed above for the non-equilibrium recombination plasma. A difference lies in the fact that the top of the energy scale is further depleted by photoionization of the Rydberg states. For small values *n* < 10, quenching of the Rydberg and low-lying excited states takes place because of predissociation, including non-adiabatic transitions through intermediate valence configurations, as well as the resonant (non-resonant) transfer of internal energy in collisional processes with subsequent thermalization of the medium. This effect is confirmed by an observed increase in the temperature of the neutral environment *T*<sub>a</sub> with increasing height in the range 40-60 km. Thus, in the photoionization plasma the orbitally degenerate *L*-states are populated primarily for principal quantum numbers 10 < *n* < 30. In contrast to the recombination plasma, the population-distribution function here depends on the intensity of the light and is not a function of either the temperature of free electrons *T*<sub>e</sub> or the resulting shape of the absorption line.


Under these conditions, spontaneous decimeter radiation (which corresponds to Δ*n* = 0) will be accompanied by strong IR radiation (with Δ*n* = 1), whose intensity exceeds the intense UHF radiation of the recombination plasma by at least three orders of magnitude [61]. This can be explained by the fact that, according to [63], the maximum of the IR radiation for the recombination plasma falls in the interval of principal quantum numbers 20 < *n* < 30, whereas for the photoionization plasma the maximum should be located in the vicinity of *n* = 10. It should also be noted that the *l*-mixing process for the low-lying Rydberg states is suppressed, and the influence of the environment should be significantly reduced [60]. This is particularly important for the formation of the frequency-spectrum profile of the IR radiation under conditions of strong solar activity. The maximum of the radiation should occur near the bottom of the D-layer at an altitude of 50-60 km. This is due to the fast (~*n*<sup>−4</sup>) decrease of the *l*-mixing cross section with increasing principal quantum number *n*.
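The placement of these maxima can be made concrete with a rough hydrogen-like estimate of the Δ*n* = 1 transition frequencies. This is only a sketch: it neglects the quantum defects of the quasimolecular states discussed in the text and uses the standard Rydberg frequency:

```python
C = 2.99792458e8          # speed of light, m/s (standard value)
RYD_HZ = 3.2898419603e15  # Rydberg frequency c*R_inf, Hz (standard value)

def dn1_freq_hz(n):
    """Hydrogen-like n -> n-1 transition frequency; quantum defects neglected."""
    return RYD_HZ * (1.0 / (n - 1) ** 2 - 1.0 / n ** 2)

for n in (10, 20, 30):
    f = dn1_freq_hz(n)
    print(f"n={n:2d}: f = {f:.3e} Hz, wavelength = {C / f * 1e6:.0f} um")
```

For *n* ≈ 10 the Δ*n* = 1 lines fall near 40 µm (mid-IR), while for 20 < *n* < 30 they shift toward the far-IR/submillimeter range, consistent with the qualitative picture above.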


| ρ*a*, 10<sup>12</sup> cm<sup>−3</sup> | 1 | 10 | 10<sup>2</sup> | 10<sup>3</sup> | 10<sup>4</sup> | 10<sup>5</sup> |
|---|---|---|---|---|---|---|
| *n̄* | 77 | 52 | 36 | 24 | 16 | 11 |

**Table 4.** Dependence of the principal quantum number *n̄* on medium concentration ρ*a*

This phenomenon should lead to the emptying of the populations of the low-lying molecular Rydberg states as a result of predissociation [68]. For this to happen, *l*-mixing must occur; i.e., inside the Rydberg-electron cloud of the A\*\* particle at least one medium molecule *M* should react to form an A\*\**M* quasimolecule. This condition corresponds to the principal quantum number

$$
\overline{n} = 10^4 \left[ \frac{3}{32\pi \left( 0.53 \right)^3 \rho_a} \right]^{1/6},
\tag{41}
$$

which depends on the concentration of the medium ρ*a*. Table 4 shows that this condition is realized at a concentration of about 10<sup>17</sup> cm<sup>−3</sup>. This means that the process of *l*-mixing plays an important role in shaping the frequency profile. No less important here are the processes of quenching of Rydberg states, which should reproduce the simple dependence of the intensity *I* on *n* at the left, long-wave side of the profile, for example as *n*<sup>2</sup> exp(−α*n*<sup>2</sup>) [63]. The slope of this curve depends on the ratio of the luminous flux to the rate of quenching.
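Formula (41) can be checked numerically against Table 4; a minimal sketch, taking ρ*a* in cm<sup>−3</sup> as in the text:

```python
import math

def n_bar(rho_a):
    """Principal quantum number from formula (41); rho_a in cm^-3."""
    return 1e4 * (3.0 / (32.0 * math.pi * 0.53 ** 3 * rho_a)) ** (1.0 / 6.0)

for exponent in range(12, 18):
    rho = 10.0 ** exponent
    print(f"rho_a = 1e{exponent} cm^-3 -> n_bar = {n_bar(rho):.0f}")
```

The printed values reproduce the tabulated sequence 77, 52, 36, 24, 16, 11 to within rounding.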

#### **9. Conclusion**


The success of the GPS methodology rests on an objective theory of measurement which denies Einstein's symmetry principle derived on the basis of the LT. There is never any ambiguity in principle as to which of two clocks in relative motion runs slower. The goal of relativity theory is to quantitatively predict relative rates of clocks on the basis of information regarding their respective states of motion and positions in a gravitational field. For this purpose it is necessary to designate a specific rest frame (ORS) to act as reference in determining the speeds of the clocks to be used to compute the amount of time dilation. The Earth's center of mass (ECM) serves as the ORS for computing the rates of clocks located on orbiting satellites as well as on the Earth's surface. The effects of gravity on the relative rates can then also be computed from knowledge of their respective positions in space relative to the ECM. To the surprise of many, it was found that the transverse Doppler effect was *asymmetric* when the light source and detector are mounted on a high-speed rotor, thus agreeing with Einstein's alternative interpretation of time dilation that rejects the above symmetry principle and concludes instead that an accelerated clock always runs slower than its stationary counterpart. The EP was used to obtain a quantitative prediction of these results, but it was pointed out by Sherwin that *the lack of symmetry expected from the LT proves that it is only valid for uniform translation.* This conclusion has been largely ignored, however, and emphasis has been placed instead on the possibility of using the experimental data to quantitatively predict the dependence of the rates of clocks on their state of motion and position in a gravitational field.
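For scale, the clock-rate corrections evaluated relative to the ECM can be sketched with textbook numbers for a GPS orbit. The constants and first-order rate formulas below are standard illustrative values, not figures from this chapter, and the small contribution of the ground clock's rotational speed is neglected:

```python
import math

# Standard constants (textbook values, assumed for illustration)
C = 2.99792458e8          # speed of light, m/s
GM = 3.986004418e14       # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6         # mean Earth radius, m
R_ORBIT = 2.6561e7        # GPS orbit radius (~20,200 km altitude), m

v = math.sqrt(GM / R_ORBIT)  # circular orbital speed, ~3.87 km/s

# Fractional rate offsets of a satellite clock relative to a ground clock,
# both referred to the ECM frame (first-order expansions).
velocity_term = -v**2 / (2 * C**2)                    # special-relativistic slowing
gravity_term = GM / C**2 * (1/R_EARTH - 1/R_ORBIT)    # gravitational speed-up

net = velocity_term + gravity_term
print(f"velocity term : {velocity_term * 86400 * 1e6:+.1f} us/day")
print(f"gravity term  : {gravity_term  * 86400 * 1e6:+.1f} us/day")
print(f"net           : {net * 86400 * 1e6:+.1f} us/day")
```

The net offset of roughly +38 µs per day is the well-known correction built into GPS satellite clocks; the point of the passage is that both terms are evaluated in a single designated rest frame, the ECM, rather than symmetrically between the two clocks.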
While this information has proven invaluable in the development of GPS, it leaves open the question of whether other predictions of the LT such as space-time mixing, remote non-simultaneity, Fitzgerald-Lorentz length contraction (FLC) and Lorentz invariance are valid under similar circumstances in which acceleration is present, and even whether the LT has validity for uniformly translating systems.

In considering this point it has been noted that Einstein made an undeclared assumption, *a hidden postulate*, in his derivation of the LT which he made no attempt to justify. An example has been given to show that this assumption in fact leads to contradictory predictions regarding whether two observers in uniform relative motion agree or disagree on the length of an object that is oriented perpendicularly to their velocity (clock riddle). It has thereupon been shown that all hitherto successful predictions of relativity theory, including those that are relevant to GPS, can be obtained by assuming that clock rates of observers have a constant ratio as long as neither their relative state of motion nor their respective positions in a gravitational field change. The latter assumption leads to an alternative Lorentz transformation (ALT) which is compatible with Einstein's two postulates of relativity and the same velocity transformation (VT) that is obtained using conventional STR. This version of the theory eliminates the contradiction of the clock riddle while continuing to successfully predict the aberration of starlight at the zenith and the results of the Fresnel light-drag experiment. The resulting theory recognizes that the rates of proper clocks do differ between inertial systems, but that it is nonetheless impossible for a given observer to determine his state of motion on the basis of strictly *in situ* measurements. This is because there is a uniform scaling of the rates of all natural clocks when they are accelerated by the same amount. The same holds true for all other physical quantities. This conclusion is perfectly consistent with the RP, which can be restated as follows: *The laws of physics are the same in all inertial systems but the units in which they are expressed can and do differ in a systematic manner.*


The difference between the impacts of the photoionization and recombination plasmas on the distortion of GPS satellite signals is clearly manifested in the observed dependence of the positioning errors on time. In the first case, there is a sharp and narrow (up to 20 min) peak with a positioning error of more than 50 m. The second case corresponds to the formation of a bell-shaped dependence of the error with a typical width of several hours and positioning errors of 15-20 m.

During periods of solar activity, characteristic IR radiation should be formed, which is indicative of Rydberg states having been formed in the D-layer of the Earth's ionosphere. The integrated intensity close to the IR radiation maximum gives information about the population of Rydberg states at altitudes of 50-60 km. These results can be used as a starting point for the corresponding kinetic calculations. The slope on the left side of the frequency profile contains information about the magnitude of the light flux.

An independent line of research may be a systematic analysis of the long-wave IR spectrum of radiation (for the Δ*n* = 1 transitions) emitted by the D-layer over an extended period during a strong geomagnetic storm. This band of the spectrum falls in the range of principal quantum numbers 20 < *n* < 40, where the *l*-mixing process is very efficient. The features of the frequency profile described above should also appear in this case. By contrast, the spectral picture of the *nL* → (*n* − 1)*L*′ transitions will be richer than the decimeter-range radiation of the A\*\*N2 and A\*\*O2 quasimolecules (the Δ*n* = 0 transitions). This effect may serve as a plasma diagnostic and enable IR tomography of the ionosphere D-layer.

In conclusion, it should be noted that two physical factors play key roles during the propagation of GPS satellite signals. The first is the absorption and subsequent stimulated emission of electromagnetic waves on Rydberg states of the A\*\*N2 and A\*\*O2 quasimolecules, with a time delay of 10<sup>−5</sup>-10<sup>−6</sup> seconds in a single scattering process. The second is due to incoherent plasma radiation. These processes act independently of each other. The two- to threefold increase in the envelope of the resonance intensity profile and the formation of a phase shift are the most characteristic features of the resonance absorption of electromagnetic waves with subsequent re-emission. These two processes alone can explain the power growth and the disappearance of the GPS signal observed in [57]. This means that the frequencies of the signals emitted by the satellite and subsequently received on the ground may differ substantially from one another.

Thus, the physical cause of the time delay and phase shift of the GPS satellite signal during periods of strong solar activity is associated with a cascade of resonant re-emissions by the Rydberg states of the A\*\*N2 and A\*\*O2 quasimolecules in the E- and D-layers of the ionosphere; i.e., they are determined by the quantum properties of the propagating medium.

#### **Author details**


Robert J. Buenker1, Gennady Golubkov2, Maxim Golubkov2, Ivan Karpov3 and Mikhail Manzheliy2

\*Address all correspondence to: bobwtal@yahoo.de; buenker@uni-wuppertal.de; alternativelorentztransformation.blogspot.com

1 Fachbereich C-Mathematik und Naturwissenschaften, Bergische Universität Wuppertal, Wuppertal, Germany

2 Semenov Institute of Chemical Physics, Russian Academy of Sciences, Moscow, Russia

3 I. Kant Baltic Federal University, Russia

Relativity Laws for the Variation of Rates of Clocks Moving in Free Space and GPS Positioning Errors Caused
http://dx.doi.org/10.5772/55792

#### **References**

[11] Pais A. 'Subtle is the Lord …' The Science and the Life of Albert Einstein. Oxford University Press; 1982, p. 137.

[12] Dingle H. Science at the Crossroads. London: Martin, Brian & O'Keefe; 1972.

[13] McCausland I. Problems in Special Relativity. Wireless World 1983; Oct. 63-65.

[14] Phipps Jr. T E. Old Physics for New - A Worldview Alternative to Einstein's Relativity Theory. Montreal: Apeiron; 2006, p. 142.

[15] Goldstein H. Classical Mechanics. Reading, Massachusetts: Addison-Wesley Publishing Co.; 1950, p. 193.

[16] v. Laue M. Die Relativitätstheorie. vol. 1, 7th ed. Braunschweig: Vieweg; 1955, p. 36.

[17] Einstein A. Relativitätsprinzip und die aus demselben gezogenen Folgerungen. Jahrbuch der Radioaktivität und Elektronik 1907;4 411-462.

[18] Ives H E, Stilwell G R. An Experimental Study of the Rate of a Moving Atomic Clock. Journal of the Optical Society of America 1938;28(7) 215-219; contd. in: 1941;31(5) 369-374.

[19] Mandelberg H I, Witten L. Experimental Verification of the Relativistic Doppler Effect. Journal of the Optical Society of America 1962;52(5) 529-535.

[20] Rossi B, Greisen K, Stearns J C, Froman D K, Koontz P G. Further Measurements of the Mesotron Lifetime. Physical Review 1942;61(11-12) 675-679.

[21] Ayres D S, Caldwell D O, Greenberg A J, Kenney R W, Kurz R J, Stearns B F. Comparison of π+ and π- Lifetimes. Physical Review 1967;157(5) 1288-1292.

[22] Weidner R T, Sells R L. Elementary Modern Physics. Boston: Allyn and Bacon; 1962, p. 410.

[23] Serway R A, Beichner R J. Physics for Scientists and Engineers. 5th ed. Orlando: Harcourt; 1999, p. 1262.

[24] Sard R D. Relativistic Mechanics. New York: W. A. Benjamin; 1970, p. 124.

[25] Hay H J, Schiffer J P, Cranshaw T E, Egelstaff P A. Measurement of the Red Shift in an Accelerated System Using the Mössbauer Effect in Fe57. Physical Review Letters 1960;4(4) 165-166.

[26] Will C M. Clock synchronization and isotropy of the one-way speed of light. Physical Review D 1992;45 403-411.

[27] Mansouri R, Sexl R U. A test theory of special relativity: III. Second-order tests. General Relativity and Gravitation 1977;8(10) 809-814.

[28] Kuendig W. Measurement of the Transverse Doppler Effect in an Accelerated System. Physical Review 1963;129(6) 2371-2375.

[29] Champeney D C, Isaak G R, Khan A M. Measurement of Relativistic Time Dilatation using the Mössbauer Effect. Nature 1963;198 1186-1187.

[30] Sard R D. Relativistic Mechanics. New York: W. A. Benjamin; 1970, p. 319.

[31] Sherwin C W. Some Recent Experimental Tests of the 'Clock Paradox'. Physical Review 1960;120(1) 17-21.

[32] Rindler W. Essential Relativity. New York: Springer Verlag; 1977, p. 32.

[33] Buenker R J. The Sign of the Doppler Shift in Ultracentrifuge Experiments. Apeiron 2012;19(3) 218-237. http://redshift.vif.com/JournalFiles/V19NO3PDF/V19N3BU1.pdf

[34] Einstein A. Über den Einfluß der Schwerkraft auf die Ausbreitung des Lichtes. Annalen der Physik 1911;340(10) 898-908.

[35] Pound R V, Snider J L. Effect of Gravity on Gamma Radiation. Physical Review 1965;140(3B) 788-803.

[36] Einstein A. Die Feldgleichungen der Gravitation. Königlich Preussische Akademie der Wissenschaften 1915; 831-847.

[37] Pais A. 'Subtle is the Lord …' The Science and the Life of Albert Einstein. Oxford University Press; 1982, p. 194.

[38] Rindler W. Essential Relativity. New York: Springer Verlag; 1977, p. 21.

[39] Shapiro I I. Fourth Test of General Relativity. Physical Review Letters 1964;13(26) 789-791.

[40] Pais A. 'Subtle is the Lord …' The Science and the Life of Albert Einstein. Oxford University Press; 1982, p. 178.

[41] Buenker R J. Use of Hamilton's Canonical Equations to Modify Newton's Corpuscular Theory of Light. Russian Journal of Chemical Physics 2003;22(10) 124-128. http://arxiv.org/ftp/physics/papers/0411/0411110.pdf

[42] Buenker R J. Gravitational and Kinetic Scaling of Units. Apeiron 2008;15(4) 382-413. http://redshift.vif.com/JournalFiles/V15NO4PDF/V15N4BU1.pdf

[43] Hafele J C, Keating R E. Around-the-World Atomic Clocks: Predicted Relativistic Time Gains. Science 1972;177 166-168.

[44] Hafele J C, Keating R E. Around-the-World Atomic Clocks: Observed Relativistic Time Gains. Science 1972;177 168-170.

[45] Vessot R F C, Levine M W. A Test of the Equivalence Principle Using a Space-Borne Clock. General Relativity and Gravitation 1979;10(3) 181-204.

[46] Buenker R J. Time Dilation and the Concept of an Objective Rest System. Apeiron 2010;17(2) 99-125. http://redshift.vif.com/JournalFiles/V17NO2PDF/V17N2BUE.pdf

[47] Van Flandern T. What the Global Positioning System Tells Us about Relativity. In: Selleri F. (ed.) Open Questions in Relativistic Physics. Montreal: Apeiron; 1998. p81-90.

[48] Buenker R J. The Global Positioning System and the Lorentz Transformation. Apeiron 2008;15(3) 254-269. http://redshift.vif.com/JournalFiles/V15NO3PDF/V15N3BU1.pdf

[49] Buenker R J. Simultaneity and the Constancy of the Speed of Light: Normalization of Space-time Vectors in the Lorentz Transformation. Apeiron 2009;16(1) 96-146.

[50] Buenker R J. Einstein's Hidden Postulate. Apeiron 2012;19(3) 282-301. http://redshift.vif.com/JournalFiles/V19NO3PDF/V19N3BU2.pdf

[51] Buenker R J. The Clock Riddle: The Failure of Einstein's Lorentz Transformation. Apeiron 2012;19(1) 84-95. http://redshift.vif.com/JournalFiles/V19NO1PDF/V19N1BUE.pdf

[52] Langley R B. GPS, the Ionosphere, and the Solar Maximum. GPS World 2000;11(7) 44-49.

[53] Markgraf M. Phoenix GPS Tracking System. Flight Report; 2005, VSB 30-DLR-RP-0001.

[54] http://gps.ece.cornell.edu/realtime.php

[55] Golubkov G V, Manzhelii M I, Karpov I V. Chemical Physics of the Upper Atmosphere. Russian Journal of Physical Chemistry B 2011;5(3) 406-411, DOI: 10.1134/S1990793111030055

[56] Cerruti A P, Kintner P M Jr, Gary D E, Manucci A J, Meyer R F, Doherty P, Coster A J. Effect of Intense December 2006 Solar Radio Bursts on GPS Receivers. Space Weather 2008;6(10) S10D07, DOI: 10.1029/2007SW000375

[57] Afraimovich E L, Astafieva E I, Berngardt O I, Lesyuta O S, Demyanov V V, Kondakova T N, Shpynev B G. Mid-Latitude Amplitude Scintillation of GPS Signals and GPS Performance Slips at the Auroral Oval Boundary. Radiophysics and Quantum Electronics 2004;47(7) 453-468, DOI: 10.1023/B:RAQE.0000047237.67771.bc

[58] Avakyan S V. Physics of the Solar-Terrestrial Coupling: Results, Problems, and New Approaches. Geomagnetism and Aeronomy 2008;48(4) 417-424, DOI: 10.1134/S0016793208040014

[59] Troitskii V S, Bondar' L N, Starodubtsev A M. The Search for Sporadic Radio Emission from Space. Soviet Physics Uspekhi 1975;17(4) 607-609, DOI: 10.1070/PU1975v017n04ABEH004633

[60] Golubkov G V, Golubkov M G, Ivanov G K. Rydberg States of Atoms and Molecules in a Field of Neutral Particles. In: Bychkov V L, Golubkov G V, Nikitin A I (ed.) The Atmosphere and Ionosphere: Dynamics, Processes and Monitoring. New York: Springer; 2010. p1-67, DOI: 10.1007/978-90-481-3212-6\_1

[61] Golubkov G V. Influence of the Medium on the Electromagnetic Radiation Spectrum of Highly Excited Atoms and Molecules. Russian Journal of Physical Chemistry B 2011;5(6) 925-930, DOI: 10.1134/S1990793111060108

[62] Oyama K I, Abe T, Mori H, Liu J Y. Electron Temperature in Night Time Sporadic E-Layer at Mid-Latitude. Annales Geophysicae 2008;26(3) 533-541, DOI: 10.5194/angeo-26-533-2008

[63] Golubkov G V, Golubkov M G, Manzhelii M I. Microwave Radiation in the Upper Atmosphere of the Earth during Strong Geomagnetic Disturbances. Russian Journal of Physical Chemistry B 2012;6(1) 112-127, DOI: 10.1134/S1990793112010186

[64] Pavelyev A, Tsuda T, Igarashi K, Liou Y A, Hocke K. Wave Structure in the Ionospheric D- and E-Layers Observed by Radio Holography Analysis of the GPS/MET Radio Occultation Data. Journal of Atmospheric and Solar-Terrestrial Physics 2003;65(1) 59-70, DOI: 10.1016/S1364-6826(02)00226-2

[65] Jacobsen K S, Pedersen A, Moen J I, Bekkeng T A. A New Langmuir Probe Concept for Rapid Sampling of Space Plasma Electron Density. Measurement Science and Technology 2010;21(8) 085902, DOI: 10.1088/0957-0233/21/8/085902

[66] Kurihara J, Oyama S, Nozawa S, Tsuda T T, Fujii R, Ogawa Y, Miyaoka H, Iwagami N, Abe T, Oyama K I, Kosch M J, Aruliah A, Griffin E, Kauristie K. Temperature Enhancements and Vertical Winds in the Lower Thermosphere Associated with Auroral Heating During the DELTA Campaign. Journal of Geophysical Research 2009;114(12) A12306, DOI: 10.1029/2009JA014392

[67] Biberman L M, Vorobiev V S, Iakubov I T. Kinetics of Nonequilibrium Low-Temperature Plasma. Moscow: Nauka; 1982.

[68] Golubkov G V, Ivanov G K. Rydberg States of Atoms and Molecules and Elementary Processes with their Participation. Moscow: URSS; 2001.

[69] Radtsig A A, Smirnov B M. Handbook of Atomic and Molecular Physics. Moscow: Atomizdat; 1980.

[70] Noble C N, Burke P G. R-Matrix Calculations of Low-Energy Electron Scattering by Oxygen Molecules. Physical Review Letters 1992;68(13) 2011-2014, DOI: 10.1103/PhysRevLett.68.2011

[71] Sun W, Morrison M A, Isaacs W A, Trail W K, Alle D T, Gulley R J, Brennan M J, Buckman S J. Detailed Theoretical and Experimental Analysis of Low-Energy Electron-N2 Scattering. Physical Review A 1995;52(2) 1229-1256, DOI: 10.1103/PhysRevA.52.1229

[72] Greenhow C, Smith W V. Molecular Quadrupole Moments of N2 and O2. Journal of Chemical Physics 1951;19(10) 1298-1300, DOI: 10.1063/1.1748017

[73] Zeiss G, Meath W J. Dispersion Energy Constants C6(A, B), Dipole Oscillator Strength Sums and Refractivities for Li, N, O, H2, NH3, H2O, NO, N2O. Molecular Physics 1977;33(4) 1155-1176, DOI: 10.1080/00268977700100991

[74] Soven P. Self-Consistent Linear Response Study of the Polarizability of Molecular Nitrogen. Journal of Chemical Physics 1985;82(7) 3289-3291, DOI: 10.1063/1.448227

**Chapter 2**

### **Receiver Biases in Global Positioning Satellite Ranging**

Pierre-Richard Cornely

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/55567

#### **1. Introduction**

The ionosphere is a dispersive medium, and taking advantage of this property enables the ionosphere's total electron content (TEC) to be estimated from the differential delay between the two GPS frequencies (L1 = 1575.42 and L2 = 1227.6 MHz). Other effects in the system, referred to as biases, introduce delays between the two frequencies that must be removed to resolve the TEC accurately. These biases are introduced either by the satellite or by the receiver. This chapter presents the results of an investigation into GPS receiver bias dependence on temperature. Receiver and satellite biases can be significant and, if not treated correctly, can corrupt the GPS TEC estimate. It has been shown that the GPS satellite biases are fairly stable day-to-day (Sardon et al., 1994; Wilson et al., 1999), whereas the GPS receiver biases can vary significantly, sometimes over intervals of hours (Coco, 1991). The Jet Propulsion Laboratory (JPL) delivers estimates of the 10-day average of individual satellite biases to the U.S. Air Force. According to Komjathy (2012), the 10-day average for each satellite differs from the individual day values by 0.1-0.3 nanoseconds (ns). However, to successfully make this 10-day average measurement, the Faraday rotation due to the ionosphere must be removed with an extremely high degree of accuracy. Standard errors for absolute GPS TEC estimation are estimated by the community to be ~3 TECU (1 TECU = 10¹⁶ electrons/m²), although the differential GPS TEC measurement is far more accurate (~0.02 TECU). These levels of TEC biases can pose significant challenges in ionospheric image reconstruction, as reported by Cornely (2003). Most scientists working in the area think we should have absolute TEC values from GPS at the 0.1-0.2 TECU level, an extremely demanding goal. Slant TEC estimates are obtained from GPS observations using the following equation (Dyrud, 2008):

$$\text{STEC}_{i,k} = 2.85 \left[ D_{L2}(\text{ns}) - D_{L1}(\text{ns}) - B_k - B_i + \varepsilon \right]_{i,k} \tag{1}$$

© 2013 Cornely; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

In (1), STECi,k represents the slant TEC and has the units of TECU (1 TECU = 10¹⁶ electrons/m²); the constant 2.85 is the conversion factor from time in ns to TECU and has the units of TECU/ns; DL1 and DL2 represent the total measured range delay in ns at the L1 and L2 (L1 = 1575.42 and L2 = 1227.6 MHz) frequencies, respectively. The subscript k represents the satellite transmitting the data and the subscript i represents the receiver that is collecting the data. Bk and Bi represent the differential delays in ns, BkL1 − BkL2 and BiL1 − BiL2, due to the satellite and receiver, respectively. Finally, ε represents other error sources, again in ns, which are assumed to be small.
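As a concrete sketch of how equation (1) is applied, the fragment below computes a slant TEC value from the 2.85 TECU/ns factor given above; all delay and bias numbers are invented for illustration, not measured data:

```python
# Sketch of the slant-TEC computation in equation (1).
# All input values below are invented for illustration.

K_NS_TO_TECU = 2.85  # conversion factor from differential delay (ns) to TECU

def slant_tec(d_l2_ns, d_l1_ns, sat_bias_ns, rcv_bias_ns, eps_ns=0.0):
    """Slant TEC (TECU) from dual-frequency range delays and biases (ns)."""
    return K_NS_TO_TECU * (d_l2_ns - d_l1_ns - sat_bias_ns - rcv_bias_ns + eps_ns)

# Example: the L2 signal arrives 12.4 ns later than L1, with assumed
# satellite and receiver differential biases of 3.1 ns and 2.0 ns.
stec = slant_tec(d_l2_ns=132.4, d_l1_ns=120.0, sat_bias_ns=3.1, rcv_bias_ns=2.0)
print(f"{stec:.2f} TECU")  # 2.85 * (12.4 - 3.1 - 2.0) ≈ 20.8 TECU
```

Note how a bias error of 1 ns maps directly into a 2.85 TECU error in the absolute TEC, which is why the receiver bias estimation discussed below matters.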

It is worth noting that part of the GPS modernization includes a third civilian frequency, L5 = 1176.45 MHz (Kaplan, 2006), which will allow for additional estimates of the TEC using frequency combinations at each time sample. The advantages of GPS modernization for the atmospheric science community are described in Van Dierendonck and Coster (2001). In the early days of GPS, the community was uncertain how to estimate either set of biases. However, it was quickly recognized that the delay introduced by the ionosphere changed. Techniques applying this property to estimate the biases are described in works by Mannucci et al., 1993, and Komjathy, 1997. Work at the NASA JPL (e.g. Lanyi and Roth, 1988) led to initial algorithms for bias estimation. Other methods were developed using single receivers (Coco et al., 1991; Gaposchkin and Coster, 1993). More powerful techniques based on the global network of receivers (Wilson and Mannucci, 1993; Sardon et al., 1994) then followed. The estimation of satellite and receiver biases remains an area of significant and active investigation within the ionospheric community. New and enhanced techniques have recently been developed that estimate receiver differential biases for all available GPS stations (typically around 1000 sites) on a daily basis (Komjathy et al., 2005; Rideout and Coster, 2006). The research community needs more efficient and improved estimation algorithms to properly perform processing and quality checks on the large amount of GPS data currently available on a daily basis.

This chapter looks at one possible error in the standard practices of determining receiver biases. Specifically, the most common procedure within the community involves applying a single value for the receiver bias, and another one for each individual satellite bias, over a 24-hour time frame. For the satellite biases, most groups rely on those estimated by the International GNSS Service (IGS) or those of IGS contributing members. These values are available online at the Crustal Dynamics Data Information System (CDDIS). Receiver biases, on the other hand, are typically estimated by the individual groups processing TEC. Typically, the estimate of the receiver bias is made over a full 24-hour period. This practice averages over any temporal changes in temperature. In the literature, there have been hints of issues with temperature; for example, the thermal influence on time and frequency transfer has been studied (Rieck, 2003). It is assumed that the temperature dependence in the L1-L2 receiver bias is due to a combination of the following: 1) the hardware in the pre-amplifier of the antenna, 2) the cable connecting the antenna, and 3) the receiver hardware itself. Receivers that measure the biases due to the receiver hardware have been built (Dyrud et al., 2008), but they neglect to account for the receiver bias due to 1) and 2). In this chapter, our aim is to demonstrate that changes in the estimated receiver bias exist due to changes in temperature, and that this will affect the absolute TEC that can be obtained with GPS.

#### **2. Receiver bias estimation**

To test our hypothesis of temperature dependence on receiver bias estimation, we estimated daily receiver bias values during the night-time period from 12:00 am to 2:00 am, when the ionosphere is expected to be fairly stable. A software package known as MIT Automated Processing of GPS (MAPGPS) has been developed to automate the processing of GPS data into global total electron content (TEC) maps. Observations are used from all available GPS receivers during all geomagnetic conditions where data has been successfully collected. In this section, the architecture of the MAPGPS software is described. Particular attention is given to the algorithms used to estimate the individual receiver biases. The MAPGPS approach to solving the receiver bias problem uses three different methods: minimum scalloping, least squares, and zero-TEC. A top-level block diagram illustration of the MAPGPS system is shown in **Figure 1**.

To estimate receiver biases, MAPGPS uses a procedure which relies on a combination of three different methods. The first method, minimum scalloping, was developed by P. Doherty (private communication) and depends on the assumption that the ''scalloping'' in the values of zenith TEC estimates from the different satellites observed over a 24 h period should be minimized. The second method, least squares, uses a least squares fitting routine to compute the differential receiver biases. The least squares method in MAPGPS is based on an earlier procedure developed at MIT Lincoln Laboratory (Gaposchkin and Coster, 1993; K. Duh, private communication). The least squares method measures only the differential biases, however, and so must be combined with the other methods to determine absolute bias levels. The third method used by MAPGPS is the zero-TEC method. In this method, the bias value of the receiver is selected to be that which sets the minimum value of the TEC over a 24-h period to zero. We describe these methods in more detail in the following paragraphs.

**Figure 1.** Illustration of the MAPGPS processing steps
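As a toy illustration of the least-squares step just described, the sketch below forms difference observations between receiver pairs (assuming the TEC contributions have already cancelled, as happens for closely spaced ionospheric pierce points), pins one receiver as a reference since only relative biases are observable, and solves for the remaining biases. All receiver names and bias values are invented:

```python
# Toy sketch of the least-squares bias step: difference observations between
# receiver pairs constrain only *relative* biases, so one receiver is held
# fixed as a reference. All names and numbers below are invented.
import numpy as np

receivers = ["rcv_a", "rcv_b", "rcv_c"]
true_bias = {"rcv_a": 0.0, "rcv_b": 2.5, "rcv_c": -1.2}  # ns, rcv_a = reference

# Difference observations (i, j, Q_i - Q_j) with the TEC terms cancelled.
obs = [(i, j, true_bias[i] - true_bias[j])
       for i in receivers for j in receivers if i < j]

# Design matrix over the non-reference receivers only (rcv_a pinned to 0 ns).
unknowns = receivers[1:]
col = {r: k for k, r in enumerate(unknowns)}
A = np.zeros((len(obs), len(unknowns)))
y = np.zeros(len(obs))
for row, (i, j, d) in enumerate(obs):
    if i in col:
        A[row, col[i]] += 1.0
    if j in col:
        A[row, col[j]] -= 1.0
    y[row] = d

# Ordinary least-squares solve for the relative biases.
est, *_ = np.linalg.lstsq(A, y, rcond=None)
print(dict(zip(unknowns, est.round(3))))  # relative biases w.r.t. rcv_a
```

In the real problem there are far more observations than unknowns, so the system is strongly over-determined and the solve averages down the measurement noise; an absolute level must still come from one of the other two methods.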

#### **3. Minimum scalloping method**

The minimum scallop technique is based on the principle that zenith TEC values computed from low elevation angle data should not, on average, differ from zenith TEC values computed from high elevation angles. Of course, during some time periods there may be temporary ionospheric structures that make this assumption false, such as at sunset or sunrise, when gradients are frequently present in the TEC. Because of this, the technique has been implemented in a way that averages TEC values over a relatively long period. In general, a mapping function converts the line-of-sight TEC at lower elevation angles to its vertical or zenith value. However, the part of the TEC observation that is caused by receiver bias, and not the ionosphere, is unaffected by elevation angle. When the mapping function is applied to a receiver using an incorrect receiver bias, the estimated vertical TEC values will be in error. Specifically, the vertical TEC values that correspond to low elevation angles will be either increased or decreased, depending on the sign of the bias. The minimum scallop technique can be applied to the TEC data around local midnight, where there should be less inhomogeneity in the TEC. For a given receiver bias, we bin and median filter the vertical TEC values by elevation angle, and determine the flatness of the resulting median TEC versus elevation angle. The receiver bias that gives the flattest curve of TEC versus elevation angle is the minimum scallop receiver bias. While no systematic errors were found to be associated with this technique, the standard deviation of day-to-day estimates for the same receiver is about 2 TECU. It is not clear whether this 2 TECU value is due to innate problems with the algorithm, to actual receiver bias variability, or to some combination of both.
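The search described above can be sketched numerically. The fragment below uses synthetic observations and a simple thin-shell mapping function with an assumed 450 km shell height; the operational MAPGPS binning and median filtering are more elaborate:

```python
# Minimal sketch of the minimum-scallop idea: try candidate receiver biases,
# map (slant - bias) to vertical TEC, and keep the bias whose median vertical
# TEC is flattest across elevation bins. Synthetic data; thin-shell mapping
# with an assumed 450 km shell height.
import math
import random
import statistics

R_E, H_SHELL = 6371.0, 450.0  # Earth radius and shell height, km

def mapping(elev_deg):
    """Thin-shell slant-to-vertical factor Z (largest at low elevation)."""
    z = math.asin(R_E / (R_E + H_SHELL) * math.cos(math.radians(elev_deg)))
    return 1.0 / math.cos(z)

random.seed(1)
TRUE_BIAS, TRUE_VTEC = 4.0, 10.0  # TECU, invented values
# Synthetic observations: slant TEC = vertical * mapping + bias + noise.
obs = [(e, TRUE_VTEC * mapping(e) + TRUE_BIAS + random.gauss(0.0, 0.1))
       for e in range(10, 90, 2)]

def scallop(bias):
    """Spread of median vertical TEC between low and high elevation bins."""
    low = statistics.median((s - bias) / mapping(e) for e, s in obs if e < 40)
    high = statistics.median((s - bias) / mapping(e) for e, s in obs if e >= 40)
    return abs(low - high)

# Grid search over candidate biases from 0.0 to 8.0 TECU in 0.1 TECU steps.
best = min((b / 10 for b in range(0, 81)), key=scallop)
print(best)  # should land near the true 4.0 TECU bias
```

With the correct bias removed, the low- and high-elevation medians agree; any residual bias leaks into the low-elevation bins in proportion to the mapping factor, which is what the grid search exploits.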

#### **4. Least squares method**

To use the least squares method, a system of equations is set up using a number of observations and unknowns. On any given day, each GPS receiver observes multiple satellites multiple times. Each observation consists of a TEC estimate, a satellite bias, and a receiver bias. The receiver bias is assumed to be constant over the day for each receiver, and the satellite bias is assumed to be constant over the day for each satellite. In this processing, the satellite biases are assumed to be known. The problem of estimating the receiver biases is thus over-determined to a large degree, as there are many more observations than unknown receiver biases. Accordingly, the method of least squares is well suited to find the best-fit solution. To compute the differential relationship between the different receiver biases, a system of difference observations is created as follows. An observation, O, is described as (ignoring measurement error):


Receiver Biases in Global Positioning Satellite Ranging

http://dx.doi.org/10.5772/55567



$$O = T + S + R \tag{2}$$

where T is the line-of-sight TEC, S is the satellite bias, and R is the receiver bias. S is assumed to be known a priori from previous calculations, so it is subtracted as a constant, leaving the value Q, our partially corrected observation:

$$Q = O - S = T + R \tag{3}$$

If there are two observations by two different receivers, we have the equation:

$$Q\_1 - Q\_2 = \left(T\_1 - T\_2\right) + \left(R\_1 - R\_2\right) \tag{4}$$

Applying the vertical TEC mapping function, Z, we get:

$$\frac{Q\_1}{Z\_1} - \frac{Q\_2}{Z\_2} = \left(\frac{T\_1}{Z\_1} - \frac{T\_2}{Z\_2}\right) + \left(\frac{R\_1}{Z\_1} - \frac{R\_2}{Z\_2}\right) = \left(\nu T\_1 - \nu T\_2\right) + \left(\frac{R\_1}{Z\_1} - \frac{R\_2}{Z\_2}\right) \tag{5}$$

where *νT* represents the vertical, or zenith, TEC. If the ionosphere pierce points of the two observations are sufficiently close together, the zenith TECs cancel out and we are left with the desired equation:

$$\frac{Q\_1}{Z\_1} - \frac{Q\_2}{Z\_2} = \left(\frac{R\_1}{Z\_1} - \frac{R\_2}{Z\_2}\right) \tag{6}$$

The system of equations (2)-(6) can be solved by the least squares method. MAPGPS defines two observations as sufficiently close together if their ionospheric pierce points at 450 km altitude are separated by no more than 100 km in the horizontal direction. Note that the least squares method does not determine the absolute value of the biases; it only gives the relative biases.
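A minimal sketch of setting up and solving the difference system of equation (6) might look as follows; `relative_biases`, the pair format, and the choice to pin receiver 0 to zero are illustrative assumptions (the method only constrains differences, so one bias must be fixed arbitrarily):

```python
import numpy as np

def relative_biases(pairs, n_receivers):
    """Solve the over-determined system of eq. (6) for relative receiver biases.

    pairs: list of (i, j, zi, zj, dq) where dq = Qi/zi - Qj/zj for a pair of
           near-coincident observations by receivers i and j.
    Returns biases with receiver 0 fixed to zero (the solution is relative only).
    """
    A = np.zeros((len(pairs), n_receivers))
    y = np.zeros(len(pairs))
    for row, (i, j, zi, zj, dq) in enumerate(pairs):
        A[row, i] = 1.0 / zi      # coefficient of R_i in eq. (6)
        A[row, j] = -1.0 / zj     # coefficient of R_j
        y[row] = dq
    # Fix the gauge: pin R_0 = 0 by dropping its column and solving for the rest.
    sol, *_ = np.linalg.lstsq(A[:, 1:], y, rcond=None)
    return np.concatenate(([0.0], sol))
```

Each row of the design matrix carries only the two mapping-function-scaled bias coefficients, since the zenith TECs have already cancelled in eq. (6).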

#### **5. Zero TEC method**


52 Global Navigation Satellite Systems - From Stellar to Satellite Navigation


This method is based on the principle that the TEC often approaches zero during the night or at high latitudes. The receiver bias in this method is therefore calculated by setting the minimum observed value of the TEC over a 24 h period equal to zero. This method has the advantage that it is simple to use, and in a relative sense it is reasonably robust. Nevertheless, the minimum value of the TEC cannot in general be expected to equal zero, especially in the equatorial regions. Noise can also affect the estimation. Using this assumption to calculate the biases may therefore cause non-negligible errors.
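As a one-line sketch, the method amounts to the following (assuming `vtec_24h` holds the uncorrected vertical TEC values for one receiver over 24 h):

```python
import numpy as np

def zero_tec_bias(vtec_24h):
    """Zero TEC method: assume the smallest vertical TEC seen over 24 h is
    truly zero, so the receiver bias is simply that minimum value (in TECU)."""
    return np.min(vtec_24h)
```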

#### **6. MAPGPS receiver bias determination**

In MAPGPS, the receiver bias determination is an iterative process involving several steps. This process is illustrated inside the dashed line in Figure 10, which shows how the three methods described above are applied, either in combination or individually, to resolve the receiver bias. The first step in this process is to filter out data with elevation angles of less than 30 degrees, in order to separate mapping function effects from the calculation of receiver bias. This is done because scalloping can be caused both by the receiver bias and by a faulty mapping function. Above a 30-degree elevation angle, all the mapping functions are very similar, yet at 30 degrees the slant delay is still almost twice the zenith delay. This leaves enough of an elevation-angle effect to separate out the receiver bias. Once the data with elevation angles greater than 30 degrees have been collected, groups of sites are created by starting with a randomly selected seed site. Next, all remaining available receiver sites are searched to find the receiver site in closest proximity to any of the group members. If this site is within a 400 km horizontal distance, it is added to the group. The 400 km horizontal distance was chosen to ensure that within any group there would be a large number of coincident TEC measurements (defined as TEC estimates from different receivers separated by no more than 50 km horizontally at the pierce point height of 450 km). This procedure is repeated until either no remaining sites lie within 400 km of any group member, or the group reaches a maximum of 100 sites. Once this step is complete, the method for determining the differential bias depends on the number of sites in the group. If the group consists of three or more sites, the differential biases are found using the least squares method, which outputs relative, not absolute, biases. To set the level of the absolute bias, we could simply use the minimum scallop receiver bias for a single receiver. However, the minimum scallop technique has a certain amount of random uncertainty embedded in it.
To reduce this uncertainty, we determine the minimum scallop receiver bias for all of the receivers in the group. We then find the average difference between these values and those computed by the least squares technique. In this way, the error in the absolute value of the receiver bias for the group is reduced by a factor of the square root of the number of receivers. Processing data from 1,000 stations takes approximately 12 h on a single Pentium 4-class computer.
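The grouping step described above can be sketched as a greedy loop. This is an illustrative reconstruction, not the production MAPGPS code, and the `seed` parameter is added here only to make the example deterministic:

```python
import random

def build_group(sites, dist, max_km=400.0, max_size=100, seed=None):
    """Greedy MAPGPS-style site grouping sketch.

    sites: list of site ids; dist(a, b) -> horizontal distance in km.
    Starts from a (randomly selected) seed site and repeatedly adds the
    remaining site closest to ANY current member, stopping when none is
    within max_km or the group reaches max_size.
    """
    remaining = list(sites)
    if seed is None:
        seed = random.choice(remaining)   # MAPGPS starts from a random seed site
    remaining.remove(seed)
    group = [seed]
    while remaining and len(group) < max_size:
        # remaining site closest to any current group member
        best = min(remaining, key=lambda s: min(dist(s, g) for g in group))
        if min(dist(best, g) for g in group) > max_km:
            break                          # nothing left within 400 km of the group
        group.append(best)
        remaining.remove(best)
    return group
```

Because membership is tested against any member rather than the seed, a group can grow as a chain well beyond 400 km across, which is why the 100-site cap is needed.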


If the group contains fewer than three sites, the differential bias is found using the minimum scalloping technique. First, the zero TEC bias is computed and used as the initial guess. Next, for a range of biases about the initial guess, a histogram of average TEC values versus elevation-angle bin is constructed for data in a 4 h window centered on 2 a.m. local (solar) time, and the slope of this histogram is calculated. If the minimum slope is at the edge of the range of biases, the fit is assumed to have failed, and the process is repeated with the histogram constructed for data in a 4 h window centered on solar noon. If the process fails a second time, it is repeated again with larger bias limits. In general, this process succeeds approximately 95% of the time. The zero TEC method is employed in two cases: in the 5% of cases where the minimum scalloping technique fails, and automatically for receivers located at latitudes above 65 degrees. The reason for this latitude criterion is the typical high-latitude variability of the TEC, which is expected to be larger in magnitude than the scalloping effect; because of this, the minimum scalloping technique fails to work for high-latitude data.
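The retry logic can be sketched as follows; `slope_of`, the search step, and the doubling of the bias limits on each failure are assumptions for illustration (the text does not specify how much larger the limits become, nor does this sketch handle the 2 a.m. versus noon window switch):

```python
def fit_scallop_bias(slope_of, initial_guess, half_range=10.0, step=0.5, max_expand=3):
    """Retry-logic sketch for the small-group minimum-scalloping fit.

    slope_of(bias) -> slope of the median-TEC-vs-elevation histogram for a
    candidate bias. A minimum at the edge of the search range is treated as
    a failed fit, and the range is enlarged before trying again.
    """
    for attempt in range(max_expand):
        lo = initial_guess - half_range * (2 ** attempt)
        hi = initial_guess + half_range * (2 ** attempt)
        grid = [lo + k * step for k in range(int((hi - lo) / step) + 1)]
        best = min(grid, key=lambda b: abs(slope_of(b)))
        if lo < best < hi:         # interior minimum: accept the fit
            return best
    return None                    # give up; fall back to the zero TEC method
```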

#### **7. Description of experiments**


In the experiments, four Scintillation Network Decision Aid (SCINDA) GPS receivers were used at two different sites. SCINDA is a network of ground-based receivers that monitors scintillations at the UHF and L-band frequencies caused by electron density irregularities in the equatorial ionosphere (Groves et al., 1997). The four SCINDA receivers are all NovAtel dual-frequency GISTM receivers running the specialized software described in Carrano and Groves (2006). The temperature data were collected with EasyLog USB data loggers, which record temperature, dew point, and humidity. The temperature data were verified by comparison with the National Oceanic and Atmospheric Administration's measurements near the first site. These temperature loggers have a stated manufacturer's accuracy of ±0.5°C (±1°F) and a repeatability of ±0.1°C (±0.2°F) over the range -35°C to 80°C (-31°F to 176°F). In the sampling, the temperature is measured to the 0.5°C level. For the experiment at the first site, data were collected for over a hundred days using two temperature sensors (one indoors and the other outdoors) and a single GPS receiver. In this experiment, the goal was to determine and remove the temperature dependence of the receiver bias. A second experiment, conducted at the other site, involved a comparison of TEC estimates from three collocated GPS receivers over a single 24-hour period. The primary goal of this experiment was to observe whether the temperature dependence of receiver bias estimation varies from receiver to receiver.

We will first describe the experiments at the first site, where data were collected from day 15 to 150, 2010. This data set is used to attempt to model and remove the temperature dependence of receiver bias estimation. The remaining residual bias is examined for dependence on other parameters that may need to be considered in future modeling efforts.

#### **8. Experiment at the first site**

In a first experiment, temperature data were collected with two temperature loggers that were collocated with the single GPS SCINDA receiver that ran continuously at the first site. One of these temperature loggers was located inside the building near the receiver, and the other was located outside the building on the antenna pole. It is important to note that the antenna was located on a blacktop roof, where temperatures are expected to exceed the local ambient temperature.

#### **9. Results**

For each day, an average temperature for the night-time period from 12:00 am to 2:00 am was computed. **Figure 2** shows the outdoor and indoor temperature data plotted as a function of the corresponding bias value from day 15 to 150, 2010 (January to June 2010).

**Figure 2.** Night-time receiver bias versus outdoor temperature (top) or indoor temperature (bottom) for the time period day 15-150, 2010. Data collected at the MIT Haystack Observatory Optics building in Westford, MA.


It is important to note that the temperatures and bias values shown in this plot and used in this chapter are from the 12:00 am to 2:00 am local time period. The Pearson correlation coefficient (r), which represents the strength of the linear relationship between two variables, was determined for both the outdoor temperature versus bias (top plot) and the indoor temperature versus bias (bottom plot). Other fits to the data (such as a quadratic) were no more significant, so the linear model was used exclusively. For a statistical sample size greater than 100, as is the case for the 131 points used here, an r value greater than 0.254 or less than -0.254 is rated significant. Because the Pearson correlation coefficient r is equal to -0.289 for the outdoor temperature versus bias, versus -0.253 for the indoor temperature versus bias, we decided to first remove the correlation observed between the receiver bias and the outdoor temperature. In addition, as shown in **Figure 2**, there is an apparent break in the indoor temperature versus receiver bias data (this can be observed visually at approximately 24°C).
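The significance screen described here can be reproduced from first principles; a minimal sketch follows (the 0.254 critical value for this sample size is taken from the text, not computed here):

```python
import math

def pearson_r(x, y):
    """Plain Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def is_significant(r, threshold=0.254):
    """Two-sided screen as used in the text: |r| above the critical value."""
    return abs(r) > threshold
```

On this screen, the outdoor correlation (r = -0.289) passes while the indoor one (r = -0.253) narrowly misses, which is why the outdoor trend was removed first.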

**Figure 3.** The top plot shows the computed linear trend in the receiver bias versus outdoor temperature data set. The bottom plot shows the receiver bias residuals versus temperature.


To remove this correlation, a linear trend was computed for the outdoor temperature versus receiver bias (shown in the top plot of **Figure 2**) and removed from the data. The residuals are shown in the bottom plot of **Figure 2**. **Figure 3** shows the resulting receiver bias residuals versus the indoor temperature. In this plot, a clear break can still be observed in the data at 24°C. It is assumed that the data below 24°C correspond to time periods when the heater in the optics building was having difficulty maintaining a constant temperature, which may account for the larger fluctuations below 24°C. For this data set, the data were divided into two groups, corresponding to the residuals associated with temperatures less than, and greater than or equal to, 24°C.
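The detrending step amounts to an ordinary linear fit and subtraction; a minimal sketch, with the 24°C split handled by calling the helper once per temperature group:

```python
import numpy as np

def remove_linear_trend(temp, bias):
    """Fit bias = a*temp + b by least squares and return the residuals
    (bias minus the fitted trend). Applied separately to each temperature
    group when the data are split at 24 degrees C."""
    a, b = np.polyfit(temp, bias, 1)
    return bias - (a * temp + b)
```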


**Figure 4.** Receiver Bias Residuals versus Indoor Temperature for day 15-150, 2010.

**Figure 5.** The computed linear trend in the receiver bias residual versus indoor temperature greater than or equal to 24°C.

Linear fits were computed for each of these groups. These fits and the corresponding Pearson r values are shown in **Figures 4** and **5**. For the data corresponding to temperatures greater than 24°C, the Pearson r value is fairly high (r = 0.398). Once these linear trends had been estimated and removed from the data, the final residuals are as shown in **Figure 6**. Clearly, a 2-3 TECU fluctuation level remains in the bias versus day-of-year plot. To test whether additional factors should be accounted for in our bias estimation techniques, several atmospheric parameters were examined to see if any additional correlations could be found. The parameters selected for this correlation study included the Ap, Kp, and the daily and 90-day averaged F10.7 cm solar flux. These parameters are important measures of solar activity (the daily and 90-day averaged F10.7 cm solar flux) and of the Earth's magnetic and electric field disturbances (the Kp and Ap).

**Figure 6.** The computed linear trend in the receiver bias residual versus indoor temperature less than 24°C.


The only parameter that stood out as highly correlated with our residuals was the daily F10.7 cm solar flux, which is a measurement of solar radio emissions at 2800 MHz. This correlation can be observed in **Figure 7**, where a scaled daily F10.7 cm value is overplotted onto the final residual receiver bias. The correlation factor between these two data sets is 0.64. The observed correlation is suspected to be related to the mapping function used in the receiver bias estimation. This mapping function, described in Rideout and Coster (2006), depends only on elevation and has no built-in term for the ionospheric shell height. This approach was based on the assumption that there is little variability in the ionospheric shell height in the two-hour window after midnight when all the receiver biases are estimated. Nevertheless, it is important to point out that the ionospheric shell height does have some day-to-day variability that is not captured in the model being used, and this additional error source could perhaps account for some of the un-modeled errors observed. Some of this error can also be attributed to remaining errors in the satellite biases.

of the different receivers. Only where the estimates of vertical TEC differ is it possible to see

Receiver Biases in Global Positioning Satellite Ranging

http://dx.doi.org/10.5772/55567

61

**Figure 9.** Vertical TEC estimates of three closely located (~10 - 20m separation) SCINDA receivers at the MWA site


**Figure 7.** Final Residual Receiver Bias following removal of both indoor and outdoor temperature trends

**Figure 8.** Receiver bias residuals and daily F10.7 measurements versus day of year 2010.

#### **10. Experiment at the second site**

On April 25, 2008, three Scintillation Network Decision Aid (SCINDA) GPS receivers were located at the MWA site, separated by approximately 10-20 m. **Figure 8** shows the total vertical TEC estimated by each receiver over the full 24-hour period. The vertical TEC plot is shown here to illustrate the diurnal pattern of the TEC, and indeed a clear diurnal pattern can be observed, with the maximum TEC reaching 18-20 TEC units at around 4-5 UT and the minimum TEC reaching 0-2 TEC units around 16-17 UT. Receiver and satellite biases have been applied to the data. The different colors (red, black, and blue) represent the TEC estimates of the different receivers. Only where the estimates of vertical TEC differ is it possible to see the different colors.

**Figure 9.** Vertical TEC estimates of three closely located (~10 - 20m separation) SCINDA receivers at the MWA site plotted as a function of time of day for April 27, 2008.

Global Navigation Satellite Systems - From Stellar to Satellite Navigation


Shown in **Figure 9** are the line-of-sight (los) differential TEC measurements between these different receivers as a function of time of day. The top plot shows the differential los TEC measurements between receivers MW2 and MW1, the middle plot shows the differential los TEC between receivers MW1 and MW3, and the bottom plot shows the differential los TEC between MW2 and MW3. The different colors in each plot represent the different GPS satellites. As all of the receivers were within 10-20 m of each other, the differential los TEC values should be either constant as a function of time of day or, at the very least, relatively constant. What is observed, however, are larger values in the differential TEC during the daytime and smaller values during the night. This diurnal structure is largest in the bottom plot, which shows the differential los TEC values between MW2 and MW3, and smallest in the middle plot, which shows the differential los TEC between MW1 and MW3. One would surmise from this that receiver MW2 has a receiver bias temperature dependence that differs significantly from that of MW1 and MW3. While the exact temperature excursion between day and night on April 27, 2008 is not known for the site, it is known from historical records that the low local temperature for April 27, 2008 was 68°F and the high temperature was 90°F.


**Figure 10.** Differential line of sight TEC between three different SCINDA receivers at the MWA site plotted as a function of time of day.

Based on the observations in Figure 9, we are suggesting that the observed changes in the differential TEC are due to a receiver bias temperature dependence that differs between the receivers MW1, MW2, and MW3. In other words, the temperature dependence for each receiver is unique and thus requires that the receiver bias be estimated for each individual receiver.

#### **11. Summary**

There are four main conclusions from the above discussion.

**1.** A clear temperature dependence of the GPS receiver bias is evident.

**2.** This temperature dependence appears to include both indoor and outdoor temperatures.

**3.** Other factors that are not now accounted for, such as the daily F10.7 cm flux, appear to be important in receiver bias estimation.

**4.** GPS receiver biases appear to be dependent on the individual receiver, and each receiver must be treated independently.
With respect to how accurately the absolute total electron content can be determined using GPS for low-frequency array calibration, it is possible that a 2-3 TEC-unit scatter still remains in the bias estimation process even after the temperature effect is removed (see **Figure 7**). This number does not represent the relative TEC accuracy, which is based on the differential phase measurements of TEC; the relative TEC can be estimated on the order of 0.01 TEC units. The 2-3 TEC-unit scatter remaining in the observed bias estimation is likely due to inaccuracies in the model used for the ionospheric mapping function. As was mentioned earlier, the mapping function being used does not account for any changes in the ionospheric scale height. The mapping function also lacks a valid plasmaspheric model. Recently, Carrano et al. (2009) developed a Kalman filter model to estimate plasmaspheric total electron content. If a better understanding of how to model the temperature effect on the bias estimation can be obtained, it is possible that an extended version of the Kalman filter for bias determination will lead to more accurate estimates of the bias. Clearly, more experimental and modeling studies are warranted.

### **Acknowledgements**

All experiments and partial results in this chapter were obtained from works by the following investigators:

A. Coster1, J. Williams2, A. Weatherwax2, W. Rideout1, D. Herne3; their contributions to this work are highly appreciated.

1 MIT Haystack Observatory

2 Department of Physics, Siena College

3 Curtin Institute for Radio Astronomy, International Centre for Radio Astronomy Research

### **Author details**


Pierre-Richard Cornely

Physics and Engineering Department, Eastern Nazarene College, Massachusetts, USA

#### **References**


[1] Carrano, C. S., & Groves, K. M. (2006). The GPS segment of the AFRL-SCINDA global network and the challenges of real-time TEC estimation in the equatorial ionosphere. *Proceedings of ION NTM 2006*, 18-20 January 2006, Monterey, CA, 1036.

[2] Carrano, C. S., Anghel, A., Quinn, R. A., & Groves, K. M. (2009). Kalman filter estimation of plasmaspheric total electron content using GPS. *Radio Science*, 44, RS0A10, doi:10.1029/2008RS004070.

[3] Coco, D. S., Coker, C. E., Dahlke, S. R., & Clynch, J. R. (1991). Variability of GPS satellite differential group delay biases. *IEEE Transactions on Aerospace and Electronic Systems*, 27(6), 931-938.

[4] Cornely, P.-R. J. Flexible prior model: Three-dimensional ionospheric tomography. *Radio Science*, doi:10.1029/2002RS002703.

[5] Coster, A., & Komjathy, A. (2008). Space weather and the Global Positioning System. *Space Weather*, 6, S06D04, doi:10.1029/2008SW000400.

[6] Dyrud, L., Jovancevic, A., Brown, A., Wilson, D., & Ganguly, S. (2008). Ionospheric measurement with GPS: Receiver techniques and methods. *Radio Science*, 43, RS6002, doi:10.1029/2007RS003770.

[7] Gaposchkin, E. M., & Coster, A. J. Bias determination. Lincoln Laboratory Technical Report 971 (MIT).

[8] Groves, K., Basu, S., Weber, E., Smitham, M., Kuenzler, H., Valladares, C., Sheehan, R., MacKenzie, E., Secan, J., Ning, P., McNeil, W., Moonan, D., & Kendra, M. (1997). Equatorial scintillation and systems support. *Radio Science*, 32, 2047.

[9] Kaplan, E. D. (2006). *Understanding GPS: Principles and Applications*. Artech House, Norwood, MA.

[10] Komjathy, A. (1997). Global Ionospheric Total Electron Content Mapping Using the Global Positioning System. Ph.D. dissertation, Department of Geodesy and Geomatics Engineering, University of New Brunswick.

[11] Komjathy, A., Sparks, L., Wilson, B. D., & Mannucci, A. J. (2005). Automated daily processing of more than 1000 ground-based GPS receivers to study intense ionospheric storms. *Radio Science*, 40, RS6006, doi:10.1029/2005RS003279; Komjathy, A., private communication, 2012.

[12] Lanyi, G., & Roth, T. (1988). A comparison of mapped and measured total ionospheric electron content using global positioning system and beacon satellite observations. *Radio Science*, 23(4), 483-4.

[13] Rieck, C., Jarlemark, P., Jaldehag, K., & Johansson, J. (2003). Thermal influence on the receiver chain of GPS carrier phase equipment for time and frequency transfer. *Proceedings of the 2003 IEEE International Frequency Control Symposium and PDA Exhibition Jointly with the 17th European Frequency and Time Forum*, May 2003, 326-331, doi:10.1109/FREQ.2003.1275110.

[14] Rideout, W., & Coster, A. (2006). Automated GPS processing for global total electron content data. *GPS Solutions*, doi:10.1007/s10291-006-0029-5.

[15] Sardon, E., Rius, A., & Zarraoa, N. (1994). Estimation of the transmitter and receiver differential biases and the ionospheric total electron content from Global Positioning System observations. *Radio Science*, 29(3), 577-586.

[16] Van Dierendonck, A. J., & Coster, A. J. (2001). Benefits of GPS modernization to the atmospheric science community. *Proceedings of ION 57th Annual Meeting*, 11-13 June 2001, Albuquerque, New Mexico, 382-390.

[17] Wilson, B. D., Mannucci, A. J., & Edwards, C. D. (1993). Sub-daily northern hemisphere ionospheric maps using the IGS GPS network. *Proceedings of the Seventh International*; Wilson, B. D., Yinger, C. H., Feess, W. A., & Shank, C. (1999). New and improved: The broadcast inter-frequency biases. *GPS World*, 10(9), 56-66.

**Chapter 3**

**Stand-Alone Satellite-Based Global Positioning**

> © 2013 Cina and Piras; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Alberto Cina and Marco Piras

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/55611

**1. Introduction**

The main principle of stand-alone positioning with pseudo-range is a "multiple intersection" based on measurement of distances [1].

If the satellite's position is known and the "range" between each satellite and the rover receiver is measured, the rover position can be estimated in the same reference system as the satellite positions.

In this chapter, principles of measurement and least squares calculations will be discussed, with the purpose of estimating the receiver position and its accuracy. This accuracy can be "foreseen" through the DOP calculation [2], [3].

The aim of this chapter is not to show new procedures but to explain concepts described in the bibliography, often fragmentary and without examples. To improve comprehension of the values and concepts, some numeric examples are given.

### **2. Satellite positions**

Each application of GPS positioning (stand-alone, relative, differential, generation of differential corrections, etc.) is based on knowledge of a satellite's position in the ECEF (Earth Centred Earth Fixed) geocentric cartesian reference system, which is fixed to the Earth.

The satellite position is estimated using the ephemerides which are contained in the navigation message broadcast by the satellite.

In the case of the GPS system, the ECEF satellite position is not included directly in this message; instead, the message contains dedicated parameters which, following Keplerian laws and considering perturbation effects, allow calculation of the satellite position [6].

GPS ephemerides allow for the estimation of the satellite position in an inertial reference system, from which it is possible to define the ECEF coordinates.


This system is described in Fig. 1 where:

*V* is the vernal point, the intersection of the ecliptic and the equatorial planes.

*X0* is the axis connecting the Earth's origin C and the point *V*. This direction is considered fixed, excluding some variations due to secular effects.

In the ECEF system, X is the axis of intersection of equatorial plane and the Greenwich mean meridian plane. Z is the axis which coincides with the Earth's rotation axis and Y completes a right-handed orthogonal system.

The Greenwich meridian plane rotates around the Earth's rotation axis with an angular speed ωe = 7.2921151467·10⁻⁵ rad/s, in accordance with GAST (Greenwich Apparent Sidereal Time).
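As a quick consistency check, this rotation rate reproduces the length of the sidereal day:

```python
import math

# Earth-rotation rate from the text, in rad/s.
omega_e = 7.2921151467e-5
# One full revolution takes 2*pi/omega_e seconds:
sidereal_day = 2 * math.pi / omega_e   # ≈ 86164.1 s, i.e. about 23 h 56 min 4 s
```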

Θ, the subtended angle between X and *X0*, is the "sidereal time".

**Figure 1.** ECEF system and Keplerian elements

**•** Ω is the longitude of the ascending node: the angle which has its vertex in the centre of the Earth C and is subtended by X0 and the ascending node K.

**•** P is the periapsis, the closest position of the satellite with respect to the Earth. The angle in C between the ascending node K and P is the periapsis argument ω.

**•** The angle *i* between the equatorial plane and the orbital plane is the orbit inclination.

**•** *f* is the "true anomaly", shown in Fig. 1.

The six parameters used to describe the satellite orbit are given in Table 1.

| Parameter | Description | Meaning |
|---|---|---|
| a | Semi-major axis of the elliptical orbit | Size and shape of the orbit |
| e² = (a² − b²) / a² | Orbit eccentricity (b = semi-minor axis) | Size and shape of the orbit |
| i | Orbit inclination | Orbital plane in the apparent system |
| Ω | Longitude of the ascending node | Orbital plane in the apparent system |
| ω | Argument of periapsis | Orbital plane in the apparent system |
| μ0 | Mean anomaly at the reference epoch toe¹ | Position in the orbital plane |

¹ The Keplerian element is t0, the epoch of the passage to periapsis. In the GPS navigation message, the anomaly at the ephemerides reference epoch toe is given.

**Table 1.** Keplerian elements


When *a* and *e2* are known, the satellite position can be estimated starting from an initial position in the orbital plane, which is defined by the mean anomaly μ0. These parameters are in the RINEX navigation message (.nav), as described in Fig. 2.


**Figure 2.** Broadcast GPS ephemerides in RINEX navigation (.nav) for satellite ID=23. Keplerian elements are reported in grey and the clock corrections are reported in the boxes.
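A rough sketch of the Keplerian step these parameters enable is given below. The numerical values are illustrative, not from a real ephemeris, and the perturbation corrections and the Earth-rotation (sidereal) rotation prescribed by the GPS interface specification are omitted; only the unperturbed two-body geometry is shown.

```python
import math

# Illustrative Keplerian parameters (NOT from a real ephemeris), GPS-like orbit.
a = 26560e3                 # semi-major axis [m]
e = 0.01                    # orbit eccentricity
i = math.radians(55.0)      # inclination
Om = math.radians(40.0)     # longitude of the ascending node
om = math.radians(30.0)     # argument of periapsis
mu0 = math.radians(120.0)   # mean anomaly at the reference epoch

GM = 3.986005e14            # Earth gravitational parameter [m^3/s^2]
dt = 600.0                  # time past the reference epoch [s]

# Propagate the mean anomaly with the mean motion n = sqrt(GM/a^3).
n = math.sqrt(GM / a**3)
M = mu0 + n * dt

# Solve Kepler's equation M = E - e*sin(E) by fixed-point iteration.
E = M
for _ in range(10):
    E = M + e * math.sin(E)

# True anomaly f and orbital radius.
f = math.atan2(math.sqrt(1.0 - e**2) * math.sin(E), math.cos(E) - e)
r = a * (1.0 - e * math.cos(E))

# Argument of latitude u, then rotation of the in-plane position to the
# inertial frame; a further rotation by the sidereal angle would give ECEF.
u = om + f
x = r * (math.cos(Om) * math.cos(u) - math.sin(Om) * math.sin(u) * math.cos(i))
y = r * (math.sin(Om) * math.cos(u) + math.cos(Om) * math.sin(u) * math.cos(i))
z = r * math.sin(u) * math.sin(i)
```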

Broadcast GALILEO ephemerides are defined in the same way [10].

GLONASS ephemerides are instead described in a different way: the navigation message already contains the ECEF position, velocity and acceleration of each satellite in the PZ-90 reference system, estimated every 30 minutes [5].

Each set of parameters can be applied in estimating the satellite's position 15 minutes before and after the time reference indicated in the navigation file.

A part of the GLONASS navigation file is shown in Figure 3.

The satellite position is estimated using 4th-order Runge-Kutta numerical integration of the equations of motion [7].
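A sketch of one such integration follows. Only the central-body term is modeled; the full GLONASS ICD force model also includes the J2 term and the broadcast luni-solar accelerations, and the state here is an illustrative circular orbit rather than real broadcast values.

```python
import math

GM = 3.986005e14  # Earth gravitational parameter [m^3/s^2]

def accel(state):
    # Central-body (two-body) acceleration only. The GLONASS ICD model also
    # includes the J2 term and the broadcast luni-solar accelerations.
    x, y, z, vx, vy, vz = state
    r = math.sqrt(x * x + y * y + z * z)
    k = -GM / r**3
    return (vx, vy, vz, k * x, k * y, k * z)

def rk4_step(state, h):
    # One classical 4th-order Runge-Kutta step of size h [s].
    def shift(s, d, f):
        return tuple(si + f * di for si, di in zip(s, d))
    k1 = accel(state)
    k2 = accel(shift(state, k1, h / 2))
    k3 = accel(shift(state, k2, h / 2))
    k4 = accel(shift(state, k3, h))
    return tuple(s + h / 6 * (p + 2 * q + 2 * u + v)
                 for s, p, q, u, v in zip(state, k1, k2, k3, k4))

# Illustrative circular GLONASS-like state: r ~ 25500 km, v = sqrt(GM/r).
r0 = 25500e3
v0 = math.sqrt(GM / r0)
state = (r0, 0.0, 0.0, 0.0, v0, 0.0)
for _ in range(15):            # propagate 15 minutes at 60 s steps
    state = rk4_step(state, 60.0)
```

For a circular orbit the radius and speed should be preserved by the integration, which makes a convenient sanity check on the stepper.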


**Figure 3.** Broadcast GLONASS ephemerides are contained in the RINEX navigation file for Satellite ID 4. Geocentric positions (XYZ) are highlighted in grey, the velocity in bold and the accelerations underlined. The clock parameters are indicated by boxes.

Broadcast ephemerides have about 1 meter accuracy, but more precision may be necessary for geodetic applications. Precise ephemerides can be used as an alternative to broadcast ephemerides. These ephemerides are calculated "a posteriori", and they have centimeter-level accuracy. The SP3 format [12] is used to describe these orbits; positions and satellite clock offsets are given for both the GPS and GLONASS constellations at a 15-minute rate.

An example is reported in Fig. 4 [8].
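Because SP3 positions come at a 15-minute rate, users typically interpolate between epochs, commonly with a Lagrange polynomial (often of higher order than shown here). A minimal sketch, with hypothetical SP3 X-coordinates:

```python
def lagrange_interp(ts, xs, t):
    # Evaluate the Lagrange polynomial through the points (ts, xs) at time t.
    total = 0.0
    for j, xj in enumerate(xs):
        w = 1.0
        for m, tm in enumerate(ts):
            if m != j:
                w *= (t - tm) / (ts[j] - tm)
        total += w * xj
    return total

# Hypothetical SP3 X-coordinates [km] at a 900 s (15 min) rate.
ts = [0.0, 900.0, 1800.0, 2700.0]
xs = [15600.123, 15782.456, 15944.789, 16087.012]
x_mid = lagrange_interp(ts, xs, 450.0)   # position between the first two epochs
```

The polynomial reproduces the tabulated positions exactly at the epochs themselves, which is a simple correctness check.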


**Figure 4.** Ephemerides in SP3 format: PG #sat., X, Y, Z ECEF position

#### **3. Range measurement using pseudo-range**

The measurement of the range between the satellite and the receiver is estimated using the pseudo-range, the code part of the signal, which is composed of square waves. These waves are generated by a PRN (pseudo-random noise) algorithm, which is periodically repeated over time [4].

In the GPS system, this component is composed of the pseudo-ranges C/A and P, when available. From Block IIR-M onwards, the new pseudo-range L2C is also available.

In the GLONASS system, there are two pseudo-ranges: ST (standard accuracy) and VT (usage agreed with the Russian Federation Defence Ministry) [5].

The theoretical measurement of the distance between the satellite and the receiver (the range) is based on the measurement of the "time of flight": the time elapsed between the transmission of the signal by the satellite and its receipt by the receiver.

The range measurement is realized by means of a cross-correlation procedure between two signals. When the signal is received by the receiver, the receiver generates an identical replica and moves it with respect to time. This operation is concluded when the maximum correlation is reached (Fig. 5).

In other words, the time of flight Δt is the shift that has to be applied to the replica of the signal in order to have correct alignment with the received signal.

The two signals, received and replica, are identical, but there is a misalignment caused by the travel time in space between the satellite and the receiver, as defined in equation 1.
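The replica-sliding search described above can be sketched with a toy ±1 code standing in for a real PRN sequence (the code length, delay, and chip values below are illustrative):

```python
import random

# A toy PRN-like code: 1023 chips of ±1, as a stand-in for a real C/A code.
random.seed(1)
N = 1023
code = [random.choice((-1, 1)) for _ in range(N)]

true_delay = 137                                     # travel delay in chips (illustrative)
received = code[-true_delay:] + code[:-true_delay]   # circularly shifted code

def correlate(lag):
    # Circular correlation of the local replica with the received signal.
    return sum(received[(k + lag) % N] * code[k] for k in range(N))

# Slide the replica over all lags and keep the one with maximum correlation.
best_lag = max(range(N), key=correlate)
```

The lag of maximum correlation recovers the applied delay; multiplied by the chip duration it gives the time of flight Δt of equation 1.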

**Figure 5.** Pseudo-range measurement and time scale

$$R\_i^j(t) = c\Delta t \tag{1}$$

where:


$R\_i^j$ is the measured range between satellite (j) and receiver (i);

c ≅ 3·10⁸ m/s is the speed of light in vacuum.

In this way, the measured distance is a "pseudo-range" because there is an offset between the satellite clock and the receiver clock.

We consider three different time reference scales (see Fig. 5):

**•** the atomic time scale (*ta*), which is the GPS time fundamental reference maintained by the clocks in the control centre;

**•** the satellite time scale (*tj*), which is defined by the atomic clocks housed in each satellite;

**•** the receiver time scale (*ti*), which is determined by the internal receiver clock (usually a quartz oscillator).
The satellite and receiver time scales are aligned with the fundamental scale (*ta*) when the offsets δ<sup>j</sup> and δ<sub>i</sub> are estimated. These offsets are time dependent and they have to be considered as biases in the measurement of the range.

If each time is referenced to the fundamental scale, the measured range will be:

$$\begin{aligned} R\_i^j(t) &= c \left[ \left( t^j - \delta^j(t) \right) - \left( t\_i - \delta\_i(t) \right) \right] = c \left[ \left( t^j - t\_i \right) - \left( \delta^j(t) - \delta\_i(t) \right) \right] = \\ &= \rho\_i^j - c \left( \delta^j(t) - \delta\_i(t) \right) \end{aligned} \tag{2}$$

where ρ is the geometric range between the two phase centres of the satellite and receiver antennas, described as follows.

$$\rho\_i^j(t) = \sqrt{\left(X^j - X\_i\right)^2 + \left(Y^j - Y\_i\right)^2 + \left(Z^j - Z\_i\right)^2} \tag{3}$$
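Equations (2) and (3) can be checked numerically; the ECEF coordinates and clock offsets below are made up for illustration only:

```python
import math

c = 299_792_458.0  # speed of light [m/s]

# Hypothetical ECEF coordinates [m] of a satellite (j) and a receiver (i).
sat = (15_600e3, 7_540e3, 20_140e3)
rcv = (1_917_032.0, 6_029_782.0, -801_376.0)

# Geometric range between the two phase centres, Equation (3).
rho = math.dist(sat, rcv)

# Clock offsets [s]: satellite (from the navigation message) and receiver.
delta_j = 3.2e-6
delta_i = -1.1e-4

# Pseudo-range, Equation (2): the geometric range biased by the clock offsets.
R = rho - c * (delta_j - delta_i)
```

Even small clock offsets translate into tens of kilometres of apparent range, which is why the receiver clock bias must be carried as an unknown in the solution.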

Stand-Alone Satellite-Based Global Positioning

http://dx.doi.org/10.5772/55611

The bias due to the satellite clock is modeled with a low order polynomial function (2nd degree for GPS and GALILEO and linear for GLONASS), where the coefficients are broadcast in the navigation message.

Fig. 2 and Fig. 3 give examples of the RINEX navigation files for the GPS and GLONASS constellations.

In the GPS case, the group delay (T<sub>GD</sub>) and the relativistic effect Δt<sub>r</sub> have to be considered in order to estimate the satellite clock offset, using the following equation:

$$\delta^j(t) = a\_0 + a\_1 \left(t - t\_{oc}\right) + a\_2 \left(t - t\_{oc}\right)^2 + \Delta t\_r - T\_{GD} \tag{4}$$

The group velocity is the velocity of signal propagation, and it differs from the phase velocity of each single component. T<sub>GD</sub> is broadcast in the navigation message.

The relativistic effect is due to the satellite's high speed along its orbit and to the Earth's mass and gravitational potential, whose influence must be considered in proximity to the Earth.

The element of relativistic correction is calculated according to GPS specification [6]:

$$
\Delta t\_r = F\, e \sqrt{a}\, \sin E\_K \tag{5}
$$

where a, e and E<sub>K</sub> are the semi-major axis, the orbit eccentricity and the eccentric anomaly of the satellite, respectively.

$$F = -\frac{2\sqrt{GM}}{c^2} \text{and it is defined in [6].}$$

For example, the relativistic effect for a GPS satellite, considering *a* = 26500 km, *e* = 10<sup>−2</sup>, sin *E<sub>K</sub>* = 1 and *F* = −4.442807633·10<sup>−10</sup> s/m<sup>1/2</sup>, is:

$$
\Delta t\_r = -4.442807633 \cdot 10^{-10} \cdot 10^{-2} \cdot \sqrt{26\,500\,000} \cdot 1 \approx -2 \cdot 10^{-8}\ \mathrm{s} \approx -20\ \mathrm{ns}
$$
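The worked example above can be reproduced numerically. A small sketch using the constants quoted in the text (the range bias at the end is just the correction multiplied by the speed of light):

```python
from math import sqrt

F = -4.442807633e-10   # s / m^(1/2), from the GPS specification [6]
a = 26_500_000.0       # semi-major axis [m]
e = 1e-2               # orbit eccentricity (dimensionless)
sin_Ek = 1.0           # sine of the eccentric anomaly

# Equation (5): relativistic clock correction, in seconds
dt_r = F * e * sqrt(a) * sin_Ek

# About -2e-8 s (~20 ns); ignoring it would bias the range by several metres
range_bias = 299_792_458.0 * dt_r
```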

Synchronization among all the atomic satellite clocks is assumed.

Even so, the satellite clock bias cannot be removed completely in stand-alone positioning: a residual error of about 1 ns (10<sup>−9</sup> s) remains, which corresponds to an error of about 30 cm in the range measured between the satellite and the receiver.

The receiver clocks are typically quartz oscillators, which have less long-term stability than atomic clocks. Assuming a synchronization error of approximately 1 ms (10<sup>−3</sup> s), the corresponding error in the satellite–receiver distance, given the speed of light, is 300 km.

The receiver clock offset is difficult to model therefore an additional unknown is considered at each epoch of measurement: the bias of the receiver clock δ<sup>i</sup> .

Separating unknown and known terms in Equation 2, the pseudo-range equation will be:

$$R\_i^j(t) + c\delta^j(t) = \rho\_i^j(t) - c\delta\_i(t) \tag{6}$$

where the unknowns are:

Global Navigation Satellite Systems - From Stellar to Satellite Navigation

**•** (X, Y, Z): position of the receiver;

**•** δ<sub>i</sub>: receiver clock offset.

The system can be solved only if a sufficient or redundant number of observation equations is available.

#### **4. Observation equations**

We consider two different types of positioning: static and kinematic.

In the first, the receiver is stationary over a point for several epochs, without changing its position. In the second, the receiver moves and its coordinates change at each epoch. In the kinematic case, four unknowns have to be solved for at each epoch: three for the position (X, Y, Z) and one for the receiver clock offset [9].

Equation (6) is linearized by considering approximate values of the receiver position, as:

$$X\_i = X\_i^{(0)} + \mathbf{x}\_i; \quad Y\_i = Y\_i^{(0)} + y\_i; \quad Z\_i = Z\_i^{(0)} + z\_i;\tag{7}$$

the effective position unknowns will be the corrections *xi , yi , zi* . The solution procedure is iterative and the final solution is independent of the approximate values.

If the approximate values of the position are not available, it is possible to set them equal to zero, solving the system with additional iterations.

In order to define the system, it is important to linearize the equation of the geometric range by means of a Taylor series, as follows:

$$\rho\_i^j = \rho\_i^{j(0)} + \left(\frac{\partial \rho\_i^j}{\partial X\_i}\right)\_{(0)} x\_i + \left(\frac{\partial \rho\_i^j}{\partial Y\_i}\right)\_{(0)} y\_i + \left(\frac{\partial \rho\_i^j}{\partial Z\_i}\right)\_{(0)} z\_i + \text{2nd-order terms} \tag{8}$$

The second-order terms are small and can be neglected; substituting the first derivatives, Equation (6) becomes:

$$\begin{split} R\_i^j(\mathbf{t}) + c\boldsymbol{\delta}^j(\mathbf{t}) &= \sqrt{\left(\mathbf{X}^j - \mathbf{X}\_i^{(0)}\right)^2 + \left(\mathbf{Y}^j - \mathbf{Y}\_i^{(0)}\right)^2 + \left(\mathbf{Z}^j - \mathbf{Z}\_i^{(0)}\right)^2} + \\ -\left(\frac{\mathbf{X}^j - \mathbf{X}\_i^{(0)}}{\rho\_i^{j(0)}}\right)\mathbf{x}\_i - \left(\frac{\mathbf{Y}^j - \mathbf{Y}\_i^{(0)}}{\rho\_i^{j(0)}}\right)\mathbf{y}\_i - \left(\frac{\mathbf{Z}^j - \mathbf{Z}\_i^{(0)}}{\rho\_i^{j(0)}}\right)\mathbf{z}\_i - c\boldsymbol{\delta}\_i(\mathbf{t}) \end{split} \tag{9}$$

which reduces to the Gauss model:

$$A\mathbf{x} - l\_o = \mathbf{v} \tag{10}$$


The least squares estimator is used to define the solution if the number of satellites is greater than four.

In the case of five satellites (their indices appear as superscripts in Equation (11)), the design matrix (A), the known-terms vector (l<sub>0</sub>) and the unknowns (x) can be written as:

$$\begin{bmatrix} -\frac{X^{(1)}-X\_i^{(0)}}{\rho\_i^{1(0)}} & -\frac{Y^{(1)}-Y\_i^{(0)}}{\rho\_i^{1(0)}} & -\frac{Z^{(1)}-Z\_i^{(0)}}{\rho\_i^{1(0)}} & -1 \\ \vdots & \vdots & \vdots & \vdots \\ -\frac{X^{(5)}-X\_i^{(0)}}{\rho\_i^{5(0)}} & -\frac{Y^{(5)}-Y\_i^{(0)}}{\rho\_i^{5(0)}} & -\frac{Z^{(5)}-Z\_i^{(0)}}{\rho\_i^{5(0)}} & -1 \end{bmatrix} \begin{pmatrix} x\_i \\ y\_i \\ z\_i \\ c\delta\_i \end{pmatrix} - \begin{pmatrix} R\_i^{(1)} + c\delta^{(1)} - \rho\_i^{1(0)} \\ \vdots \\ R\_i^{(5)} + c\delta^{(5)} - \rho\_i^{5(0)} \end{pmatrix} = \begin{pmatrix} v^{(1)} \\ \vdots \\ v^{(5)} \end{pmatrix} \tag{11}$$

It is convenient to express the offset of the receiver clock in metres, in order to avoid possible ill-conditioning of the system due to the speed of light *c*, whose magnitude dominates the other values. This is easily achieved by multiplying the offset by the speed of light.

This positioning model with pseudo-range measurement defines a single position at each epoch and is useful for kinematic procedures.

The static solution requires adding additional columns in the design matrix (A), in order to estimate the offset of the receiver clock at each epoch.

To consider 2 epochs of measurement in static session, Equation (11) has to be modified as follows:

$$\begin{bmatrix} -\frac{X^{(1)}(t\_1)-X\_i^{(0)}}{\rho\_i^{1(0)}(t\_1)} & -\frac{Y^{(1)}(t\_1)-Y\_i^{(0)}}{\rho\_i^{1(0)}(t\_1)} & -\frac{Z^{(1)}(t\_1)-Z\_i^{(0)}}{\rho\_i^{1(0)}(t\_1)} & -1 & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ -\frac{X^{(5)}(t\_1)-X\_i^{(0)}}{\rho\_i^{5(0)}(t\_1)} & -\frac{Y^{(5)}(t\_1)-Y\_i^{(0)}}{\rho\_i^{5(0)}(t\_1)} & -\frac{Z^{(5)}(t\_1)-Z\_i^{(0)}}{\rho\_i^{5(0)}(t\_1)} & -1 & 0 \\ -\frac{X^{(1)}(t\_2)-X\_i^{(0)}}{\rho\_i^{1(0)}(t\_2)} & -\frac{Y^{(1)}(t\_2)-Y\_i^{(0)}}{\rho\_i^{1(0)}(t\_2)} & -\frac{Z^{(1)}(t\_2)-Z\_i^{(0)}}{\rho\_i^{1(0)}(t\_2)} & 0 & -1 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ -\frac{X^{(5)}(t\_2)-X\_i^{(0)}}{\rho\_i^{5(0)}(t\_2)} & -\frac{Y^{(5)}(t\_2)-Y\_i^{(0)}}{\rho\_i^{5(0)}(t\_2)} & -\frac{Z^{(5)}(t\_2)-Z\_i^{(0)}}{\rho\_i^{5(0)}(t\_2)} & 0 & -1 \end{bmatrix} \begin{pmatrix} x\_i \\ y\_i \\ z\_i \\ c\delta\_i(t\_1) \\ c\delta\_i(t\_2) \end{pmatrix} - \begin{pmatrix} R\_i^{(1)}(t\_1) + c\delta^{(1)}(t\_1) - \rho\_i^{1(0)}(t\_1) \\ \vdots \\ R\_i^{(5)}(t\_1) + c\delta^{(5)}(t\_1) - \rho\_i^{5(0)}(t\_1) \\ R\_i^{(1)}(t\_2) + c\delta^{(1)}(t\_2) - \rho\_i^{1(0)}(t\_2) \\ \vdots \\ R\_i^{(5)}(t\_2) + c\delta^{(5)}(t\_2) - \rho\_i^{5(0)}(t\_2) \end{pmatrix} = \begin{pmatrix} v^{(1)}(t\_1) \\ \vdots \\ v^{(5)}(t\_1) \\ v^{(1)}(t\_2) \\ \vdots \\ v^{(5)}(t\_2) \end{pmatrix} \tag{12}$$

where the first five rows refer to epoch t<sub>1</sub> and the last five to epoch t<sub>2</sub>.

#### **5. Least Squares**

The least squares solution starts from the Gauss model described in (10), and is obtained by building and inverting the normal matrix N.

The estimated residuals *v̂* and the variance–covariance matrix of the estimated solution C<sub>xx</sub> can be calculated, considering:

$$
\hat{\mathbf{x}} = \mathbf{N}^{-1} T\_n \tag{13}
$$

where:

*N* = *A<sup>T</sup> PA* is called the normal matrix;

$$T\_n = A^\top P\, l\_0$$

Some consideration should be given to the weight matrix *P*. It is generally obtained by inverting the variance–covariance matrix of the observations *C<sub>LL</sub>*, multiplied by the variance of the unit of weight, which is defined "a priori".

$$P = \sigma\_0^2 \mathbb{C}\_{LL}^{-1} \tag{14}$$

Different methods can be used to define the matrix *P*:

**•** *P* equal to the identity matrix *I*, if the ranges are considered to have the same weight;

**•** *P* a diagonal matrix with different values, as in the following example.


$$P = \begin{bmatrix} p\_1 \\ & p\_2 & 0 \\ & & p\_3 \\ & 0 & & p\_4 \\ & & & p\_5 \end{bmatrix} \tag{15}$$


But what is a possible correct weight to use in *P*? There are different strategies for selecting the weight of each range:

**•** Starting from the **URA** (*User Range Accuracy*), which represents the maximum error of the range foreseen during the period of validity of the ephemerides; it is a statistical value of the accuracy for each satellite. **URA** is contained in the navigation message and is independent of the satellite's elevation or other environmental conditions. Using the GPS specification, the accuracy of each satellite can be estimated [13].

**•** Each weight can be considered depending on the satellite's elevation. For example, a function of the zenithal angle *z* can be used, where each satellite is weighted with the model *p* = cos<sup>2</sup>*z*. EUREF suggests an alternative method [11]:

$$p = \cos^2 z + a \sin^2 z \tag{16}$$

with a recommended value of a = 0.3. These weight functions are shown in Fig. 6.

**•** The signal-to-noise ratio (SNR), observed at each epoch for each range, could be used as the weight.

**Figure 6.** Weights dependence on zenithal angle
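The EUREF-style model of Equation (16) is easy to sketch as a weight function (a = 0.3 is the recommended value from the text):

```python
from math import cos, sin, radians

def range_weight(zen_deg, a=0.3):
    """Equation (16): elevation-dependent weight for a pseudo-range,
    as a function of the zenithal angle z (degrees)."""
    z = radians(zen_deg)
    return cos(z)**2 + a * sin(z)**2

w_zenith = range_weight(0.0)     # satellite at the zenith: weight 1.0
w_horizon = range_weight(90.0)   # satellite at the horizon: weight a = 0.3
```

Low-elevation satellites, whose ranges are noisier, are therefore down-weighted smoothly rather than excluded outright.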

Based on the estimated solution *x* ^, the estimated residuals *<sup>v</sup>* ^, and the apostriori variance of the unit weight, can be obtained:

$$
\hat{\boldsymbol{\sigma}} = A \hat{\boldsymbol{x}} - l\_0 \quad , \quad \hat{\sigma}\_0^2 = \frac{\hat{\boldsymbol{\sigma}}^T P \hat{\boldsymbol{\upsilon}}}{n - r} \tag{17}
$$

where:


*n* = number of equations; *r* = number of unknowns.

Finally, the variance–covariance matrix of the estimated solution C<sub>xx</sub> can be determined, considering the normal matrix and the a-posteriori variance of unit weight:

$$\mathbf{C}\_{xx} = \hat{\sigma}\_0^2 \mathbf{N}^{-1} = \begin{bmatrix} \sigma\_X^2 & & & \\ \sigma\_{XY} & \sigma\_Y^2 & & \\ \sigma\_{XZ} & \sigma\_{YZ} & \sigma\_Z^2 & \\ \sigma\_{Xt} & \sigma\_{Yt} & \sigma\_{Zt} & \sigma\_t^2 \end{bmatrix} \tag{18}$$

This matrix has the variances of the estimated parameters along the diagonal, and the covariances in the off-diagonal elements.

The matrix *N<sup>-1</sup>* is also called the "cofactor matrix"; it is independent of the measurements and depends only on the geometrical satellite–receiver configuration. We will show later how *N<sup>-1</sup>* is used to calculate the DOP.

#### **6. Measurement bias of the pseudo-range**

In order to consider the several range measurement biases, Equation (6) has to be modified as follows:

$$R\_i^j(t) + c\delta^j(t) = \rho\_i^j(t) - c\delta\_i(t) + T\_i^j(t) + I\_i^j(t) + E\_i^j(t) \tag{19}$$

where:

*T* = tropospheric delay;

*I* = ionospheric delay;

*E* = ephemerides error.

A short description will be given in the following.

Tropospheric delay is an error which arises only in the lower part of the atmosphere, up to 40 km from the Earth's surface.


This error depends on pressure, temperature and relative humidity. It can be estimated using a model, such as the one described in [3, 4]. In standard conditions (temperature = 273.16 K, pressure = 1013.25 mbar, e = 0), this error is equal to 2.3 m at the zenith (z = 0°), and it increases as the zenithal angle increases.

This is the main reason why it is better to avoid using satellites with low elevation (zenithal angle > 75°) in the positioning.
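A crude way to see why low elevations are avoided is to scale the 2.3 m zenith delay by a simple 1/cos z mapping function. This plain mapping is only an illustration, not the refined model of [3, 4]:

```python
from math import cos, radians

ZENITH_DELAY_M = 2.3   # metres, standard conditions (see text)

def tropo_slant_delay(zen_deg):
    """Tropospheric slant delay under a naive 1/cos(z) mapping."""
    return ZENITH_DELAY_M / cos(radians(zen_deg))

d_zenith = tropo_slant_delay(0.0)    # 2.3 m
d_low = tropo_slant_delay(75.0)      # ~8.9 m, almost four times the zenith value
```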

Ionospheric delay, on the other hand, depends on the electron content of the ionosphere, the layer of the atmosphere between 40 km and 1000 km above the Earth's surface; it changes with the Sun's activity. This delay is dispersive, that is, it depends on the signal frequency. Its value is variable, but it is normally greater than the tropospheric delay. Dual-frequency receivers can use the dispersive nature of the ionosphere to completely remove the delay.

Ephemeris errors depend on the satellite position: broadcast ephemerides have metre-level accuracy, while precise or rapid predicted products have centimetre-level accuracy. These products are available on the IGS (International GNSS Service) website.

These errors are spatially correlated; therefore they have similar effects on two receivers in close proximity. These biases can be eliminated or reduced using relative positioning, but not in the stand-alone positioning.

Other errors, such as phase centre variation, multipath and hardware delays, are less significant in this context and are not considered.

#### **7. Relative motion in the stand-alone positioning**

The position, estimated as described above, does not consider certain important effects which happen during the time of the propagation of the signal from the satellite to the receiver (about 67 ms).

The following effects occur during this period:

**•** the satellite position changes by about 250 m;

**•** the Earth rotates about its spin axis, by about 30 m towards the East at the equator.


The estimation of the receiver position requires an additional iteration where the satellite position is modified, taking into account the propagation time τ.

$$t^{j\prime} = t^{j} - \tau\_i^j \qquad \tau\_i^j = \frac{\rho\_i^j}{c} \tag{20}$$

*ρi j* can be estimated using the satellite coordinates and the first approximate position of the receiver. The effect of the Earth rotation is considered, applying the velocity of terrestrial rotation *ωe* to the coordinates *X* and *Y*, as:

$$
\begin{pmatrix} X\_i^j \\ Y\_i^j \\ Z\_i^j \end{pmatrix} = \begin{bmatrix} \cos(\omega\_e \tau\_i^j) & -\sin(\omega\_e \tau\_i^j) & 0 \\ \sin(\omega\_e \tau\_i^j) & \cos(\omega\_e \tau\_i^j) & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{pmatrix} X^j \\ Y^j \\ Z^j \end{pmatrix} \tag{21}
$$
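Equations (20)–(21) translate into a short correction routine. A sketch: ω<sub>e</sub> is the WGS84 Earth rotation rate (an assumed constant, not quoted in the text), and the rotation follows the sign convention of Equation (21):

```python
import numpy as np

C = 299_792_458.0           # speed of light [m/s]
OMEGA_E = 7.2921151467e-5   # WGS84 Earth rotation rate [rad/s] (assumed value)

def earth_rotation_correction(sat_xyz, rcv_xyz):
    """Equations (20)-(21): estimate the propagation time tau from the
    approximate range, then rotate the satellite ECEF position about the
    Z axis by omega_e * tau."""
    sat_xyz = np.asarray(sat_xyz, float)
    tau = np.linalg.norm(sat_xyz - rcv_xyz) / C
    ang = OMEGA_E * tau
    R = np.array([[ np.cos(ang), -np.sin(ang), 0.0],
                  [ np.sin(ang),  np.cos(ang), 0.0],
                  [ 0.0,          0.0,         1.0]])
    return R @ sat_xyz

sat = np.array([26_000_000.0, 0.0, 0.0])
rcv = np.array([6_378_000.0, 0.0, 0.0])
corrected = earth_rotation_correction(sat, rcv)
shift = np.linalg.norm(corrected - sat)   # displacement on the order of 100 m
```

For a GPS-like range the rotation angle is a few microradians, so the satellite position shifts by roughly a hundred metres, consistent with the magnitudes quoted above.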

In conclusion, this position is determined by also taking into account both the time propagation and the Earth's rotation, leading to a more precise receiver position. A further iteration does not give more significant benefit in terms of accuracy.

#### **8. DOP and satellite visibility**


Sometimes it could be useful to define the ECEF coordinates with respect to a local plane, which is tangential to the ellipsoid at a defined point. This local system allows the separation of the horizontal component from the vertical, where the GPS is less precise. If this system is used, the variance–covariance matrix has also to be rotated.

Therefore, it is possible to define a local plane with its origin at P<sub>i</sub>, with geographical coordinates φ and λ, considering a Eulerian Cartesian tern, defined as follows:

**•** the *n*-axis coincides with the tangent to the meridian, directed north;

**•** the *u*-axis coincides with the normal to the ellipsoid passing through P<sub>i</sub>;

**•** the *e*-axis completes the clockwise tern.


**Figure 7.** ECEF coordinates (X Y Z) and Eulerian system (e n u)

The P coordinates can be expressed in the ECEF system as:

$$\begin{aligned} X &= \left(N + h\right)\cos\varphi\cos\lambda \\ Y &= \left(N + h\right)\cos\varphi\sin\lambda \\ Z &= \left[N\left(1 - e^2\right) + h\right]\sin\varphi \end{aligned} \qquad N = \frac{a}{\sqrt{1 - e^2\sin^2\varphi}} \tag{22}$$
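Equation (22) in code form, using the ellipsoid constants quoted in Appendix A:

```python
from math import sin, cos, radians, sqrt

A_ELL = 6378137.0           # WGS84 semi-major axis [m]
E2_ELL = 0.006694379990     # first eccentricity squared (value used in the text)

def geodetic_to_ecef(phi_deg, lam_deg, h):
    """Equation (22): geographic (phi, lambda, h) to ECEF (X, Y, Z), metres."""
    phi, lam = radians(phi_deg), radians(lam_deg)
    N = A_ELL / sqrt(1.0 - E2_ELL * sin(phi)**2)   # prime-vertical radius
    X = (N + h) * cos(phi) * cos(lam)
    Y = (N + h) * cos(phi) * sin(lam)
    Z = (N * (1.0 - E2_ELL) + h) * sin(phi)
    return X, Y, Z

# Point P of Appendix A: phi = 45 deg 3' 48", lambda = 7 deg 39' 41", h = 0
X, Y, Z = geodetic_to_ecef(45 + 3/60 + 48/3600, 7 + 39/60 + 41/3600, 0.0)
```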


The coordinates of the receiver and of satellite k can be roto-translated into a Eulerian tern, using the following [1, 2]:

$$\begin{pmatrix} e\_i^k \\ n\_i^k \\ u\_i^k \end{pmatrix} = \mathbb{R}\_x^u \begin{pmatrix} X^k - X\_i \\ Y^k - Y\_i \\ Z^k - Z\_i \end{pmatrix} \quad \text{with} \tag{23}$$

$$R\_x^u = \begin{bmatrix} -\sin\lambda & \cos\lambda & 0 \\ -\sin\varphi\cos\lambda & -\sin\varphi\sin\lambda & \cos\varphi \\ \cos\varphi\cos\lambda & \cos\varphi\sin\lambda & \sin\varphi \end{bmatrix} \tag{24}$$

The transformed variance–covariance matrix (*C<sub>uu</sub>*), described in the local Eulerian system, is determined considering the variance propagation law:

$$C\_{uu} = R\_x^u C\_{XX} R\_x^{uT} \tag{25}$$

The number of satellites is not the only factor in deciding the best time to undertake field measurements. The geometry of the constellation is an important factor too, and it is quantified by the DOP factors. The DOP depends on the diagonal elements of the cofactor matrix, and it can be decomposed into the following components:

$$\begin{aligned} GDOP &= \sqrt{\sigma\_e^2 + \sigma\_n^2 + \sigma\_u^2 + \sigma\_t^2} \\ PDOP &= \sqrt{\sigma\_e^2 + \sigma\_n^2 + \sigma\_u^2} \\ HDOP &= \sqrt{\sigma\_e^2 + \sigma\_n^2} \\ VDOP &= \sqrt{\sigma\_u^2} \\ TDOP &= \sqrt{\sigma\_t^2} \end{aligned} \tag{26}$$

DOPs can be computed at the design stage: it is not necessary to take any measurements, because the cofactor matrix can be obtained "a priori".

This operation, called "planning", only requires knowing the receiver position and the almanac of the ephemerides used to determine the satellite positions. DOPs are like an instantaneous picture of the constellation. High values (for example, GDOP > 6) are inadvisable for reaching a good precision.

Azimuth *A*, elevation *E* and distance *d* of the different satellites can also be estimated in order to plan the measurement session.
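The planning idea — DOPs without measurements — fits in a few lines. A sketch assuming unit weights (P = I), a design matrix built row-by-row as in Equation (11), and the ECEF→ENU rotation of Equation (24); the toy geometry at the end is hypothetical:

```python
import numpy as np

def dop_factors(A, R_enu=np.eye(3)):
    """Equations (25)-(26): DOP factors from the cofactor matrix (A^T A)^-1,
    with the position block rotated into the local (e, n, u) frame."""
    Q = np.linalg.inv(A.T @ A)        # cofactor matrix N^-1, no data needed
    T = np.eye(4)
    T[:3, :3] = R_enu                 # rotate the X, Y, Z block to e, n, u
    Q = T @ Q @ T.T
    qe, qn, qu, qt = np.diag(Q)
    return {"GDOP": np.sqrt(qe + qn + qu + qt),
            "PDOP": np.sqrt(qe + qn + qu),
            "HDOP": np.sqrt(qe + qn),
            "VDOP": np.sqrt(qu),
            "TDOP": np.sqrt(qt)}

# Toy design matrix: five unit line-of-sight vectors plus the clock column
A = np.array([[-1.0,  0.0,  0.0, -1.0],
              [ 1.0,  0.0,  0.0, -1.0],
              [ 0.0, -1.0,  0.0, -1.0],
              [ 0.0,  1.0,  0.0, -1.0],
              [ 0.0,  0.0, -1.0, -1.0]])
d = dop_factors(A)
```

Note that GDOP² = PDOP² + TDOP² by construction, since the components simply partition the diagonal of the cofactor matrix.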

An example is given in Appendix A to illustrate the various aspects discussed in this chapter.
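As a minimal sketch of (24)-(26), assuming plain-Python linear algebra and our own helper names (not from this chapter), the DOP factors can be computed from a 4×4 ECEF cofactor matrix; the numbers below are those of the cofactor matrix of Appendix A:

```python
import math

def rotation_enu(phi, lam):
    """Rotation matrix of Eq. (24); phi, lam in radians."""
    sp, cp = math.sin(phi), math.cos(phi)
    sl, cl = math.sin(lam), math.cos(lam)
    return [[-sl, cl, 0.0],
            [-sp * cl, -sp * sl, cp],
            [cp * cl, cp * sl, sp]]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def dops(qxx, phi, lam):
    """DOP factors of Eq. (26) from a 4x4 ECEF cofactor matrix
    (3 position components + 1 clock component)."""
    r = rotation_enu(phi, lam)
    rt = [list(row) for row in zip(*r)]
    q_pos = [row[:3] for row in qxx[:3]]      # position block only,
    quu = mat_mul(mat_mul(r, q_pos), rt)      # rotated as in Eq. (25)
    qe, qn, qu = quu[0][0], quu[1][1], quu[2][2]
    qt = qxx[3][3]                            # the time element is not rotated
    return {"GDOP": math.sqrt(qe + qn + qu + qt),
            "PDOP": math.sqrt(qe + qn + qu),
            "HDOP": math.sqrt(qe + qn),
            "VDOP": math.sqrt(qu),
            "TDOP": math.sqrt(qt)}

# Cofactor matrix of Appendix A at P (phi = 45 deg 3' 48", lam = 7 deg 39' 41")
qxx = [[0.6861, 0.0092, 0.4168, -0.3953],
       [0.0092, 0.2541, 0.0463, -0.0149],
       [0.4168, 0.0463, 0.7464, -0.3999],
       [-0.3953, -0.0149, -0.3999, 0.3705]]
d = dops(qxx,
         math.radians(45 + 3 / 60 + 48 / 3600),
         math.radians(7 + 39 / 60 + 41 / 3600))
# d["PDOP"] ≈ 1.299 and d["GDOP"] ≈ 1.434: the rotation preserves the trace,
# so these two factors do not depend on phi and lam at all.
```

Note that GDOP, PDOP and TDOP are invariant under the rotation (24), while HDOP and VDOP do depend on the latitude and longitude of P.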

#### **Appendix A**

#### **DOP and satellite visibility**

Estimate the GDOP, PDOP, HDOP, VDOP and TDOP factors at a point P with coordinates φ = 45°3'48", λ = 7°39'41", h = 0 m, referenced to the WGS84 ellipsoid (a = 6378137 m, e² = 0.006694379990), on 2012/04/06 at 8:53:59 am. Moreover, calculate the "satellite visibility" at P. The ECEF coordinates of the visible satellites are known [8]:

**Table 2.** Satellite's coordinates in ECEF

In the following, the solution is provided.

The geographical coordinates are transformed to a DEG system:

*φ* = 45.06333333° *λ* = 7.66138889°

from which the geocentric coordinates of P are obtained, using the following [1, 2]:

$$N = \frac{a}{\sqrt{1 - e^2 \sin^2\varphi}} = 6388862.01\ m$$

$$X = (N + h)\cos\varphi\cos\lambda = 4472328.363\ m$$

$$Y = (N + h)\cos\varphi\sin\lambda = 601613.841\ m$$

$$Z = \left[ N\left(1 - e^2\right) + h \right]\sin\varphi = 4492322.547\ m$$

P will be the origin of the Eulerian local system. The A matrix is calculated in the ECEF system considering (11), with the following values:

**Table 3.** Elements of A matrix

From this, the cofactor matrix of (13) and (18) is obtained using a weight matrix equal to the identity (P = I), Qxx = N⁻¹ = (AᵀPA)⁻¹:

$$Q_{xx} = \begin{pmatrix} 0.6861 & 0.0092 & 0.4168 & -0.3953 \\ 0.0092 & 0.2541 & 0.0463 & -0.0149 \\ 0.4168 & 0.0463 & 0.7464 & -0.3999 \\ -0.3953 & -0.0149 & -0.3999 & 0.3705 \end{pmatrix}$$

**Table 4.** Elements of cofactor matrix

The rotation matrix in P is defined by (24) and is equal to:

$$R_x^u = \begin{pmatrix} -0.1333 & 0.9911 & 0.0000 \\ -0.7016 & -0.0944 & 0.7063 \\ 0.7000 & 0.0942 & 0.7079 \end{pmatrix}$$

**Table 5.** Elements of rotation matrix

The cofactor matrix is now calculated in a Eulerian coordinate system considering (24), applying the rotation only to the position components (the 3×3 upper-left block of Qxx) and not to the time element:

$$Q_{uu} = R_x^u Q_{xx} R_x^{uT} = \begin{pmatrix} 0.2594 & 0.0273 & -0.0409 \\ 0.0273 & 0.2943 & 0.0319 \\ -0.0409 & 0.0319 & 1.1329 \end{pmatrix}$$

**Table 6.** Elements of cofactor matrix in Eulerian coordinate system

and using (26), the DOP factors can be determined:

GDOP = 1.434 PDOP = 1.299 HDOP = 0.744 VDOP = 1.064 TDOP = 0.609

Before estimating the azimuths, elevations and distances, it is necessary to estimate the satellite positions in the Eulerian system. The satellite visibility can then be estimated with the elevation, azimuth and distance relations

$$E = \operatorname{atan}\frac{u^j}{\sqrt{e^{j\,2} + n^{j\,2}}} \qquad A = \operatorname{atan}\frac{e^j}{n^j} \qquad d = \sqrt{e^{j\,2} + n^{j\,2} + u^{j\,2}}$$

The final results are as follows:

| e (m) | n (m) | u (m) | ele (°) | azi (°) | dist (m) |
|---|---|---|---|---|---|
| 10775718.505 | -18885463.599 | 8885172.533 | 22.227 | 150.292 | 23488787.44 |
| -17286050.680 | 18122569.546 | 3109852.920 | 7.078 | 316.353 | 25237001.81 |
| -13750650.216 | 9615363.143 | 13993256.686 | 39.827 | 304.964 | 21848268.18 |
| 1839308.792 | -24727521.634 | 3405994.949 | 7.821 | 175.746 | 25028667.26 |
| -20513313.116 | 4946360.859 | 9273705.868 | 23.725 | 283.557 | 23049167.47 |
| -7040025.788 | -8627144.285 | 17719096.273 | 57.854 | 219.216 | 20927397.22 |
| -17783003.382 | -11271791.398 | 9631674.716 | 24.582 | 237.631 | 23152918.79 |
| 11776928.073 | 1175359.814 | 17314805.970 | 55.646 | 84.301 | 20973316.65 |
| 1669976.573 | -84099.847 | 20262888.169 | 85.283 | 92.883 | 20331761.64 |
| 13453888.242 | 20207851.250 | 4844990.565 | 11.286 | 33.655 | 24755570.98 |
| 20446124.731 | 1550262.123 | 11006751.692 | 28.226 | 85.664 | 23272213.30 |

**Table 7.** Satellite visibility
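The computations of this appendix can be reproduced with a short Python sketch (function and variable names are ours, not from the chapter); it converts P to ECEF and evaluates the visibility of the first satellite of Table 7:

```python
import math

A = 6378137.0            # WGS84 semi-major axis (m)
E2 = 0.006694379990      # WGS84 first eccentricity squared

def geodetic_to_ecef(phi_deg, lam_deg, h):
    """Geographic coordinates (degrees, metres) to ECEF X, Y, Z (metres)."""
    phi, lam = math.radians(phi_deg), math.radians(lam_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(phi) ** 2)
    x = (n + h) * math.cos(phi) * math.cos(lam)
    y = (n + h) * math.cos(phi) * math.sin(lam)
    z = (n * (1.0 - E2) + h) * math.sin(phi)
    return x, y, z

def visibility(e, n, u):
    """Elevation and azimuth (degrees) and distance (m) from Eulerian e, n, u."""
    ele = math.degrees(math.atan2(u, math.hypot(e, n)))
    azi = math.degrees(math.atan2(e, n)) % 360.0   # clockwise from north
    dist = math.sqrt(e * e + n * n + u * u)
    return ele, azi, dist

# Point P: phi = 45 deg 3' 48", lambda = 7 deg 39' 41", h = 0
x, y, z = geodetic_to_ecef(45 + 3 / 60 + 48 / 3600, 7 + 39 / 60 + 41 / 3600, 0.0)
# x, y, z ≈ 4472328.4, 601613.8, 4492322.5 m, as computed above

# First satellite of Table 7, already expressed in the Eulerian system
ele, azi, dist = visibility(10775718.505, -18885463.599, 8885172.533)
# ele ≈ 22.227 deg, azi ≈ 150.292 deg, dist ≈ 23488787 m
```

`atan2` is used instead of a plain `atan` so that the azimuth falls in the correct quadrant without manual case analysis.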

#### **Nomenclature**

DOP = Dilution of Precision

ECEF = Earth Centred Earth Fixed

GLONASS = Globalnaya Navigatsionnaya Sputnikovaya Sistema

GNSS = Global Navigation Satellite Systems

GPS = Global Positioning System

RINEX = Receiver Independent Exchange Format

URA = User Range Accuracy

#### **Author details**

Alberto Cina and Marco Piras\*

\*Address all correspondence to: marco.piras@polito.it

Politecnico di Torino, DIATI - Department of Environment, Land and Infrastructure Engineering, Turin, Italy

Stand-Alone Satellite-Based Global Positioning
http://dx.doi.org/10.5772/55611

#### **References**

[1] Borre, K. & Strang, G. Linear Algebra, Geodesy and GPS. Wellesley (USA): Wellesley-Cambridge Press, 1997.

[2] Cina, A. GPS. Principi, modalità e tecniche di posizionamento. Torino: CELID, 2000.

[3] Hofmann-Wellenhof, B., Lichtenegger, H. & Wasle, E. GNSS – Global Navigation Satellite Systems: GPS, GLONASS, Galileo & More. Austria: Springer Wien NewYork, 2008.

[4] Leick, A. GPS Satellite Surveying. Wiley (3rd edition), 2003.

[5] Global Navigation Satellite System GLONASS Interface Control Document. Navigation Radiosignal in Band L1, L2 (edition 5.1). Moscow, 2008.

[6] GPS website: http://www.gps.gov/technical/ps/2008-SPS-performance-standard.pdf (accessed 25/08/2012).

[7] Bern university: http://gaussgge.unb.ca/GLONASS.ICD.pdf (accessed 25/08/2012).

[8] International GNSS service: http://igscb.jpl.nasa.gov/components/prods_cb.html (accessed 25/08/2012).

[9] Blewitt, Geoffrey. "Basics of the GPS Technique: Observation Equations". http://www.nbmg.unr.edu/staff/pdfs/Blewitt%20Basics%20of%20GPS.pdf (accessed 31/7/2012).

[10] European Union (2010): "European GNSS (Galileo) Open Service Signal In Space Interface Control Document". http://ec.europa.eu/enterprise/policies/satnav/galileo/open-service/index_en.htm (accessed 31/7/2012).

[11] European reference system: http://www.euref.eu/symposia/book2003/P-05-Kaniuth.pdf (accessed 25/08/2012).

[12] International GNSS service: Remondi, B. & Spofford, P. "The National Geodetic Survey Standard GPS Format SP3". http://igscb.jpl.nasa.gov/igscb/data/format/sp3_docu.txt (accessed 31/7/2012).

[13] GPS website: Global Positioning System Directorate System Engineering & Integration Interface Specification IS-GPS-200, 21 Sept. 2011. http://www.gps.gov/technical/icwg/#is-gps-200 (accessed 31/7/2012).
**Chapter 4**

### **Analysis of Ambiguity False Fixing within a GBAS1**

Paolo Dabove\* and Ambrogio Manzino\*

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/55872

#### **1. Introduction**

Since the first appearance of satellite positioning systems, GNSS positioning has become a standard and common practice. It is available in most parts of the territory of many nations. So, it is necessary to focus attention not on the feasibility of the positioning itself but on its quality control. In particular, we focus attention on the NRTK (Network Real Time Kinematic) positioning of a geodetic receiver into a network of permanent stations, the CORS (Continuous Operating Reference Stations) network. The goal of the survey is to obtain centimetre accuracy, which is usually achieved after a correct fixing of the phase ambiguity. For various reasons, however, it may be possible that the fixing of the integer ambiguities in the receiver is unreliable. Although the number of false fixes of the ambiguity is limited, it is necessary to minimize these events, or to know some available control parameters needed to alert the user when this eventuality is likely.

The causes of a false fix are mainly a result of four factors:

**1.** inaccurate differential corrections provided by the network;

**2.** unreliable data transmission or high latencies;

**3.** disturbed or partially occluded measurement site;

**4.** electronic or algorithmic problems in the receiver.

Not all of these parameters are controllable or modifiable; some of them can be controlled by the network manager, while others can be controlled by the rover receiver user.

For these purposes, numerous NRTK experiments were performed over many days, using some receivers connected to an antenna settled on an undisturbed site, equidistant from the CORS stations. Note that the position of this station is considered a 'false fix' (hereinafter also called FF) for all the real-time positions of the rover receiver greater than a tolerance threshold. Later, we analyse the time series of some parameters available in real time (both on the user and network server side) during the NRTK positioning, in order to predict with a significantly high probability what the "most probable" situation for a false fix would be.

<sup>1</sup> GBAS is an acronym for Ground Based Augmentation System. For more details see [9].

© 2013 Dabove and Manzino; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The analysis provides an estimator of the validity of the ambiguity fixing, useful for giving intuitive and synthetic information to the user during the survey, similar to a "traffic light", depending on the level of reliability of the solution. Similarly, it is possible to imagine the same architecture transferred to the inverse logic: if the rover receiver transmits its raw data to the network, it is possible to understand more easily whether the positions obtained are a false fix or not, and it is therefore possible to notify the user in real time during the survey. In any case, it is then possible to verify whether the false fixes are a result of some quality parameters (and in what percentage) depending on the network in real time, or whether they can be found in the minimal information that the receiver is able to transmit to the network, contained primarily in the NMEA (*National Marine Electronics Association*) messages.

#### **2. Network positioning concepts**

Carrier-phase differential positioning has undergone an enormous evolution since 1992, owing to the phase ambiguity fixing method named 'On the Fly' Ambiguity Resolution (for example, see [7]). Using this technique, cycle slip recovery, also for moving receivers, was no longer problematic, but positioning problems when the distance between master and rover exceeds 10–15 km had not yet been resolved. For this reason, at the end of the 1990s, the Network Real Time Kinematic (NRTK) or the Ground-Based Augmentation System (GBAS) was designed, as in [4].

First, to understand the network positioning concept, it is necessary to keep in mind some concepts about differential positioning. To do this, it is possible to write the carrier phase equation in units of length:

$$\phi_k^p(i) = \rho_k^p - cdT_k + cdt^p - \alpha_i I_k^p + T_k^p + Mi_k^p + E_k^p + \lambda_i Ni_k^p + \varepsilon_k^p \tag{1}$$

In this equation, $\phi_k^p(i)$ represents the carrier-phase measurement in units of length between the satellite p and the receiver k on the i-th frequency. On the right-hand side, in addition to the geometric range $\rho_k^p$, it is possible to find the biases related to the receiver and satellite clocks multiplied by the speed of light ($cdT_k$ and $cdt^p$), the ionospheric propagation delay $\alpha_i I_k^p$ (with a known coefficient $\alpha_i = f_1^2 / f_i^2$ that depends on the i-th frequency), the tropospheric propagation delay $T_k^p$, the multipath error $Mi_k^p$, the ephemeris error $E_k^p$, the carrier-phase ambiguity multiplied by the wavelength, $\lambda_i Ni_k^p$, and, finally, the random errors $\varepsilon_k^p$. Single differences can be written considering two receivers (h and k). Neglecting the multipath error, which depends only on the rover site and therefore cannot be easily modelled, it is possible to write:


$$\phi_{hk}^p(i) = \phi_h^p(i) - \phi_k^p(i) = \rho_{hk}^p + \lambda_i Ni_{hk}^p - cdT_{hk} - \alpha_i I_{hk}^p + T_{hk}^p + E_{hk}^p + \varepsilon_{hk}^p \tag{2}$$
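As a small numeric aside, the coefficient $\alpha_i$ defined above can be evaluated for the two GPS carriers; the frequency values below are the nominal L1/L2 carriers, not taken from this chapter:

```python
F_L1 = 1575.42e6  # GPS L1 carrier frequency (Hz)
F_L2 = 1227.60e6  # GPS L2 carrier frequency (Hz)

def alpha(f_i, f_1=F_L1):
    """Ionospheric coefficient alpha_i = f_1^2 / f_i^2 of Eq. (1)."""
    return f_1 ** 2 / f_i ** 2

a1, a2 = alpha(F_L1), alpha(F_L2)
# a1 = 1.0 on L1; a2 ≈ 1.65 on L2: the ionospheric delay is larger on L2,
# which is what makes the dual-frequency (iono-free) combination possible.
```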

Focusing on the orbit error $E_{hk}^p$, it is possible to affirm that this term can be neglected for sub-regional networks using the IGS ultra-rapid products [10], while it can also be modelled, as happens in the GNSMART™ software [11].

After that, double differences equations can be written considering two receivers (h and k) and two satellites (p and q). By subtracting the single difference calculated for the satellite q from the one calculated for the satellite p, it is possible to obtain the double differences equation, neglecting the random errors contribution:

$$\phi_{hk}^{pq}(i) = \phi_{hk}^p(i) - \phi_{hk}^q(i) = \rho_{hk}^{pq} + \lambda_i Ni_{hk}^{pq} - \alpha_i I_{hk}^{pq} + T_{hk}^{pq} + E_{hk}^{pq} \tag{3}$$
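The cancellations behind single and double differencing can be illustrated with a toy numerical example (ranges and clock offsets are invented for the demonstration; atmospheric and ambiguity terms are omitted):

```python
C = 299792458.0  # speed of light (m/s)

def phase(rho, dT_rcv, dt_sat):
    """Toy carrier phase: geometric range plus the two clock terms of Eq. (1)."""
    return rho - C * dT_rcv + C * dt_sat

# Invented geometric ranges (m) for receivers h, k and satellites p, q
rho = {('h', 'p'): 20100e3, ('h', 'q'): 21300e3,
       ('k', 'p'): 20150e3, ('k', 'q'): 21240e3}
dT = {'h': 1e-4, 'k': -2e-4}   # receiver clock errors (s)
dt = {'p': 3e-5, 'q': -1e-5}   # satellite clock errors (s)

obs = {(r, s): phase(rho[(r, s)], dT[r], dt[s]) for (r, s) in rho}

# Single difference between receivers: the satellite clock cancels
sd_p = obs[('h', 'p')] - obs[('k', 'p')]
sd_q = obs[('h', 'q')] - obs[('k', 'q')]

# Double difference: the receiver clock cancels too
dd = sd_p - sd_q
geom = (rho[('h', 'p')] - rho[('k', 'p')]) - (rho[('h', 'q')] - rho[('k', 'q')])
# dd equals geom: only the geometric term survives the double difference
```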

When the distance between the two receivers is less than 10 km, the atmospheric propagation delays and the ephemeris errors are irrelevant, allowing a centimetric level of accuracy. Over this distance, these errors increase and cannot be neglected. However, these errors are spatially correlated and can be spatially modelled (as shown in [5]). To be able to predict and use these biases in real time, three conditions must be satisfied: knowledge of the master positions with centimetric accuracy; a control centre able to process data from all the stations in real time; and continuous carrier-phase ambiguity fixing, even when inter-station distances reach 80–100 km (in the control centre network software, this last condition is called "common ambiguity level fixing"). This concept is equivalent to bringing to the left-hand side of (3), among the known terms, the first two terms on the right-hand side, i.e.:

$$\phi_{hk}^{pq}(i) - \rho_{hk}^{pq} - \lambda_i Ni_{hk}^{pq} = \alpha_i I_{hk}^{pq} + T_{hk}^{pq} + E_{hk}^{pq} \tag{4}$$

If the software uses the first differences, or undifferenced phase equations, in the first case it is essential to calculate and model the clock errors of the receivers and, in the second case, also those of the satellites, in addition to a series of secondary biases that are usually not considered in differencing techniques. After the phase ambiguity fixing for all the network stations, it is possible to use the iono-free and geometry-free combinations to separate the ionospheric delay from the geometric one. In this way, having these biases for all the stations, it is possible to model the residual ionospheric and tropospheric biases and the ephemeris error, not only between stations h and k, but also among all the reference stations of the network. When these errors are modelled spatially, they can be interpolated and broadcast to any rover receiver.

The differential corrections will no longer be broadcast by a single master station, but by the network centre. It is not always necessary for the network software to send the interpolated biases to the rover receiver. Once the dispersive ($\alpha_i I_{hk}^{pq}$) and non-dispersive ($T_{hk}^{pq} + E_{hk}^{pq}$) biases for each station and each satellite have been estimated, it is possible to follow three paths:

threshold (the calibration on this tool depending on the type of receiver available) the effective presence, or the possibility of a false fixing in the position. The input data of this instrument (hereinafter referred to as the "FF Estimator") will be the real-time data available from the NMEA message, extractable from the rover receiver (number of satellites in view, Diluition Of Precision – DOP - index, delay of the differential correction), and those which may be obtain‐ able by the network software (typically, the tropospheric and ionospheric delay, the number

Analysis of Ambiguity False Fixing within a GBAS

http://dx.doi.org/10.5772/55872

91

The FF estimator will be composed of a neural network, trained a priori with some datasets, and will have, as a single output, the probability that the current fixing is a false fixing of the ambiguity of phase. In this regard, it will be necessary to identify, among all available data in real time, those most sensitive both to the deterioration of the accuracy and the presence of the false fixing. These parameters will be those used to calibrate ("train") the neural network. Downstream of the estimator, a representative index is provided as an output of the algorithm (shown schematically in Figure 1 by a traffic light), of the quality of the fixing, possibly allowing the user to re-initialize the measurement session. The change over time of these probabilities is useful to forecast an incorrect positioning (not always considered as a false fix) before it

Artificial neural networks are informatic tools that simulate the connections of neurons in the human brain, and are used to make decisions, approximate a function, or classify data. In our case, the function or the decision is to assess the probability of a false fixing, and provide as output an indicator simulating the behaviour of a traffic light, allowing the user to have visual and intuitive information about the quality of the coordinates of the NRTK survey, available

Neural networks are made up of *n* nodes, called artificial neurons, connected to each other, which, in turn, are ordered in layers. There is one input layer consisting of a limited number of neurons, one output layer (in our case constituted by a single neuron that represents the

of satellites fixed by the network, the network quality fixing).

**Figure 1.** Algorithm procedure

actually occurs.

in real time.

**4. Notes on neural network**


biases to the rover receiver. Once the dispersive ( *α<sup>i</sup> I<sup>hk</sup><sub>pq</sub>* ) and non-dispersive ( *T<sup>hk</sup><sub>pq</sub>* + *E<sup>hk</sup><sub>pq</sub>* ) biases for each station and each satellite have been estimated, it is possible to follow three paths:

**•** to transmit data from one network station (called the master station), together with the first differences of a few stations, called "auxiliary" stations, close to the rover receiver (Master Auxiliary Concepts - MAC mode);

**•** to model these biases within the area of the network and transmit the parameters of this model (Flächen Korrektur Parameter - FKP mode);

**•** to interpolate these data close to the rover receiver and build some synthetic data for this receiver as if they were generated by a station very close to the receiver (Virtual Reference Station - VRS® mode).

In MAC mode, the receiver's task is to use the data provided by a master station and the first differences of some auxiliary stations. It can then perform a multi-base positioning, and the biases can be interpolated as necessary. The amount of data and calculations that the rover receiver must handle is considerable. On the other hand, multi-base positioning can be much more accurate than the other methods. Finally, the user chooses the best way to use the data in a subnet. The NRTK network must only carry out the ambiguity fixing and transmit the MAC corrections. Inside this cell the correction is identical for all rover receivers and could also be transmitted by radio.

In the FKP method, the network software models the geometric parameters and dispersive biases over a wide area. Subsequently, the data derived from a master station and the parameters of this model are transmitted to the rover. The "one-way" approach could also be used for this type of correction. Compared to the MAC method, the network software must also model the biases in the area of the network correctly.

Considering VRS® positioning, the task of the network software is not only to model, but also to interpolate, these biases close to the position of the receiver, which "sees" these corrections as if they came from a master station that actually exists. The task of the network software increases, while that of the receiver decreases (see [1]).

In all three cases, it is necessary to control the quality of the positioning, both from the side of what the network provides and from how the generic network product is used by the rover receiver. The quality of the positioning is a parameter that must be monitored in real time to prevent a wrong phase ambiguity fixing (also called a "false fixing": FF). Often this is the result of internal problems of the network software or, more often, of the poor quality of the environment in which the rover receiver works.

#### **3. Strategy for the control of the NRTK fixing**

To achieve this control, a tool was designed that, starting from the data available in real time to a user connected to an NRTK positioning service, can identify with a certain probability threshold (the calibration of this tool depending on the type of receiver available) the effective presence, or the possibility, of a false fixing in the position. The input data of this instrument (hereinafter referred to as the "FF Estimator") will be the real-time data available from the NMEA message, extractable from the rover receiver (the number of satellites in view, the Dilution Of Precision (DOP) index, and the delay of the differential correction), and those which may be obtainable from the network software (typically, the tropospheric and ionospheric delays, the number of satellites fixed by the network, and the network quality fixing).

**Figure 1.** Algorithm procedure

The FF estimator will be composed of a neural network, trained a priori with some datasets, and will have, as a single output, the probability that the current fixing is a false fixing of the phase ambiguity. In this regard, it will be necessary to identify, among all the data available in real time, those most sensitive both to the deterioration of the accuracy and to the presence of a false fixing. These parameters will be those used to calibrate ("train") the neural network. Downstream of the estimator, an index representative of the quality of the fixing is provided as the output of the algorithm (shown schematically in Figure 1 as a traffic light), possibly allowing the user to re-initialize the measurement session. The change over time of these probabilities is useful to forecast an incorrect positioning (not always considered as a false fix) before it actually occurs.

#### **4. Notes on neural networks**

Artificial neural networks are computational tools that simulate the connections of neurons in the human brain; they are used to make decisions, approximate a function, or classify data. In our case, the function, or the decision, is to assess the probability of a false fixing and to provide as output an indicator simulating the behaviour of a traffic light, giving the user visual and intuitive information, available in real time, about the quality of the coordinates of the NRTK survey.

Neural networks are made up of *n* nodes, called artificial neurons, connected to each other, which, in turn, are ordered in layers. There is one input layer consisting of a limited number of neurons, one output layer (in our case constituted by a single neuron that represents the decision on the false fixing), and one or more intermediate layers. Each neuron communicates with one or more neurons in a subsequent layer. The first layer receives information from the input and passes it to the next layer through interconnections. The connections between one layer and the next do not transmit the output values derived from the previous layer directly, but transmit values weighted through weights *w<sub>ij</sub>* that are assigned to each of these connections. These weights are the unknowns of the problem; to calculate them it is necessary to "train" the network. In other words, it is necessary to perform a training phase based on providing, as input, a set of data for which the output result is known. On each neuron of a layer, the weighted contributions of the neurons of the previous layer and a generally constant bias value are summed into the input. A function is applied to this sum and the result is the output value of that neuron (for example see [6]).

As mentioned above, it is necessary to provide a training phase for the neural network; there are essentially three ways: *unsupervised learning*, *reinforcement learning* and *supervised learning*. The first method (unsupervised) changes the weights considering only the input data, trying to identify in the data a certain number of clusters with similar characteristics. The second method (reinforcement) follows a supervised strategy, but it can be adjusted depending on the response of the external application to the parameters obtained. The last mode (supervised) is used when, as in the present case, a dataset is available (called the *training set*) for the training phase. The supervised learning algorithm implemented is the so-called "backpropagation", which uses both the data provided in input and the known results in output to modify the weights so as to minimize the prediction error, called the *network performance* (goal function). In the cases in this work, we need to minimize the mean square error:

$$E\_2 = \frac{1}{n} \sum\_{i=1}^{n} \left( T\_i - O\_i \right)^2 \tag{5}$$
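As a quick illustration, the mean square error of Eq. (5) can be computed directly; the target values *T<sub>i</sub>* and outputs *O<sub>i</sub>* below are made-up numbers, used only to show the goal function at work:

```python
def mean_square_error(targets, outputs):
    """E2 = (1/n) * sum((T_i - O_i)^2), the goal function of Eq. (5)."""
    n = len(targets)
    return sum((t - o) ** 2 for t, o in zip(targets, outputs)) / n

# Made-up Boolean targets (1 = false fix) and network outputs (probabilities).
T = [0, 0, 1, 0, 1]
O = [0.1, 0.2, 0.7, 0.0, 0.9]
print(mean_square_error(T, O))  # a small value: the outputs track the targets
```

The closer the outputs are to the Boolean targets, the smaller this error; training drives it down by adjusting the weights.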



Analysis of Ambiguity False Fixing within a GBAS

http://dx.doi.org/10.5772/55872


where *O<sub>i</sub>* is the output of a neuron of the last level and *T<sub>i</sub>* is the known value that we want. This process is iterative: at the first iteration the weights associated with the connections between neurons of two adjacent layers have small random values, which are then updated at subsequent iterations. There are two ways to implement the training: the *incremental mode* and the *batch mode.* In the first, the weights are updated after each input is applied to the network. In the second, all the inputs in the training set are applied to the network before the weights are updated (for example see [8]). To do this it is possible to use the gradient of the network performance with respect to the weights, or the Jacobian of the network errors (*T<sub>i</sub>* − *O<sub>i</sub>*) with respect to the weights. The gradient or Jacobian is calculated using the "backpropagation" technique, which obtains the values of the weights by using the chain rule of calculus. The training algorithm used in these applications is the Matlab® routine "trainlm", based on Levenberg-Marquardt optimization. The training phase occurs in two distinct steps, called the *forward pass* and the *backward pass,* respectively. The first allows both the estimation error between the input data (applied to the input nodes) and the results obtained in output (which are known) to be derived, and this error to be propagated in the reverse direction, while in the second, the weights are modified in order to minimize the difference between the real and the known, desired output.
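The difference between the two update schemes can be sketched on a single linear neuron trained by gradient descent (a deliberate simplification: the chapter uses Levenberg-Marquardt on a full network; the learning rate and data below are illustrative):

```python
def incremental_epoch(w, b, data, lr=0.1):
    """Incremental mode: update the weights after EACH sample."""
    for x, t in data:
        err = t - (w * x + b)          # error T_i - O_i on this sample
        w += lr * err * x              # gradient step for the weight
        b += lr * err                  # gradient step for the bias
    return w, b

def batch_epoch(w, b, data, lr=0.1):
    """Batch mode: accumulate the gradient over ALL samples, update once."""
    gw = gb = 0.0
    for x, t in data:
        err = t - (w * x + b)
        gw += err * x
        gb += err
    n = len(data)
    return w + lr * gw / n, b + lr * gb / n

data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]   # noiseless samples of t = 2x + 1
w, b = 0.0, 0.0
for _ in range(500):
    w, b = batch_epoch(w, b, data)
print(round(w, 2), round(b, 2))  # → 2.0 1.0
```

Both modes converge to the same solution here; they differ in how often the weights move within one pass over the training set.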

**Figure 2.** Oversimplification of a neural network


92 Global Navigation Satellite Systems - From Stellar to Satellite Navigation


In Figure 2, the generic neuron *k* of the *j*-th layer is indicated by *N<sub>j</sub><sup>k</sup>*. Neurons with subscript 1 are the input neurons, whose number must be greater than or equal to the number of inputs. The input value *I*, for a generic neuron *k* of the *j*-th layer which is not of the first level, is therefore given by:

$$I\_j^k = b^k + \sum\_{h=1}^l w\_h O\_h \tag{6}$$

where *l* represents the number of neurons of the previous layer, *w<sub>h</sub>* the weights of these neurons, *O<sub>h</sub>* their outputs, and *b<sup>k</sup>* the value of the bias. So for each neuron that is not at the first level, the outputs of the layer above, weighted with the weights *w*, are summed, adding a further bias *b*. The neurons process the input through a certain function called the *transfer function*: of all the transfer functions, the Hard Limit, the Linear and the Log-Sigmoid are the best known (Figure 3). In addition to the transfer function, each neuron is characterized by a threshold value. The threshold is the minimum value that must be present in input to allow the neuron to send something in output and be, therefore, active. Beyond this threshold value the neuron transmits to the next layer a value that is the result of the application of the transfer function.
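A single neuron of Eq. (6), with the transfer functions of Figure 3, can be sketched as follows (the weights, bias and previous-layer outputs are illustrative values, not taken from the chapter):

```python
import math

def hard_limit(x):
    """Hard Limit: 1 if the summed input reaches the threshold (here 0)."""
    return 1.0 if x >= 0 else 0.0

def linear(x):
    """Linear transfer function: passes the summed input unchanged."""
    return x

def log_sigmoid(x):
    """Log-Sigmoid: squashes the summed input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(prev_outputs, weights, bias, transfer):
    """Eq. (6): I = b + sum(w_h * O_h), then apply the transfer function."""
    i = bias + sum(w * o for w, o in zip(weights, prev_outputs))
    return transfer(i)

# Illustrative outputs of the previous layer, weights and bias.
O_prev = [0.5, -1.0, 2.0]
w = [0.4, 0.3, 0.1]
b = 0.1
print(neuron_output(O_prev, w, b, log_sigmoid))
```

With these numbers the summed input is 0.2, so the Log-Sigmoid output sits just above 0.5; swapping in `hard_limit` or `linear` changes only the last step.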


| Station ID | City | Receiver | Antenna | Distance to VERC (rover) |
|---|---|---|---|---|
| ALES | Alessandria | LEICA GRX1200+GNSS | LEIAR25.R3 | 48 km |
| BIEL | Biella | LEICA GRX1200+GNSS | LEIAR25.R3 | 39 km |
| CRES | Crescentino | LEICA GRX1200+GNSS | LEIAR25.R3 | 29 km |
| LENT | Lenta | TPS NETG3 | TPSCR.G3 | 27 km |
| VIGE | Vigevano | TPS ODYSSEY\_E | TPSCR3\_GGD | 35 km |


**Figure 3.** Different types of Transfer Function

Generally, between one level and the next, all the neurons of the first level are connected with all the neurons of the second. It should be emphasized that there are neural networks in which there are also connections between neurons of the same level, but this does not happen in the case presented in this chapter. After the training phase, the effectiveness of the network is tested on a new set of data, called the *test set*. This set of data must be constituted by values of input and output never seen by the network. If the results offered by the network are close to the actual values, then the network can be considered valid. In these cases, it is said that the network is effective in generalization.

**Figure 4.** Neural network diagram

The neural network used in these experiments is shown schematically in Figure 4. The input consists of three vectors which are, for all epochs, the values of the HDOP (Horizontal Dilution Of Precision) index, the latency of the correction, and the number of fixed satellites seen by the rover. The training set is constituted, for every epoch, by a Boolean vector consisting of 1 in the case of a false fix, and 0 otherwise. The hidden layer consists of three neurons. The transfer function in the hidden layer is the Log-Sigmoid, while the output layer is linear. The training function used is (always) the Levenberg-Marquardt backpropagation algorithm.
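A minimal sketch of this 3-3-1 architecture (three inputs: HDOP, latency, number of fixed satellites; three Log-Sigmoid hidden neurons; one linear output) is given below. For simplicity it is trained with plain incremental gradient descent rather than the Levenberg-Marquardt routine used in the chapter, and the training samples are synthetic, pre-scaled values invented for illustration:

```python
import math
import random

random.seed(0)

def logsig(x):
    """Log-Sigmoid transfer function used in the hidden layer."""
    return 1.0 / (1.0 + math.exp(-x))

class Net331:
    """3 inputs -> 3 Log-Sigmoid hidden neurons -> 1 linear output neuron."""

    def __init__(self):
        rnd = lambda: random.uniform(-0.5, 0.5)  # small random initial weights
        self.w1 = [[rnd() for _ in range(3)] for _ in range(3)]  # hidden layer
        self.b1 = [rnd() for _ in range(3)]
        self.w2 = [rnd() for _ in range(3)]                      # output layer
        self.b2 = rnd()

    def forward(self, x):
        self.h = [logsig(b + sum(w * xi for w, xi in zip(ws, x)))
                  for ws, b in zip(self.w1, self.b1)]
        return self.b2 + sum(w * h for w, h in zip(self.w2, self.h))

    def train_step(self, x, t, lr=0.1):
        o = self.forward(x)
        err = t - o                                   # T_i - O_i of Eq. (5)
        # hidden deltas via the chain rule, using the current output weights
        d = [err * self.w2[j] * self.h[j] * (1.0 - self.h[j]) for j in range(3)]
        for j in range(3):
            self.w2[j] += lr * err * self.h[j]        # linear output layer
            self.b1[j] += lr * d[j]
            for i in range(3):
                self.w1[j][i] += lr * d[j] * x[i]
        self.b2 += lr * err

# Synthetic samples (HDOP, latency, fixed satellites, pre-scaled to [0, 1]):
# label 1 marks a false fix, 0 a sound fix.  Purely illustrative numbers.
data = [([0.70, 0.80, 0.40], 1.0), ([0.20, 0.10, 0.90], 0.0),
        ([0.80, 0.60, 0.50], 1.0), ([0.24, 0.05, 0.80], 0.0)]

net = Net331()
for _ in range(3000):                  # incremental-mode training
    for x, t in data:
        net.train_step(x, t)
print([round(net.forward(x), 2) for x, _ in data])  # should approach 1, 0, 1, 0
```

The output can then be thresholded into the traffic-light indicator described earlier; real training would of course use the recorded NMEA and network data rather than these invented samples.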

The dataset for training the network was divided as follows: 50% training; 30% validation; 20% testing. As mentioned, the control of what causes a FF is carried out simultaneously in two respects: the verification of what is happening on the network at the time of the FF; and the control of what the user can see on the rover receiver. The quality parameters, reported and considered significant to identify the reason for the FF, are extended over a period before and after the false fixing, established at five-minute intervals (300 s). Almost always this period, especially before the false fixing, is sufficient to understand what is degrading the positioning.

#### **5. Analysis of the quality parameters**


#### **5.1. Quality parameters of the NRTK network**

Before testing the developed algorithm, it was necessary to choose the network architecture to support the measurements. We chose the Geo++ GNSMART™ software (http://www.geopp.de/index.php?sprachauswahl=en&bereich=0&kategorie=31&artikel=35) as the network software of the permanent stations because it allows the user to extract information on the real-time calculation of the network: it is possible to obtain the state vector of the network calculation as an output every 15 seconds. This information is transmitted to the server in an ASCII format, and links the epoch of the false fix with a network event that may cause estimation problems for the rover receiver. Within the GNSMART software (version 1.4.0) a network of five stations was therefore set up, settled around the measurement site and having an inter-station distance of about 50 km. This type of configuration allows the simulation of the meshes of today's typical positioning services, and was therefore considered to be representative for the tests performed.


The permanent stations involved in these tests are shown in Figure 5 and detailed in Table 1:

**Table 1.** Permanent stations involved in the NRTK test




As mentioned, the control of what causes a FF is carried out simultaneously in two respects: the verification of what is happening on the network at the time of the FF; and the control of what the user can see on the rover receiver. Therefore, by combining the network information with the information of the receiver, a reliable decision about the measurement period is more likely to be reached. The association has been possible thanks to the GPS time scale present in both files: the one produced by the receiver in the NMEA message, and the one produced by the network. Thereafter, for predicting the quality of the NRTK positioning, a software tool was developed that does not yet use the information derived from the network, but is already suitable to process these parameters. Before using any software that automatically predicts a quality index of the positioning, it was necessary to analyse which parameters are more sensitive to the fixing degradation when observing the network status parameters. Since the state vector obtained as output from the network is complete, it should be noted that much of the information is not essential for checking the quality of the network data. It was therefore necessary to make a decision. The following are considered to be the only useful parameters:

**•** Number of GPS and GLONASS (GLObal'naya NAvigatsionnaya Sputnikovaya Sistema) satellites seen by the network stations for each epoch. In particular, it was decided to plot the minimum number among all satellites viewed from the stations of the network.

**•** Number of GPS and GLONASS satellites fixed by the network for each epoch.

**•** rms (root mean square) of the carrier phase compensation obtained in the current epoch, expressed in metres (σep).

**•** The value of the ionospheric delay expressed in ppm along the vertical direction. This index, which is a characteristic value of the GNSMART software, is called I95 and is indicative of the error obtained from the global estimation of the ionosphere (for example see [3]):

| First Order Effect of Ionosphere/Distance Dependency | I95 [ppm] |
|---|---|
| less | 0 ÷ 2 |
| small | 2 ÷ 4 |
| large | 4 ÷ 8 |
| severe | > 8 |

**Table 2.** Values of I95

**•** The variation of the tropospheric delay (dry component) for each station in the network (expressed as a percentage and derived from the modified Hopfield model).
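The I95 classification of Table 2 can be expressed as a small lookup (the threshold values come from the table; the function name is ours):

```python
def classify_i95(i95_ppm):
    """Map an I95 value [ppm] to the GNSMART severity classes of Table 2."""
    if i95_ppm < 2:
        return "less"
    if i95_ppm < 4:
        return "small"
    if i95_ppm < 8:
        return "large"
    return "severe"

print([classify_i95(v) for v in (1.0, 3.2, 6.5, 9.1)])
# → ['less', 'small', 'large', 'severe']
```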


The reference coordinates of the stations were obtained by adjusting 15 days of data with the Bernese GPS software (v. 5.0), in the IGS05 reference frame, constraining the coordinates to the 5 closest IGS (International GNSS Service) stations.


**Figure 5.** Permanent stations used in the NRTK test


**Figure 6.** Pillar and antenna used for the NRTK test (VERC station)


#### **5.2. The quality parameters available at the rover receiver**

The station, which for several days simulated a rover receiver, was a receiver placed in the Geomatics Laboratory at Vercelli; this receiver was connected to a geodetic antenna mounted on the roof of the same laboratory (Figure 6). The coordinates of that station were calculated in the same reference frame as the network, through the post-processing of 24 hours of data with the nearest station (LENT).



98 Global Navigation Satellite Systems - From Stellar to Satellite Navigation

#### **5.2. The quality parameters available at the rover receiver**

The station, which for several days simulated a rover receiver, was a receiver placed in the Geomatics Laboratory at Vercelli; this receiver was connected to a geodetic antenna installed on the roof of the same laboratory (Figure 6). The coordinates of that station were calculated in the same reference frame as the network, through a post-processing approach over 24 hours of data.

For each network product used (FKP, MAC, VRS) (Table 3) and for each geodetic instrument, RTK positioning sessions of 24 hours were performed with an acquisition rate of 1 s, in order to obtain results that were independent of the constellation visible from the GNSS receiver. This chapter shows the results of a receiver which we believe has shown representative average behaviour with respect to all the receivers used. For each network product, both the raw data and the real-time positions were saved: the latter are obtained through the GGA field of the NMEA message, transmitted via a serial port to a computer close to the receiver during the acquisition.

| Type of product used by receiver | Type of format |
|---|---|
| Master Auxiliary Concept (MAC) | RTCM 3 |
| Flächen Korrektur Parameter (FKP) | RTCM 2.3 |
| Virtual Reference Station (VRS®) | RTCM 3 |

**Table 3.** Network products considered in the NRTK test

While the raw data of the receiver were saved mainly for the analysis of the quality of positioning in post-processing, the data obtained in real time are particularly useful for studying the causes and effects of a FF.

From the NMEA message sent by the receiver we extracted the following information:

**•** the age of the differential correction broadcast;

**•** the number of satellites received;

**•** the HDOP (Horizontal Dilution of Precision) index;

**•** the mean square error (rms) of the position, calculated over the 30 s before the current epoch;

**•** the rms of the ellipsoidal height, also calculated over the 30 s before the current epoch;

**•** the latency of data with the nearest station (LENT).

#### **5.3. Search of the different types of FF and its related parameters**

To try and assess any FF that occurred during the measurement phase, the reference coordinates were compared with those obtained in the NRTK survey by the rover receiver, considering only the "fix" coordinates, where "fix" means that the ambiguities of the carrier phase measurements were fixed to the correct integer values. As already mentioned, several geodetic receivers were tested, as well as different types of network products. Not all results are shown; the idea here is to present the most significant ones. A threshold was set on the acceptability of the three-dimensional error, established at 20 cm, in order to assess whether or not an event is a FF. This value is fairly generous and was chosen as corresponding approximately to the wavelength of the L1 carrier phase. Three-dimensional errors of less than 20 cm do not necessarily correspond to epochs where the phase ambiguities are fixed correctly: FFs may exist for which the positioning errors are much lower than 20 cm. However, an ambiguity fixing that shows a three-dimensional error (i.e. the difference between the "true" position and the measured one) of more than 20 cm is definitely a false fix. A second threshold is the minimum length of the false fix, i.e. the minimum duration for it to be considered significantly dangerous. This limit was set to 5 seconds, because FFs of shorter duration are too brief to be considered deleterious. Think now not of a mobile positioning, but of an NRTK "stop & go" survey: in this hypothesis, it is expected that the surveyor acquires a point for more than 5 seconds and averages the coordinates obtained, thus easily identifying the measure that causes the false fixing. This option can certainly be removed or modified in the choice of the FF, and during the training of the neural network.

The following graphs show both the network and receiver parameters that can in some way justify and reveal a false fix. Two vertical stripes, in green and red, denote the beginning and the end of the FF. For each case of FF, a series of graphs is shown, reporting the values described above on a common time scale for the network and the receiver. All analyses concern the behaviour of the network-receiver pair within 24 hours of RTK measurements. Considering the network shown in Figure 5, during the tests with VRS®, FKP and MAC corrections, 21 false fixes were identified, with durations between 6 s and 1136 s (corresponding to about 19 minutes). Many of these have a similar typology; for that reason, not all the false fixes mentioned before are reported here, only the three main types. We will now examine three emblematic cases of FF.
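The two thresholds just described (a 20 cm three-dimensional error over epochs with fixed ambiguities, lasting at least 5 s at a 1 s rate) can be sketched as a small screening routine. This is an illustrative sketch, not the authors' code; the function and variable names are ours.

```python
# Sketch of the false-fix screening rule described above: an ambiguity-fixed
# epoch is flagged when its 3D error exceeds 0.20 m, and a run of flagged
# epochs counts as a false fix only if it lasts at least 5 s (5 epochs at
# the 1 s acquisition rate). Function and variable names are ours.

def find_false_fixes(errors_3d, threshold_m=0.20, min_len_s=5):
    """Return (start, end) epoch indices of false-fix intervals (end exclusive)."""
    intervals, start = [], None
    for i, e in enumerate(errors_3d):
        if e > threshold_m and start is None:
            start = i                       # excursion above the threshold begins
        elif e <= threshold_m and start is not None:
            if i - start >= min_len_s:      # keep it only if long enough
                intervals.append((start, i))
            start = None
    if start is not None and len(errors_3d) - start >= min_len_s:
        intervals.append((start, len(errors_3d)))
    return intervals

# Toy series: a 3 s excursion (too short) and a 6 s excursion (a false fix)
errs = [0.02] * 10 + [0.5] * 3 + [0.02] * 10 + [0.6] * 6 + [0.02] * 5
print(find_false_fixes(errs))   # → [(23, 29)]
```

The 3 s excursion is discarded by the 5 s rule, exactly as the short events discarded in the analysis above.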

#### *5.3.1. Typology of FF No. 1: Latency dependent*

The first typology involves a FF identified using the MAC product. The graphs show the variation of the three-dimensional error Δ3D (related to the E, N and h coordinates) and of the elevation error Δh, with their respective standard deviations. It was decided not to show the errors related to the E and N coordinates because they are insignificant compared to those previously mentioned. It is common for the more significant error to be the altimetric one, with no preferred direction in the planimetric components.

Analysing this first FF over a time interval of five minutes, a similar trend can be seen in the latency of the correction (Figure 8). Note that the receiver does not receive data for a period of a few seconds. The DOP index registered by the receiver remains fairly constant; there are, in fact, only "steps" corresponding to variations in the number of received satellites. 150 s after the start of the FF, the number of satellites decreases below the minimum required for positioning: it is possible that one of the four satellites corrected by the network was not available to the receiver. Figure 8 also shows the root mean square σ registered by the receiver: this quantity does not reveal any particular problem before or during the FF, with a trend without excessive anomalies.

**Figure 7.** Residuals of coordinates for FF No. 1 (latency dependent)

**Figure 8.** Quality parameters for rover and network (FF No. 1)

The values of the ionospheric and tropospheric delays are also considered, related to their variation at the CORS stations constituting the network. This information, which is not shown, has been extracted from the state vector of GNSMART, as described in the introductory part of the present work. In both cases there are no significant changes of the values at the beginning of the FF, while a peak of both the ionospheric and the tropospheric delay appears halfway through it, corresponding to the fixing of only three satellites by the network. The tropospheric peak can also be read as an "attempt" by the network software to restore the correct behaviour in the bias estimation. The low number of corrected satellites compared to those seen by the receiver may be a temporary problem of the network software.

#### *5.3.2. Typology of FF No. 2: Wrong ionospheric network estimation*

This type of false fixing is unusual because there is no data before or after the FF: the first positioning with fixed ambiguities, which occurred only 5 seconds after switching on the instrument, is already incorrect. The graphs relating to the 3D and elevation errors show values only in correspondence with the FF, of approximately 50-60 cm, much higher than the 20 cm threshold (Figure 9). At the beginning of the false fix, however, the rms of the position rises up to 30 cm before becoming very low.

**Figure 9.** Residuals of coordinates for FF No. 2 (wrong ionospheric network estimation)

Analysis of Ambiguity False Fixing within a GBAS

http://dx.doi.org/10.5772/55872

101


The graphs in Figure 10 do not show any meaningful behaviour which can be identified as the cause of the false fix. In fact, the latency of the correction is lower in magnitude than in the previous case. The DOP index remains constant, as the number of satellites seen by the receiver is also constant, while the number of satellites seen, with ambiguities fixed by the network, increases (both GPS and GLONASS). Finally, the variation of the σ value is not significant during the FF, only before it. There is, on the other hand, a quite significant variation in the ionospheric and tropospheric delays; the latter, especially after the end of the FF, probably corresponds to the attempt of the network software to restore a more correct, common level of ambiguity fixing (Figure 11).

**Figure 10.** Quality parameters for rover and network (FF No. 2)

**Figure 11.** Atmospheric parameters estimated by the network software (FF No. 2)

Probably there is also a network problem even 300 seconds before the false fixing: a variation of more than 3 metres in the ionospheric delay 450 seconds after the FF is not plausible.

#### *5.3.3. Typology of FF No. 3: High DOP index variation*

This typology represents the false fix with the longest duration, similar to type 2 in the lack of data before and after the FF, but the ambiguities are fixed again around epoch 1,300. As seen in Figure 12, the rms of the coordinates increases and exceeds one decimetre before the FF. Unlike the other cases, the ionospheric delay decreases suddenly during the FF, while the tropospheric delay shows a fairly constant trend (Figure 14). The DOP values, while low at the beginning of the FF, rise after about 90 seconds to values greater than or equal to 2, decreasing to reasonably low values around epoch 1,300 (Figure 13).

**Figure 12.** Residuals of coordinates for FF No. 3


**Figure 13.** Quality parameters for rover and network (FF No. 3)

**Figure 14.** Atmospheric parameters estimated by the network software (FF No. 3)

Despite the existence of FFs for which it is not possible to identify the main trigger, it is nevertheless possible to affirm that the significant quantities that enable the presence of a false fix to be identified with high probability are:

**1.** the latency of the correction;

**2.** the noise level of the σ value of the coordinates on the receiver;

**3.** the DOP index and, in parallel, the number of satellites used (parameters that can be separated);

**4.** the variations of the ionospheric and tropospheric biases computed by the network software.



These parameters are visible from both the receiver (points 1, 2, 3) and the network (point 4). Some of them are usually transmitted to the network software with the NMEA message, in the case of a bidirectional exchange of information. With this message, the network software is able to evaluate the state of the receiver (fixing or not).
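These four quantities can be assembled into one per-epoch input vector for the classifier discussed in the next section. The sketch below is purely illustrative; the function name, argument names and ordering are our assumptions, not the authors' exact choices.

```python
# Illustrative packing of the four per-epoch indicators listed above into a
# single feature vector; names and ordering are our assumptions.

def epoch_features(latency_s, sigma_m, hdop, n_sats, d_iono_m, d_tropo_m):
    """Pack the per-epoch quality indicators into one classifier input vector."""
    return [
        latency_s,            # 1. latency of the correction
        sigma_m,              # 2. noise level of the coordinate sigma at the receiver
        hdop, n_sats,         # 3. DOP index and number of satellites used
        d_iono_m, d_tropo_m,  # 4. network ionospheric / tropospheric bias variations
    ]

# Example epoch: 2 s old correction, 1.5 cm sigma, HDOP 1.2, 8 satellites,
# 5 cm ionospheric and 1 cm tropospheric bias variation
print(epoch_features(2.0, 0.015, 1.2, 8, 0.05, 0.01))
```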

Some analyses were carried out in real time using CORS networks of larger extension (inter-station distances of about 100 and 150 km), but the results are omitted because, with these networks, the number of FFs obtained is amplified with respect to the network presented here, which has inter-station distances of about 50 km [1], without changing the typology. This deterioration is clearly linked to the growing difficulty in the interpolation of the corrections by the network as the inter-station distances increase.

#### **6. Implementation and results of the neural network**

After the analysis of the types of false fix and of the main factors that determine the occurrence of an incorrect fix, it was decided to develop a neural network to predict the wrong fixing of the phase ambiguities. To do this, the "neural network" toolbox available in the Matlab® computer program was used. Particular attention was devoted to the training phase: it is of fundamental importance to "train" the network correctly, in such a manner that, after this phase, it is able to predict a possible false fixing of the ambiguities. Four different configurations were tested, depending both on the type of the input training file (each file was composed of 3 days of observations) and on the number of neurons considered in the hidden layer:


**• Net1:** the network is composed of 3 neurons; the training file derives from a real data file of 24 hours' length (containing about 15 FFs);

**• Net2:** the network is composed of 3 neurons; the training file is a file prepared ad hoc (in terms of data densification and order of FF typologies, such as FF due to latency, DOP index and number of satellites, hereafter called *man-made*) containing about 15 FFs, owing mainly to latency correction and sudden variation in the number of satellites;

**• Net3:** the network is composed of 10 neurons; the training file derives from a real data file of 24 hours' length (containing about 15 FFs also in this case);

**• Net4:** the network is composed of 10 neurons; the training file is a *man-made* file containing about 15 FFs, owing mainly to latency correction and sudden variation in the number of satellites.
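The chapter uses the Matlab® toolbox for these networks. As an illustration only of a Net3-like architecture (one hidden layer of 10 neurons, a binary FF / no-FF output), a minimal feed-forward network can be sketched in Python with NumPy. The data, labels and all hyper-parameters below are synthetic and ours; this is not the authors' setup.

```python
import numpy as np

# Minimal feed-forward network in the spirit of Net3: one hidden layer of
# 10 neurons, sigmoid output for FF / no-FF. Synthetic data; illustrative only.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Synthetic training set: 6 quality features per epoch (see Section 5.3);
# label 1 means "false fix". The labelling rule is invented for the toy data.
X = rng.normal(size=(400, 6))
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)

W1 = rng.normal(scale=0.5, size=(6, 10)); b1 = np.zeros(10)
W2 = rng.normal(scale=0.5, size=(10, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(2000):                     # plain batch gradient descent
    h = np.tanh(X @ W1 + b1)              # hidden layer (10 neurons)
    p = sigmoid(h @ W2 + b2).ravel()      # predicted FF probability
    g = (p - y)[:, None] / len(y)         # cross-entropy gradient at the output
    W2 -= lr * h.T @ g;  b2 -= lr * g.sum(0)
    gh = (g @ W2.T) * (1 - h ** 2)        # back-propagate through tanh
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)

accuracy = ((p > 0.5) == (y == 1)).mean()
print(f"training accuracy: {accuracy:.2f}")
```

In practice the training data would be the labelled epochs described above (FF intervals from the 20 cm / 5 s rule), not random numbers.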

To understand which of the previous networks obtains the best performance, an additional day (session length of 86,400 epochs) of independent measurements was used. The results obtained are shown in Figure 15.


**Figure 15.** Neural network results (training phase): percentage of FF detected or not

It is possible to see that Net3 is the network that identifies all the FFs, declaring 20% as certain FFs while 80% are flagged as possible FFs (hereinafter also called *warnings*). The other networks behave worse, failing to identify 20% or 60% of the false fixes occurring in the day used for the test phase. More specifically, choosing a random FF that occurred on that day, it is possible to see that even in this case (Figure 16) Net3 predicted the FF about 5 seconds in advance. Other networks, such as Net2 or Net4, not only did not identify it, but also noticed the event only after it had started.
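Scoring a network against the known FF epochs, as in Figure 15, amounts to counting each true FF as a certain detection, a possible detection (*warning*) or a miss. A sketch, with invented labels and helper names:

```python
# Illustrative scoring of a network's output against the known FF epochs,
# in the spirit of Figure 15. The label strings and helper names are ours.

def score_predictions(true_ff, predicted):
    """true_ff: epoch indices that really are FFs;
    predicted: dict epoch -> 'red' (certain FF) or 'yellow' (warning)."""
    counts = {"certain": 0, "warning": 0, "missed": 0}
    for epoch in true_ff:
        label = predicted.get(epoch)
        if label == "red":
            counts["certain"] += 1
        elif label == "yellow":
            counts["warning"] += 1
        else:
            counts["missed"] += 1          # the network said nothing
    n = len(true_ff)
    return {k: 100.0 * v / n for k, v in counts.items()}

# Toy example: 5 true FFs, 1 flagged as certain and 4 as warnings
print(score_predictions([3, 8, 15, 21, 30],
                        {3: "red", 8: "yellow", 15: "yellow",
                         21: "yellow", 30: "yellow"}))
# → {'certain': 20.0, 'warning': 80.0, 'missed': 0.0}
```

The toy split (20% certain, 80% warning, 0% missed) mirrors the Net3 result described above.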

Therefore, considering Net3 as the best trained and available neural network, some tests with a session length of more than 72 hours (3 days) were carried out, in order to test the trained neural network on new data, acquired from different receivers of the main companies operating in the market. The results shown are those of a dual-frequency and dual-constellation receiver, chosen as representative (called a "medium receiver") in terms of behaviour and quality of positioning. The information contained in the GGA field of the NMEA message was analysed, considering only the epochs with a fixed phase ambiguity. Also for these experiments, a static survey was considered, with the rover instrument mentioned above located in Vercelli (Figure 6), the CORS network described above (Figure 5), and only the VRS® and MAC corrections.

**Figure 16.** False fix prediction: no FF (green); warning (yellow); and FF detected (red alert)
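The GGA sentence mentioned here carries the fix state and several of the quality indicators used throughout this analysis. A minimal parser might look like the sketch below; the sample sentence is invented for illustration, while the field positions follow the standard GGA layout (fix quality = field 6, number of satellites = field 7, HDOP = field 8, age of differential data = field 13).

```python
# Minimal parser for the quality fields of an NMEA GGA sentence.
# The example sentence is invented; field positions follow the standard layout.

def parse_gga(sentence):
    """Return the rover quality parameters carried by a GGA sentence."""
    body = sentence.split("*")[0].lstrip("$")   # drop checksum and leading '$'
    f = body.split(",")
    return {
        "fix_quality": int(f[6]),               # 4 = RTK fixed, 5 = RTK float
        "num_satellites": int(f[7]),
        "hdop": float(f[8]),
        "diff_age_s": float(f[13]) if f[13] else None,  # age of the correction
    }

# Invented example: RTK fixed, 9 satellites, HDOP 0.9, 2 s old correction
example = "$GPGGA,123519,4511.35,N,00839.94,E,4,09,0.9,131.0,M,47.0,M,2.0,0000*5C"
print(parse_gga(example))
```

Epochs with `fix_quality` other than 4 would be discarded, since only "fix" coordinates enter the FF analysis.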

Table 4 shows the number of FFs obtained considering the two different corrections and three different sessions of measurement (each, as previously mentioned, with a duration of 72 hours). It may be noted first that, with the VRS® correction, a smaller number of false fixes is generally obtained. It should be emphasized that the experiments were not performed with the two corrections simultaneously, but the choice of such a large survey time window allows us to affirm that there is no dependence on the geometry of the satellite constellation or on external factors.


| Day | # of false fixes (MAC) | # of false fixes (VRS®) |
|---|---|---|
| Day 1 | 7 | 1 |
| Day 2 | 15 | 5 |
| Day 3 | 18 | 18 |

**Table 4.** Number of false fixes / day


By means of a dedicated program developed in Matlab®, it was possible to analyse all the epochs in which the estimated position differs by at least 20 cm from the "correct" one (considered a certain FF, because it is a static survey). It was also possible to produce some statistics, not on the quality of positioning but on the ability to predict and identify a FF in real time. We will analyse only two cases, considered significant and chosen from among all the analysed sessions: the "best case", which represents the session in which the network predicted the maximum number of FFs, indicating them as a *warning* or as a *red alert* (epochs declared as a certain FF), and the so-called "worst case", when the network predicted the minimum number of FFs.


**Figure 17.** False fix analysis: MAC correction, "best case" (72 hours of data, Net 3)

**Figure 18.** False fix analysis: MAC correction, "worst case" (72 hours of data, Net 2)

Some statistics on the stability of the ambiguity fixing were also carried out, considering the length (in seconds) of the phase ambiguity fixing, but they are not reported in this paper.

Figures 17 and 18 show the two extreme cases of prediction and detection of FF considering the MAC correction. It is possible to see that, in the "worst case", 86% of the false fixes are identified (about 9 out of 10 FFs), of which 42% are declared as certain. In the "best case", all the FFs are detected, and 20% of these are declared as certain FFs. It should be emphasized that a low rate of false fixes declared as certain is no cause for concern: what matters is that they are identified at least as possible FFs (*warnings*) rather than not identified at all. Considering the possibility of reporting the quality of the positioning to the user during the survey, and describing it with a traffic-light system, the epochs when the neural network does not report any abnormality could be identified by a green light, the epochs with a *warning* by a yellow light, and the epochs when the FF is declared as certain (*red alert*) by a red light.
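The traffic-light reporting just described maps the classifier's per-epoch FF score to a colour code. A sketch, where the two thresholds are our invented assumptions:

```python
# Sketch of the traffic-light reporting described above: the FF score of
# each epoch is mapped to green (no anomaly), yellow (warning, possible FF)
# or red (red alert, certain FF). The two thresholds are invented.

def traffic_light(ff_score, warn=0.5, alert=0.9):
    """Map a false-fix score in [0, 1] to a colour code for the user."""
    if ff_score >= alert:
        return "red"      # certain FF: discard / re-measure the point
    if ff_score >= warn:
        return "yellow"   # possible FF: treat the epoch with caution
    return "green"        # no anomaly reported by the network

print([traffic_light(s) for s in (0.1, 0.7, 0.95)])   # → ['green', 'yellow', 'red']
```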

**Figure 19.** False fix analysis: VRS® correction, "best case" (72 hours of data, Net 3)


**Figure 20.** False fix analysis: VRS® correction, "worst case" (72 hours of data, Net 2)

If we consider instead the VRS® correction, it can be seen that in the "worst case" 25% of the FFs are not predicted. This percentage is quite worrying, as it means that 1 in 4 FFs could not be identified. Of the remaining 75% of predicted FFs, 33% (roughly 1 in 3) are declared as certain false fixes, while 67% are flagged as possible false fixes (*warnings*). Another important analysis regards the percentage of "errors" of the neural network, in terms of the number of certain or possible FFs reported that are not actually verified.


Analysis of Ambiguity False Fixing within a GBAS

http://dx.doi.org/10.5772/55872

111

**Figure 21.** MAC correction: Prediction errors

**Figure 22.** VRS® correction: Prediction errors

As can be seen from Figures 21 and 22, the neural network can be considered efficient for the MAC and VRS® corrections respectively: only 3% and 4% of the cases report possible FFs that do not actually occur, and no certain FF is ever wrongly declared. It is therefore possible to affirm that the chosen type of neural network and its training phase have proved effective. In this phase of the work, only a few parameters obtained from the NMEA message were considered, ignoring all the information derived from the network software that could further improve the results. We therefore intend, in future experiments, to try to reduce the 25% of unpredicted FFs, which cannot be identified from the parameters considered at the current state and which, as the graphs show, almost certainly reveal a problem in the NRTK network.

#### **7. Conclusion and future developments**

As shown in the present work, it is not always possible to identify which parameter (or its variation) leads to a false fix. It is, however, possible to affirm that, according to the studies carried out, the quantities that allow the receiver to identify the presence of a FF with high probability are:

**•** the latency of the correction;

**•** the noise (rms value, σ) of the three coordinates;

**•** the DOP index and, therefore, the number of satellites used in parallel.

In the authors' opinion, it is of great importance to consider the network parameters (such as the ionospheric and tropospheric delays) in order to identify the triggers of the FFs that are currently unexplained by the parameters alone, and thus to reduce the rather high rate of unreported false fixings. It should, however, be underlined again that a low rate of FFs declared as certain is not a cause for concern: it is important that they are identified at least as possible FFs (*warning*) rather than not identified at all.

The input parameters of the network can be expanded. In addition, the network can be trained for "stop & go" positioning of the rover receiver or, with a proper choice of these parameters, for a receiver in motion. The user will choose the type of survey and the receiver and, at the same time, the type of quality control of the survey.

For a retrospective quality analysis of the survey, it may be useful to combine visual information (colour) with the quality of the measurements: for example, the measured points could be shown on a screen with their respective colour code (from green for measurements considered reliable to red for measurements considered unreliable), so that the user can later re-measure, perhaps under different satellite-visibility conditions, the points previously flagged as unreliable.

#### **Author details**

Paolo Dabove\* and Ambrogio Manzino\*

\*Address all correspondence to: paolo.dabove@polito.it; ambrogio.manzino@polito.it

Department of Environment, Land and Infrastructure Engineering, Politecnico di Torino – Turin T.U., Italy



**Section 2**

### **Applications**

**Chapter 5**


## **L1 GPS as Tool for Monitoring the Dynamic Behavior of Large Bridges**

Ana Paula Camargo Larocca, Ricardo Ernesto Schaal and Marcelo Carvalho dos Santos

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/56567

#### **1. Introduction**

In 1973 the JPO (Joint Program Office), subordinated to the United States Air Force, received from the Department of Defense the mission of implementing, developing, testing and operating a spatial positioning system for military applications, able to calculate coordinates and guide missiles, within the "Star Wars" project.

The Global Positioning System is the fruit of those studies, which made it possible to use the L-band carrier phase (microwave frequencies of about 1 to 3 GHz and wavelengths close to 23 cm) to compute positions by spatial trilateration. The launch of the first NAVSTAR (Navigation Satellite with Time and Ranging) satellites began in 1978, the beginning of GPS as it is known today. Owing to the high cost of the project, and since MIT itself had confirmed the excellence of the system for civil applications (geodesy, topography, navigation, digital modelling, simulators), the American Congress, with the acquiescence of the U.S. President, pressed the Pentagon to open the NAVSTAR system to civil use and to other countries. However, GPS only became popular from the 1990s, as a result of technological advances in the microcomputer field that allowed tracker manufacturers to produce GPS receivers that processed the codes of the received signals in the receiver itself.

In this context, from the end of the 1980s, Global Positioning System technology, until then used to survey areas, deploy geodetic networks, manage resources, track fleets of vehicles and ships, and monitor the displacement of structures under static load, began to be used to characterize the dynamic displacements of large structures and earthquakes, and came to be considered a tool for extracting the frequency and amplitude values of the displacements with good accuracy. The amplitude range of displacements detectable with GPS allows it to be used as a tool for monitoring displacements in several kinds of structures and, with the development of sensors with a 100 Hz data rate, it will be possible to identify not only the natural frequencies of a structure but also the frequencies of its several vibration modes.

© 2013 Camargo Larocca et al.; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Goad (1989) conducted the first experiments to investigate the feasibility of using GPS to monitor the crest of a dam in Lawrence, Kansas, USA, and Lovse et al. (1995) performed the first test to measure the vibration frequency of the Calgary Tower in Alberta, Canada, using GPS receivers in differential mode and conventional instruments. The authors verified that the vibration frequency of the 160 m high tower under wind action was approximately 0.3 Hz.

From that time until the present day, several methods have been developed and tested for the detection of small amplitudes and frequencies of dynamic displacements, but none with results as promising as the method presented here, which uses only the signals from two satellites through the interferometry technique.

#### **2. Theoretical basis of methods developed**

The method applied in this research uses GPS data based on the phase-interferometry principle. Interferometry is the use of the phenomenon of interference between signals to perform, for example, distance measurements through the phase change imposed on one of the two signals. As Figure 1 shows, a light beam incident on a mirror is divided into two beams perpendicular to each other. Part of the beam is reflected and part passes through the medium; this portion hits a silvered mirror and is reflected, while the other portion hits a second mirror, which can be moved, and is also reflected. In this case the beams travel the same distance and the reconstituted light source can be seen reflected on the screen (Holmes, 1998).

**Figure 1.** Interferometer scheme (*Michelson Interferometer*)

If the mobile mirror is moved a distance from its original position, the beam of blue light travels a distance greater than the red beam, causing a different interference pattern, which can be seen on the screen (Figure 2).

**Figure 2.** Interferometer scheme with a displacement of one mirror


Considering the beam of light as the electromagnetic wave emitted by the antenna of a GPS satellite, and the mirrors as the GPS antennas (Figure 3), sinusoidal vertical or horizontal movements of the mobile antenna will change the phase of the signal collected by the receiver connected to that antenna. This changes the relationship between the phases of the GPS signal received by the mobile and static antennas, since the path length is no longer the same. The amplitude and frequency of this phase change, caused by the movements of the antenna, can then be calculated. The mobile antenna can, for example, be fixed on a structure under dynamic loading.

**Figure 3.** Interferometer of phase related to GPS signal and antennas
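As a rough numerical illustration of this principle (not the authors' processing chain), the carrier-phase change produced by a sinusoidal vertical antenna motion can be modelled by projecting the displacement onto the satellite line of sight. The L1 wavelength below is the standard value; the function itself and its simple projection model are assumptions of this sketch:

```python
import math

L1_WAVELENGTH = 0.1903  # metres, GPS L1 carrier wavelength

def phase_change_cycles(amplitude_m, freq_hz, t, elevation_deg):
    """Carrier-phase change (in cycles) when the antenna moves vertically as
    A*sin(2*pi*f*t); for vertical motion, the projection onto the satellite
    line of sight scales with sin(elevation). Illustrative model only."""
    displacement = amplitude_m * math.sin(2 * math.pi * freq_hz * t)
    path_change = displacement * math.sin(math.radians(elevation_deg))
    return path_change / L1_WAVELENGTH

# a 1 cm vertical oscillation at 0.3 Hz, satellite at 60 degrees elevation,
# sampled at the quarter-period (peak of the sine)
print(phase_change_cycles(0.01, 0.3, t=1 / (4 * 0.3), elevation_deg=60))
```

Even a centimetre-level oscillation thus produces a phase change of a few hundredths of a cycle, which is why the residuals have to be analysed spectrally rather than epoch by epoch.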

$$\nabla\Delta PR_{A,B}^{i,k}(t) = \nabla\Delta R_{A,B}^{i,k}(t) + c\,\nabla\Delta d\tau_{A,B}^{i,k} + \lambda\,\nabla\Delta N_{A,B}^{i,k}(t) + \nabla\Delta\varepsilon_{A,B}^{i,k} \tag{1}$$


$$\nabla\Delta\Phi = D \cdot \Phi \tag{2}$$

*D*: corresponds to the matrix operator of the double phase difference. The size of the matrix is (*R* −1) ⋅ (*S* −1) rows by *R* ⋅ *S* columns, where *R* is the number of receivers and *S* is the number of satellites.
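For *R* receivers and *S* satellites, an operator of exactly this size can be built as the Kronecker product of a between-receiver and a between-satellite single-difference operator. This is a standard construction, sketched below under the assumption that the phase vector is ordered receiver-major (all satellites of receiver 1, then of receiver 2, and so on):

```python
import numpy as np

def diff_operator(n):
    """(n-1) x n single-difference operator relative to the first element."""
    return np.hstack([-np.ones((n - 1, 1)), np.eye(n - 1)])

def double_difference_operator(R, S):
    """D as the Kronecker product of the between-receiver and between-satellite
    single-difference operators; shape is ((R-1)*(S-1), R*S)."""
    return np.kron(diff_operator(R), diff_operator(S))

D = double_difference_operator(R=2, S=4)
print(D.shape)  # (3, 8), i.e. (R-1)*(S-1) rows by R*S columns
```

Applying `D` to a stacked phase vector yields the usual between-receiver, between-satellite double differences relative to the first receiver and first satellite.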

Following [14]:

$$D \cdot V^{o} + D \cdot V = D \cdot A \cdot \delta_x \tag{3}$$

or

$$V^{'o} + V^{'} = A^{'} \cdot \delta_x \tag{4}$$


where:

$V^{'o} = D \cdot V^{o}$ : vector of the double-difference phases

$V^{'} = D \cdot V$ : residuals vector of the double-difference phases

$A^{'} = D \cdot A$ : design (shape) matrix of the double-difference phases

The residuals vector can be written as the vector of adjusted baseline components minus the vector of raw baseline values (without adjustment):

$$V = L_a - L_b \tag{5}$$

where:

*V* : residuals vector, i.e. the difference between the adjusted values and the raw values;

*L<sub>a</sub>* : vector of the adjusted baseline components;

*L<sub>b</sub>* : vector of the baseline components processed from the GPS observations (raw values).

#### **2.2. Electro-mechanical oscillator for calibrating vibrations**

To calibrate the measurement of previously unknown dynamic displacements, an electro-mechanical oscillator was developed that applies controlled movements, in both displacement and velocity, to the GPS antenna that will undergo the movements of the footbridge span and the bridge. The oscillator is battery powered. Figures 7 and 8 present a GPS antenna mounted on the oscillator and a detail of the system that controls the amplitude of the displacement.

**Figure 7.** GPS antenna over the electro-mechanical oscillator


**Figure 8.** Detail of electro-mechanical oscillator

#### **2.3. Spectral Analysis of GPS data**

The Fast Fourier Transform was the tool chosen to analyse the double-difference phase residuals, here called raw residuals, in the frequency domain and, consequently, to identify the frequencies corresponding to periodic displacements in a spectrum that also contains very low frequencies due to multipath in highly noisy environments, together with other effects, such as variations of the antenna phase centre, which are accentuated in highly reflective environments and in non-static observations (Wells et al., 1986).

The essence of the Fourier Transform (FT) of a wave is to decompose it into a sum of sinusoids of different frequencies. If the sum of these sinusoids reproduces the original waveform, its Fourier Transform has been obtained. A function in the time domain is transformed to the frequency domain, where it is possible to determine the magnitude and frequency of the wave and to filter out undesired frequencies (noise) (Brigham, 1974).
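The spectral analysis described above can be sketched with NumPy's FFT. The sampling rate, signal amplitude and noise level below are illustrative values, not the authors' data:

```python
import numpy as np

# Synthetic "raw residuals": a 5 mm oscillation at 0.3 Hz (the Calgary Tower
# order of magnitude) buried in 1 mm receiver noise, sampled at 10 Hz for 60 s.
rate_hz = 10.0
t = np.arange(0, 60, 1 / rate_hz)
residuals = 0.005 * np.sin(2 * np.pi * 0.3 * t)
residuals += np.random.default_rng(0).normal(0, 0.001, t.size)

# Transform to the frequency domain and pick the dominant non-DC peak.
spectrum = np.abs(np.fft.rfft(residuals))
freqs = np.fft.rfftfreq(t.size, d=1 / rate_hz)
dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
print(f"dominant frequency: {dominant:.2f} Hz")  # dominant frequency: 0.30 Hz
```

The very low-frequency multipath content mentioned in the text would appear in the first few bins and can be excluded in the same way the DC bin is skipped here.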


#### **3. Tested structures for the method analysis**

The dynamic analysis of a structure aims to determine the maximum displacements allowed by the design (design constraints), velocities and accelerations (user comfort), and internal forces, stresses and deformations (fatigue of the material composing the structure) (Laier, 2000). Thus, the analysis can diagnose the actual state of conservation of the structure (regardless of its external appearance), predict its lifetime and determine economical recovery solutions in order to prolong the durability of the structure.

Two bridges were submitted to dynamic tests (mobile load) to evaluate GPS as a tool for monitoring structures.

**The first structure tested** was a cable-stayed wood footbridge built at the Sao Carlos Engineering School, University of São Paulo, São Paulo state, Brazil, in 2002 (Figure 9) (Pletz, 2003). The footbridge, the first wood footbridge built in Brazil with a curved deck, has a deck 35 metres in total length made of *Pinus taeda* wood, while the wood used for the tower was *Eucalyptus citriodora*. The tower consists of a pole thirteen metres long, 55 cm in diameter at the base and 45 cm at the top. The footbridge was divided into seven modules with nominal dimensions of 5 m in length, 2 m in width and 20 cm in height, each consisting of 37 slides measuring 5 cm in width, 20 cm in height and 5 m in length (Pletz, 2003).

**Figure 9.** Cable- Stayed wood footbridge in Sao Carlos Engineering School, USP

**The second structure** where the method was tested is the Hawkshaw Bridge (Figure 10), a cable-stayed bridge with cables anchored in a harp shape. The bridge is located in the province of New Brunswick, Canada, at Hawkshaw Bridge Road, 0.20 km North, at the intersection with Highway 2, in the Nackwic district, and it links the two banks of the Saint John River.

The Hawkshaw Bridge has a steel I-beam deck with three spans. Inaugurated in 1967, the bridge has its longitudinal axis predominantly in the north-south direction. The centre span is 217 m long, the north span 29.44 m and the south span 54.44 m, for a total of 301.20 m. The deck is supported by two 36 m high steel towers, to each side of which six sets of steel cables are fixed. The deck is 7.90 m wide, with two traffic lanes in opposite directions (Figure 10).

**Figure 10.** Hawkshaw Cable-Stayed Bridge, New Brunswick, Canada

The footbridge and the bridge have their constructive system in common, based on the cable-stayed principle. A description of the structural behaviour of this type of structure is given below.

#### **3.1. Structural behavior of tested structures**

transformed to the domain of frequencies, where is possible to determine the magnitude, frequency of the wave and perform the filtering of undesired frequencies (noise), Brighan

The dynamic analysis of a structure aimed to determine the maximum displacements allowed by the project (design constraints), speed and accelerations (comfort for users), internal efforts, stress and deformations (fatigue of the material that composes the structure) (Laier, 2000). Thus, the analysis can diagnose the actual state of structure conservation (regardless of external appearance), predict its life time and determine economic solutions of recovering in order to

Two bridges were submitted to dynamic tests with mobile loads to test GPS as a tool for monitoring structures.


#### **3. Tested structures for the method analysis**

Global Navigation Satellite Systems - From Stellar to Satellite Navigation


The cable-stayed bridge can be defined as a structure composed of a main beam supported by tensioned steel cables anchored at the top of one or more towers, resulting in two or more spans. A good analogy for understanding the structural system of a cable-stayed bridge is to imagine that the arms of the human body are the deck of the bridge and the head is the tower, creating two spans of identical length (the arms), with the muscles supporting the arms (Figure 11). Take a piece of rope 1 m long, tie it to the two elbows and place its middle over the head: the rope acts as a stay cable supporting the elbows. With a second piece of rope, 1.5 m long, tie the two wrists in the same way; placing this rope over the head creates another stay cable. The pull of the two cables that support the arms (the deck) is felt by the human body on the head, that is, on the tower of the bridge. Since the cables can be used as intermediate supports for the beams, the cable-stayed concept can overcome long spans. The cables are thus the most important elements of a cable-stayed bridge: under tension, they support the weight of the beams and transfer the forces to the anchorage system and to the fixed towers, which are subjected to compression (Tang et al. (1999); http://www.pbs.org/wgbh/nova/bridge/meetcable.html).

Source: http://www.pbs.org/wgbh/nova/bridge/meetcable.html

**Figure 11.** Concept of cable-stayed bridge

#### **4. Tests on a cable-stayed wood footbridge**

In line with the objective of monitoring the dynamic behavior of the footbridge, forced vibration tests were planned with pedestrians walking over the deck. Each test lasted approximately 10 minutes, enough time to excite the several vibration modes of the footbridge.

#### **4.1. Instrumentation**

The instrumentation consisted of a pair of GPS receivers with a 20 Hz data rate and choke ring antennas, and a Kyowa DT-100 displacement transducer connected to a Vishay data acquisition system with 20 channels and a 10 Hz data rate (model 5100B Scanner). Figure 12 illustrates the layout of the instruments used on the footbridge, and Figures 13 and 14 show these instruments installed on the footbridge. The electro-mechanical oscillator was adjusted to apply a displacement with an amplitude of 12 mm and a frequency of 1.0 Hz.

Preliminary tests with pedestrians walking on the bridge were carried out to gain a rough knowledge of its dynamic behavior, monitoring the amplitude and frequency of the vertical displacement; this was conducted by two observers. One observer, using a total station and a piece of measuring tape fixed at the center of span 2, determined the average amplitude of the approximate vertical displacement caused by the pedestrians. The other observer determined, with the aid of a stopwatch, the approximate frequency of the footbridge at the same time.


**Figure 14.** Displacement transducer and data acquisition system

#### **4.2. Footbridge tests results**

The processing of the data collected with the GPS receivers and the displacement transducer is described below.

**Figure 15.** Front view of pedestrians walking on a footbridge and view of GPS antenna over oscillator

L1 GPS as Tool for Monitoring the Dynamic Behavior of Large Bridges

http://dx.doi.org/10.5772/56567

**Figure 16.** Side view of pedestrians walking over a footbridge

Figure 17 presents the data obtained with the displacement transducer during the field test carried out on the footbridge with pedestrians in motion (figures above). From the maximum and minimum recorded values, the displacement amplitude was determined as 13 mm.

Applying the Fast Fourier Transform (FFT) to these values, the peak corresponding to the periodic movements recorded by the displacement transducer can be clearly seen in the spectrum, at a frequency of 2.0 Hz (Figure 18).
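As a concrete illustration of this step, the sketch below applies numpy's FFT to a simulated transducer record (the 10 Hz acquisition rate and the 2.0 Hz, 13 mm response are the values reported above; the record itself is synthetic):

```python
import numpy as np

# Simulated transducer record: 2.0 Hz footbridge response with 13 mm
# peak-to-peak displacement, sampled at the transducer's 10 Hz rate.
fs = 10.0                                    # Hz (data acquisition rate)
t = np.arange(0, 60, 1.0 / fs)               # 60 s record
record = 6.5 * np.sin(2 * np.pi * 2.0 * t)   # mm (half of 13 mm peak-to-peak)

# FFT spectrum: the dominant peak marks the periodic movement.
spectrum = np.abs(np.fft.rfft(record))
freqs = np.fft.rfftfreq(record.size, d=1.0 / fs)
peak_hz = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
print(f"dominant frequency: {peak_hz:.1f} Hz")  # prints 2.0 Hz
```

The same peak-picking applied to the GPS phase residuals yields the frequencies discussed in the next paragraphs.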

During the tests, the measuring and reference satellites were close to 74 degrees of elevation (PRN 28) and 05 degrees (PRN 26), respectively. Figure 19 illustrates the double-difference phase residuals between these satellites. The spectral analysis of these residuals (Figure 20) allows extracting the frequency of the electro-mechanical device, 1.1 Hz, and the frequency of the footbridge under forced vibration, 2.1 Hz. In Figure 20 it can also be observed that the peak corresponding to the displacement applied by the oscillator at the antenna is clearly identified, which allows the footbridge response amplitude to be estimated by proportion, as shown in Equation (6).
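A minimal sketch of how double-difference phase residuals of this kind are formed between two receivers and two satellites; the observation arrays are synthetic and the function name is hypothetical, not taken from the processing software used in the tests:

```python
import numpy as np

def double_difference(phi_rov_j, phi_base_j, phi_rov_k, phi_base_k):
    """Double-difference of carrier-phase observations (in meters):
    differenced between receivers (rover on the deck, static base) and
    then between satellites (measuring satellite j, reference satellite k).
    Receiver clock errors cancel in the between-receiver differences;
    satellite clock errors cancel in the between-satellite difference."""
    single_j = phi_rov_j - phi_base_j   # between-receiver difference, sat j
    single_k = phi_rov_k - phi_base_k   # between-receiver difference, sat k
    return single_j - single_k

# Toy epochs: both receivers share a common error term; the rover phase of
# the measuring satellite also carries a 2 Hz deck oscillation (meters).
fs = 20.0                                   # GPS data rate used in the tests
t = np.arange(0, 10, 1.0 / fs)
common = 0.5 * np.sin(0.1 * t)              # common-mode error, cancels in DD
deck = 0.013 * np.sin(2 * np.pi * 2.0 * t)  # 13 mm deck oscillation
dd = double_difference(deck + common, common, common, common)
# dd now recovers the deck oscillation alone
```

Choosing a high-elevation measuring satellite and a low-elevation reference satellite, as in the tests, makes the vertical motion project strongly into the residuals.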


$$\frac{8.3 \text{ cm}}{12 \text{ mm}} = \frac{9.0 \text{ cm}}{x} \;\Rightarrow\; x \approx 13 \text{ mm} \tag{6}$$
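Equation (6) is a rule of three: the oscillator's spectral peak (8.3 cm) corresponds to its known 12 mm displacement, so the footbridge peak (9.0 cm) converts to millimeters by the same ratio. A sketch (the reading of the two peak magnitudes as oscillator and footbridge peaks is an assumption from context):

```python
def amplitude_from_calibration(peak_known, known_mm, peak_unknown):
    """Estimate an unknown displacement amplitude from FFT peak magnitudes,
    using the oscillator peak as a calibration reference (Eq. 6):
    peak_known / known_mm = peak_unknown / unknown_mm."""
    return known_mm * peak_unknown / peak_known

# Oscillator peak 8.3 (spectrum units) <-> known 12 mm displacement;
# footbridge peak 9.0 then scales to about 13 mm.
print(round(amplitude_from_calibration(8.3, 12.0, 9.0)))  # prints 13
```

This is the sense in which the oscillator acts as a calibrator, as noted in the conclusions.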

The comparison of the values obtained with the GPS and with the transducer can be summarized in the table below.

| Equipment | Frequency of footbridge response to pedestrians walking (Hz) | Vertical displacement amplitude (mm) |
|---|---|---|
| GPS – 20 Hz, choke ring antenna | 2.1 | 13.0 |
| Displacement transducer – 100 mm | 2.0 | 13.0 |

**Table 1.** Values of natural frequency and amplitude of displacement obtained with the GPS and the transducer


Two other tests, each lasting 12 minutes, were performed with the objective of detecting the natural frequency of the footbridge and the harmonic frequencies of other vibration modes.

During these two tests, the reference satellite (PRN 29) was at 16 degrees of elevation, the measuring satellite (PRN 28) was at 78 degrees, and the footbridge was not instrumented with the displacement transducer. The spectral analysis showed two more peaks besides the peak due to the oscillations applied by the electro-mechanical device at the antenna, at 1.1 Hz, and the peak due to the walking action, at 2.0 Hz, illustrated in Figure 20. Figures 21 and 22 illustrate, besides these two peaks, a peak at 3.1 Hz, corresponding to the natural vertical frequency; for the same mode, Pletz (2003) presents a value of 3.2 Hz obtained by Finite Element analysis. A fourth peak, at 4.1 Hz, corresponds to the first harmonic.
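A sketch of how several spectral peaks such as these can be located by simple local-maximum picking; the record below is synthetic, just superimposing sinusoids at the four reported frequencies:

```python
import numpy as np

# Simulated GPS residual record containing the four components reported
# for the footbridge tests: oscillator (1.1 Hz), walking (2.0 Hz),
# natural vertical frequency (3.1 Hz) and first harmonic (4.1 Hz).
fs = 20.0                         # GPS data rate, Hz
t = np.arange(0, 120, 1.0 / fs)   # 2-minute record
record = sum(a * np.sin(2 * np.pi * f * t)
             for a, f in [(5, 1.1), (9, 2.0), (4, 3.1), (2, 4.1)])

spectrum = np.abs(np.fft.rfft(record))
freqs = np.fft.rfftfreq(record.size, d=1.0 / fs)

# Keep local maxima that rise above a simple relative threshold.
is_peak = (spectrum[1:-1] > spectrum[:-2]) & (spectrum[1:-1] > spectrum[2:])
strong = spectrum[1:-1] > 0.1 * spectrum.max()
peaks_hz = freqs[1:-1][is_peak & strong]
print(np.round(peaks_hz, 1))  # the four injected frequencies
</imports>```

Real residual spectra are noisier than this toy example, so in practice the threshold (or a dedicated peak finder) has to be tuned to reject noise bins.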


**Figure 19.** Raw phase residuals of the dynamic vertical displacement response


**Figure 20.** Power spectrum of the raw residuals with peaks due to vertical dynamic displacements applied on the GPS antenna


**Figure 21.** Frequency detected by GPS data – first test

**Figure 22.** Frequency detected by GPS data – second test

#### **5. Tests on the Hawkshaw cable-stayed bridge, Canada**

With the objective of monitoring the dynamic behavior of the Hawkshaw Bridge, forced vibration tests with design trucks on the deck were planned and carried out during the author's doctoral internship at the University of New Brunswick, Canada, in October 2003. The trial was supported by the Department of Geodesy and Geomatics Engineering of the University of New Brunswick, the Geodetic Research Laboratory, the Canadian Centre for Geodetic Engineering and the New Brunswick Department of Transportation (NBDT).


**Figure 24.** General view of the restricted walkway for installing equipment on the central span. On the left, Ronald H. Joyce (*Senior Technical Advisor – Maintenance and Traffic*); on the right, Neil Hill (*Bridge Superintendent*)

**Figure 25.** Reference stations used for monitoring the bridge


Figures 29 and 30 illustrate some examples of the design trucks that were used as mobile load during the dynamic tests on the bridge deck.


**Figure 29.** Design truck

**Figure 30.** Design truck

#### **5.2. Results of the tests on the Hawkshaw Bridge**

#### *5.2.1. Monitoring of vertical displacement of the deck*

During one of the several tests performed on the bridge, the measuring and reference satellites were close to 81 degrees of elevation (PRN 02) and 09 degrees (PRN 31), respectively. Figure 31 illustrates the double-difference phase residuals of all satellites in relation to PRN 31, since the goal at that time was to check the vertical dynamic behavior of the central span while four design trucks crossed the bridge. The crossing took nearly 75 seconds. In Figure 32, the vertical displacement of the instrumented middle-span section, which reached an amplitude of 8 cm, can be clearly seen. Afterwards, four other design trucks were asked to stop in the middle of the central span so that measurements could be taken with the total station, yielding a mean value of 8.3 cm.

**Figure 31.** Raw residuals of vertical displacement of central span – 4 design trucks

**Figure 32.** Raw residuals describing the vertical displacement of central span

#### *5.2.2. Monitoring of lateral displacement of the deck*

During a second test, the reference and measuring satellites were close to 80 degrees of elevation (PRN 02) and 06 degrees of elevation (PRN 31), respectively. Figure 33 illustrates the double-difference phase residuals of all satellites in relation to PRN 02, since this time the goal was to observe the lateral dynamic behavior of the central span. In Figure 33, the lateral displacement of the instrumented section in the central span, crossed by a 60-ton design truck, can be clearly seen. The crossing took nearly 45 s.

**Figure 33.** Raw residuals of lateral dynamic displacements of deck

Figure 34 illustrates only the residuals of the lowest satellite (PRN 31) for better visualization of the lateral dynamic displacement caused by the 60-ton mobile load. The spectral analysis of these residuals (Figure 35) allows extracting the lateral vibration frequency of the deck under the vibration caused by the mobile load: 0.60 Hz. The dynamic displacement amplitude showed an average value of 3.5 cm as the truck entered the bridge, reached the middle of the deck and began to exit it.
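The peak-to-peak amplitudes quoted above can be read directly off the residual series. A sketch with a synthetic deflection record at the 0.2 s epoch interval of the Hawkshaw test (the half-sine shape and values are illustrative only):

```python
import numpy as np

def peak_to_peak_cm(residuals_mm: np.ndarray) -> float:
    """Peak-to-peak dynamic displacement, in cm, from raw residuals in mm."""
    return (residuals_mm.max() - residuals_mm.min()) / 10.0

# Toy residual series: a slow downward deflection reaching about 8 cm
# while the trucks cross the central span (~75 s at 0.2 s epochs).
epoch = 0.2                                   # s per epoch
t = np.arange(0, 75, epoch)
residuals = -80.0 * np.sin(np.pi * t / 75.0)  # mm, maximum sag mid-crossing
print(peak_to_peak_cm(residuals))             # close to 8.0 cm
```

The epoch axis of Figures 31-34 ("Epochs (0.2 s)") converts to seconds the same way: epoch index times 0.2.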


#### **5.3. Team that collaborated in performing the tests**

The figures below present the people who helped to perform the tests. The traffic controllers from NBDT appear in Figure 36, and Figure 37 shows the technicians who collaborated in performing the tests: Howard Biggar (Geomatics Technologist – GGE – UNB), E. Daniel Wheaton (Chief Technician, Civil Engineering – UNB) and Jason Donald Bond (PhD Candidate – GGE – UNB).


and the identification of modal frequencies of bridges under the controlled traffic action or

L1 GPS as Tool for Monitoring the Dynamic Behavior of Large Bridges

http://dx.doi.org/10.5772/56567

141


**Figure 37.** From left to right: Daniel, Howard, Jason and Ana Paula (author)

#### **6. Conclusions**

Based on the studies, field experiments, and analysis of results presented in this research, it can be concluded that the investigated GPS data-collection method is well established because of its capability and efficiency. It therefore earns GPS the status of an instrument for monitoring and characterizing the dynamic behavior of structures.

The results obtained with GPS were compared with those of a displacement transducer in vibration tests conducted on a wooden cable-stayed footbridge, to confirm the reliability of GPS for characterizing the dynamic behavior of structures; they agreed satisfactorily with the values given by finite-element theory and with the theoretical values of the CEB (Comité Euro-International du Béton, Bulletin d'Information no. 209). The results therefore proved the efficiency and capability of the collection method and of the GPS data analysis for obtaining frequency values and dynamic displacement amplitudes, showing that the limitation imposed by the need for a particular satellite geometric configuration did not, in this case, prejudice the planning and performance of the tests. Since the method does not require a good geometric distribution of satellites - only two satellites - it allows reliable results on the dynamic behavior of a structure to be obtained at any latitude on the globe. The use of an electromechanical oscillator as a calibrator proved reliable: since the oscillator can produce known oscillations, it also provides calibrated values of frequencies and amplitudes for unknown displacements.
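The frequency values mentioned above come from Fourier analysis of the GPS displacement series (cf. Brigham [1]). The chapter does not list the authors' processing code, so the sketch below is only an illustration of the idea: it scans a DFT magnitude spectrum of a synthetic 0.60 Hz oscillation sampled at the 0.2 s GPS epoch rate. All function and variable names are ours, and a real pipeline would use an FFT rather than the direct sum shown here.

```python
import cmath
import math

def dominant_frequency(displacement, epoch_s=0.2):
    """Return the dominant frequency (Hz) of a displacement series sampled
    at a fixed epoch interval, by scanning the magnitude of a direct DFT.
    (Real processing would use an FFT; the direct sum keeps the sketch short.)"""
    n = len(displacement)
    mean = sum(displacement) / n
    x = [v - mean for v in displacement]          # remove the static offset
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2 + 1):                # positive-frequency bins only
        coeff = sum(x[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k / (n * epoch_s)                 # bin index -> frequency in Hz

# Synthetic lateral oscillation: 2 cm amplitude at 0.60 Hz, 60 s of 0.2 s epochs
series = [0.02 * math.sin(2 * math.pi * 0.60 * 0.2 * j) for j in range(300)]
```

With 300 epochs at 0.2 s the frequency resolution is 1/60 Hz, so a 0.60 Hz deck oscillation such as the one reported for the lateral motion falls exactly on a spectral bin.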

The results of the second test, which applied the method of this research to a large man-made road structure, the Hawkshaw cable-stayed bridge, showed that it is fully possible to use GPS to characterize the dynamic behavior of this type of structure.

Given the above, it was concluded that GPS, with the data-collection method employed, allows the graphical description of the dynamic displacement amplitude of the middle-span deck and the identification of the modal frequencies of bridges under controlled traffic action or otherwise, and may be used by engineers as a tool for monitoring structures.

#### **Acknowledgements**

The authors express their gratitude to: Dr. Richard B. Langley (coordinator of the Geodetic Research Laboratory, Dept. of Geodesy and Geomatics Engineering, University of New Brunswick, Canada) for providing several inputs and ideas for the trials on the bridge; the Canadian Centre for Geodetic Engineering (CCGE) for loaning NovAtel receivers; the New Brunswick Department of Transportation (NBDT) for its help and for permission to carry out trials on the bridge; Ronald H. Joyce (Senior Technical Advisor, NBDT); Neil Hill (Bridge Superintendent of Hawkshaw Bridge, NBDT); Howard Biggar (Geomatics Technologist, UNB); E. Daniel Wheaton (Chief Technician, Civil Engineering, UNB); Jason Donald Bond (PhD candidate, Department of Geodesy and Geomatics Engineering, UNB); and Université Laval's Centre for Research in Geomatics for kindly supplying the GPS data collected from sessions conducted at the Pierre-Laporte Bridge and the coordinate values obtained by the Modified GPS-OTF Algorithm, as well as Dr. Boussaad Akrour for providing additional material and information about the bridge.

#### **Author details**



Ana Paula Camargo Larocca1\*, Ricardo Ernesto Schaal2 and Marcelo Carvalho dos Santos3

\*Address all correspondence to: larocca.ana@usp.br

1 Department of Transportation Engineering, Polytechnic School, University of Sao Paulo, Brazil

2 Department of Transportation, Sao Carlos Engineering School, University of Sao Paulo, Brazil

3 Department of Geodesy and Geomatics Engineering, University of New Brunswick, Canada

#### **References**

[1] Brigham, E. O. (1974). *The Fast Fourier Transform.* 1st ed. New Jersey: Prentice-Hall, Inc. Chap. 1, 1-8.

[2] Cosser, E., Roberts, G. W., Xiaoling, M., & Dodson, A. H. (2003). *The Comparison of Single Frequency and Dual Frequency GPS for Bridge Deflection and Vibration Monitoring.* Proceedings, 11th FIG Symposium on Deformation Measurements, Santorini, Greece.

[3] Holmes, J. B. (1998). http://www.cbu.edu/~jholmes/252Light21n/sld040.htm.

[4] Ko, J. M., Ni, Y. Q., Wang, J. Y., Sun, Z. G., & Zhou, X. T. (2000). *Studies of vibration-based damage detection of three cable-supported bridges in Hong Kong.* Proceedings of the International Conference on Engineering and Technological Sciences, Session 5: Civil Engineering in the 21st Century, J. Song and G. Zhou (eds.), Science Press, Beijing, China, 105-112, 2000.

[5] Langley, R. B. (1997). GPS Receiver System Noise. *GPS World,* Innovation column, June 1997.

[6] Larocca, A. P. C. (2004a). *O Uso do GPS como Instrumento de Controle de Deslocamentos Dinâmicos de Obras Civis - Aplicação na Área de Transportes.* São Carlos, 2004. 203p. Tese (Doutorado), Escola de Engenharia de São Carlos, Universidade de São Paulo.

[7] Larocca, A. P. C., & Schaal, R. E. (2002). Degradation in the Detection of Millimetric Dynamic Movements due to Metallic Objects close to the GPS Antenna. In: COBRAC 2002, Florianópolis, 2002. CD-ROM. Santa Catarina, Brasil, Artigo n. 20.

[8] Larocca, A. P. C. (2004b). *Using High-rate GPS Data to Monitor the Dynamic Behavior of a Cable-stayed Bridge.* ION GNSS, 17th International Technical Meeting of the Satellite Division of the Institute of Navigation, September 21-24, 2004, Long Beach, CA, USA.

[9] Leick, A. (2004). *GPS Satellite Surveying.* New York: John Wiley, 2004.

[10] Ogaja, C., Rizos, C., & Wang, J. (2001). *A Dynamic GPS System for On-Line Structural Monitoring.* International Symposium on Kinematic Systems in Geodesy, Geomatics & Navigation (KIS 2001), Banff, Canada, June 2001, 5-8.

[11] Pletz, E. (2003). Cable-stayed footbridge with stress-laminated timber deck composed of curved modules. São Carlos, 2003. Tese de Doutorado, Escola de Engenharia de São Carlos, Universidade de São Paulo.

[12] Pletz, E. (2003). Passarela Estaiada com Tabuleiro de Madeira Laminada Protendida em Módulos Curvos. São Carlos, 2003. 164p. Tese (Doutorado), Escola de Engenharia de São Carlos, Universidade de São Paulo.

[13] Podolny, J. R. W., & Scalzi, J. B. (1976). *Construction and Design of Cable-Stayed Bridges.* New York: John Wiley & Sons. Chap. 1 and Chap. 12, 1-23, 425-453.

[14] Pretlove, A. J., & Rainer, H. (1991). *Vibrations Induced By People.* Bulletin d'Information. In: CEB, 1991. Lausanne. Chap. 1, (209), 1-10.

[15] Pretlove, A. J., et al. (1991). *Vibration Problems in Structures.* Bulletin d'Information. In: CEB, 1991. Lausanne. Appendix G, (209), 199-202.

[16] Pretlove, A. J., et al. (1991). *Vibration Problems in Structures.* Bulletin d'Information. In: CEB, *Vibrations Induced By People.* Lausanne. Chap. 1, (209-Vibration), 1-10.

[17] Radovanovic, R. S., & Teskey, W. F. (2001). *Dynamic Monitoring of Deforming Structures: GPS versus Robotic Tacheometry Systems.* The 10th International Symposium on Deformation Measurements (FIG), Orange, California, USA, March 19-22.

[18] Roberts, G. W., Meng, X., & Dodson, A. H. (2001). *Data Processing and Multipath Mitigation for GPS/Accelerometer Based Hybrid Structural Deflection Monitoring System.* ION GPS, 14th International Technical Meeting of the Satellite Division of the Institute of Navigation, 11-14 September 2001, Salt Lake City, USA.

[19] Roberts, G. W., Meng, X., & Dodson, A. H. (2002). *Using Adaptive Filtering to Detect Multipath and Cycle Slips in GPS/Accelerometer Bridge Deflection Monitoring Data.* FIG XXII International Congress, Washington, D.C., USA, April 19-26.

[20] Schaal, R. E., & Larocca, A. P. C. (2001). *A Methodology to Use the GPS for Monitoring Vertical Dynamic Subcentimeter Displacement.* Digital Earth, Fredericton, New Brunswick, Canada, 28 June 2001.

[21] Schaal, R. E., & Larocca, A. P. C. (2001). *A Methodology to Use the GPS for Monitoring Vertical Dynamic Subcentimeter Displacement.* Digital Earth, Fredericton, New Brunswick, Canada, 28 June 2001.

[22] Schaal, R. E., & Larocca, A. P. C. (2002). A Methodology for Monitoring Vertical Dynamic Sub-Centimeter Displacements with GPS. *GPS Solutions,* 5(3), Winter, 15-18.

[23] Schaal, R. E, & Larocca, A. P. C. (2002). A Methodology for Monitoring Vertical Dynamic Sub-Centimeter Displacements with GPS. *GPS Solutions,* 5(3), Winter, 15-18.

[24] Nakamura, S., P.E. (2000). GPS Measurement of Wind-Induced Suspension Bridge Girder Displacements. *Journal of Structural Engineering,* 126.

[25] Sumitro, S. (2001). *Long Span Bridge Health Monitoring System in Japan.* Presented at a Workshop Sponsored by the National Science Foundation on Health Monitoring of Long Span Bridges, University of California, Irvine Campus, March 2001. http://www.krcnet.co.jp/papers/pdf/International/UCI2001\_sumitoro.PDF, 9-10.

[26] Tang, M. (1999). Suspension Bridges. In: Wai-Fah, C., et al. *Bridge Engineering Handbook.* CRC Press LLC. Chap. 19, 19-11.

[27] Tranquilla, J. M. (1986). Multipath and Imaging Problems in GPS Receiver Antennas. Presented at the Symposium on Antenna Technology and Applied Electromagnetics.

[28] Troitsky, M. S. (1977). *Cable-stayed Bridges, Theory and Design.* William Clowes & Sons, Limited. Chap. 1, 1-10.

[29] Tsakiri, M., Lekidis, V., Stewart, M., & Karabelas, J. (2003). *Testing Procedures for the Monitoring of Seismic Induced Vibrations on a Cable-Stayed Highway Bridge.* Proceedings, 11th FIG Symposium on Deformation Measurements, Santorini, Greece, 2003.

**Chapter 6**

## **Intelligent Traffic System: Implementing Road Networks with Time-Weighted Graphs**

Hatem F. Halaoui

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/55654

© 2013 Halaoui; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


#### **1. Introduction**

This section introduces the chapter's main subject. First, spatial, temporal, and spatio-temporal databases are presented as the main databases used in all Geographical Information Systems (GISs), including driving-direction systems. Second, a brief introduction to GIS is presented. Finally, a brief overview of existing driving-path applications is included, as these are applications similar to the work proposed here.

#### **1.1. Spatial databases**

As most kinds of applications need databases, Geographical Information Systems use what are called spatial databases. Spatial (from space) databases are databases used to store information about geography, such as geometries, positions, and coordinates. They also include spatial operations to be applied to such kinds of data, like distance, area, perimeter, direction, and overlap of geometries.
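As one concrete example of such a spatial operation, the distance between two coordinate pairs can be computed with the haversine (great-circle) formula. This small sketch is our own illustration, not taken from the chapter:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two points given in degrees."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Roughly the New York, NY -> Jersey City, NJ hop used as a query example later
d = haversine_km(40.7128, -74.0060, 40.7178, -74.0431)
```

A spatial database engine evaluates queries like "nearest gas station" by applying exactly this kind of distance operator over stored coordinates, usually accelerated by a spatial index.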

Who might use spatial databases? The following do use spatial databases:

**•** Mobile phone users: where is the nearest gas station?

**•** Army field commander: enemy movement?

**•** Insurance risk manager: houses to be affected?

**•** Medical doctor: same treatments in some area?

**•** Sports: what seats have good view?

**•** Transport specialist

**•** Emergency services: locate calls.

#### **1.2. Temporal and spatio-temporal databases**

A database contains information and transactions that need to be stored, manipulated, and retrieved. A number of computer applications that use databases deal only with the most recent or current data. Such databases are called snapshot databases: they represent the data at the current time, with recent values. On the other hand, in some applications users might need to know not just the current information but past or future information as well. Databases dealing with past, present, and future data are called temporal databases or historical databases [5]. Such databases support applications such as managerial information and geographical information. An example is a bank account, where the customer might need to know old balances or old transactions; in such a case, users need a history of the information. These databases possess a temporal dimension to store and manipulate time-varying data. In temporal databases, time is involved in two common ways [5]:

**•** Time stamp: one attribute is used to save the time of the validity

**•** Time interval: two attributes are used to indicate the interval of validity (start time and end time).
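The time-interval scheme can be made concrete with a toy valid-time table, following the bank-account example above. The record layout and function names below are illustrative only, not a schema from the chapter:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class BalanceRecord:
    value: float
    start: int           # time-interval scheme: start of validity
    end: Optional[int]   # end of validity; None means "still current"

def balance_as_of(history: List[BalanceRecord], t: int) -> Optional[float]:
    """Return the balance valid at time t, or None if no record covers t."""
    for rec in history:
        if rec.start <= t and (rec.end is None or t < rec.end):
            return rec.value
    return None

# A snapshot database would keep only the latest value (250.0);
# the temporal history keeps every past balance as well.
history = [BalanceRecord(100.0, 0, 50), BalanceRecord(250.0, 50, None)]
```

Querying `balance_as_of(history, 10)` recovers the old balance even after it has been superseded, which is precisely what a snapshot database cannot do.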



It is obvious that most spatial databases are also changing with time. For example, road networks are constantly changing, areas of land on maps may change, a moving plane changes its position over time, and so on. In such cases we have spatial or geographical data that is changing with time; the spatial database then becomes temporal as well and is called a spatio-temporal database. Such databases are categorized in three main categories:

**•** Discretely changing spatio-temporal databases: like the changes of the geometry of land parcels, where a change could occur once every year or more.

**•** Continuously changing spatio-temporal databases: these mostly deal with moving objects like planes or cars, where the position of the object is changing continuously with time.

**•** A third category that is a combination of the above.


In reality, most spatial databases change with time and hence most of them are spatio-temporal databases.

#### **1.3. Geographical information systems**

A Geographic Information System (GIS) is a collection of computer hardware and software for capturing, managing, analyzing, and displaying all forms of geographical information [5].

Geographical Information Systems are involved in most aspects of life and business. All GISs use spatial databases as their data warehouse, manipulated and presented through a user interface. Later in this chapter, driving-direction example queries are given as examples of GIS applications.

#### **1.4. Driving path and GNSS as GIS applications**

A Global Navigation Satellite System (GNSS) is a satellite navigation system that uses satellites to provide autonomous geo-spatial positioning with global coverage. It allows small (especially mobile) devices to determine their location (longitude, latitude, and altitude) to within a few meters using time signals transmitted along a line of sight by radio waves from satellites. These devices calculate the precise time as well as position, which can be used as a reference for scientific experiments.


Finding the driving path is one of the most frequent queries in GIS applications. Many factors influence the criteria for finding the driving path; the following are the most important:

**•** Distance: What is the distance between the origin and destination?

**•** Road traffic: how much traffic is on the road?

**•** Road situation: is the road closed?


The chapter is organized as follows: this section introduces the subject in general; Section 2 presents some related and previous work, including widely used applications; Section 3 presents our intelligent traffic system as the main solution to the problem at hand; and finally Section 4 discusses some conclusions.

#### **2. Related and previous work**

An overview of related practical and theoretical work is presented in this section. Example queries are also illustrated. Moreover, the section briefly presents the existing artificial intelligence in such applications. Finally, A\*Traffic is presented as the main algorithm used in this chapter, where a proposed time-weighted graph is used as the main data structure to which A\*Traffic is applied.
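The chapter does not spell out a concrete representation for the time-weighted graph, so the following is only one plausible sketch of the idea: each directed edge carries a travel-time function of the departure time, which lets the same road cost more during rush hour. All names and the congestion model are our own illustration:

```python
from typing import Callable, Dict

# Edge weight = travel time in seconds as a function of the departure time
# (seconds since midnight), so congestion can vary with the hour of day.
TimeWeight = Callable[[float], float]
TimeWeightedGraph = Dict[str, Dict[str, TimeWeight]]

def rush_hour(base_s: float, peak_factor: float = 2.0) -> TimeWeight:
    """Illustrative model: travel time doubles during 7-9 am and 4-6 pm."""
    def weight(depart_s: float) -> float:
        hour = (depart_s / 3600.0) % 24
        in_peak = 7 <= hour < 9 or 16 <= hour < 18
        return base_s * (peak_factor if in_peak else 1.0)
    return weight

roads: TimeWeightedGraph = {
    "A": {"B": rush_hour(300.0), "C": rush_hour(600.0)},
    "B": {"C": rush_hour(200.0)},
    "C": {},
}
```

A search algorithm over such a graph evaluates each edge weight at the time it would actually be traversed, so the best route at 8 am can differ from the best route at noon.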

#### **2.1. Driving direction applications: Google earth as an example**

This sub-section presents one of the most widely used applications for finding driving directions: Google Earth. Note that the application is not used for driving directions only; it also offers other GIS services, which are outside the scope of this chapter.

#### *2.1.1. Google earth*

Google Earth [9, 10] is a geographical system that offers satellite images of locations along with spatial information such as coordinates and elevation. It contains about 70 TB of data [10]. It provides three main kinds of data: raster data, spatial and non-spatial data, and video. Moreover, Google Earth offers a set of functionalities, an important subset of which is:

**•** Answers location queries: the user gives a location (New York, USA) as input and gets a geographical image as an output; the image can be explored in detail.

**•** Shows directions: the user gives an origin and destination as input and gets a map as output showing the directions with driving hints on the map.

**•** Displays spatial information: Google Earth shows spatial information like coordinates, elevations, altitude, and others.

**•** Has learning abilities: Google Earth saves recently and regularly visited locations and queries so that the user avoids delays the next time these locations or queries are asked.

**•** Includes pre-known locations: Google Earth offers a list of the most used locations, like cities, businesses, public places, government offices, schools, etc.

**•** Provides user interaction: the user is able to put place marks on the map so he/she can avoid delays the next time the same place is visited.

**•** Provides a prepaid online service that supplies the customer with live video (with restrictions and delay due to security) of any place in the world.

**•** Other products like Google Earth PRO: a paid service that makes it very easy to research locations and present discoveries. In just a few clicks, the user can import site plans, property lists, or client sites and share the view with his/her client or colleague. Moreover, the user can export high-quality images to documents or the web.

In addition to introducing Google Earth, this section presents a driving-directions query as an example of the driving-direction services that Google Earth offers, which is directly related to the work in this chapter.


#### *2.1.2. Driving-direction query by Google Earth: From "New York, NY" to "Jersey City, NJ"*

Figure 1 shows the driving directions with a map image from New York, NY to Jersey City, NJ. The figure shows the input (origin and destination) and the outputs on the map (roads and driving directions). In such queries, Google Earth provides driving tips to be followed when driving from the origin to the destination given in the query. The user can be more specific by passing a full address (building, street, city, state, and zip, assuming U.S.A. is the country).

**Figure 1.** Road and driving directions between New York, NY and Jersey City, NJ

#### **2.2. Artificial intelligence and driving directions**

Artificial intelligence is used in graph-searching algorithms. Russell and Norvig [4] present several intelligent graph-searching algorithms. Here are two important ones:

**•** Greedy best-first search

**•** A\* search

The main idea behind these algorithms is that they do not try all possible cases to give an answer. The algorithms use a heuristic function to rule out some of the paths. This saves a huge amount of time but does not guarantee a best path. However, finding a good heuristic function could guarantee finding the best path up to 95% of the time. This section includes the A\* search algorithm to be used in the solution approach presented in this chapter.


#### **2.3. A\* traffic: Design, algorithms and implementation for a driving direction system**

This section presents the algorithms and design of the intelligent driving-path application used in our previous work, which is extended in this chapter. Examples of executions, demonstrated using our testing tool, are presented.

#### *2.3.1. A\*: an artificial intelligence algorithm for finding driving directions*

A\* [2, 4] is an artificial-intelligence graph-search algorithm analyzed in depth by Pearl. The main goal of A\* is to find a cheap-cost graph path between two nodes in a graph using a heuristic function. The main goal of the heuristic function is to narrow the selection list at each step according to a logical and applicable criterion. In the graph example, finding the shortest path from one node to another exactly requires generating all possible paths and choosing the best. This process is very expensive and time-consuming in the presence of a huge number of nodes. On the other hand, using an evaluation function (heuristic function) to narrow our choices according to an intelligent and practical criterion is much faster, especially for applications requiring quick output, such as the driving direction application.

The heuristic function is not a constant static function. It is defined according to the problem at hand and passed to A\* as a parameter.

In the case of A\* search for a direction path, the heuristic function F is built up from two main factors:

H = Straight Line distance to destination (distance between two coordinates).

G = Distance Traveled so far.

F = H + G.

At each node n, F(n) is computed and the next step is chosen accordingly (the node with the least value of F is chosen).

#### *2.3.2. The A\* algorithm*

A\*(Origin, Destination, F)

**1.** Define a list L that includes all visited nodes n_i with their values of F(n_i)

**2.** Define the stack S that includes nodes n_i with their values F(n_i)

**3.** Start at origin (origin is the starting point)

**4.** Mark the origin as visited

**5.** Push origin in the stack S

**6.** Add origin and F(origin) to L

**7.** Get the top element TE of the stack S

**8.** For each unmarked neighbor UN_i of TE, add UN_i and F(UN_i) to L

**9.** From L choose N, the node with the least F(N), then pop all elements in S until the predecessor of N appears on the top

**10.** Push N in the stack S and mark N as visited

**11.** Go back to step 7 until the destination node appears or no more unmarked nodes exist

**12.** If no more unmarked nodes exist, return "No Solution"; otherwise return the stack content as a solution

Note that the A\* algorithm is a polynomial-time algorithm, with time complexity in O(n²) in the worst case and O(n·log n) in the average and best cases.

Figure 2 is an example of the A\* algorithm's behavior in finding a path from "Arad" to "Bucharest" in Romania [4]. First, we start at Arad and go to the neighbor with the best heuristic function value (Sibiu). Second, we explore all neighbors of Sibiu for the best heuristic function value (the evaluation of the function is shown). The algorithm continues to choose the best next step (the one with the least value of the heuristic function) until it reaches Bucharest.

**Figure 2.** A\* algorithm behavior in finding a path from "Arad" to "Bucharest"
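The selection rule above (always continue from the node with the least F = G + H) is commonly implemented with a priority queue rather than an explicit stack. Below is a minimal Python sketch of that formulation, with illustrative node names and a straight-line-distance heuristic; it is an equivalent rendering of the idea, not the chapter's exact implementation.

```python
import heapq
import math

def a_star(graph, coords, origin, destination):
    """A* search over a directed graph.

    graph:  node -> {neighbor: edge_cost}
    coords: node -> (x, y), used for the straight-line heuristic H.
    Returns (path, cost), or (None, inf) when no path exists.
    """
    def h(node):
        # H = straight-line distance from node to the destination
        (x1, y1), (x2, y2) = coords[node], coords[destination]
        return math.hypot(x1 - x2, y1 - y2)

    # Each queue entry is (F, G, node, path-so-far), with F = G + H
    open_list = [(h(origin), 0.0, origin, [origin])]
    best_g = {origin: 0.0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == destination:
            return path, g
        for neighbor, cost in graph[node].items():
            new_g = g + cost  # G = distance traveled so far
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                heapq.heappush(
                    open_list,
                    (new_g + h(neighbor), new_g, neighbor, path + [neighbor]),
                )
    return None, float("inf")  # no solution
```

With an admissible straight-line heuristic this variant returns an optimal path; the stack-based pseudocode above trades that guarantee for speed, as the chapter notes.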

*2.3.3. A\*traffic: A variation of A\* with road traffic as a factor*

A\*Traffic could be seen as a variation of A\* with the ability to take traffic into consideration when computing the driving direction solution. The main job is done in the heuristic function, where a new factor is used to choose the next step. The new factor is the average traffic value (obtained online from real-time databases), represented in the form time/distance (example: 3 min/km).

The new Heuristic function will be:

F = H + G + T

Where:

H = Straight Line distance to destination (distance between two coordinates).

G = Distance Traveled so far.

T = Average Traffic delay.
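As a rough sketch of this evaluation function (names are illustrative, and it assumes all three factors have been converted to comparable units such as minutes, a detail the chapter leaves open):

```python
def traffic_delay(rate_min_per_km, distance_km):
    # T: average traffic delay for a road segment, from the live
    # traffic value (e.g. 3 min/km) and the segment length in km.
    return rate_min_per_km * distance_km

def f_traffic(h, g, t):
    # A*Traffic evaluation function: F = H + G + T.
    # Assumption: H and G are expressed in minutes as well.
    return h + g + t
```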

#### *2.3.4. Testing tool: Query example*

This sub-section presents the layout of the testing tool developed to test the proposed algorithm, "A\*Traffic". For this purpose, an example query is presented.

#### *2.3.5. Query example: From "HU, Kantari St, Hamra" to "AUB, Bliss St, Hamra" (Beirut, Lebanon)*

This example demonstrates the main feature of the software. It provides driving directions between "HU, Kantari St, Hamra" (Haigazian University, Beirut, Lebanon) and "AUB, Bliss St, Hamra" (American University of Beirut). In order to find the driving directions, the user provides a start address and a destination address and clicks on the "Go!" button. Once the button is clicked, the software generates a path (in blue) between the start and the destination addresses. The blue path generated is a short path (using A\*Traffic) to follow in order to drive from the start address to the destination address. Figure 3 illustrates this example.

**Figure 3.** Path from "HU, Kantari St, Hamra" to "AUB, Bliss St, Hamra"

#### **3. Road networks with time-weighted graphs**

This section presents our approach to the intelligent traffic system. Our main idea is to construct a time graph: a graph representing the map, with edges weighted by numbers (minutes) representing the estimated time needed to drive the edge (a road or part of it). The section starts by presenting the "Time-weighted Graph", shows a possible example of the graph, and gives an execution example applying the dynamic A\*Traffic algorithm proposed in our previous work.

#### **3.1. Time-weighted graph**

This section includes our graph proposal using time-weight computations, with possible examples and executions.

#### *3.1.1. Time weights*

Graphs are usually weighted with distances. In this chapter, time is used as the weight of the graph edges. The main issue is: how do we compute the graph edge weight in terms of time (minutes)?

To answer this question, we make the following assumptions:

**•** Each edge in the graph represents a road, street, highway, etc., or part of it

**•** Each of these (roads, streets, ...) has a maximum speed limit that is stored in the database

As a result:

The initial weight of the edge (minutes) = (edge distance (miles) ÷ speed (miles/hour)) × 60

#### *3.1.2. Example: Part of Manhattan in a time-weighted graph*

To clearly present the suggested idea, an image demonstration will be presented to show how a graph is built and weights are assigned. The following series of images show some part of Manhattan (New York, USA) in map, the process of creating vertices and edges, assigning weights for edges (using stored data) and finally applying the dynamic A\* algorithm on such example. Figure 4 shows the map of Manhattan (from Google maps) and the part where the test example was done.
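Under the assumptions of section 3.1.1, the construction of a time-weighted directed graph can be sketched as follows. The segment data and node names below are illustrative placeholders, not the actual Manhattan test data:

```python
def edge_weight_minutes(distance_miles, speed_limit_mph):
    # Initial weight (minutes) = (edge distance / speed limit) * 60
    return distance_miles / speed_limit_mph * 60.0

# Hypothetical road segments: (from_node, to_node, miles, speed limit mph)
segments = [
    ("v1", "v2", 0.22, 30),
    ("v2", "v3", 0.50, 30),
    ("v1", "v3", 0.90, 45),
]

# Directed adjacency list weighted by estimated driving time in minutes
graph = {}
for a, b, dist, limit in segments:
    graph.setdefault(a, {})[b] = edge_weight_minutes(dist, limit)
```

For example, a 0.22-mile edge with a 30 mph limit gets an initial weight of 0.22 ÷ 30 × 60 = 0.44 minutes.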

**Figure 4.** Manhattan and the chosen part (to be modeled in a graph)

The following figures (5, 6, and 7) show the following:

**•** The location of the vertices in the map (for simplicity, only intersections were chosen)

**•** The vertices only

**•** And finally the whole graph: vertices and edges

**Figure 5.** Modeling graph vertices in the chosen part

**Figure 6.** Considering graph vertices only

**Figure 7.** Building the graph edges (directed)

Figure 8 shows one of the edges with its weight. The weight is computed as described in section 3.1.1. As a result, the shown weight was computed as follows: w = 0.22 (distance in miles) ÷ 30 (speed in miles/hour) × 60 = 0.44 minutes.

**Figure 8.** A sample edge with a calculated weight

#### *3.1.3. Online updates with dynamic A\*traffic*

Dynamic A\*Traffic assumes receiving online data whenever a related change occurs. In our new graph, the traffic system assumes receiving online data about current road situations in terms of time units. The main question is:

How can we describe the road traffic changes in time units (minutes)?

If a road is categorized as "heavy traffic", the online system simply does the following:

**i.** Calculates the average speed (AV) of moving vehicles (not exceeding the max limit) in the heavy-traffic road

**ii.** Gets the distance (D) of the road (or the part of it) where heavy traffic exists

**iii.** Sends T, where T = (D ÷ AV) × 60

#### *3.1.4. Query Example*

Figure 9 (a) shows a calculated query path (using the A\* algorithm), while figure 9 (b) shows a recalculation of the same query (with a new path) after online information about the traffic situation is received. Here are some hints on the shown calculations:

**•** The calculated time T1 in figure (a) is 3.35 minutes

**•** After receiving updated data, T1 became 5.30 minutes

**•** Recalculation is done and another path (figure (b)) with 3.83 minutes was found

**Figure 9.** (a) A short (time) path using A\*. (b) Another path for the same query after heavy traffic is recorded
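The online update of section 3.1.3 can be sketched as follows (illustrative names; it assumes miles and miles/hour as in section 3.1.1, so the updated weight is again in minutes):

```python
def updated_edge_weight_minutes(distance_miles, avg_speed_mph, speed_limit_mph):
    # Heavy-traffic update: AV = average speed of moving vehicles,
    # capped at the max limit; the new weight is T = (D / AV) * 60.
    av = min(avg_speed_mph, speed_limit_mph)
    return distance_miles / av * 60.0
```

For example, if vehicles on a 0.22-mile edge (30 mph limit) are measured moving at 10 mph, the edge weight rises from 0.44 to 0.22 ÷ 10 × 60 = 1.32 minutes, and a recalculation such as the one in figure 9 (b) may pick a different path.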

#### **4. Analysis, Results, and Conclusions**

In brief, using the dynamic A\*Traffic algorithm and applying it to time-weighted graphs has advantages such as:

**•** It saves a lot of execution time when finding the path. In an optimal algorithm, all possible paths have to be found and the shortest selected among them. Such an optimal algorithm is not in P (the class of polynomial-time algorithms) and could take years to solve in some cases. In our case, A\* and A\*Traffic are in P. They guarantee finding a good solution but do not guarantee an optimal solution.

**•** Applying time-weighted graphs takes into consideration distance and speed, and as a result the algorithm will return the fastest path rather than the shortest path.

**•** Similar to A\*Traffic, our analysis showed that our solution is optimal in 88% of the times overall, and 97% of the times for short driving paths. The reason for such good results is that the A\* algorithm takes a lot of path-related issues into consideration.

Note that our analysis now is focused on timing rather than distance, and hence an optimal solution is a solution with minimum time rather than minimum distance.

Table 1 represents the results gathered from applying 100 executions in each case (long, average, and short), where:

**•** Optimal solution: best solution

**•** Good solution: takes a maximum of 30% more time than the optimal solution

**•** Bad solution: takes more than 30% more time than the optimal solution

| Distances | Optimal solution | Good solution | Bad solution |
|---|---|---|---|
| Long distances (>300 km) | 76.4% | 14.2% | 9.4% |
| Average distances (between 100 km and 300 km) | 90.5% | 7.2% | 2.3% |
| Short distances (<100 km) | 97.2% | 2% | <1% |
| Average | 88% | | |

**Table 1.** Percentages of quality of solutions over different cases

#### **Author details**

Hatem F. Halaoui

Address all correspondence to: hhalaoui@haigazian.edu.lb

Haigazian University, Lebanon

#### **References**

[1] Hatem Halaoui. *Spatial and Spatio-Temporal Databases Modeling: Approaches for Modeling and Indexing Spatial and Spatio-Temporal Databases*. VDM Verlag, (2009).

[2] Pearl, J. *Heuristics: Intelligent Search Strategies for Computer Problem Solving*. Addison Wesley, Reading, Massachusetts, (1984).

[3] Shekhar, S., & Chawla, S. *Spatial Databases: A Tour*. Prentice Hall, Upper Saddle River, NJ, (2003).

[4] Stuart Russell and Peter Norvig. *Artificial Intelligence: A Modern Approach*. Prentice Hall, Upper Saddle River, New Jersey, (2003).

[5] Hatem Halaoui. "Towards Google Earth: A History of Earth Geography". Book chapter (Chapter XVI), *Information Systems Research Methods, Epistemology, and Applications*, IGI Global, (2008).

[6] Hatem Halaoui. "Spatio-Temporal Data Model: Data Model and Temporal Index Using Versions for Spatio-Temporal Databases". Proceedings of GIS Planet 2005, Estoril, Portugal, (2005).

[7] Hatem Halaoui. "AIRSTD: An Approach for Indexing and Retrieving Spatio-Temporal Databases". LNCS (IEEE/ACM SITIS 06), Springer-Verlag, (2007).

[8] Hatem Halaoui. "A Spatio-Temporal Indexing Structure for Efficient Retrieval and Manipulation of Discretely Changing Spatial Data". Journal of Spatial Science, 53(2), (2008).

[9] Google Earth: Explore, Search and Discover. Available: http://earth.google.com/tour/index.html. Access date: 22 December (2009).

[10] Google Earth Blog: Google Earth Data Size, Live Local, New languages coming. Available: http://www.gearthblog.com/blog/archives/2006/09/news_roundup_google.html. Access date: 15 December (2009).


### *Edited by Ahmed H Mohamed*

Today, satellite navigation offers a convenient alternative to terrestrial and stellar navigation methods, one that is not only ubiquitous and easy to operate but also available day and night. Radio navigation technology first appeared in the 1930s and matured in the 1940s, but did not take off until the late 1960s and 1970s with the launch of the first navigation satellites by the US Navy and Air Force, culminating in the NAVSTAR GPS program. End-user navigation equipment, bulky and expensive at the beginning, did not emerge until the microprocessor became viable in the late 1970s. Nowadays three other global navigation satellite systems are fully or partially operational: the Russian GLONASS, the European Union's Galileo, and the Chinese BeiDou. Where does the future lie? Probably in a network of global satellite navigation systems, with increased satellite coverage and improved accuracy, integrity, and reliability as these systems further mature. End-user equipment will continue to become smaller, more accurate, and cheaper. Yet in many respects, satellite navigation systems owe most to old-time stellar navigation, by keeping man looking up to the sky for help.