The acceptance region Ω fundamentally defines the set of all the measurements y from which it is possible to determine a safe position estimate $\widehat{\underline{x}}$, i.e., for which the requirement on the PHMI is satisfied.

In any geometry, the rule can be optimized in different ways. The two extreme approaches would be: 1) minimizing the PHMI given that the requirement on the continuity is satisfied, or, vice versa, 2) minimizing the PFA (maximizing the continuity c) given that the requirement on the PHMI is satisfied. The first is usually the preferred approach.

#### 2.3. Protection levels (PL)

To define the PLs, the total requirement on the PHMI, $\overline{P}_{\text{HMI}}$, must be split into the different position components. In aviation, it is to be split into horizontal and vertical allocations, $\overline{P}^{\,\text{hor}}_{\text{HMI}}$ and $\overline{P}^{\,\text{ver}}_{\text{HMI}}$. In ITS instead, it is to be split between the horizontal along-track (AT) and cross-track (CT) components, $\overline{P}^{\,\text{AT}}_{\text{HMI}}$ and $\overline{P}^{\,\text{CT}}_{\text{HMI}}$, whereas the vertical component is (generally) not of concern. PLAT and PLCT are defined as the maximum position error size (in the AT direction and in the CT direction) that can pass undetected with a probability smaller than or equal to the probability requirements, $\overline{P}^{\,\text{AT}}_{\text{HMI}}$ and $\overline{P}^{\,\text{CT}}_{\text{HMI}}$, i.e.,

$$\begin{array}{l}\text{PL}_{\text{AT}} = \arg\min_{\delta}\left\{ P\left(\left|\widehat{\underline{x}}_{\text{AT}} - x_{\text{AT}}\right| > \delta \mid \text{No Alert}\right) \leq \overline{P}^{\,\text{AT}}_{\text{HMI}} \right\} \\ \text{PL}_{\text{CT}} = \arg\min_{\delta}\left\{ P\left(\left|\widehat{\underline{x}}_{\text{CT}} - x_{\text{CT}}\right| > \delta \mid \text{No Alert}\right) \leq \overline{P}^{\,\text{CT}}_{\text{HMI}} \right\} \end{array}\tag{5}$$

with $\overline{P}^{\,\text{AT}}_{\text{HMI}} + \overline{P}^{\,\text{CT}}_{\text{HMI}} = \overline{P}_{\text{HMI}}$. To satisfy the navigation availability requirement it has to be:

$$\text{PL}_{\text{AT}} \leq \text{AL}_{\text{AT}} \quad \text{and} \quad \text{PL}_{\text{CT}} \leq \text{AL}_{\text{CT}} \tag{6}$$

If those equations are satisfied, integrity is maintained for the epoch under consideration. Instead of computing the PLs, the integrity monitoring system can simply compute the actual PHMI, or an upper bound for it, and then compare it to the requirement $\overline{P}_{\text{HMI}}$: if $P_{\text{HMI}} \leq \overline{P}_{\text{HMI}}$, integrity is maintained.

#### 2.4. RAIM input, output and performance parameters

In this Section the input and output parameters of a RAIM algorithm are summarized. Figure 2 shows a schematic representation of a RAIM algorithm. A RAIM algorithm is constituted of two blocks: the first one assesses the geometry or model strength, the second one processes the real time observations and assesses their coherency.

The model strength assessment takes as input the design matrix A and the distribution function of the observable $f_{\underline{y}}$, i.e., the observation model, at each epoch. Output of this first assessment are the PLs and/or the PHMI, and consequently the availability prediction for that epoch: if any PL > AL, or equivalently $P_{\text{HMI}} > \overline{P}_{\text{HMI}}$, the navigation service is declared unavailable.

28 Multifunctional Operation and Application of GPS

#### 3. RAIM approaches

Since RAIM is linked with the estimation method, two approaches to RAIM can be distinguished:


A combination of both methods listed above is also possible. Here only FDE procedures are analyzed.

#### 3.1. FDE procedure

In an FDE procedure one assumes the possible occurrence of different hypotheses: the fault-free case (null hypothesis $\mathcal{H}_0$) and the occurrence of faults/anomalies (alternative hypotheses $\mathcal{H}_i$). An FDE procedure is applied to detect whether an anomaly is affecting the system and, in case of detection, to exclude the anomalous observations. Once it has been decided, through a statistical testing procedure, which hypothesis is most likely to hold true (or is safest to use), the estimator applied is the BLUE for the model corresponding to that hypothesis. The BLUE for the unknown x in the linear model (1), assuming known dispersion of e, i.e., $D(\underline{e}) = Q_y$, reads:

$$
\widehat{\underline{x}} = \mathrm{S}\,\underline{y}\tag{7}
$$


Integrity Monitoring: From Airborne to Land Applications

http://dx.doi.org/10.5772/intechopen.75777

31


with $\mathrm{S} = \left(A^T Q_y^{-1} A\right)^{-1} A^T Q_y^{-1}$ the pseudo-inverse of matrix A in the metric defined by $Q_y$.

Fundamentally, $\widehat{\underline{x}}(\underline{y})$ in this approach will be constituted by different linear functions of the observable: it will be of the form of (7) when the null hypothesis $\mathcal{H}_0$ is considered most likely, or of different forms $\widehat{\underline{x}}_i$ when one of the alternative hypotheses is designated to be most likely.
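The BLUE of Eq. (7) can be sketched numerically as follows; the geometry, variable names and noise-free observations are illustrative assumptions, not part of the chapter:

```python
import numpy as np

def blue_estimate(A, Qy, y):
    """Return x_hat = S y with S = (A^T Qy^-1 A)^-1 A^T Qy^-1, Eq. (7)."""
    Qy_inv = np.linalg.inv(Qy)
    S = np.linalg.solve(A.T @ Qy_inv @ A, A.T @ Qy_inv)
    return S @ y, S

# Toy single-epoch geometry: m = 6 pseudoranges, n = 4 unknowns (3D position + clock).
rng = np.random.default_rng(0)
A = np.hstack([rng.standard_normal((6, 3)), np.ones((6, 1))])
Qy = np.diag(rng.uniform(1.0, 4.0, size=6))
x_true = np.array([1.0, -2.0, 0.5, 3.0])
x_hat, S = blue_estimate(A, Qy, A @ x_true)  # noise-free: BLUE recovers x exactly
```

Note that $\mathrm{S}A = I$, the defining property of the pseudo-inverse in the $Q_y$ metric.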

#### 3.2. Statistical hypothesis testing

FDE procedures are based on statistical hypothesis testing [9]. In an FDE procedure statistical tests are performed to determine which hypothesis (fault-free/faulty) on the system state is most likely to hold, and to determine the observable domain subdivision discussed in Section 3.1. In this chapter only linear models are analyzed; therefore, special attention shall be given to statistical hypothesis testing in linear models. The aim is to decide between competing linear models that could describe the observed phenomenon or process, once an observation has been made. Furthermore, the observables are assumed to have normal distributions, and the different hypotheses differ only in the specification of the functional model. The models considered are thus Gauss-Markov models [10].

Given the linear model of Eq. (1), we assume the random noise distribution to be known, Gaussian and zero mean:

$$
\underline{e} \sim \mathcal{N}\left(0, Q_y\right) \tag{8}
$$

The linear system in (1) represents the state of standard or nominal operations, that is the case in which the system is working properly without any fault. This state is considered as the null hypothesis H0. The case of a fault affecting the system constitutes instead a different state, described by an alternative hypothesis Ha, under which the linear model assumes a different form. Therefore:


$$\begin{array}{ll}\mathcal{H}_0: & \underline{y} = Ax + \underline{e} \\ \mathcal{H}_a: & \underline{y} = Ax + C_y\nabla + \underline{e} \end{array} \tag{9}$$

where $C_y$ is an $m \times q$ matrix which represents the 'signature' of the errors in the measurements, and $\nabla$ is a $q$-sized vector that contains the sizes of the biases in each of the $q$ degrees of freedom of $C_y$.

To test $\mathcal{H}_a$ against $\mathcal{H}_0$, the Uniformly Most Powerful Invariant (UMPI) test statistic (obtained through application of the Generalized Likelihood Ratio (GLR) criterion) reads:

$$\underline{T}_q = \widehat{\underline{e}}_0^T Q_y^{-1} C_y \left( C_y^T Q_y^{-1} Q_{\widehat{e}_0} Q_y^{-1} C_y \right)^{-1} C_y^T Q_y^{-1} \widehat{\underline{e}}_0 \tag{10}$$

where $\widehat{\underline{e}}_0 = \underline{y} - A\widehat{\underline{x}}_0$ is the vector of residuals computed considering the null hypothesis holding true ($\widehat{\underline{x}}_0$ being the position estimator under the null hypothesis, obtained by Eq. (7)).

The test statistic $\underline{T}_q$ is $\chi^2$-distributed:

$$\mathcal{H}\_0: \underline{T}\_q \sim \chi^2(q, 0) \quad \text{and} \ \mathcal{H}\_a: \underline{T}\_q \sim \chi^2(q, \lambda) \tag{11}$$

with non-centrality parameter:


$$
\lambda = \nabla^T Q_{\widehat{\nabla}}^{-1} \nabla \tag{12}
$$

where $Q_{\widehat{\nabla}}^{-1} = C_y^T Q_y^{-1} Q_{\widehat{e}_0} Q_y^{-1} C_y$, $Q_{\widehat{e}_0} = P_A^{\perp} Q_y P_A^{\perp T}$ and $P_A^{\perp} = I - A\left(A^T Q_y^{-1} A\right)^{-1} A^T Q_y^{-1}$.
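The statistic of Eq. (10) can be sketched numerically from the quantities just defined; the toy geometry, bias size and function names are illustrative assumptions:

```python
import numpy as np

def umpi_statistic(A, Qy, Cy, y):
    """UMPI statistic Tq of Eq. (10), built from the H0 residuals."""
    Qy_inv = np.linalg.inv(Qy)
    S = np.linalg.solve(A.T @ Qy_inv @ A, A.T @ Qy_inv)  # BLUE of Eq. (7)
    e0 = y - A @ (S @ y)                                  # residuals under H0
    P_perp = np.eye(len(y)) - A @ S                       # projector P_A_perp
    Qe0 = P_perp @ Qy @ P_perp.T                          # residual covariance
    v = Cy.T @ Qy_inv @ e0
    M = Cy.T @ Qy_inv @ Qe0 @ Qy_inv @ Cy
    return float(v.T @ np.linalg.solve(M, v))

rng = np.random.default_rng(2)
m, n = 6, 4
A = np.hstack([rng.standard_normal((m, 3)), np.ones((m, 1))])
Qy = np.eye(m)
Cy = np.eye(m)[:, [0]]                 # single-satellite fault signature (q = 1)
x = np.array([1.0, 2.0, 3.0, 4.0])
T_nominal = umpi_statistic(A, Qy, Cy, A @ x)                   # no noise, no bias
T_faulty = umpi_statistic(A, Qy, Cy, A @ x + 5.0 * Cy[:, 0])   # bias on satellite 1
```

With noise-free, unbiased data the statistic vanishes; injecting a bias along $C_y$ drives it to the non-centrality value of Eq. (12).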

Knowing (though only partially in the case of $\mathcal{H}_a$) the distributions of the test statistic under the different hypotheses, one can define a critical region K (to reject the null hypothesis) on the basis of type I and type II error probabilities. The critical region is one-sided, of the type:

$$K: T\_q > k \tag{13}$$

with k the test threshold (or critical value).
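The link between the threshold k, the type I error budget and the resulting type II error can be sketched as follows; the numerical values of $P_{FA}$ and $\lambda$ are illustrative assumptions:

```python
from scipy.stats import chi2, ncx2

q = 1                                   # dimension of the fault considered
P_FA = 1e-3                             # type I error (false alert) budget
k = chi2.ppf(1.0 - P_FA, df=q)          # critical value: P(Tq > k | H0) = P_FA
lam = 20.0                              # non-centrality parameter of Eq. (12)
P_MD = ncx2.cdf(k, df=q, nc=lam)        # type II error: P(Tq <= k | Ha)
```

The larger the non-centrality $\lambda$ (i.e., the bigger the bias relative to its estimability), the smaller the missed-detection probability for a given k.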

The theory above constitutes the basis of statistical hypothesis testing in linear models, which allows one to build the specific test in the simple binary case of null versus one alternative hypothesis. In case one has to choose among multiple alternative hypotheses, one option is to employ a set of binary tests. However, a number of different methods exist in statistics aiming to answer this more complex problem. Hypothesis-testing-based methods are known as Multiple Comparisons methods [11], while other methods that do not resort to hypothesis testing are known as Subset Selection methods [12].
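The "set of binary tests" option can be sketched as follows: one single-satellite statistic (q = 1) per candidate fault, with the largest statistic flagging the suspect measurement. Geometry, seed and bias size are illustrative assumptions:

```python
import numpy as np

def local_statistics(A, Qy, y):
    """One q = 1 statistic of the form of Eq. (10) per measurement,
    with Ci the i-th canonical unit vector."""
    Qy_inv = np.linalg.inv(Qy)
    S = np.linalg.solve(A.T @ Qy_inv @ A, A.T @ Qy_inv)
    e0 = y - A @ (S @ y)
    P_perp = np.eye(len(y)) - A @ S
    Qe0 = P_perp @ Qy @ P_perp.T
    W = Qy_inv @ Qe0 @ Qy_inv
    num = (Qy_inv @ e0) ** 2            # (c_i^T Qy^-1 e0)^2 for each unit vector
    return num / np.diag(W)

rng = np.random.default_rng(6)
m = 8
A = np.hstack([rng.standard_normal((m, 3)), np.ones((m, 1))])
Qy = np.eye(m)
y = A @ np.array([1.0, 2.0, 3.0, 4.0]) + 8.0 * np.eye(m)[:, 2]  # fault on sat 3
suspect = int(np.argmax(local_statistics(A, Qy, y)))
```

With a single noise-free fault, the Cauchy-Schwarz inequality guarantees that the faulty measurement attains the largest statistic.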

#### 4. The aviation legacy

In this section, first the observation models and the typical assumptions adopted in civil aviation applications are described, and next the two most popular RAIM algorithms developed for such applications, i.e., the Weighted RAIM [13] and the Advanced RAIM (ARAIM) [14] algorithms, are introduced.


#### 4.1. GNSS anomalies and their models

The main categories of High Dynamics Threats (HDTs) to be monitored in aviation applications, which rely on code-based SPP, are listed here. The HDTs are threats that cannot be monitored by the GNSS ground control system, as opposed to the Low Dynamics Threats (LDTs) [14]. They are categorized into:

• Clock and ephemeris estimation errors, see [15];

• Signal deformations, see [16];

• Code-carrier incoherency, see [17].


From the snapshot perspective (considering a single epoch of time), and working with carrier-phase smoothed code measurements, an outlier in a single satellite is believed to be the main threat (in terms of probability of occurrence). Simultaneous outliers on multiple satellites (wide failure errors) can occur, but with a much lower likelihood [14]. Among these are the constellation faults (e.g., upload of incorrect navigation messages that may impact a full constellation).

Errors/anomalies in signal propagation, such as ionosphere, troposphere and multipath, shall not be considered hazardous for aviation once the new civil frequency L5/E5 and new certified receivers are available: the tropospheric delay typically has a small effect (and one can correct sufficiently well for this error source), ionosphere gradient/front effects are supposed to cancel out with the use of the ionosphere-free combination, and multipath depends on the local satellite-receiver geometry and can be considered on a per-satellite basis (typically outlier-like).
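The cancellation of the first-order ionospheric delay by the dual-frequency combination mentioned above can be sketched as follows; the GPS L1/L5 frequencies are standard values, while the range and delay figures are illustrative:

```python
# Dual-frequency ionosphere-free code combination (GPS L1/L5).
f1, f5 = 1575.42e6, 1176.45e6   # carrier frequencies in Hz

def iono_free(P1, P5):
    """(f1^2 P1 - f5^2 P5) / (f1^2 - f5^2): first-order iono delay cancels."""
    g = (f1**2) / (f1**2 - f5**2)
    return g * P1 - (g - 1.0) * P5

# First-order ionospheric delay scales as 1/f^2, so I5 = I1 * (f1/f5)^2.
rho = 22_000_000.0   # geometric range + clock terms (m), illustrative
I1 = 4.0             # L1 ionospheric delay (m), illustrative
P1 = rho + I1
P5 = rho + I1 * (f1 / f5)**2
```

Evaluating `iono_free(P1, P5)` recovers the iono-free range `rho`, at the price of amplified measurement noise (not modeled in this sketch).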

#### 4.2. General distribution of the observable

In the previous section the main threats possibly affecting the positioning system in aviation applications have been described; on this basis, a model to describe the distribution of the observable, able to take into account the possible occurrence of anomalies, shall be formulated. The pdf of y is generally supposed to be known in standard fault-free conditions, but it cannot be fully defined in the presence of anomalies. However, it is assumed that anomalies in the system will occur with a low failure rate.

Different hypotheses can be defined to represent the state of the system: a fault-free (null) hypothesis $\mathcal{H}_0$ and $N_a$ alternative hypotheses $\mathcal{H}_i$, representing the different possible types of anomalies affecting the system, with $i = 1, \ldots, N_a$. Here only linear models are considered, and hypotheses of the type of Eq. (9), i.e., $\mathcal{H}_i : \underline{y} = Ax + C_{y_i}\nabla_i + \underline{e}$.

Single satellite faults and constellation faults can be modeled by different $C_i$ matrices: in the case of single satellite faults, or combinations of independent single satellite faults, the main $C_i$'s to consider shall be the canonical unit vectors of $\mathbb{R}^m$ or $m \times q$ matrices made up of different canonical unit vectors of $\mathbb{R}^m$, respectively; in the case of constellation faults, a matrix $C_i$ of $m - n$ columns, fully complementing A in the vector space $\mathbb{R}^m$, shall be used.
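The fault-signature matrices described above can be built as follows; the helper names and the value m = 8 are illustrative:

```python
import numpy as np

m = 8  # number of measurements, illustrative

def single_fault_Ci(i, m):
    """Canonical unit vector of R^m: fault on the i-th measurement."""
    return np.eye(m)[:, [i]]

def multi_fault_Ci(idx, m):
    """m x q matrix of distinct canonical unit vectors: q simultaneous
    independent single-satellite faults."""
    return np.eye(m)[:, list(idx)]

C_single = single_fault_Ci(3, m)       # one faulty satellite
C_double = multi_fault_Ci([1, 5], m)   # two simultaneous single-satellite faults
```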

The distribution of the observable y depends on the state of the system. Under each hypothesis, y is assumed to be distributed as a multivariate normal distribution (Eqs. (1), (8) and (9)). It is possible to associate prior probabilities to the occurrence of the different hypotheses, in such a way that the variable H, representing the state of the system, has a prior Probability Mass Function (PMF), with discrete values pi for each realization. Thus H and y marginal distributions are:

$$\underline{\mathcal{H}} \sim \begin{cases} P(\underline{\mathcal{H}} = \mathcal{H}_0) = p_0 \\ P(\underline{\mathcal{H}} = \mathcal{H}_1) = p_1 \\ \vdots \\ P(\underline{\mathcal{H}} = \mathcal{H}_{N_a}) = p_{N_a} \end{cases} \quad \Rightarrow \quad \underline{y} \sim p_0 \cdot f_{\underline{y}|\mathcal{H}_0} + \sum_{i=1}^{N_a} p_i \cdot f_{\underline{y}|\mathcal{H}_i} \tag{14}$$

At this point the uncertainty about the y distribution is expressed by its dependence on the unknown variables $\nabla_i$ besides x. To tackle this uncertainty, most RAIM algorithms assume worst-case bias scenarios or compute bounds for the worst-case risk that could result; see for instance [18].
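The mixture of Eq. (14) can be sketched for a scalar observable with one alternative hypothesis; the prior and the bias size are illustrative assumptions, not values from the chapter:

```python
import numpy as np

def norm_pdf(y, mu, sigma):
    """Density of N(mu, sigma^2)."""
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

p1 = 1e-4            # prior probability of the fault hypothesis H1, illustrative
p0 = 1.0 - p1        # prior probability of the fault-free hypothesis H0
sigma = 1.0
bias = 30.0          # bias under H1 (worst-case style assumption)

def f_y(y):
    """Marginal pdf of y: prior-weighted sum of the conditional pdfs, Eq. (14)."""
    return p0 * norm_pdf(y, 0.0, sigma) + p1 * norm_pdf(y, bias, sigma)
```

The small prior p1 makes the faulty mode a thin secondary peak, yet it dominates the tail risk far from the nominal solution.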

#### 4.3. Weighted RAIM


In [13] a Weighted RAIM implementation is described. This constitutes one of the first relevant RAIM algorithms conceived, and it is still in use today, typically implemented in aviation-grade GPS receivers, to provide low-precision lateral integrity only. The method consists of the two steps defined in Section 2.4, the model strength assessment and the real time observation coherency assessment. Even though not theorized in the original paper, the method is based on the assumption of the observable distribution described in the previous section, with the constraint that only single satellite faults can occur.

A single test, the OMT, is used to judge the quality of the observations at each epoch [13]. The OMT, also known as the $\chi^2$ test, is a UMPI test that employs a test statistic of the form of Eq. (10), addressing the most generic anomaly, i.e., with $q = m - n$. Such a test statistic coincides with the Weighted Sum of Squared Errors (WSSE), defined as:

$$\underline{\text{WSSE}} = \widehat{\underline{e}}^T Q_y^{-1} \widehat{\underline{e}} \tag{15}$$

If this statistic exceeds a certain threshold k, the estimated position is assumed significantly biased; otherwise, it is assumed acceptable. This threshold is chosen to meet the probability of False Alert requirement, $P_{\text{FA}}$, knowing that under the fault-free hypothesis the WSSE is distributed as a central $\chi^2$ with $m - 4$ degrees of freedom (using GPS only).
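The WSSE test of Eq. (15) with its $\chi^2$ threshold can be sketched as follows; the geometry, noise and $P_{FA}$ value are illustrative assumptions:

```python
import numpy as np
from scipy.stats import chi2

def wsse(A, Qy, y):
    """Weighted Sum of Squared Errors of Eq. (15) from the LS residuals."""
    Qy_inv = np.linalg.inv(Qy)
    S = np.linalg.solve(A.T @ Qy_inv @ A, A.T @ Qy_inv)
    e = y - A @ (S @ y)                 # residuals
    return float(e @ Qy_inv @ e)

m, n = 8, 4                             # GPS-only: n = 4, so m - 4 dof
rng = np.random.default_rng(3)
A = np.hstack([rng.standard_normal((m, 3)), np.ones((m, 1))])
Qy = np.eye(m)
k = chi2.ppf(1.0 - 1e-3, df=m - 4)      # threshold for an illustrative P_FA = 1e-3
y = A @ np.array([1.0, 2.0, 3.0, 4.0]) + rng.standard_normal(m)
alert = wsse(A, Qy, y) > k
```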

#### 4.4. Model strength assessment

If a range error from one measurement occurs, the expected value of the test statistic grows and, proportionally, so does the expected position error. The satellite geometry determines how the error in the range domain propagates into the position domain. The original Weighted RAIM algorithm focuses on monitoring only the vertical component of the position solution, but the same reasoning can be made for the other components. In a simple two-dimensional graph, plotting $\sqrt{\text{WSSE}}$ on the horizontal axis and the vertical position error on the vertical axis, their relation can be represented by a straight line (see Figure 3), with a slope, for satellite i, given by:

$$V\_{\text{slope}\_i} = \frac{|S\_{[3,i]}|\sigma\_i}{\sqrt{1 - P\_{A\_{[i,i]}}}} \tag{16}$$

with $\mathrm{S} = \left(A^T Q_y^{-1} A\right)^{-1} A^T Q_y^{-1}$, $\sigma_i = \sqrt{Q_{y[i,i]}} = \sigma_{y_i}$, and where the subscripts in square brackets indicate the indexes of the matrix elements. The Vertical Protection Level (VPL) is computed as:

$$\text{VPL} \equiv \max_{i} \left( V_{\text{slope}_{i}} \right) k + k_{\text{MD}}\, \sigma_{\hat{x}_3} \qquad i = 1, 2, \dots, m \tag{17}$$

where k and kMD are obtained as:

$$k = \sqrt{\text{inv-}\chi^2_{\text{CDF}}\left(\overline{P}_{\text{FA}}, m - n\right)}; \qquad k_{\text{MD}} = \Psi^{-1}\left(\frac{\overline{P}_{\text{HMI}}}{m\,p}\right) \tag{18}$$

with inv-$\chi^2_{\text{CDF}}(\cdot, m-n)$ the inverse of a central $\chi^2$ CDF with $m - n$ degrees of freedom, $\Psi(\cdot)$ the tail probability of the cumulative distribution function of a zero mean unit Gaussian distribution, and p the a-priori probability of a hazardous fault in one satellite. The above formulas for the VPL are based on the following expression of the integrity risk under an alternative hypothesis:

$$P_{\text{HMI}|\mathcal{H}_i} = P_{\text{MD}_i} \cdot P\left(|\widehat{\underline{x}}_3 - x_3| > \text{VAL} \mid \mathcal{H}_i\right) \tag{19}$$

which assumes that an integrity event corresponds to the simultaneous occurrence of an MD and a positioning error larger than the Vertical AL (VAL), and is justified by the fact that test statistic and positioning error are uncorrelated. The VPL is the measure of the observation model strength: if VPL > VAL, integrity is not available for the geometry considered.

Figure 3. Representation of the weighted RAIM's Vslope concept.

#### 4.5. Real time availability

The real time availability assessment is performed if the model strength assessment was passed successfully (VPL < VAL). At each epoch, once the observations are collected, the WSSE is computed and compared with the threshold. As in standard hypothesis testing, we have the following decision rule:

If WSSE > k, reject the fault-free hypothesis and declare Alert; else standard operations continue. (20)

#### 4.6. ARAIM

The Weighted RAIM presented in the previous section was developed for the single GPS constellation and has been found generally suboptimal, even though presenting a very practical and efficient approach. An enhanced approach, known as ARAIM, provides the following improvements [14]:

• in addition to single satellite faults, multi-dimensional faults (affecting multiple satellites at a time) are accounted for [14, 18];

• the potential of the multi-constellation GNSS is fully exploited, instead of GPS only [14, 18];

• rather than using only single-frequency observations, use of dual-frequency observations, to remove the first order ionospheric delay, is foreseen [14, 18];

• a proof of safety is given [14, 18] (Weighted RAIM is not proven to be always conservative);

• different statistical tests, more tailored to detecting faults that have a sensible impact on the position estimate [19], are employed.

The basic concepts of ARAIM are here outlined; for more details, see [14, 18, 19]. Figure 4 shows a block diagram representation of the ARAIM algorithm. From a statistical point of view, ARAIM is based on the following concepts:

• Multiple Hypothesis approach with a-priori probabilities: the system is supposed to be in one out of a set of different possible states described by multiple hypotheses, to each of which is assigned an a-priori probability of occurrence (Section 4.2). The PHMI is computed as the sum of the PHMI under the different hypotheses, weighted on the basis of their prior probabilities.

• Solution Separation (SS) as test statistic: to discriminate between hypotheses, and eventually exclude faulty measurements, the difference between the position solutions under the different alternative hypotheses and the null hypothesis is computed and used as a test statistic. For each alternative hypothesis considered, a difference vector (SS) is computed and a test is run for each of the position components of the vector.
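The solution separation idea can be sketched numerically as follows: each subset solution (one satellite excluded) is compared with the all-in-view solution. The geometry, seed and bias size are illustrative assumptions:

```python
import numpy as np

def blue(A, Qy, y):
    """BLUE of Eq. (7)."""
    Qy_inv = np.linalg.inv(Qy)
    S = np.linalg.solve(A.T @ Qy_inv @ A, A.T @ Qy_inv)
    return S @ y

def solution_separations(A, Qy, y):
    """SS vectors: subset solution (satellite i excluded) minus the
    all-in-view solution, one per single-satellite hypothesis."""
    x0 = blue(A, Qy, y)
    m = len(y)
    seps = []
    for i in range(m):
        keep = [j for j in range(m) if j != i]
        xi = blue(A[keep, :], Qy[np.ix_(keep, keep)], y[keep])
        seps.append(xi - x0)
    return np.array(seps)

rng = np.random.default_rng(5)
m = 8
A = np.hstack([rng.standard_normal((m, 3)), np.ones((m, 1))])
Qy = np.eye(m)
x_true = np.array([1.0, -1.0, 2.0, 0.5])
seps_clean = solution_separations(A, Qy, A @ x_true)                       # no fault
seps_fault = solution_separations(A, Qy, A @ x_true + 10.0 * np.eye(m)[:, 0])
```

With clean data all separations are (numerically) zero; a bias on satellite 1 makes the separation for the hypothesis excluding it clearly non-zero, which is what the per-component SS tests detect.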
