
Figure 7. BER vs. transmission distance curves for PM-16-QAM coherent transmission system over DM link [59].

### 3.4. Kalman filtering for polarization de-multiplexing

An effective way to double the transmission capacity is to employ PDM, which allows the simultaneous transmission of two information signals on the orthogonal polarization states of the same optical carrier wave. However, due to fiber birefringence, the state of polarization is not preserved during propagation along the fiber, which leads to crosstalk between the received signals. In coherent receivers, the CMA [15] or MMA [16] is commonly employed to align the polarization states and fully recover the transmitted signal. However, CMA and MMA suffer from low convergence speed and the singularity problem [71]. Moreover, a separate phase estimation scheme is required to track the laser phase noise. Since the Kalman filter allows simultaneous tracking of several state variables provided a precise SSM is available, the Kalman filter and its variants, including the radius directed linear Kalman filter (RD-LKF), EKF and UKF, have been widely investigated for tracking the complex elements of the Jones matrix along with the carrier phase [61–63].


Applications of Kalman Filters for Coherent Optical Communication Systems

http://dx.doi.org/10.5772/intechopen.71617


#### 3.4.1. RD-LKF, EKF and UKF for joint tracking of polarization state and phase noise

An EKF has been proposed in [61] for joint tracking of the polarization and phase noise. It has also been reported that the EKF shows faster convergence than the conventional approach based on CMA and VV-CPE [61]. However, the variables in the state vector are restricted to real values, which can lead to singularity problems or divergence of the filter [63], besides increasing the dimensions of the vectors and matrices in the Kalman recursive equations. A polarization state tracking scheme using a Kalman filter, which is immune to phase/frequency offset, has been introduced in [62] and is termed RD-LKF. Although it shows faster convergence than CMA, this method needs significant modifications to be applied to higher order QAM. Moreover, it cannot track the carrier phase simultaneously with the polarization state. The joint tracking of polarization state and carrier phase using EKF has been experimentally verified in [57]. A reduced SSM using UKF has been introduced in [63], which facilitates the joint tracking of polarization state and phase noise. Here, the variables of the state vector are complex valued. This approach exhibits better performance than EKF at high OSNRs, at the expense of additional computational effort.

#### 3.4.2. Adaptive cascaded Kalman filtering (A-CKF) for polarization de-multiplexing with simultaneous tracking of phase and amplitude distortions

A cascaded Kalman filter (CKF), a series connection of an EKF and a linear Kalman filter (LKF), has been proposed in [13] for joint tracking of phase and amplitude distortions besides the polarization state. By splitting the conventional SSM into a linear and a nonlinear SSM, the inaccuracies from linearizing the SSM as a whole can be reduced; thus the CKF exhibits enhanced performance at no increased computational cost compared to approaches like the UKF [63] and the radius directed (RD) LKF [62]. Since the optimal performance of the Kalman filter depends on the noise covariances, we proposed an adaptive CKF (A-CKF) [13] that adapts the process noise covariance recursively using the covariance matching method described in Section 2.3.
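The covariance matching step referenced above can be sketched compactly. The following is a minimal illustration, not the authors' implementation from [13]: it estimates the innovation covariance over a sliding window and maps it through the Kalman gain to an updated process noise covariance; the function name, window handling and toy numbers are our own assumptions.

```python
import numpy as np

def covariance_matching_q(innovations, kalman_gain):
    """Covariance matching sketch: estimate the innovation covariance from a
    sliding window of innovations d_j = z_j - H x_j, then approximate the
    process noise covariance as Q = K C K^T (cf. Section 2.3)."""
    d = np.atleast_2d(np.asarray(innovations))   # window of innovations, shape (N, m)
    c_hat = d.T @ d / d.shape[0]                 # sample innovation covariance C
    return kalman_gain @ c_hat @ kalman_gain.T   # Q estimate fed back into the filter

# toy usage: scalar measurement, window of four innovations, gain K = 0.5
q = covariance_matching_q([[0.2], [-0.1], [0.3], [0.0]], np.array([[0.5]]))
```

In the A-CKF, such an update would be applied recursively at each symbol, so that Q tracks the actual level of phase and polarization fluctuations instead of being fixed a priori.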

#### 3.4.2.1. Principles of A-CKF


220 Kalman Filters - Theory for Advanced Applications


The transmitted and received signals in the presence of phase noise and polarization rotation are related as given in Eq. (36). Here, t_k, r_k and n_k denote the transmitted signal, the received signal and the ASE noise in dual polarization, respectively. J_k denotes the Jones matrix, θ_k denotes the phase noise and α denotes the loss factor. Assuming negligible polarization dependent loss, the inverse of the Jones matrix can be described as in Eq. (37), and the elements of the Jones matrix satisfy J_yy = J_xx* and J_yx = −J_xy* [15]. From now on, for simplified notation, we omit the time index k in this section. The observation model in Eq. (36) can be rewritten in dual polarization as given in Eq. (38), where the subscripts x and y denote the x and y polarizations, respectively. In the conventional approach to tracking the phase and polarization effects using an EKF, the state vector consists of the real parameters a, b, c and d. However, we reduce the dimensions of the state vector, and of the other matrices in the SSM, by considering the complex elements in the state vector given by S(k) = [J_xx, J_xy, φ]. Moreover, we split the nonlinear observation model given in Eq. (38) into a nonlinear and a linear observation model, where we employ an EKF-CPANE for the joint tracking of phase and amplitude distortions and an LKF for tracking the complex elements of the Jones matrix. The process noise covariance is adaptively updated by employing the covariance matching method described in Section 2. For more details on the A-CKF algorithm, please refer to [13].

$$\mathbf{t}\_{k} = \mathbf{e}^{-j\theta\_{k}} (\alpha\mathbf{J}\_{k})^{-1} \mathbf{r}\_{k} + \mathbf{n}\_{k} \tag{36}$$

$$\mathbf{J}^{-1} = \begin{bmatrix} J\_{xx} & J\_{xy} \\ J\_{yx} & J\_{yy} \end{bmatrix} = \begin{bmatrix} a+jb & c+jd \\ -c+jd & a-jb \end{bmatrix} \tag{37}$$

$$
\begin{pmatrix} t\_x \\ t\_y \end{pmatrix} = \mathbf{e}^{-j\theta\_k} (\alpha\mathbf{J}\_k)^{-1} \begin{pmatrix} r\_x \\ r\_y \end{pmatrix} + \begin{pmatrix} n\_x \\ n\_y \end{pmatrix} \tag{38}
$$
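To make the model concrete, the sketch below builds the inverse Jones matrix of Eq. (37) from arbitrary example values of a, b, c and d, verifies the symmetry J_yy = J_xx* and J_yx = −J_xy*, and applies the observation model of Eqs. (36)/(38) to one dual-polarization sample; all numbers are illustrative assumptions, not values from the chapter.

```python
import numpy as np

# Real parameters of the inverse Jones matrix, Eq. (37) (arbitrary example values)
a, b, c, d = 0.8, 0.1, 0.2, 0.3
J_inv = np.array([[ a + 1j * b,  c + 1j * d],
                  [-c + 1j * d,  a - 1j * b]])

# Symmetry of the unitary Jones matrix (negligible PDL): J_yy = J_xx*, J_yx = -J_xy*
assert np.isclose(J_inv[1, 1],  np.conj(J_inv[0, 0]))
assert np.isclose(J_inv[1, 0], -np.conj(J_inv[0, 1]))

theta, alpha = 0.05, 1.0                 # phase noise sample and loss factor
r = np.array([1.0 + 0.0j, 0.0 + 1.0j])   # received dual-polarization sample [r_x, r_y]
n = np.zeros(2, dtype=complex)           # ASE noise term, omitted here for clarity

# Observation model, Eqs. (36)/(38): t = e^{-j theta} (alpha J)^{-1} r + n
t = np.exp(-1j * theta) * (J_inv / alpha) @ r + n
```

In the A-CKF, the filter estimates J_xx, J_xy and φ jointly, and the symmetry asserted above is what allows the state vector to carry only two complex Jones elements instead of four real parameters.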

Numerical investigations on both back-to-back (BTB) and transmission scenarios have been carried out in [13] on the variants of the Kalman filter, including EKF, UKF, CKF and A-CKF, for jointly tracking the polarization state and phase noise; these are compared to the conventional MMA algorithm. Since the MMA can track only the polarization state, it is accompanied by a DD-CPE algorithm for phase noise mitigation. It can be concluded from [13] that the CKF and A-CKF outperform the rest of the considered algorithms, with a better tolerance towards polarization rotations, phase and amplitude noise. This can be attributed to the reduction of the inaccuracies that arise when the SSM is linearized as a whole, as in EKF and UKF, which the cascaded structure of CKF/A-CKF avoids. The benefit of the adaptive computation of the process noise covariance, compared to the CKF, can be observed at rotation angular frequencies of 400 Mrad/s and higher in the BTB case and at higher launch powers of 5 dBm in the transmission case [13].

### 3.5. Kalman filtering for joint compensation of phase and frequency offset

Apart from digital equalization, carrier synchronization is also vital to mitigate the phase and frequency offsets between the transmitter laser and free running LO. Since the CPE methods have low tolerance towards FO, which may go as high as 5 GHz, a separate FO estimation (FOE) is required. Consequently, several FOE algorithms have been proposed in the literature that are either based on the phase increments between adjacent symbols [72] or spectrum based methods [73]. These methods are either not accurate for higher order QAM systems or computationally complex.


#### 3.5.1. LKF and EKF for FO estimation

A novel FOE algorithm using Kalman filtering has been proposed and numerically verified in [60]. The simulation results in [60] conclude that the Kalman filter achieves faster convergence and outperforms conventional FO estimation at low OSNR. In [64], blind and training data based FOE schemes using LKF and EKF have been proposed for QPSK systems. These Kalman based FOE algorithms are evaluated both numerically and experimentally, and are compared to FFT based FOE methods. The investigations in [64] report that the training data based Kalman FOE methods estimate the FO more accurately than FFT based methods when few symbols are available and the OSNR is high. However, a separate phase estimation has to be carried out after FO compensation.

#### 3.5.2. Two stage EKF for joint compensation of FO, phase and amplitude noise

The Kalman based FOE algorithms proposed in [60, 64] can compensate only for the FO; therefore, the carrier phase has to be recovered separately after FO compensation. In [65], a two stage EKF method based on training data has been proposed for joint tracking of FO, phase and amplitude noise. In the first stage, a coarse estimate of the FO is obtained using a set of training data symbols, following the training data scheme proposed in [64]. In the second stage, the CPANE algorithm is employed to jointly compensate for the residual FO, phase and amplitude noise.

#### 3.5.2.1. Principles of two stage EKF

After linear equalization, the received signal on a single polarization, with frequency and phase offset, can be represented as given in Eq. (39). Here, r_k and a_k denote the received and transmitted symbol, respectively, at time instant k. w denotes the FO between the transmitter laser and the LO, and T_s denotes the symbol duration. ϕ_k and n_k denote the phase noise and ASE noise, respectively. In order to obtain a measurement of the FO, the first step is to wipe off the data phase, which is performed by employing training data. Then the phase difference between adjacent symbols [64] is computed, which gives the FO measurement denoted by m_k in Eq. (40). Here, v_k is given by ϕ_k − ϕ_{k−1} and follows a Gaussian distribution, and ϑ_k denotes the resulting measurement noise. By considering the observation model given in Eq. (40) for the EKF, a coarse FO estimation is performed in the first stage using a set of training data symbols. The input signal r̂_k to the second stage after coarse FO compensation is given in Eq. (41), where Δw_k denotes the residual FO. Here, the CPANE algorithm is employed to compensate the residual FO, phase noise and ASE induced phase and amplitude distortions. Figure 8 illustrates the basic structure of this two stage EKF [65]. A similar two stage model using LKF has also been evaluated in [65] and compared to EKF.

3.5. Kalman filtering for joint compensation of phase and frequency offset

computationally complex.

amplitude noise.

3.5.2.1. Principles of two stage EKF

3.5.1. LKF and EKF for FO estimation

222 Kalman Filters - Theory for Advanced Applications

estimation has to be carried out after FO compensation.

3.5.2. Two stage EKF for joint compensation of FO, phase and amplitude noise

Apart from digital equalization, carrier synchronization is also vital to mitigate the phase and frequency offsets between the transmitter laser and free running LO. Since the CPE methods have low tolerance towards FO, which may go as high as 5 GHz, a separate FO estimation (FOE) is required. Consequently, several FOE algorithms have been proposed in the literature that are either based on the phase increments between adjacent symbols [72] or spectrum based methods [73]. These methods are either not accurate for higher order QAM systems or

A novel FOE algorithm using Kalman filtering have been proposed and numerically verified in [60]. The simulation results in [60] concludes that the Kalman filter can achieve faster convergence and outperforms the conventional FO estimation at low OSNR. In [64], FOE schemes based on blind and training data, using LKF and EKF have been proposed for QPSK systems. These Kalman based FOE algorithms are evaluated both numerically and experimentally, and are compared to FFT based FOE methods. The investigations in [64] report that the training data based Kalman FOE methods show better accuracy in estimating the FO in case of fewer symbols and high OSNR, compared to FFT based methods. However, a separate phase

The Kalman based FOE algorithms proposed in [60, 64] can compensate only for the FO and therefore, the carrier phase has to be recovered separately after FO compensation. In [65], a two stage EKF method based on training data has been proposed for joint tracking of FO, phase and amplitude noise. In the first stage, a coarse estimate of FO is obtained using a set of training data symbols following the training data scheme proposed in [64]. In the second stage, CPANE algorithm has been employed to jointly compensate for the residual FO, phase and

After linear equalization, the received signal on single polarization, with frequency and phase offset can be represented as given in Eq. (39). Here, rk and ak denote the received and transmitted symbol, respectively, at the time instant. w denotes the FO between the transmitter laser and the LO. Ts denotes the symbol duration. Øk and nk denote the phase noise and ASE noise, respectively. In order to obtain the measurement for FO, the first step is to wipe off the data phase which is performed by employing training data. Then the phase difference between the adjacent symbols [64] is computed, which gives the measurement of FO denoted by mk, in Eq. (40). Here, vk is given by Øk – Øk–1, and follows a Gaussian distribution. By considering the observation model given in Eq. (40) for EKF, a coarse FO estimation is performed in the first stage using a set of training data sequence. The input signal ^rk to the second stage after coarse

$$r\_k = a\_k \mathbf{e}^{j(wkT\_s + \phi\_k)} + n\_k \tag{39}$$

$$m\_k = \mathbf{e}^{j(w + v\_k)} + \vartheta\_k \tag{40}$$

$$
\hat{r}\_k = a\_k \mathbf{e}^{j(\Delta w\_k + \phi\_k)} + n\_k \tag{41}
$$
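The first-stage measurement construction of Eqs. (39)–(40) can be sketched as follows. This is a simplified illustration with an assumed symbol rate, FO and phase noise level; for clarity the coarse FO estimate here is just the averaged phase increment, whereas the chapter feeds the measurements m_k into an LKF/EKF.

```python
import numpy as np

rng = np.random.default_rng(0)
Ts = 1 / 28e9                     # symbol duration (assumed 28 GBd symbol rate)
w = 2 * np.pi * 1e9               # true FO of 1 GHz as an angular frequency
k = np.arange(200)                # indices of the training symbols

a = np.exp(1j * rng.integers(0, 4, k.size) * np.pi / 2)   # known QPSK training data
phi = np.cumsum(rng.normal(0.0, 1e-3, k.size))            # laser phase noise random walk
r = a * np.exp(1j * (w * k * Ts + phi))                   # received signal, Eq. (39) (ASE omitted)

s = r / a                              # wipe off the data phase with the training symbols
m = s[1:] * np.conj(s[:-1])            # adjacent-symbol phase increments, Eq. (40)
w_hat = np.angle(np.mean(m)) / Ts      # coarse FO estimate in rad/s
```

Averaging m_k before taking the angle keeps the estimate robust to the Gaussian increments v_k; the residual offset Δw = w − ŵ is then left for the second-stage EKF-CPANE of Eq. (41).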

The BER vs. OSNR curves for LKF and EKF after the second stage, using 200 and 500 training data symbols, for a FO of 1 GHz, are depicted in Figure 9 [65]. It can be concluded from [65] that both LKF and EKF show fast convergence irrespective of the number of training data symbols utilized in the first stage. However, since the EKF estimates a complex quantity, it also compensates for the amplitude noise and therefore outperforms the LKF. Moreover, as discussed earlier, the EKF does not require any angle operations, unlike the LKF, so the few additional computations required by the EKF are compensated by the additional benefit of better tracking capability.

Figure 8. Block diagram of two stage EKF for the joint compensation of FO, phase and amplitude noise [65].

Figure 9. BER vs. OSNR curves for LKF and EKF after residual FO compensation for a FO of 1 GHz [65].


This two stage EKF model has been extended in [66] to compensate also for fiber nonlinearity in addition to FO, phase and amplitude noise. The first stage is similar and compensates the FO coarsely, as discussed earlier. In the second stage, the total phase noise to be estimated comprises both laser phase noise and fiber nonlinearities. The EKF-CPANE algorithm is employed for tracking the residual FO and the total phase noise in addition to amplitude noise. From the numerical analysis, it was reported in [66] that, compared to LKF, the maximum possible transmission reach can be increased by an additional 500 km using EKF, at a BER of 2.4\*10<sup>−2</sup>.

### 4. Conclusions

We have discussed in detail how to exploit the potential of Kalman filters for the joint mitigation of several fiber optical transmission impairments in coherent optical transmission systems. Various Kalman based approaches for tracking the carrier phase, frequency offset and polarization state have been reviewed. The CPANE algorithm and its implementation using EKF for joint mitigation of linear and nonlinear phase noise as well as amplitude noise have been illustrated in detail. It has also been verified that the combination of DBP and EKF enhances the nonlinear mitigation performance at the expense of a few DBP steps. A cascaded structure using LKF and EKF has been illustrated for simultaneously tracking the polarization state and carrier phase besides amplitude noise. A two stage EKF model for simultaneous tracking of FO, phase and amplitude noise has also been discussed. From the discussed numerical verifications, it can be concluded that the Kalman filter based approaches for tracking optical transmission impairments outperform the conventional methods in coherent optical communication systems, with faster convergence, better tracking ability and more tolerance towards the impairments. Since the Kalman filter is an optimal recursive MMSE estimator, and offers hardware efficient implementation with modest computational effort and memory requirements, it seems set to become an essential component of future coherent optical receivers.

### Acknowledgements

The authors gratefully acknowledge the support of the Erlangen Graduate School in Advanced Optical Technologies (SAOT) funded by the German National Science Foundation (DFG) in the framework of the excellence initiative.

### Author details


Lalitha Pakala\* and Bernhard Schmauss

\*Address all correspondence to: lalitha.pakala@fau.de

Institute of Microwaves and Photonics (LHFT) and Erlangen Graduate School for Advanced Optical Technologies (SAOT), University of Erlangen-Nuremberg, Erlangen, Germany

### References


[10] Ip E, Kahn JM. Compensation of dispersion and nonlinear impairments using digital backpropagation. Journal of Lightwave Technology. 2008;26(20):3416-3425. DOI: 10.1109/JLT.2008.927791

[11] Du LB, Rafique D, Napoli A, Spinnler B, Ellis AD, Kuschnerov M, et al. Digital fiber nonlinearity compensation: Toward 1-Tb/s transport. IEEE Signal Processing Magazine. 2014;31(2):46-56. DOI: 10.1109/MSP.2013.2288110

[12] Pakala L, Schmauss B. Extended Kalman filtering for joint mitigation of phase and amplitude noise in coherent QAM systems. Optics Express. 2016;24(6):6391-6401. DOI: 10.1364/OE.24.006391

[13] Pakala L, Schmauss B. Joint tracking of polarization state and phase noise using adaptive cascaded Kalman filtering. IEEE Photonics Technology Letters. 2017;29(16):1297-1300. DOI: 10.1109/LPT.2017.2719279

[14] Zhou X, Nelson L. Advanced DSP for 400 Gb/s and beyond optical networks. Journal of Lightwave Technology. 2014;32(16):2716-2725. DOI: 10.1109/JLT.2014.2321135

[15] Kikuchi K. Polarization-demultiplexing algorithm in the digital coherent receiver. In: Proceedings of Digest LEOS Summer Topical Meetings; 21-23 July; Acapulco, Mexico. 2008. pp. 101-102. DOI: 10.1109/LEOSST.2008.4590509

[16] Fatadin I, Ives D, Savory SJ. Blind equalization and carrier phase recovery in a 16-QAM optical coherent system. Journal of Lightwave Technology. 2009;27(15):3042-3049. DOI: 10.1109/JLT.2009.2021961

[17] Stojanovic N, Huang Y, Hauske FN, Fang Y, Chen M, Xie C, et al. MLSE-based nonlinearity mitigation for WDM 112 Gbit/s PDM-QPSK transmissions with digital coherent receiver. In: Optical Fiber Communication Conference and Exposition (OFC/NFOEC), 2011 and the National Fiber Optic Engineers Conference; 6-11 March; Los Angeles, CA, USA. 2011. pp. 1-3

[18] Marcenac DD, Nesset D, Kelly AE, Brierley M, Ellis AD, Moodie DG, et al. 40 Gbit/s transmission over 406 km of NDSF using mid-span spectral inversion by four-wave mixing in a 2 mm long semiconductor optical amplifier. Electronics Letters. 1997;33(10):879-880. DOI: 10.1049/el:19970583

[19] Morshed M, Du LB, Lowery AJ. Mid-span spectral inversion for coherent optical OFDM systems: Fundamental limits to performance. Journal of Lightwave Technology. 2012;31(1):58-66. DOI: 10.1109/JLT.2012.2227942

[20] Sackey I, Ros FD, Fischer JK, Richter T, Jazayerifar M, Peucheret C. Kerr nonlinearity mitigation: Mid-link spectral inversion versus digital backpropagation in 5x28-GBd PDM 16-QAM signal transmission. Journal of Lightwave Technology. 2015;33(9):1821-1827. DOI: 10.1109/JLT.2015.2393152

[21] Liu X, Chandrasekhar S, Winzer PJ, Tkach RW, Chraplyvy AR. Fiber-nonlinearity-tolerant superchannel transmission via nonlinear noise squeezing and generalized phase conjugated twin waves. Journal of Lightwave Technology. 2014;32(4):766-775. DOI: 10.1109/JLT.2013.2280998

[22] Tao Z, Dou L, Yan W, Li L, Hoshida T, Rasmussen JC. Multiplier-free intrachannel nonlinearity compensating algorithm operating at symbol rate. Journal of Lightwave Technology. 2011;29(17):2570-2576. DOI: 10.1109/JLT.2011.2160933

[23] Gao Y, Cartledge JC, Karar AS, Yam SS-H, O'Sullivan M, Laperle C, et al. Reducing the complexity of perturbation based nonlinearity precompensation using symmetric EDC and pulse shaping. Optics Express. 2014;22(2):1209-1219. DOI: 10.1364/OE.22.001209

[24] Essiambre RJ, Kramer G, Winzer PJ, Foschini GJ, Goebel B. Capacity limits of optical fiber networks. Journal of Lightwave Technology. 2010;28(4):662-701. DOI: 10.1109/JLT.2009.2039464

[25] Ellis AD, Zhao J, Cotter D. Approaching the non-linear Shannon limit. Journal of Lightwave Technology. 2010;28(4):423-433. DOI: 10.1109/JLT.2009.2030693

[26] Wu M, Way WI. Fiber nonlinearity limitations in ultra-dense WDM systems. Journal of Lightwave Technology. 2004;22(6):1483-1498. DOI: 10.1109/JLT.2004.829222

[27] Li L, Tao Z, Dou L, Yan W, Oda S, Tanimura T, et al. Implementation efficient equalizer based on correlated backpropagation. In: Optical Fiber Communication Conference and Exposition (OFC/NFOEC), 2011 and the National Fiber Optics Engineers Conference; 6-10 March; Los Angeles, CA, USA. 2011. pp. 1-3

[28] Rafique D, Mussolin M, Forzati M, Martensson J, Chugtai MN, Ellis AD. Compensation of intra-channel nonlinear fibre impairments using simplified digital back-propagation algorithm. Optics Express. 2011;19(10):9453-9460. DOI: 10.1364/OE.19.009453

[29] Lin C, Holtmannspoetter M, Asif MR, Schmauss B. Compensation of transmission impairments by digital backward propagation for different link designs. In: 36th European Conference and Exhibition on Optical Communication; 19-23 September; Torino, Italy. 2010. pp. 1-3. DOI: 10.1109/ECOC.2010.5621413

[30] Pakala L, Schmauss B. Performance evaluation of modulation format independent carrier phase estimation and decision directed carrier phase estimation for fiber nonlinearity mitigation. In: Proc. SPIE 9288, Photonics North 2014; 28 May; Montreal, Canada. 2014. DOI: 10.1117/12.2074897

[31] Pakala L, Schmauss B. Enhanced transmission performance using digital back propagation and extended Kalman filtering for dispersion unmanaged links. In: Advanced Photonics 2016; 18-20 July; Vancouver, Canada. OSA Technical Digest (Optical Society of America); 2016. DOI: 10.1364/SPPCOM.2016.SpW1G.3

[32] Rafique D, Zhao J, Ellis AD. Fundamental limitations of digital back-propagation in coherent transmission systems. In: Transparent Optical Networks (ICTON), 2011 13th International Conference on; 26-30 June; Stockholm, Sweden. 2011. pp. 1-4. DOI: 10.1109/ICTON.2011.5971101


[33] Secondini M, Rommel S, Meloni G, Fresi F, Forestieri E, Poti L. Single-step digital backpropagation for nonlinearity mitigation. Photonics Network Communications. 2016;31(3):493-502. DOI: 10.1007/s11107-015-0586-z

[34] Rafique D, Ellis AD. Nonlinear and ROADM induced penalties in 28 GBaud dynamic optical mesh networks employing electronic signal processing. Optics Express. 2011;19(18):16739-16748. DOI: 10.1364/OE.19.016739

[35] Ip E, Kahn JM. Feedforward carrier recovery for coherent optical communications. Journal of Lightwave Technology. 2007;25(9):2675-2692. DOI: 10.1109/JLT.2007.902118

[36] Pfau T, Hoffmann S, Noe R. Hardware-efficient coherent digital receiver concept with feedforward carrier recovery for M-QAM constellations. Journal of Lightwave Technology. 2009;27(8):989-999. DOI: 10.1109/JLT.2008.2010511

[37] Li J, Li L, Tao Z, Hoshida T, Rasmussen JC. Laser-linewidth-tolerant feed-forward carrier phase estimator with reduced complexity for QAM. Journal of Lightwave Technology. 2011;29(16):2358-2364. DOI: 10.1109/JLT.2011.2159580

[38] Taylor MG. Phase estimation methods for optical coherent detection using digital signal processing. Journal of Lightwave Technology. 2009;27(7):901-914. DOI: 10.1109/JLT.2008.927778

[39] Gao Y, Lau APT, Lu C. Modulation-format-independent carrier phase estimation for square M-QAM systems. IEEE Photonics Technology Letters. 2013;25(11):1073-1076. DOI: 10.1109/LPT.2013.2259226

[40] Pakala L, Schmauss B. Improved decision directed carrier phase estimation for nonlinearity mitigation in 16-QAM systems. In: 16th International Conference on Transparent Optical Networks (ICTON); 6-10 July; Graz, Austria. 2014. pp. 1-4. DOI: 10.1109/ICTON.2014.6876047

[41] Viterbi A. Nonlinear estimation of PSK-modulated carrier phase with application to burst digital transmission. IEEE Transactions on Information Theory. 1983;29(4):543-551. DOI: 10.1109/TIT.1983.1056713

[42] Zhang S, Kam PY, Yu C, Chen J. Decision-aided carrier phase estimation for coherent optical communications. Journal of Lightwave Technology. 2010;28(11):1597-1607. DOI: 10.1109/JLT.2010.2048198

[43] Rice F, Rice M, Cowley B. A new algorithm for 16QAM carrier phase estimation using QPSK partitioning. Digital Signal Processing. 2002;12(1):77-86

[44] Li L, Tao Z, Liu L, Yan W, Oda S, Hoshida T, et al. XPM tolerant adaptive carrier phase recovery for coherent receiver based on phase noise statistics monitoring. In: 35th European Conference on Optical Communication 2009 (ECOC'09); 20-24 September; Vienna, Austria. 2009. pp. 1-3

[45] Pakala L, Schmauss B. Nonlinearity and phase noise mitigation using feed-forward carrier phase estimation and digital backward propagation in coherent QAM transmission. Physics Procedia. 2014;56:1353-1357. DOI: 10.1016/j.phpro.2014.08.062

[46] Piyawanno K, Kuschnerov M, Spinnler B, Lanki B. Nonlinearity mitigation with carrier phase estimation for coherent receivers with higher-order modulation formats. In: LEOS Annual Meeting Conference Proceedings, 2009 (LEOS'09); 4-8 October; Belek-Antalya, Turkey. 2009. pp. 1-2. DOI: 10.1109/LEOS.2009.5343204

[47] Pakala L, Schmauss B. Non-linear mitigation using carrier phase estimation and K-Means clustering. In: Photonic Networks; 16. ITG Symposium; Proceedings of; 7-8 May; Leipzig, Germany. 2015

[48] Lin C, Asif R, Holtmannspoetter M, Schmauss B. Nonlinear mitigation using carrier phase estimation and digital backward propagation in coherent QAM transmission. Optics Express. 2012;20(26):B405-B412. DOI: 10.1364/OE.20.00B405

[49] Bisplinghoff A, Vogel C, Kupfer T, Langenbach S, Schmauss B. Slip-reduced carrier phase estimation for coherent transmission in the presence of non-linear phase noise. In: Optical Fiber Communication Conference and Exposition and the National Fiber Optic Engineers Conference (OFC/NFOEC), 2013; 17-21 March; Anaheim, CA, USA. 2013. pp. 1-3. DOI: 10.1364/OFC.2013.OTu3l.1

[50] Savory SJ. Digital coherent optical receivers: Algorithms and subsystems. IEEE Journal of Selected Topics in Quantum Electronics. 2010;16(5):1164-1179. DOI: 10.1109/JSTQE.2010.2044751

[51] Kalman RE. A new approach to linear filtering and prediction problems. Transactions of the ASME-Journal of Basic Engineering. 1960;82:35-45

[52] Pessoa LM, Salgado HM, Darwazeh I. Efficient implementation of a phase estimation algorithm in coherent optical systems. In: IEEE LEOS, STM; 2008. pp. 1-3

[53] Pakala L, Schmauss B. Joint compensation of phase and amplitude noise using extended Kalman filter in coherent QAM systems. In: European Conference on Optical Communication (ECOC), 2014; 21-25 September; Cannes, France. 2014. pp. 1-3. DOI: 10.1109/ECOC.2014.6964165

[54] Pakala L, Schmauss B. Extended Kalman filtering for simultaneous phase and amplitude noise mitigation in WDM systems. In: 17th International Conference on Transparent Optical Networks (ICTON); 5-9 July 2015; Budapest, Hungary. 2015. pp. 1-4. DOI: 10.1109/ICTON.2015.7193301

[55] Zibar D, Hecker de Carvalho LH, Piels M, Doberstein A, Diniz J, Nebendahl B, et al. Application of machine learning techniques for amplitude and phase noise characterization. Journal of Lightwave Technology. 2015;33(7):1333-1343. DOI: 10.1109/JLT.2015.2394808

[56] Inoue T, Namiki S. Carrier recovery for M-QAM signals based on a block estimation process with Kalman filter. Optics Express. 2014;22(13):15376-15387. DOI: 10.1364/OE.22.015376

[57] Szafraniec B, Marshall TS, Nebendahl B. Performance monitoring and measurement techniques for coherent optical systems. Journal of Lightwave Technology. 2013;31(4):648-663. DOI: 10.1109/JLT.2012.2212234


[58] Pakala L, Schmauss B. Evaluation of correlated digital back propagation and extended Kalman filtering for nonlinear mitigation in PM-16-QAM WDM systems. In: Proc. SPIE 10130, Next-Generation Optical Communication: Components, Sub-systems, and Systems VI, 101300K; 28 January; San Francisco, CA, USA; 2017. pp. 1-9. DOI: 10.1117/12.2250488

[59] Pakala L, Schmauss B. Effective mitigation of non-linearities using extended Kalman filtering for dispersion managed links. In: Proceedings of Photonic Networks; 17. ITG-Symposium; 12-13 May; Leipzig, Germany. 2016

[60] Zhang S, Kam PY, Yu C, Chen J. Frequency offset estimation using a Kalman filter in coherent optical phase-shift keying systems. In: Conference on Lasers and Electro-Optics (CLEO) and Quantum Electronics and Laser Science Conference (QELS); 16-21 May; San Jose, CA, USA. 2010. pp. 1-2

[61] Marshall T, Szafraniec B, Nebendahl B. Kalman filter carrier and polarization-state tracking. Optics Letters. 2010;35(13):2203-2205. DOI: 10.1364/OL.35.002203

[62] Yang Y, Cao G, Zhong K, Zhou X, Yao Y, Pak A, et al. Fast polarization-state tracking scheme based on radius-directed linear Kalman filter. Optics Express. 2015;23(15):19673-19680. DOI: 10.1364/OE.23.019673

[63] Jignesh J, Corcoran B, Zhu C, Lowery A. Unscented Kalman filters for polarization state tracking and phase noise mitigation. Optics Express. 2016;24(19):22282-22295. DOI: 10.1364/OE.24.022282

[64] Jiang W, Yang Y, Zhang Q, Sun Y, Zhong K, Zhou X, et al. Application of Kalman filter in frequency offset estimation for coherent optical quadrature phase-shift keying communication system. Optical Engineering. 2016;55(9):096102. DOI: 10.1117/1.OE.55.9.096102

[65] Pakala L, Schmauss B. Joint compensation of frequency offset, phase and amplitude noise using two stage extended Kalman filtering. In: Proceedings of Photonic Networks; 18. ITG-Symposium; 11-12 May; Leipzig, Germany. 2017. pp. 1-4

[66] Pakala L, Schmauss B. Two stage extended Kalman filtering for joint compensation of frequency offset, linear and nonlinear phase noise and amplitude noise in coherent QAM systems. In: 19th International Conference on Transparent Optical Networks (ICTON); 2-6 July; Girona, Spain; 2017. pp. 1-4

[67] Bar-Shalom Y, Rong Li X, Kirubarajan T. Estimation with Applications to Tracking and Navigation: Theory, Algorithms and Software. New York: Wiley; 2001. 584 p. DOI: 10.1002/0471221279

[68] Mehra R. Approaches to adaptive filtering. IEEE Transactions on Automatic Control. 1972;17(5):693-698. DOI: 10.1109/TAC.1972.1100100

[69] Ding W, Wang J, Rizos C. Improving adaptive Kalman estimation in GPS/INS integration. The Journal of Navigation. 2007;60:517-529. DOI: 10.1017/S0373463307004316

[70] Zhu L, Li G. Folded digital backward propagation for dispersion managed fiber-optic transmission. Optics Express. 2011;19(7):5953-5959. DOI: 10.1364/OE.19.005953

[71] Kikuchi K. Performance analyses of polarization demultiplexing based on constant-modulus algorithm in digital coherent optical receivers. Optics Express. 2011;19(2):9868-9880. DOI: 10.1364/OE.19.009868

[72] Leven A, Kaneda N, Koc U, Chen Y. Frequency estimation in intradyne reception. IEEE Photonics Technology Letters. 2007;19(6):366-368. DOI: 10.1109/LPT.2007.891893

[73] Selmi M, Jaouen Y, Ciblat P. Accurate digital frequency offset estimator for coherent PolMux QAM transmission systems. In: 35th European Conference on Optical Communications (ECOC' 2009); 20-24 September; Vienna, Austria. 2009. pp. 1-3


**Chapter 12**

### **Kalman Filter for Moving Object Tracking: Performance Analysis and Filter Design**

Kenshi Saho

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/intechopen.71731

### Abstract

This chapter presents Kalman filters for tracking moving objects and their efficient design strategy based on steady-state performance analysis. First, a dynamic/measurement model is defined for the tracking systems, assuming both position-only and position-velocity measurements. Then, problems with the Kalman filter design in tracking systems are summarized, and an efficient steady-state performance index proposed by the author [termed the root-mean-squared error index (the RMS index)] is introduced to resolve these concerns. The analytical relationship between the proposed RMS index and the covariance matrix of the process noise is shown, leading to a proposed design strategy that is based on this relationship. Theoretical performance analysis is conducted using the performance indices to show the optimality of the design strategy. Numerical simulations show the validity of the theoretical analyses and the effectiveness of the proposed strategy in realistic situations. In addition, the optimal performance of the position-only-measured and position-velocity-measured systems is analyzed and compared. This comparison shows that position-velocity-measured Kalman filter tracking is more accurate than position-only-measured tracking.

Keywords: Kalman tracking filter, moving object tracking, steady-state analysis, performance index, filter design, process noise

### 1. Introduction

Remote monitoring systems for cars and robots require accurate tracking of moving objects. Representative tracking algorithms include the Kalman filter [1–5] and its variants, such as the extended/unscented Kalman [6–9] and particle filters [10–12]. These can accurately track movement based on adaptive filtering by using a state-space model.

To use the Kalman filter for the tracking of moving objects, it is necessary to design a dynamic model of target motion. The most common dynamic model is a constant velocity (CV) model [1, 10], which assumes that the velocity is constant during a sampling interval. This model has been used in many applications because of its versatility, effectiveness, and simplicity. However, in almost all conventional tracking systems, the selection of the process noise (zero-mean white noise in the dynamic model) is conducted empirically [4, 6, 8]. This is because conventional studies tend to assume that the process noise takes one of a limited number of forms that are known to be appropriate selections. Thus, despite the large number of investigations into Kalman filter trackers, the optimal selection of a process noise model has not been discussed. The general problems of model selection for Kalman filter trackers were discussed by Ekstrand in 2012 [1]. In the years since, further research on these issues has been conducted, but no satisfactory solutions to the abovementioned problems have been presented. Crouse [13] described a general solution for optimal trackers in a steady state. However, this method also requires an empirical selection of the dynamic models. A detailed analysis of the Kalman filter has been provided for various applications, including global navigation satellite systems [14] and video trackers [15]. However, only limited systems have yet been considered, and no definitive parameter-setting procedure for the Kalman tracking filter has been provided. Although various criteria have been proposed and investigated for the design of Kalman filters and their variants to achieve better tracking accuracy, robustness, and real-time capability, the relationship between these performance indices and model parameters such as the process noise variance is not discussed even in recent studies [16].

Another significant problem in a measurement model of the conventional Kalman tracking filter is that most studies consider only position measurements and therefore cannot make full use of modern sensors that are able to measure velocity, such as ultrawideband Doppler radar [17, 18]. Moreover, sensor fusion based on Internet of Things technology also enables the simultaneous measurement of position and velocity (e.g., sensor data fusion based on the communication between radars/lasers/sonars and speedometers embedded in targets). Consequently, Kalman filters for such systems have become an important area of research [19–24]. In Ref. [24], the extended Kalman filter for radar measurements is modified for range (position) and range-rate (velocity) measurements, and its effectiveness in realistic radar applications is verified. However, a concrete design criterion is not shown. The number of conventional studies on position-velocity-measured (PVM) Kalman filters is smaller than those on the more common position-only-measured (POM) Kalman filters, and the performance and design of PVM Kalman filters are not sufficiently considered.

To resolve the two problems described above concerning the process noise selection and PVM systems, our previous work clarified the fundamental properties of PVM tracking filters [25, 26] and generated an efficient performance index to design an optimal process noise matrix [3, 5]. In the studies of PVM tracking filters [25, 26], fixed-gain PVM filter properties were analytically clarified, but there was no optimization of the PVM Kalman filters. In our work on the process noise matrix [3], an optimal POM Kalman filter, with respect to position prediction, was presented. In this chapter, an appropriate process noise design strategy, based on our proposed efficient steady-state performance index (introduced in Section 3), and its applicability are verified. Our previous work highlighted the following issues, which we address in this chapter:

i. Analysis of the performance of a PVM Kalman filter with a CV model, based on the proposed index.

ii. Application of the proposed process noise design strategy to a PVM Kalman filter.

iii. Comparison of the performance of optimal POM and PVM Kalman filters.

This chapter presents the theoretical analyses and simulations required to tackle these issues. The remainder of this chapter is organized as follows: Section 2 defines the tracking filtering problem dealt with in this chapter and explains the existing concerns and models for POM and PVM Kalman filter design. Section 3 introduces our proposed efficient performance indices with their mathematical formulations. Section 4 presents the proposed process noise design strategy based on the performance index. Section 5 shows the theoretical analysis of the optimal POM and PVM Kalman filter performance in a steady state. The effectiveness of the PVM Kalman filter is proven by comparison with the POM filter. Section 6 shows realistic maneuvering-target-tracking application examples. Section 7 concludes this chapter and proposes future tasks.

> © 2018 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### 2. Problem statement

This section introduces the Kalman filter for moving object tracking and defines the model assumed in this chapter.

#### 2.1. Dynamic model

The Kalman filter for tracking moving objects estimates a state vector comprising the parameters of the target, such as position and velocity, based on a dynamic/measurement model. For simplicity, this chapter deals with a typical second-order one-dimensional Kalman filter tracker whose true state vector is defined as

$$\mathbf{x}_{\mathrm{t}} = \begin{pmatrix} x_{\mathrm{t}} & v_{\mathrm{t}} \end{pmatrix}^{\mathrm{T}}, \tag{1}$$

where x<sub>t</sub> and v<sub>t</sub> are the true position and velocity of the target moving object, respectively, and <sup>T</sup> denotes the transpose. The assumed dynamic model is a constant velocity (CV) model, which is a simple and popular model for tracking moving objects. The CV model assumes that the velocity is constant during the sampling interval, which is expressed as

$$\mathbf{x}_{\mathrm{t}k} = \boldsymbol{\Phi}\,\mathbf{x}_{\mathrm{t}k-1} + \mathbf{w}_{k}, \tag{2}$$

where x<sub>t,k</sub> denotes the true state at time kT, T is the sampling interval, w<sub>k</sub> is the process noise with covariance matrix Q, and Φ is the transition matrix from kT to (k+1)T, which is expressed as

$$
\boldsymbol{\Phi} = \begin{pmatrix} 1 & T \\ 0 & 1 \end{pmatrix} \tag{3}
$$

The Kalman filter predicts the target state based on this dynamic model.
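As a concrete illustration, the CV prediction of Eqs. (2) and (3) amounts to a single matrix-vector product. A minimal Python sketch, where the sampling interval and initial state are illustrative values rather than values from the chapter:

```python
# CV-model state propagation, Eqs. (2)-(3): x_k = Phi x_{k-1} (plus
# process noise w_k, omitted here). T and the state are illustrative.
T = 0.5  # sampling interval [s]

def cv_predict(pos, vel):
    """Apply Phi = [[1, T], [0, 1]] to the state (position, velocity)."""
    return pos + T * vel, vel

pos, vel = 0.0, 2.0          # start at 0 m with 2 m/s
pos, vel = cv_predict(pos, vel)
print(pos, vel)              # -> 1.0 2.0
```

Adding a sample of process noise with covariance Q on top of this prediction gives one full step of Eq. (2).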

#### 2.2. Measurement model

The measurements are simply modeled as

$$\mathbf{z}_{k} = \mathbf{H}\mathbf{x}_{\mathrm{t}k} + \mathbf{v}_{k}, \tag{4}$$


where z<sub>k</sub> denotes the measurement vector, H denotes the measurement matrix, and v<sub>k</sub> is the measurement noise with covariance matrix R. This chapter considers two types of measurement systems, which are discussed as follows.

#### 2.2.1. Position-only-measured system

The POM system assumes that the sensors (such as radar, laser, and sonar) can measure only the position of the target. This is a general assumption in moving object tracking. H and R of this model are expressed as

$$\mathbf{H} = \begin{pmatrix} 1 & 0 \end{pmatrix}, \tag{5}$$

$$\mathbf{R} = (B_{\mathrm{x}}), \tag{6}$$

where B<sub>x</sub> is the variance of the position measurement errors.

#### 2.2.2. Position-velocity-measured system

The PVM system assumes that the sensor system can measure position and velocity simultaneously. One example of the PVM model system is a pulse Doppler radar. Sensor fusion systems using communications of position/velocity sensors can also be expressed by the PVM model. H of this model is expressed as

$$\mathbf{H} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. \tag{7}$$

We now assume that the noises of position and velocity measurements are uncorrelated, and R of PVM systems under this assumption is defined as

$$\mathbf{R} = \begin{pmatrix} B_{\mathrm{x}} & 0 \\ 0 & B_{\mathrm{v}} \end{pmatrix}, \tag{8}$$

where B<sub>v</sub> is the variance of the velocity measurement errors.
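The two systems differ only in H and R. A short sketch of Eqs. (4)–(8), with illustrative noise variances B<sub>x</sub> and B<sub>v</sub>:

```python
# Measurement models of Eqs. (4)-(8); Bx and Bv are illustrative values.
Bx, Bv = 4.0, 1.0

H_pom = [[1.0, 0.0]]                      # Eq. (5): position only
R_pom = [[Bx]]                            # Eq. (6)

H_pvm = [[1.0, 0.0], [0.0, 1.0]]          # Eq. (7): position and velocity
R_pvm = [[Bx, 0.0], [0.0, Bv]]            # Eq. (8): uncorrelated noises

def measure(H, x):
    """Noise-free part of Eq. (4): z = H x."""
    return [sum(h * xi for h, xi in zip(row, x)) for row in H]

print(measure(H_pom, [3.0, 2.0]))         # -> [3.0]
print(measure(H_pvm, [3.0, 2.0]))         # -> [3.0, 2.0]
```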

#### 2.3. Kalman filter tracking

The Kalman filter tracker based on the abovementioned models sequentially estimates state vectors via the Kalman filter equations. The prediction and estimation are calculated as

Kalman Filter for Moving Object Tracking: Performance Analysis and Filter Design http://dx.doi.org/10.5772/intechopen.71731

$$\tilde{\mathbf{x}}_k = \boldsymbol{\Phi}\,\widehat{\mathbf{x}}_{k-1}, \tag{9}$$

$$
\widehat{\mathbf{x}}\_{k} = \mathbf{\tilde{x}}\_{k} + \mathbf{K}\_{k} (\mathbf{z}\_{k} - \mathbf{H}\mathbf{\tilde{x}}\_{k}),
\tag{10}
$$

where predictions and estimates are denoted by <sup>~</sup> and <sup>^</sup>, respectively, and K<sub>k</sub> denotes the Kalman gain that minimizes the errors in the estimated position and velocity. K<sub>k</sub> is calculated as

$$\mathbf{K}_k = \tilde{\mathbf{P}}_k \mathbf{H}^{\mathrm{T}} \left( \mathbf{H} \tilde{\mathbf{P}}_k \mathbf{H}^{\mathrm{T}} + \mathbf{R} \right)^{-1}, \tag{11}$$

where P̃<sub>k</sub> and P̂<sub>k</sub> are the predicted and estimated error covariance matrices, determined from

$$\tilde{\mathbf{P}}_k = \boldsymbol{\Phi} \widehat{\mathbf{P}}_{k-1} \boldsymbol{\Phi}^{\mathrm{T}} + \mathbf{Q}, \tag{12}$$

$$
\widehat{\mathbf{P}}\_k = \mathbf{\tilde{P}}\_k - \mathbf{K}\_k \mathbf{H} \mathbf{\tilde{P}}\_k. \tag{13}
$$
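For the POM system, H = (1 0), so the innovation covariance H P̃<sub>k</sub> H<sup>T</sup> + R in Eq. (11) is a scalar and no matrix inverse is needed. One full predict/update cycle of Eqs. (9)–(13) can be sketched in pure Python (T, B<sub>x</sub>, and all states below are illustrative values):

```python
# One cycle of Eqs. (9)-(13) for the POM system, with the 2x2
# arithmetic written out explicitly. T and Bx are illustrative.
T, Bx = 1.0, 1.0

def kalman_step(xhat, Phat, z, Q):
    """One predict/update cycle; xhat = [pos, vel], Phat and Q are 2x2 lists."""
    # Eq. (9): state prediction with Phi = [[1, T], [0, 1]]
    xt = [xhat[0] + T * xhat[1], xhat[1]]
    # Eq. (12): covariance prediction  P~ = Phi Phat Phi^T + Q
    p00 = Phat[0][0] + T * (Phat[0][1] + Phat[1][0]) + T * T * Phat[1][1] + Q[0][0]
    p01 = Phat[0][1] + T * Phat[1][1] + Q[0][1]
    p10 = Phat[1][0] + T * Phat[1][1] + Q[1][0]
    p11 = Phat[1][1] + Q[1][1]
    # Eq. (11): Kalman gain, with scalar innovation covariance p00 + Bx
    s = p00 + Bx
    K = [p00 / s, p10 / s]
    # Eq. (10): state estimate
    innov = z - xt[0]
    xhat_new = [xt[0] + K[0] * innov, xt[1] + K[1] * innov]
    # Eq. (13): covariance estimate  Phat = P~ - K H P~
    Phat_new = [[p00 - K[0] * p00, p01 - K[0] * p01],
                [p10 - K[1] * p00, p11 - K[1] * p01]]
    return xhat_new, Phat_new
```

For the PVM system the same cycle applies, but H is the 2x2 identity and Eq. (11) requires inverting a 2x2 innovation covariance matrix.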

#### 2.4. Aspects of tracking filter design


Moving object tracking obtains accurate and sequential estimates of the target position and velocity by using Eqs. (9)–(13). As indicated in Eqs. (1)–(13), the design parameters of the Kalman filter tracker are the elements of the process noise covariance matrix Q. We must set Q to achieve tracking errors that are as small as possible. Thus, we must know how to design an appropriate Q, and we must be able to define an evaluation index for the filter performance. However, these issues have not been sufficiently deliberated, because the selection of Q has not been adequately addressed in previous studies; in practice, the design of Q is carried out empirically.

In conventional tracking systems, the random acceleration (RA) process noise model is most commonly selected because of its good performance. Its Q is

$$\mathbf{Q\_{ra}} = \begin{pmatrix} T^4/4 & T^3/2 \\ T^3/2 & T^2 \end{pmatrix} \sigma\_{\mathbf{q}}^2. \tag{14}$$

The appropriate selection of σ<sub>q</sub> is important because σ<sub>q</sub> (and the sensor noise variance R) directly determines the performance of the tracking filter with the CV model. However, in conventional studies, process noises and their parameters are empirically selected, and the validity of the selection is discussed only casually [1, 16]. Many conventional tracking systems select the RA process noise (Q<sub>ra</sub>), with the variance σ<sub>q</sub><sup>2</sup> set based on the assumed target motion. However, no definitive method of determining σ<sub>q</sub> has been established. Although the tracking index defined by Kalata [27] is known as an effective design parameter, its empirical selection is still required. Moreover, the validity of selecting the RA process noise is also questionable. Various other forms of Q are known and have been used for different target motions [12]. For example, the random velocity model [2] and the diagonal Q, which does not include correlations in the process noise [7], are also frequently used. However, for the reasons discussed earlier, the differences in performance between the various process noise models are not known.
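Eq. (14) maps the single scalar σ<sub>q</sub> to the full RA covariance. A one-function construction (the values passed in are illustrative):

```python
# Build the RA process-noise covariance of Eq. (14) from T and sigma_q.
def q_ra(T, sigma_q):
    s2 = sigma_q ** 2
    return [[s2 * T ** 4 / 4, s2 * T ** 3 / 2],
            [s2 * T ** 3 / 2, s2 * T ** 2]]

print(q_ra(2.0, 1.0))   # -> [[4.0, 4.0], [4.0, 4.0]]
```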

### 3. The efficient steady-state performance index (RMS index)

The process noise selection problems discussed in Section 2.4 must be solved to effectively design Kalman tracking filters. Thus, we must properly evaluate the performance of the filter. An effective steady-state performance index was derived for this purpose [3] and is termed the root-mean-squared error (RMS) index. This section introduces the RMS index for POM and PVM systems and shows the analytical relationships between the RMS index and Q.

#### 3.1. Definition of RMS index

In tracking filtering, the following two functions are required:

• Function 1. Reduces random errors caused by measurement noises.

• Function 2. Tracks targets with complicated motions (e.g., accurate tracking of an accelerating target is required for the CV model).

The RMS index is proposed for the comprehensive evaluation of the performance of these two functions and is defined as

$$\varepsilon_{\mathrm{p}} \equiv \sqrt{\mathrm{E}\left[\left(x_{\mathrm{ta}k} - \tilde{x}_{k}\right)^{2}\right]}, \tag{15}$$


where x̃<sub>k</sub> is the predicted target position (the first element of the predicted state vector), E[·] indicates the mean with respect to k, and x<sub>ta,k</sub> is the true position of a constant acceleration target, which is

$$x_{\mathrm{ta}k} = a_{\mathrm{c}} (kT)^2 / 2, \tag{16}$$

where a<sub>c</sub> is the constant acceleration of the target. In the Kalman filter tracker using the CV model, it is assumed that the target velocity is constant during the sampling interval. Thus, for the constant acceleration target, a steady-state bias error occurs because of the difference between the target motion and the assumed dynamic model. Moreover, x̃<sub>k</sub> includes random errors due to measurement noise. Thus, the RMS index ε<sub>p</sub> expresses both bias errors and random errors. Denoting the steady-state bias error due to the model/motion difference by e<sub>ac</sub> and the steady-state standard deviation of the random errors in x̃<sub>k</sub> by σ<sub>p</sub>, ε<sub>p</sub> is expressed as

$$\varepsilon_{\mathrm{p}} = \sqrt{e_{\mathrm{ac}}^2 + \sigma_{\mathrm{p}}^2}. \tag{17}$$

σ<sub>p</sub> expresses the performance corresponding to Function 1, and e<sub>ac</sub> expresses the performance corresponding to Function 2. The smaller these errors are, the better the tracking filter. Thus, the minimum ε<sub>p</sub> achieves the best tracking filter in a steady state.
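The combination in Eq. (17) is a plain root-sum-square of the two error components; for instance:

```python
import math

def rms_index(e_ac, sigma_p):
    """Eq. (17): combine the steady-state bias error and random-error std."""
    return math.sqrt(e_ac ** 2 + sigma_p ** 2)

# e.g. a 3 m bias with a 4 m random standard deviation gives a 5 m index
print(rms_index(3.0, 4.0))   # -> 5.0
```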

#### 3.2. RMS index of a POM system

One important advantage of the RMS index is that it can be expressed in closed form. The closed form of ε<sup>p</sup> for the POM system was derived in Ref. [3]. This subsection introduces the RMS index and its relationship to the design parameter Q in the POM system.

First, the arbitrary Q is defined as


$$\mathbf{Q}\_{\text{gen}} = \begin{pmatrix} a & b \\ b & c \end{pmatrix} \tag{18}$$

where a > 0, b > 0, and c > 0, and the dimensions of a, b, and c are [m<sup>2</sup>], [m<sup>2</sup>/s], and [m<sup>2</sup>/s<sup>2</sup>], respectively. For example, substituting (a, b, c) = (σ<sub>q</sub><sup>2</sup>T<sup>4</sup>/4, σ<sub>q</sub><sup>2</sup>T<sup>3</sup>/2, σ<sub>q</sub><sup>2</sup>T<sup>2</sup>) into Eq. (18) gives the Q<sub>ra</sub> of Eq. (14), and b = 0 leads to the diagonal Q. The analytical relationship between Q<sub>gen</sub> and ε<sub>p</sub> is expressed by the following closed form.

$$
\epsilon\_{\rm p,pom}^2 = \frac{a\_{\rm c}^2 T^4}{\beta^2} + \frac{2\alpha^2 + 2\beta + \alpha\beta}{\alpha(4 - 2\alpha - \beta)} B\_{\rm x} \tag{19}
$$

where α and β are components of the steady-state Kalman gain K<sub>∞</sub> = (α, β/T)<sup>T</sup>, calculated from (a, b, c) using the following equations:

$$\beta = \frac{C + \sqrt{C(16 + 4A - 4B + C)}}{4} - \sqrt{\frac{C^2(16 + 4A - 4B + C)}{8\sqrt{C(16 + 4A - 4B + C)}} + \frac{C(2A - 2B + C)}{8}} \tag{20}$$

$$A = a/B_{\mathrm{x}}, \quad B = bT/B_{\mathrm{x}}, \quad C = cT^2/B_{\mathrm{x}} \tag{21}$$

$$\alpha = 1 - \beta^2/C \tag{22}$$

The derivation process of these equations is shown in Ref. [3]. As shown in Eqs. (19)–(22), the optimal (a, b, c) is designed by minimizing ε<sub>p</sub>.
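The closed form of Eqs. (19)–(22) can be cross-checked numerically: iterating the covariance recursion of Eqs. (11)–(13) with Q<sub>gen</sub> until it converges yields the steady-state gains (α, β/T)<sup>T</sup>, and substituting them into Eq. (19) gives ε²<sub>p,pom</sub>. The sketch below uses this iteration as an alternative to the closed-form gain expressions (it is not the chapter's own algorithm), with illustrative parameter values:

```python
def steady_gains(a, b, c, T, Bx, iters=2000):
    """Steady-state POM Kalman gains (alpha, beta), obtained by iterating
    the covariance recursion of Eqs. (11)-(13) with Q_gen = [[a, b], [b, c]]."""
    P = [[Bx, 0.0], [0.0, Bx / T ** 2]]    # arbitrary initial covariance
    k0 = k1 = 0.0
    for _ in range(iters):
        # Prediction: P~ = Phi P Phi^T + Q_gen, Eq. (12)
        p00 = P[0][0] + T * (P[0][1] + P[1][0]) + T * T * P[1][1] + a
        p01 = P[0][1] + T * P[1][1] + b
        p10 = P[1][0] + T * P[1][1] + b
        p11 = P[1][1] + c
        # Gain: K = P~ H^T / (H P~ H^T + Bx), Eq. (11), with H = (1 0)
        s = p00 + Bx
        k0, k1 = p00 / s, p10 / s
        # Update: Phat = P~ - K H P~, Eq. (13)
        P = [[p00 - k0 * p00, p01 - k0 * p01],
             [p10 - k1 * p00, p11 - k1 * p11 if False else p11 - k1 * p01]]
    return k0, k1 * T                       # K = (alpha, beta/T)^T

def rms_index_pom_sq(a, b, c, T, Bx, a_c):
    """Eq. (19), evaluated with the numerically obtained steady-state gains."""
    alpha, beta = steady_gains(a, b, c, T, Bx)
    return (a_c ** 2 * T ** 4 / beta ** 2
            + (2 * alpha ** 2 + 2 * beta + alpha * beta)
            / (alpha * (4 - 2 * alpha - beta)) * Bx)

# RA noise with sigma_q = 1, T = 1, Bx = 1 gives tracking index 1,
# for which the classic steady-state gains are alpha = 0.75, beta = 0.5.
al, be = steady_gains(0.25, 0.5, 1.0, 1.0, 1.0)
print(round(al, 6), round(be, 6))   # -> 0.75 0.5
```

Note that the result is consistent with Eq. (22): with C = 1, α = 1 − β² = 0.75.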

#### 3.3. RMS index of a PVM system

In a similar manner to the treatment of the POM system, this subsection introduces the RMS index of a PVM system and its relationship to Qgen. The RMS index of the PVM system is

$$\epsilon\_{\rm p,p\rm vm}^2 = \left(\frac{2 - 2\eta - \theta}{2(\beta + \alpha\theta - \beta\eta)}\right)^2 a\_c^2 T^4 + \frac{g\_2(\alpha, \beta, \eta, \theta)B\_\mathbf{x} + g\_3(\alpha, \beta, \eta, \theta)T^2 B\_\mathbf{v}}{g\_1(\alpha, \beta, \eta, \theta)}\tag{23}$$

where α, β, η, and θ are components of the steady-state Kalman gain:

$$\mathbf{K}_{\infty} = \begin{pmatrix} \alpha & T\eta \\ \beta/T & \theta \end{pmatrix} \tag{24}$$

and,

$$g_1(\alpha,\beta,\eta,\theta) = \left(\beta\eta - \alpha\theta - \beta\right)\left(\alpha\theta - \beta\eta - \alpha - \theta\right)\left(4 - 2\alpha - \beta - 2\theta + \alpha\theta - \beta\eta\right) \tag{25}$$

$$\begin{aligned} g_2(\alpha,\beta,\eta,\theta) &= \alpha^3\theta(\theta-2)(\theta-1) + \alpha^2\beta\left(2-2\theta-3\eta\theta^2+6\eta\theta-2\eta\right) \\ &\quad + \alpha^2\theta^2(2-\theta) + 2\alpha\beta\theta(\eta-2)(\theta-2) + \alpha\beta^2\left(1+2\eta-\theta-3\eta^2+3\eta^2\theta\right) \\ &\quad + \beta^3\eta(1+\eta)(1-\eta) + \beta^2\eta(2-\eta)(\theta-2) + \beta^2(2-\theta) \end{aligned} \tag{26}$$
 
$$g_3(\alpha,\beta,\eta,\theta) = \alpha\theta\left(2\eta^2 + 2\eta\theta + \theta^2 - \theta\right) + \eta\beta\left(2\eta+2\theta-2\eta^2-\theta^2-2\eta\theta\right) + \theta^2(2-\theta) \tag{27}$$

Eq. (23) is obtained from σ<sub>p</sub><sup>2</sup> and e<sub>ac</sub> of the steady-state PVM Kalman filter (the α-β-η-θ filter) by using Eq. (17). The derivation processes for these are shown in Ref. [25]. The relationship between the steady-state Kalman gains and Q<sub>gen</sub> is derived as follows:

$$\begin{aligned} a = \frac{T^2 B_{\mathrm{v}}}{1 - R_{\mathrm{xv}}\beta^2 - (1-\theta)\alpha - \theta} \Big\{ & \left(\alpha\beta^2 + 2\beta^3 + \beta^2 + (\theta-1)\theta\beta\right) R_{\mathrm{xv}} \\ & + \alpha^2 R_{\mathrm{xv}}\left((1-\theta)(1+2\alpha\beta) + \beta^2\theta + \beta\left(3\theta-2-\theta^2\right)\right) + \theta(1-\theta)(\alpha-1) \Big\} \end{aligned} \tag{28}$$

$$b = \frac{\beta^3 R_{\mathrm{xv}}^2 + R_{\mathrm{xv}}\left(\alpha\left(\beta(1-\theta) - \theta^2 + \theta\right) + \beta^2\theta + \beta\theta + \theta^2 - \theta\right)}{1 - R_{\mathrm{xv}}\beta^2 - (1-\theta)\alpha - \theta}\, T B_{\mathrm{v}} \tag{29}$$

$$c = \frac{R_{\mathrm{xv}}\left(\alpha\beta\theta + \beta^2(\theta+1) - \beta\theta\right) - \alpha\theta(\beta+\theta) + \beta\theta + \theta^2}{1 - R_{\mathrm{xv}}\beta^2 - (1-\theta)\alpha - \theta}\, B_{\mathrm{v}} \tag{30}$$

where

$$R_{\mathrm{xv}} = B_{\mathrm{x}} / \left(T^2 B_{\mathrm{v}}\right) \tag{31}$$

$$\eta = R_{\mathrm{xv}}\beta \tag{32}$$


The derivation of these is given in the Appendix. Note that the dimensionless parameter Rxv corresponds to the ratio of the measurement accuracies in position and velocity and directly affects the tracking accuracy in PVM tracking systems. From these results, we also obtain the closed form of the RMS index for PVM systems and can design optimal Q using Eqs. (22)–(32).

#### 4. Filter design strategy based on the RMS index

Using the RMS index introduced in the previous section, we can design the Kalman filter parameters (i.e., Q) to achieve optimal tracking. This section defines the optimization problems for POM and PVM systems with a Q that minimizes the RMS index εp.

#### 4.1. RMS-index minimization problem

#### 4.1.1. POM system optimization

The evaluating function to determine optimal Q is εp,pom normalized by Bx, which is defined as


$$\mu_{\mathrm{pom}} = \varepsilon_{\mathrm{p,pom}}^2 / B_{\mathrm{x}} = \frac{a_{\mathrm{D}}^2}{\beta^2} + \frac{2\alpha^2 + 2\beta + \alpha\beta}{\alpha(4 - 2\alpha - \beta)} \tag{33}$$

where

<sup>g</sup><sup>2</sup> <sup>α</sup>; <sup>β</sup>; <sup>η</sup>; <sup>θ</sup> <sup>¼</sup> <sup>α</sup><sup>3</sup>

Eq. (22) is obtained from σ<sup>2</sup>

<sup>þ</sup> <sup>α</sup><sup>2</sup> θ2

240 Kalman Filters - Theory for Advanced Applications

<sup>þ</sup> <sup>β</sup><sup>3</sup>

<sup>a</sup> <sup>¼</sup> <sup>T</sup><sup>2</sup>

<sup>c</sup> <sup>¼</sup> <sup>R</sup>xv αβθ <sup>þ</sup> <sup>β</sup><sup>2</sup>

4. Filter design strategy based on the RMS index

4.1. RMS-index minimization problem

4.1.1. POM system optimization

for POM and PVM systems with a Q that minimizes the RMS index εp.

<sup>þ</sup>α<sup>2</sup>

<sup>b</sup> <sup>¼</sup> <sup>β</sup><sup>3</sup> R2

where

θ θð Þ � <sup>2</sup> ð Þþ <sup>θ</sup> � <sup>1</sup> <sup>α</sup><sup>2</sup>

<sup>η</sup>ð Þ <sup>1</sup> <sup>þ</sup> <sup>η</sup> ð Þþ <sup>1</sup> � <sup>η</sup> <sup>β</sup><sup>2</sup>

between the steady-state Kalman gains and Qgen is derived as follows:

Bv

<sup>R</sup>xv ð Þ <sup>1</sup> � <sup>θ</sup> <sup>1</sup> <sup>þ</sup> <sup>2</sup>αβ <sup>þ</sup> <sup>β</sup><sup>2</sup>

xv <sup>þ</sup> <sup>R</sup>xv α βð Þ� <sup>1</sup> � <sup>θ</sup> <sup>θ</sup><sup>2</sup> <sup>þ</sup> <sup>θ</sup> <sup>þ</sup> <sup>β</sup><sup>2</sup>

The closed forms of the RMS index of the steady-state PVM Kalman filter (α–β–η–θ filter) are obtained by using Eq. (17) (Eqs. (26) and (27)); the derivation processes for these are shown in Ref. [25]. They are lengthy rational functions of the steady-state gains (α, β, η, θ) and are expressed through the functions g1, g2, and g3 that appear in Eq. (36) below. Likewise, solving the steady-state equations given in the Appendix yields the process noise parameters (a, b, c) as closed-form functions of the gains, T, and Bv (Eqs. (28)–(30)), together with

$$R_{\rm xv} = \frac{B_{\rm x}}{T^2 B_{\rm v}} \tag{31}$$

$$\eta = R_{\rm xv}\beta \tag{32}$$

The derivation of these is given in the Appendix. Note that the dimensionless parameter Rxv corresponds to the ratio of the measurement accuracies in position and velocity and directly affects the tracking accuracy in PVM tracking systems. From these results, we also obtain the closed form of the RMS index for PVM systems and can design optimal Q using Eqs. (22)–(32).

Using the RMS index introduced in the previous section, we can design the Kalman filter parameters (i.e., Q) to achieve optimal tracking. This section defines the optimization problems for the POM and PVM systems.

#### 4.1.1. POM system optimization

The evaluating function to determine optimal Q is εp,pom normalized by Bx, which is defined as

$$\mu_{\rm pom} = \varepsilon_{\rm p,pom}^2 / B_{\rm x} \tag{33}$$

Here,

$$a_{\rm D}^2 = a_{\rm c}^2 T^4 / B_{\rm x} \tag{34}$$

is the preset parameter for the proposed strategy. Substituting Eqs. (20)–(22) into (33), we obtain μpom(a, b, c, aD). Using this, the optimal (a, b, c) for the POM system is determined by solving

$$\begin{aligned} \arg\min_{a,b,c} &\quad \mu_{\rm pom}(a, b, c, a_{\rm D}) \\ \text{sub. to} &\quad a > 0,\ b > 0,\ c > 0,\ \text{and } a_{\rm D} = \text{Const.} \end{aligned} \tag{35}$$
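Where the closed forms are unwieldy, the constrained minimization of Eq. (35) can also be approximated numerically: iterate the Riccati recursion to its steady state for the random error term, simulate the fixed-gain filter against a constant-acceleration target for the lag term, and grid-search Q. The sketch below is an illustration under normalized Bx = T = 1, not the chapter's analytical procedure; the grid ranges and iteration counts are arbitrary choices.

```python
import numpy as np

Bx, T = 1.0, 1.0                          # normalized, as in Section 5
Phi = np.array([[1.0, T], [0.0, 1.0]])    # CV transition (one axis)
H = np.array([[1.0, 0.0]])                # POM: only position is measured
R = np.array([[Bx]])

def pred_cov(Q, iters=200):
    """Iterate the Riccati recursion to the steady-state predicted covariance."""
    P = np.eye(2)
    for _ in range(iters):
        Pt = Phi @ P @ Phi.T + Q
        K = Pt @ H.T @ np.linalg.inv(H @ Pt @ H.T + R)
        P = (np.eye(2) - K @ H) @ Pt
    return Phi @ P @ Phi.T + Q

def mu_pom(a, b, c, aD):
    """Numerical stand-in for Eq. (33): predicted-position variance from the
    Riccati fixed point plus the squared steady-state lag for a constant
    acceleration aD, normalized by Bx."""
    Q = np.array([[a, b], [b, c]])
    Pt = pred_cov(Q)
    K = (Pt @ H.T @ np.linalg.inv(H @ Pt @ H.T + R))[:, 0]
    x = np.zeros(2)
    lag = 0.0
    for k in range(1, 300):               # noise-free CA target, fixed gains
        z = 0.5 * aD * (k * T) ** 2
        x_pred = Phi @ x
        lag = z - x_pred[0]               # position prediction lag
        x = x_pred + K * (z - x_pred[0])
    return (Pt[0, 0] + lag ** 2) / Bx

# coarse grid search standing in for the constrained minimization of Eq. (35)
grid = np.linspace(0.05, 1.0, 6)
best = min((mu_pom(a, b, c, aD=1.0), a, b, c)
           for a in grid for c in grid
           for b in np.linspace(0.0, 0.6, 4) if b * b <= a * c)
print("grid-optimal (a, b, c):", best[1:], "mu_pom:", round(best[0], 3))
```

The positive-semidefiniteness filter `b * b <= a * c` plays the role of the constraints in Eq. (35); a finer grid or a gradient-based search refines the result.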

#### 4.1.2. PVM system optimization

Like the POM system, a normalized RMS index can be used for the design of the PVM system. Normalizing Eq. (22) by B<sup>x</sup> and substituting Eqs. (31) and (32) into this, the evaluating function for the PVM system is given by

$$\mu_{\rm pvm} = \varepsilon_{\rm p,pvm}^2 / B_{\rm x} = \left(\frac{2 - 2\beta R_{\rm xv} - \theta}{2\left(\beta + \alpha\theta - \beta^2 R_{\rm xv}\right)}\right)^2 a_{\rm D}^2 + \frac{g_2(\alpha, \beta, \theta, R_{\rm xv}) + g_3(\alpha, \beta, \theta, R_{\rm xv})/R_{\rm xv}}{g_1(\alpha, \beta, \theta, R_{\rm xv})} \tag{36}$$

To design optimal (a, b, c) for the PVM system, the optimal steady-state Kalman gains are calculated by solving the following minimization problem.

$$\begin{aligned} \arg\min_{\alpha,\beta,\theta} &\quad \mu_{\rm pvm}(\alpha, \beta, \theta, R_{\rm xv}) \\ \text{sub. to} &\quad \text{stability conditions are satisfied, and } a_{\rm D} = \text{Const.} \end{aligned} \tag{37}$$

where the stability conditions with respect to the Kalman gains are easily derived by the well-known Jury's test as

$$(1 - \eta)\beta < \alpha\theta \quad\text{and}\quad 4 - 2\alpha - \beta - 2\theta + \alpha\theta - \eta\beta > 0 \quad\text{and}\quad \left| \alpha\theta - \eta\beta - \alpha - \theta + 1 \right| < 1 \tag{38}$$

Substituting the optimal (α, β, θ) calculated by Eq. (37) into Eqs. (28)–(30), we obtain an optimal (a, b, c) for the PVM Kalman filter.
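The stability conditions of Eq. (38) are cheap to check programmatically before evaluating a candidate gain set; a minimal sketch (the function name and the example gain values are illustrative):

```python
def pvm_gains_stable(alpha, beta, eta, theta):
    """Check the Jury stability conditions of Eq. (38) for the
    steady-state PVM (alpha-beta-eta-theta) Kalman filter gains."""
    c1 = (1 - eta) * beta < alpha * theta
    c2 = 4 - 2 * alpha - beta - 2 * theta + alpha * theta - eta * beta > 0
    c3 = abs(alpha * theta - eta * beta - alpha - theta + 1) < 1
    return c1 and c2 and c3

# Example: a moderately damped gain set (illustrative values)
print(pvm_gains_stable(0.5, 0.3, 0.3, 0.5))  # → True
```

A minimizer for Eq. (37) can use this as its feasibility test, which is what keeps the search range narrow.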

#### 4.2. Procedure and notes of the proposed design strategy

The procedure of the proposed strategy for each system is summarized in this section.

#### 4.2.1. Design procedure for a POM system

1. Set Bx from the sensor performance.

2. Preset aD based on the approximate target acceleration.

3. Determine (a, b, c) by solving Eq. (35).

The methodology of presetting aD is discussed in the simulation section.

#### 4.2.2. Design procedure for a PVM system

1. Set Bx and Bv from the sensor performance.

2. Preset aD based on the approximate target acceleration.

3. Determine (α, β, θ) by solving Eq. (37).

4. Determine (a, b, c) from (α, β, θ) using Eqs. (28)–(30).

The methodology of presetting aD is discussed in the simulation section.

#### 4.2.3. Notes on computation in the proposed strategy

With respect to the proposed strategy, note that:

• Eqs. (35) and (37) can be solved by gradient descent with several initial values, because the parameter searching range is narrow due to the stability conditions.

• The proposed design process is carried out only once, before using the Kalman filter. Although the computational cost of the above optimization process is not small, it does not affect the Kalman filtering process.

Kalman Filter for Moving Object Tracking: Performance Analysis and Filter Design

http://dx.doi.org/10.5772/intechopen.71731

### 4.3. Discussion on preset parameter a<sup>D</sup>

Here, the appropriate presetting of aD for practical use is discussed. The covariance matrix of process noise Q determined by the proposed strategy is only optimal when aD matches the target acceleration and the target is moving with a constant acceleration corresponding to aD. However, using the proposed strategy, the tracking accuracy is always better than when using conventional models, as verified in Ref. [3]. Consequently, the proposed method achieves sufficient accuracy even if aD does not match the true target acceleration; a relatively small difference between the true and preset acceleration is acceptable. Thus, in practical use, we estimate an approximate or typical value of the acceleration (e.g., mean or maximum) in advance based on the assumed motion of the target and then set aD using this estimate. The example application presented in Section 6 assumes that the approximate maximum acceleration of the target is known and uses it for the Kalman filter design.

Thus, target acceleration information is required for accurate Kalman filter tracking using the proposed strategy. As a method to obtain an approximate acceleration, communications between the tracking systems and accelerometers embedded in targets can be considered. Many sensing targets have acceleration sensors; for example, robots and vehicles have inertial sensors, and humans carry accelerometers embedded in smartphones. Soon, Internet of Things technology will make data communications between robots, smartphones, and radar possible. Thus, we can obtain an approximate acceleration based on this novel technology.

Kalman Filters - Theory for Advanced Applications

### 5. Theoretical steady-state performance analysis

This section presents theoretical performance analyses of the Kalman tracking filters by using the proposed design strategy. With respect to POM systems, our previous study [3] verified the effectiveness of the proposed strategy by comparison with a conventional random acceleration model based filter design. Thus, the RMS indices for the following filters are compared:

• Optimal POM filter: the Kalman filter for the POM system designed using the strategy mentioned in Section 4.2.1.

• Optimal PVM filter: the Kalman filter for the PVM system designed using the strategy mentioned in Section 4.2.2.

• RA filter: the Kalman filter for the PVM system with the RA process noises, using the optimal σq with respect to the RMS index.

The comparison of the optimal PVM filter with the RA filter indicates the effectiveness of the proposed strategy (i.e., considering the arbitrary covariance matrix of the process noise Qgen), and the comparison of the optimal POM and PVM filters illustrates the enhancement of tracking accuracy by using the velocity measurements in the proposed strategy. This section assumes that Bx and T are normalized to 1.

Figure 1 shows the relationship between the design parameter aD and the minimum RMS index εp,opt for Rxv = 1 (Figure 1 left) and Rxv = 10 (Figure 1 right). It can be seen that the optimal PVM filter achieves the best performance. This result verifies that the proposed strategy determines steady-state gains corresponding to a better covariance matrix of process noise than the RA model. The optimal PVM filter also achieves better performance than the optimal POM filter even for Rxv = 1, which means that the measurement accuracy of the position and velocity is the same. The addition of the velocity measurements effectively enhances the tracking accuracy. Furthermore, when the velocity measurement accuracy is high, the optimal PVM filter achieves greater accuracy than the POM filter.

Figure 1. Analytical relationship between aD and εp,opt (Rxv = 1 (left), Rxv = 10 (right)).

Figure 2. Analytical relationship between Rxv and εp,opt (aD² = 0.01 (left), aD² = 0.1 (right)).


Figure 2 shows the relationship between Rxv and εp,opt for a<sup>2</sup> <sup>D</sup> ¼ 0:01 (left) and 0.1 (right). Both cases exhibit the same trend. For both optimal PVM and RA filters, better performance is achieved with better velocity measurement accuracy. The performance of the optimal PVM filter is better than that of the optimal POM filter including relatively small Rxv (the velocity measurement accuracy is low). In contrast, the performance of the RA filter is worse than that of the optimal POM filter for small Rxv because the covariance matrix is limited to Eq. (14). Moreover, by comparing the two insets of Figure 2, we see the greater effectiveness of the proposed strategy for relatively large aD.

### 6. Application to radar tracking simulation

Finally, this section provides an example of the Kalman filter tracker designed with the proposed strategy in a realistic application, namely, pulse Doppler radar tracking.

#### 6.1. Simulation setup

We simulated the pulse Doppler radar tracking of a maneuvering target and compared the tracking errors of the filters assumed in the previous section. Figure 3 shows the simulation scenario and the true target acceleration. The true target position is $(x_{tk}, y_{tk}) = \left((kT)^2,\ 20 + (kT)^{1.5}\cos(\pi kT/5)\right)$. Two-dimensional tracking in the x-y plane of the point target is assumed. We consider two pulse Doppler radars located at (x, y) = (0.5 m, 0) and (1.0 m, 0). The sampling interval T is 100 ms, and the observation time is 4 s. The transmitted signal is a pulse with a central frequency of 60 GHz and a bandwidth of 500 MHz. The received radar signals are calculated using ray tracing with the addition of Gaussian white noise. The radar measurement parameters depend on the system under consideration: the POM system assumes measurement of the position by using ranging results, and the PVM system assumes position and velocity measurements, where the position measurement is the same as in the POM system and the velocity measurement is based on the Doppler shift with the method presented in Ref. [18]. We determine a variance for this noise to set $B_{\rm x} = 9 \times 10^{-4}$ m² and $B_{\rm v} = 0.09$ m²/s². In these settings, Rxv = 1. These values are the averages along the two axes.

Figure 3. Simulation setting (simulation scenario (left), true target acceleration (right)).

Using the RMS prediction error calculated from 1000 Monte Carlo simulations, the performance is defined as

$$\varepsilon_{k} = \sqrt{\frac{1}{1000} \sum_{m=1}^{1000} \left\{ \left( x_{tk} - x_{pmk} \right)^{2} + \left( y_{tk} - y_{pmk} \right)^{2} \right\}}\tag{39}$$

where xpmk and ypmk are the predicted positions in the mth Monte Carlo simulation.
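Eq. (39) is a per-step RMS over the Monte Carlo ensemble and vectorizes directly. In the sketch below the predicted tracks are illustrative stand-ins (truth plus noise) rather than actual filter outputs; the 0.1 m error level is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, T = 1000, 40, 0.1               # runs, steps (4 s), sampling interval

# true trajectory of Section 6.1 (one point per step k)
k = np.arange(N)
x_t = (k * T) ** 2
y_t = 20 + (k * T) ** 1.5 * np.cos(np.pi * k * T / 5)

# stand-in predicted tracks: truth plus a 0.1 m prediction error
x_pm = x_t + 0.1 * rng.standard_normal((M, N))
y_pm = y_t + 0.1 * rng.standard_normal((M, N))

# Eq. (39): per-step RMS prediction error over the M Monte Carlo runs
eps = np.sqrt(np.mean((x_t - x_pm) ** 2 + (y_t - y_pm) ** 2, axis=0))
print(eps.shape)                      # one RMS value per time step
```

The mean of `eps` over the steady-state steps gives the scalar errors quoted in Section 6.3.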

#### 6.2. Implementation of Kalman filter


First, the implementation of the Kalman filters for two-dimensional system is presented. The implementation of a two-dimensional optimal POM filter is as follows:

$$\mathbf{x}\_{\mathbf{t}} = \begin{pmatrix} \mathbf{x}\_{\mathbf{t}} & \upsilon\_{\mathbf{xt}} & \mathbf{y}\_{\mathbf{t}} & \upsilon\_{\mathbf{yt}} \end{pmatrix}^{\mathrm{T}} \tag{40}$$

$$\mathbf{\Phi} = \begin{pmatrix} 1 & T & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & T \\ 0 & 0 & 0 & 1 \end{pmatrix} \tag{41}$$

$$\mathbf{H} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \tag{42}$$

$$\mathbf{R} = \begin{pmatrix} B\_{\mathbf{x}} & \mathbf{0} \\ \mathbf{0} & B\_{\mathbf{x}} \end{pmatrix} \tag{43}$$

$$\mathbf{Q} = \begin{pmatrix} a\_{\text{opt}} & b\_{\text{opt}} & 0 & 0 \\ b\_{\text{opt}} & c\_{\text{opt}} & 0 & 0 \\ 0 & 0 & a\_{\text{opt}} & b\_{\text{opt}} \\ 0 & 0 & b\_{\text{opt}} & c\_{\text{opt}} \end{pmatrix} \tag{44}$$

where $v_{yt}$ is the true velocity along the y-axis and $(a_{\rm opt}, b_{\rm opt}, c_{\rm opt})$ is the optimized (a, b, c) calculated using the procedure in Section 4.2.1. $\mathbf{x}_{\rm t}$ and $\mathbf{\Phi}$ of a two-dimensional PVM filter are the same as for a POM filter. **H** and **R** are

$$\mathbf{H} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \tag{45}$$
 
$$\mathbf{R} = \begin{pmatrix} B\_\mathbf{x} & 0 & 0 & 0 \\ 0 & B\_\mathbf{v} & 0 & 0 \\ 0 & 0 & B\_\mathbf{x} & 0 \\ 0 & 0 & 0 & B\_\mathbf{v} \end{pmatrix} \tag{46}$$


In addition, the formulation of Q is the same as in Eq. (44), and $(a_{\rm opt}, b_{\rm opt}, c_{\rm opt})$ is calculated using the procedure in Section 4.2.2. A two-dimensional RA filter is the same as the optimal PVM filter, with the exception of Q. Q of the RA filter is

$$\mathbf{Q\_{ra}} = \begin{pmatrix} T^4/4 & T^3/2 & 0 & 0 \\ T^3/2 & T^2 & 0 & 0 \\ 0 & 0 & T^4/4 & T^3/2 \\ 0 & 0 & T^3/2 & T^2 \end{pmatrix} \sigma\_{\mathbf{q}}^2 \tag{47}$$

Next, the design of an appropriate $a_{\rm D}$ is presented. We presume an approximate prediction of the acceleration. For instance, when the maximum acceleration of the target in Figure 3 is predicted to be approximately $a_{\rm c} = 3$ m/s², $a_{\rm D}$ is then 1.0, from Eq. (34). Using this $a_{\rm D}$ and the radar settings described in the previous section, we obtain $(a_{\rm opt}, b_{\rm opt}, c_{\rm opt})$ for each filter.
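A sketch of the pieces above: the a_D preset of Eq. (34) for the stated radar settings, the 2-D state/transition of Eqs. (40)-(41), the PVM measurement model of Eqs. (45)-(46), and the RA process noise of Eq. (47), run briefly on the Section 6.1 trajectory. The σ_q value and the measurement generation here are illustrative assumptions, not the chapter's tuned settings:

```python
import numpy as np

T, Bx, Bv = 0.1, 9e-4, 0.09           # Section 6.1 radar settings
Rxv = Bx / (T ** 2 * Bv)              # Eq. (31): equals 1 here

# Eq. (34): preset a_D from the predicted maximum acceleration a_c = 3 m/s^2
a_c = 3.0
a_D = np.sqrt(a_c ** 2 * T ** 4 / Bx)
print("Rxv =", Rxv, " a_D =", a_D)

# Eqs. (40)-(41): 2-D CV state (x, vx, y, vy) and transition
Phi = np.array([[1, T, 0, 0],
                [0, 1, 0, 0],
                [0, 0, 1, T],
                [0, 0, 0, 1]], dtype=float)

# Eqs. (45)-(46): PVM measurement model; Eq. (47): RA process noise
H = np.eye(4)
R = np.diag([Bx, Bv, Bx, Bv])
sigma_q2 = 1.0                        # illustrative sigma_q^2, not the optimal one
Qra = np.array([[T**4/4, T**3/2, 0, 0],
                [T**3/2, T**2,   0, 0],
                [0, 0, T**4/4, T**3/2],
                [0, 0, T**3/2, T**2]]) * sigma_q2

def kf_step(x, P, z):
    """One predict/update cycle of the (RA-model) PVM Kalman filter."""
    x_pred = Phi @ x
    P_pred = Phi @ P @ Phi.T + Qra
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (z - H @ x_pred)
    return x_new, (np.eye(4) - K @ H) @ P_pred

# short run on the Section 6.1 trajectory with illustrative measurement noise
rng = np.random.default_rng(1)
x, P = np.zeros(4), np.eye(4)
for k in range(1, 41):                # 4 s at T = 0.1 s
    t = k * T
    vy = 1.5*np.sqrt(t)*np.cos(np.pi*t/5) - (np.pi/5)*t**1.5*np.sin(np.pi*t/5)
    truth = np.array([t**2, 2*t, 20 + t**1.5*np.cos(np.pi*t/5), vy])
    z = truth + rng.standard_normal(4) * np.sqrt(np.diag(R))
    x, P = kf_step(x, P, z)
```

Swapping `Qra` for the optimized Q of Eq. (44) with $(a_{\rm opt}, b_{\rm opt}, c_{\rm opt})$ turns this into the optimal PVM filter.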

#### 6.3. Results and discussion

Figure 4. Simulation results.

Figure 4 shows the simulation results. Clearly, the filters using velocity measurements achieve greater accuracy than the optimal POM filter. The mean steady-state prediction RMS errors ($E[\varepsilon_k]$ in 2 s < kT) of the optimal POM, RA, and optimal PVM filters are 0.59, 0.46, and 0.19 m, respectively. These results indicate that the proposed strategy achieves greater accuracy than the conventional RA filter even in realistic situations. The mean RMS error of the optimal PVM filter is 41% of that of the RA filter. This is because the RA model cannot track the abrupt motion of the high-maneuvering target because of limitations in expressing the process noise. In contrast, the optimal PVM filter can set gains corresponding to the appropriate process noise to accurately track the high-maneuvering target. Moreover, the mean RMS error of the optimal PVM filter is 32% of the error of the optimal POM filter, which clearly indicates the effectiveness of the velocity measurement, even when Rxv = 1 (when the measurement reliabilities of the position and velocity are the same). These simulation results are consistent with the theoretical analyses presented in Figure 1.

### 7. Final remarks

#### 7.1. Conclusions

In this chapter, the efficient steady-state performance index, known as the RMS index, was introduced for both POM and PVM Kalman filters for systems that track moving objects. Automatic design (preset) of the covariance matrix of the process noise Q, to realize optimal position prediction, was achieved using the analytical relationship between Q and the RMS index. The validity of the proposed design strategy was shown via analyses and simulations. These results verified that the proposed index attained accurate tracking when compared with the conventional RA-model-based Kalman filter design. A simulation of a realistic situation indicated that the optimal performance given by the proposed strategy is 41% better than that given by the conventional design procedure for a PVM system. Moreover, the optimal performances of the optimal POM and PVM Kalman filters were compared, showing that the optimal PVM Kalman filter is more accurate than the POM filter in a steady state.

#### 7.2. Future works

The most important future objective is the extension of the RMS index-based design strategy to the third-order (and higher order) Kalman filters that are widely used for real applications. In third-order tracking, an acceleration is added to the state vector, becoming one of the input parameters of the Kalman filter. Performance analysis and the establishment of a design strategy for such systems (i.e., position/acceleration and position/velocity/acceleration measured Kalman filters) are important considerations for advanced sensor fusion systems under development. Moreover, considerations of other dynamic models (e.g., the constant turn model) should also be probed for use in many applications including pedestrian tracking.

### A. Appendix

#### A.1. Derivation of Eqs. (28)–(30)

Because we assume a steady state, the index k of all parameters and matrices is omitted in the following calculations. The ith-row, jth-column element of a matrix **P** is denoted $P^{i,j}$.

Eq. (11) is also written using $\widehat{\mathbf{P}}$ as

$$\mathbf{K} = \widehat{\mathbf{P}} \mathbf{H}^{\mathrm{T}} \mathbf{R}^{-1} \tag{48}$$
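For the PVM case (**H** = **I**), Eq. (48) and the symmetry relation behind Eq. (50) can be checked numerically: iterate the Riccati recursion with any admissible Q and confirm that $\widehat{\mathbf{P}} = \mathbf{K}\mathbf{R}$ and η = βRxv at the fixed point. A sketch with assumed parameter values:

```python
import numpy as np

T, Bx, Bv = 1.0, 2.0, 0.5                  # assumed values
Rxv = Bx / (T ** 2 * Bv)                   # Eq. (31)
Phi = np.array([[1.0, T], [0.0, 1.0]])
R = np.diag([Bx, Bv])                      # H = I for the PVM filter
Q = np.array([[0.3, 0.1], [0.1, 0.2]])     # any admissible process noise

P = np.eye(2)
for _ in range(500):                       # Riccati iteration to steady state
    Pt = Phi @ P @ Phi.T + Q
    K = Pt @ np.linalg.inv(Pt + R)         # Eq. (11) with H = I
    P = (np.eye(2) - K) @ Pt               # updated covariance P_hat

# Eq. (48): P_hat = K R
assert np.allclose(P, K @ R)
# Eq. (49) mapping K = [[alpha, T*eta], [beta/T, theta]] and Eq. (50)
eta, beta = K[0, 1] / T, K[1, 0] * T
print("eta =", eta, " beta*Rxv =", beta * Rxv)
```

Because the updated covariance stays symmetric, the two off-diagonal gain entries are tied together exactly as Eq. (50) states, independent of the chosen Q.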

<sup>α</sup>B<sup>x</sup> <sup>¼</sup> ð Þ <sup>1</sup> � <sup>α</sup> <sup>α</sup>B<sup>x</sup> <sup>þ</sup> <sup>2</sup>ηT<sup>2</sup>

Address all correspondence to: saho@pu-toyama.ac.jp

DOI: 10.1109/ACCESS.2015.2486766

10.3390/app7020159

dsp.2015.09.004

Sciences. 2016;24:524-540. DOI: 10.3906/elk-1309-60

we arrive at Eqs. (28)–(30).

Author details

Kenshi Saho

Toyama, Japan

References

<sup>B</sup><sup>v</sup> <sup>þ</sup> <sup>θ</sup>T<sup>2</sup>

Solving this linear system with respect to (a, b, c) and substituting Eq. (50) into the solutions,

Department of Intelligent Systems Design Engineering, Toyama Prefectural University, Imizu,

[1] Ekstrand B. Some aspects on filter design for target tracking. Journal of Control Science

[2] Bar-Shalom Y, Li XR, Kirubarajan T. Estimation and Tracking: Principles, Techniques, and

[3] Saho K, Masugi M. Automatic parameter setting method for an accurate Kalman filter tracker using an analytical steady-state performance index. IEEE Access. 2015;3:1919-1930.

[4] Hashlamon I, Erbatur K. An improved real-time adaptive Kalman filter with recursive noise covariance updating rules. Turkish Journal of Electrical Engineering & Computer

[5] Basso G, Amorim TG, Brito AV, Nascimento TP. Kalman filter with dynamical setting of optimal process noise. IEEE Access. 2017;5:8385-8393. DOI: 10.1109/ACCESS.2017.2697072

[6] Fan Y, Lu F, Zhu W, Bai G, Yan L. A hybrid model algorithm for hypersonic glide vehicle Maneuver tracking based on the aerodynamic model. Applied Sciences. 2017;7:159. DOI:

[7] Li W, Sun S, Jia Y, Du J. Robust unscented Kalman filter with adaptation of process and measurement noise covariances. Digital Signal Processing. 2016;48:93-103. DOI: 10.1016/j.

and Engineering. 2012;2012:870890. DOI: 10.1155/2012/870890

Software. Boston, MA, United States: Artech House Publishers; 1998

<sup>B</sup><sup>v</sup> <sup>þ</sup> <sup>a</sup> � <sup>η</sup>Tð Þ <sup>η</sup>TB<sup>v</sup> <sup>þ</sup> <sup>θ</sup>TB<sup>v</sup> <sup>þ</sup> <sup>b</sup> (53)

Kalman Filter for Moving Object Tracking: Performance Analysis and Filter Design

http://dx.doi.org/10.5772/intechopen.71731

249

βBx=T ¼ ð Þ 1 � α ðηTB<sup>v</sup> þ θTB<sup>v</sup> þ bÞ � ηTð Þ θB<sup>v</sup> þ c (54)

<sup>θ</sup>B<sup>v</sup> <sup>¼</sup> ð Þ <sup>1</sup> � <sup>θ</sup> ð Þ� <sup>θ</sup>B<sup>v</sup> <sup>þ</sup> <sup>c</sup> <sup>β</sup>=<sup>T</sup> ð Þ <sup>η</sup>TB<sup>v</sup> <sup>þ</sup> <sup>θ</sup>TB<sup>v</sup> <sup>þ</sup> <sup>b</sup> (55)

As indicated in Eq. (7), H of the PVM Kalman filter is the identity matrix. Thus, from Eq. (48), the relationship between the Kalman gains and the error covariance matrix in the estimated state Pb is calculated using Eqs. (8) and (24) as

$$
\widehat{\mathbf{P}} = \mathbf{K}\mathbf{R} = \begin{pmatrix} \alpha \mathbf{B}\_{\mathbf{x}} & T\eta \mathbf{B}\_{\mathbf{v}} \\ \beta \mathbf{B}\_{\mathbf{x}}/T & \theta \mathbf{B}\_{\mathbf{v}} \end{pmatrix} \tag{49}
$$

With $\widehat{P}^{1,2} = \widehat{P}^{2,1}$ and Eq. (31), we have the following relationship:

$$
\eta = \beta \frac{B\_\mathrm{x}}{T^2 B\_\mathrm{v}} = \beta R\_\mathrm{xv} \tag{50}
$$

Eq. (50) is equal to Eq. (32), showing that this relationship is satisfied in the assumed PVM Kalman filter without depending on the process noise. $\widehat{\mathbf{P}}$ is also calculated using Eq. (13) by substituting Eqs. (7) and (24) as

$$
\widehat{\mathbf{P}} = \begin{pmatrix} (1 - \alpha)\tilde{\mathbf{P}}^{1,1} - T\eta \tilde{\mathbf{P}}^{1,2} & (1 - \alpha)\tilde{\mathbf{P}}^{1,2} - T\eta \tilde{\mathbf{P}}^{2,2} \\ (1 - \theta)\tilde{\mathbf{P}}^{1,2} - (\beta/T)\tilde{\mathbf{P}}^{1,1} & (1 - \theta)\tilde{\mathbf{P}}^{2,2} - (\beta/T)\tilde{\mathbf{P}}^{1,2} \end{pmatrix} \tag{51}
$$

Elements of $\tilde{\mathbf{P}}$ are required to calculate Eq. (51) and are calculated using Eqs. (3), (12), and (18) as

$$\tilde{\mathbf{P}} = \begin{pmatrix} P^{1,1} + 2TP^{1,2} + T^2P^{2,2} + a & P^{1,2} + TP^{2,2} + b \\ P^{1,2} + TP^{2,2} + b & P^{2,2} + c \end{pmatrix} \tag{52}$$

Substituting Eq. (52) into Eq. (51), and comparing elements of Eq. (49), we have the following linear system:

$$
\alpha B\_{\mathbf{x}} = (1 - \alpha)\left( \alpha B\_{\mathbf{x}} + 2\eta T^2 B\_{\mathbf{v}} + \theta T^2 B\_{\mathbf{v}} + a \right) - \eta T (\eta T B\_{\mathbf{v}} + \theta T B\_{\mathbf{v}} + b) \tag{53}
$$

$$
\beta B\_{\mathbf{x}}/T = (1 - \alpha)(\eta T B\_{\mathbf{v}} + \theta T B\_{\mathbf{v}} + b) - \eta T(\theta B\_{\mathbf{v}} + c) \tag{54}
$$

$$
\theta B\_{\mathbf{v}} = (1 - \theta)(\theta B\_{\mathbf{v}} + c) - (\beta/T)(\eta T B\_{\mathbf{v}} + \theta T B\_{\mathbf{v}} + b) \tag{55}
$$

Solving this linear system with respect to (a, b, c) and substituting Eq. (50) into the solutions, we arrive at Eqs. (28)–(30).
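As a numerical sanity check, the identities in Eqs. (49) and (50) can be reproduced by iterating the covariance recursion of a PVM Kalman filter to steady state. The sketch below uses illustrative values for T, the measurement noise variances B_x and B_v, and the process noise covariance Q (all of these numbers are assumptions, not values from the chapter):

```python
import numpy as np

# Illustrative parameters (assumptions): sampling interval and noise variances.
T, Bx, Bv = 0.1, 4.0, 1.0
F = np.array([[1.0, T], [0.0, 1.0]])    # constant-velocity transition matrix
R = np.diag([Bx, Bv])                   # position and velocity both measured (H = I)
Q = np.array([[0.3, 0.1], [0.1, 0.2]])  # arbitrary positive definite process noise

# Iterate the prediction/update recursion of the error covariance to steady state.
P_hat = np.eye(2)
for _ in range(500):
    P_tilde = F @ P_hat @ F.T + Q             # predicted covariance
    K = P_tilde @ np.linalg.inv(P_tilde + R)  # Kalman gain with H = I
    P_hat = (np.eye(2) - K) @ P_tilde         # updated covariance, Eq. (13)

# Eq. (49): the steady-state updated covariance equals K R, so the fixed
# gains alpha, beta, eta, theta can be read off the gain matrix.
assert np.allclose(P_hat, K @ R)
alpha, eta = K[0, 0], K[0, 1] / T
beta, theta = K[1, 0] * T, K[1, 1]

# Eq. (50): eta = beta * Bx / (T^2 * Bv), regardless of the choice of Q.
assert np.isclose(eta, beta * Bx / (T**2 * Bv))
```

Changing Q moves the individual gain values, but the identity of Eq. (50) continues to hold, which is exactly the statement that the relationship does not depend on the process noise.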

strategy for such systems (i.e., position/acceleration and position/velocity/acceleration measured Kalman filters) are important considerations for advanced sensor fusion systems under development. Moreover, considerations of other dynamic models (e.g., the constant turn model) should also be probed for use in many applications, including pedestrian tracking.

### Author details

Kenshi Saho

Kalman Filters - Theory for Advanced Applications

Address all correspondence to: saho@pu-toyama.ac.jp

Department of Intelligent Systems Design Engineering, Toyama Prefectural University, Imizu, Toyama, Japan

### References


[8] Vivone F, Braca P, Horstmann J. Knowledge-based multitarget ship tracking for HF surface wave radar systems. IEEE Transactions on Geoscience and Remote Sensing. 2015;53:3931-3949. DOI: 10.1109/TGRS.2014.2388355

[9] Du G, Zhang P. A markerless human robot interface using particle filter and Kalman filter for dual robots. IEEE Transactions on Industrial Electronics. 2015;62:2257-2264. DOI: 10.1109/TIE.2014.2362095

[10] Martino L, Read J, Elvira V, Louzada F. Cooperative parallel particle filters for on-line model selection and applications to urban mobility. Digital Signal Processing. 2017;60:172-185. DOI: 10.1016/j.dsp.2016.09.011

[11] Drovandi CC, McGree J, Pettitt AN. A sequential Monte Carlo algorithm to incorporate model uncertainty in Bayesian sequential design. Journal of Computational and Graphical Statistics. 2014;23:324. DOI: 10.1080/10618600.2012.730083

[12] Li XR, Jilkov VP. Survey of maneuvering target tracking. Part I: Dynamic models. IEEE Transactions on Aerospace and Electronic Systems. 2003;39:1333-1364. DOI: 10.1109/TAES.2003.1261132

[13] Crouse DF. A general solution to optimal fixed-gain (alpha-beta-gamma etc.) filters. IEEE Signal Processing Letters. 2015;22:901-904. DOI: 10.1109/LSP.2014.2376876

[14] Tang X, Falco G, Falletti E, Presti LL. Complexity reduction of the Kalman filter-based tracking loops in GNSS receivers. GPS Solutions. 2017;21:685-699. DOI: 10.1007/s10291-016-0557-6

[15] Jatoth RK, Gopisetty S, Hussain M. Performance analysis of alpha beta filter, Kalman filter and meanshift for object tracking in video sequences. International Journal of Image, Graphics and Signal Processing. 2015;3:24-30. DOI: 10.5815/ijigsp.2015.03.04

[16] Li Z, Wu H. A survey of maneuvering target tracking using Kalman filter. Proc. 4th Int. Conf. Mechatronics Mater. Chem. Comput. Eng. (ICMMCCE). 2015;39:542-545. DOI: 10.2991/icmmcce-15.2015.109

[17] Saho K, Homma H, Sakamoto T, Sato T, Inoue K, Fukuda T. Accurate image separation method for two closely spaced pedestrians using UWB Doppler imaging radar and supervised learning. IEICE Transactions on Communications. 2014;97:1223-1233. DOI: 10.1587/transcom.E97.B.1223

[18] Anabuki M, Okumura S, Sato T, Sakamoto T, Saho K, Yoshioka M, Inoue K, Fukuda T, Sakai H. Ultrawideband radar imaging using adaptive array and Doppler separation. IEEE Transactions on Aerospace and Electronic Systems. 2017;53:190-200. DOI: 10.1109/TAES.2017.2649798

[19] Cossaboom M, Georgy J, Karamat T, Noureldin A. Augmented Kalman filter and map matching for 3D RISS/GPS integration for land vehicles. International Journal of Navigation and Observation. 2012;2012:576807. DOI: 10.1155/2012/576807

[20] Afia AB, Escher AC, Macabiau C, Roche S. A GNSS/IMU/WSS/VSLAM hybridization using an extended Kalman filter. In: Proc. of ION 2015 Pacific PNT Meeting. 2015. pp. 719-732

[21] Kellner D, Barjenbruch M, Klappstein J, Dickmann J, Dietmayer K. Tracking of extended objects with high-resolution Doppler radar. IEEE Transactions on Intelligent Transportation Systems. 2016;17:1341-1353. DOI: 10.1109/TITS.2015.2501759

[22] Carroll P, Domrese K, Zhou H, Zhou S, Willett P. Doppler-aided localization of mobile nodes in an underwater distributed antenna system. Physical Communication. 2016;18:49-59. DOI: 10.1016/j.phycom.2015.08.008

[23] Zhou G, Wu L, Xie J, Deng W, Quan T. Constant turn model for statically fused converted measurement Kalman filters. Signal Processing. 2015;108:400-411. DOI: 10.1016/j.sigpro.2014.09.022

[24] Cho MH, Tahk MJ, Kim MC, Lee D, Yoon J. Modified gain pseudo-measurement filter design for radar target tracking with range rate measurement. In: Proc. IEEE 2017 25th Mediterranean Conf. on Control and Automation (MED). 2017. pp. 1195-1200. DOI: 10.1109/MED.2017.7984280

[25] Saho K, Masugi M. Performance analysis of fixed-gain position-velocity-measured tracking filter. In: 2016 2nd International Conference on Control, Automation and Robotics (ICCAR); 28-30 April 2016; Hong Kong, China. IEEE; 2016. DOI: 10.1109/ICCAR.2016.7486760

[26] Saho K, Masugi M. Performance analysis of alpha-beta-gamma tracking filters using position and velocity measurements. EURASIP Journal on Advances in Signal Processing. 2015;2015:35. DOI: 10.1186/s13634-015-0220-3

[27] Kalata PR. The tracking index: A generalized parameter for alpha-beta and alpha-beta-gamma target trackers. IEEE Transactions on Aerospace and Electronic Systems. 1984;AES-20:174-182. DOI: 10.1109/TAES.1984.310438


**Chapter 13**

### **Distributed Kalman Filter**

Felix Govaers

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/intechopen.71941

#### Abstract

The continuing trend toward connected sensors ("internet of things" and "ubiquitous computing") drives a demand for powerful distributed estimation methodologies. In tracking applications, the distributed Kalman filter (DKF) provides an optimal solution under Kalman filter conditions. The optimal solution in terms of the estimation accuracy is also achieved by a centralized fusion algorithm, which receives all associated measurements. However, the centralized approach requires full communication of all measurements at each time step, whereas the DKF works at arbitrary communication rates since the calculation is fully distributed. A more recent methodology is based on the "accumulated state density" (ASD), which augments the states from multiple time instants to overcome spatial cross-correlations. This chapter explains the challenges in distributed tracking. Then, possible solutions are derived, which include the DKF and ASD approach.

Keywords: distributed Kalman filter, target tracking, multisensor fusion, information fusion, accumulated state density

© 2018 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### 1. Introduction

Modern tracking and surveillance system development is increasingly driving technological trends and algorithm developments toward networks of dislocated sensors. Besides classical target tracking, many other applications can be found, for instance, in robotics, manufacturing, health care, and financial economics. A multisensor network can exploit spatial diversity to compensate for the weak attributes of a single sensor, such as a limited field of view or high measurement noise. Also, heterogeneous sensors can reveal a more complete situational awareness and a more precise estimate of the underlying phenomena. Additionally, a sensor network is more robust against a single point of failure in comparison to a standalone system, if its architecture is chosen carefully.

Despite its unquestioned advantages, a multisensor network must cope with design-specific challenges; for instance, a full transmission of all the measurements to a fusion center can be hindered when the communication links are unreliable or constrained in bandwidth or costs. A well-known approach to cope with limited-bandwidth data links is to apply data processing at the sensor sites to generate local track parameters that are transmitted to the fusion center. The latter then reconstructs the global track parameters by an application of a Track-to-Track Fusion (T2TF) scheme. Depending on the scenario constraints, this is a nontrivial task, since the local tracks are mutually correlated due to the common process noise. The first known solution in the literature to the T2TF problem proposed to apply an information filter-based multisensor fusion algorithm in [1], which later became famous as the "tracklet fusion." However, the tracklet fusion also requires a transmission from each sensor after each time step in order to reconstruct the optimal solution.


This chapter presents the theory and the derivation of the distributed Kalman filter (DKF), which is an optimal solution of the T2TF problem under Kalman filter assumptions with respect to the mean squared error (MSE). Assuming Kalman conditions means that linear Gaussian models are provided for the motion model and all measurement models of the sensors. Moreover, it is assumed that measurement-to-track (at the sensors) and track-to-track (at the fusion center) association has been solved. The DKF approach requires, however, all remote sensor models to be known at each local sensor site, which is infeasible in most practical scenarios. Therefore, this chapter also presents a solution based on the accumulated state density (ASD), which is closely related to the DKF but does not require the measurement models to be known. Surveys that reflect the history of research in distributed estimation can be found, for instance, in [2, 3].

This chapter is structured as follows: Section 2 summarizes the problem formulation. A basic approach to T2TF is given in Section 3, where we present the least squares solution. Section 4 presents a simple fusion methodology, which is easy to compute and provides practical results for various applications. The reason why this approach is suboptimal is investigated in Section 5 by means of a recursive computation of the cross-covariances of the local tracks. In Section 6, the product representation of Gaussian probability densities is introduced, which is the basis for the derivation of the distributed Kalman filter in Section 7. An alternative derivation in terms of information parameters is provided in Section 8. Since the local measurement models are often unknown in practical applications, the distributed accumulated state density filter is introduced in Section 9. The chapter concludes with Section 10.

### 2. Problem formulation

Throughout this chapter, it is assumed that all $S$ sensors produce their measurements $z\_k^s \in \mathbb{R}^m$ at each time step $t\_k$ of the same target with its true state $\mathbf{x}\_k \in \mathbb{R}^n$ in a synchronized way. The extension to the unsynchronized case is straightforward by means of standard Kalman filter predictions, and is therefore omitted for the sake of notational simplicity. The measurement equation for sensor $s \in \{1, \ldots, S\}$ is given by

$$
z\_k^s = H\_k^s \mathbf{x}\_k + \mathbf{v}\_k^s \tag{1}
$$

where $\mathbf{v}\_k^s \sim \mathbf{N}\left(\mathbf{v}\_k^s; 0, R\_k^s\right)$ is the Gaussian distributed, zero-mean random variable, which models the noise of the sensing process. Therefore, the likelihood for a single measurement is fully described by the Gaussian

$$p\left(z\_k^s|\mathbf{x}\_k\right) = \mathbf{N}\left(z\_k^s; H\_k^s \mathbf{x}\_k, R\_k^s\right) \tag{2}$$

Since the measurement processes across all sensors $\mathbf{Z}\_k = \left\{z\_k^1, \ldots, z\_k^S\right\}$ are mutually independent, the joint likelihood of all sensor data produced at time $t\_k$ factorizes:

$$\begin{aligned} p(\mathbf{Z}\_k | \mathbf{x}\_k) &= \prod\_{s=1}^{S} p(\mathbf{z}\_k^s | \mathbf{x}\_k) \\ &= \prod\_{s=1}^{S} \mathbf{N}(\mathbf{z}\_k^s; H\_k^s \mathbf{x}\_k, \mathbf{R}\_k^s) \end{aligned} \tag{3}$$
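Because the joint noise covariance of independent sensors is block-diagonal, the product form of Eq. (3) equals a single Gaussian over the stacked measurement vector. A small numerical check of this factorization (the sensor matrices and measurement values are made up for illustration):

```python
import numpy as np

def gauss(z, m, S):
    """Density of the multivariate normal N(z; m, S)."""
    d = z - m
    norm = np.sqrt((2 * np.pi) ** len(z) * np.linalg.det(S))
    return float(np.exp(-0.5 * d @ np.linalg.solve(S, d)) / norm)

# Two sensors observing a 2-D state x_k; H^s, R^s, z^s are illustrative values.
x_k = np.array([1.0, -0.5])
H = [np.array([[1.0, 0.0]]), np.eye(2)]        # H_k^1 (position only), H_k^2
R = [np.array([[0.4]]), np.diag([0.3, 0.2])]   # R_k^1, R_k^2
z = [np.array([0.9]), np.array([1.1, -0.4])]   # z_k^1, z_k^2

# Left-hand side of Eq. (3): product of the per-sensor likelihoods, Eq. (2).
lhs = np.prod([gauss(z[s], H[s] @ x_k, R[s]) for s in range(2)])

# Equivalent joint Gaussian over the stacked measurement vector Z_k,
# with stacked H and block-diagonal R.
H_all = np.vstack(H)
R_all = np.block([[R[0], np.zeros((1, 2))],
                  [np.zeros((2, 1)), R[1]]])
rhs = gauss(np.concatenate(z), H_all @ x_k, R_all)

assert np.isclose(lhs, rhs)
```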

The true state of the target itself is modeled as a time-variant stochastic process, where the transition from time $t\_{k-1}$ to time $t\_k$ is given by the following motion equation:

$$\mathbf{x}\_{k} = F\_{k|k-1}\mathbf{x}\_{k-1} + \mathbf{w}\_{k} \tag{4}$$

where $\mathbf{w}\_k \sim \mathbf{N}\left(\mathbf{w}\_k; 0, Q\_k\right)$ is the Gaussian distributed, zero-mean random variable to model the process noise of the system. Analogously to the likelihood, this provides the probability density function for the transition model:

$$p(\mathbf{x}\_k|\mathbf{x}\_{k-1}) = \mathbf{N}(\mathbf{x}\_k; F\_{k|k-1}\mathbf{x}\_{k-1}, Q\_k) \tag{5}$$
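Eqs. (1) and (4) together define a linear-Gaussian simulation of the scenario. The sketch below generates one target trajectory and synchronized measurements from S = 2 position-only sensors; the motion and noise parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Constant-velocity motion model (illustrative): x_k = F x_{k-1} + w_k, Eq. (4).
T = 1.0
F = np.array([[1.0, T], [0.0, 1.0]])
Q = 0.01 * np.array([[T**3 / 3, T**2 / 2], [T**2 / 2, T]])

# Two position-only sensors with different noise levels, Eq. (1).
H = np.array([[1.0, 0.0]])
R = [np.array([[0.5]]), np.array([[2.0]])]

x = np.array([0.0, 1.0])
track, meas = [], []
for k in range(50):
    x = F @ x + rng.multivariate_normal(np.zeros(2), Q)      # w_k ~ N(0, Q)
    zs = [H @ x + rng.multivariate_normal(np.zeros(1), Rs)   # v_k^s ~ N(0, R^s)
          for Rs in R]
    track.append(x)
    meas.append(zs)

print(len(track), len(meas[0]))  # 50 time steps, S = 2 sensors per step
```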

Based on the local processors, each sensor node produces a track at time $t\_k$ in terms of an estimate $\mathbf{x}\_{k|k}^{s}$ and a corresponding estimation error covariance $P\_{k|k}^{s}$. In a T2TF scheme, these parameters are the only information which is transmitted to a fusion center (FC). It should be noted that the FC need not be a centralized, distinguished instance in the architecture; each and every processing node can act as an FC. The introduction of a distinguished FC is only for clarification of the different computation layers. An excellent overview of the pros and cons of various data fusion layers can be found in [4].

The T2TF problem can now be stated as follows: compute a fused estimate $\mathbf{x}\_{k|k}$ of the state $\mathbf{x}\_k$ and a consistent error covariance $P\_{k|k}$ by means of the local tracks and knowledge of their models:

$$
\mathbf{x}\_{k|k} \leftarrow \left\{ \mathbf{x}\_{k|k}^{1}, \dots, \mathbf{x}\_{k|k}^{S} \right\} \tag{6}
$$
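To make the setting concrete, the following sketch produces the local tracks that a fusion center would receive as input in Eq. (6): each sensor runs its own Kalman filter on measurements of the same target. All model parameters here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1.0
F = np.array([[1.0, T], [0.0, 1.0]])
Q = 0.05 * np.array([[T**3 / 3, T**2 / 2], [T**2 / 2, T]])
H = np.array([[1.0, 0.0]])                   # both sensors measure position only
R = [np.array([[0.5]]), np.array([[1.5]])]   # sensor-specific noise, Eq. (1)

x_true = np.array([0.0, 1.0])
# Local filter states: one (estimate, covariance) pair per sensor.
x_loc = [np.zeros(2) for _ in R]
P_loc = [10.0 * np.eye(2) for _ in R]

for k in range(30):
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    for s, Rs in enumerate(R):
        z = H @ x_true + rng.multivariate_normal(np.zeros(1), Rs)
        # Standard Kalman prediction and update at sensor s.
        x_p, P_p = F @ x_loc[s], F @ P_loc[s] @ F.T + Q
        S_k = H @ P_p @ H.T + Rs
        K = P_p @ H.T @ np.linalg.inv(S_k)
        x_loc[s] = x_p + K @ (z - H @ x_p)
        P_loc[s] = (np.eye(2) - K @ H) @ P_p

# The pairs (x_loc[s], P_loc[s]) are the only quantities sent to the FC.
print([P.shape for P in P_loc])
```

Note that both local filters process noise driven by the same target trajectory, which is precisely why their estimation errors become correlated through the common process noise.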

### 3. Least squares estimate


In order to compute an estimate as a well-suited combination of the local tracks, it is useful to consider the joint likelihood function given by the following Gaussian:

$$p\left(\mathbf{x}\_{k|k}^{1},\ldots,\mathbf{x}\_{k|k}^{S}|\mathbf{x}\_{k}\right) = \mathbf{N}\left(\begin{pmatrix} \mathbf{x}\_{k|k}^{1} \\ \vdots \\ \mathbf{x}\_{k|k}^{S} \end{pmatrix}; \begin{pmatrix} \mathbf{x}\_{k} \\ \vdots \\ \mathbf{x}\_{k} \end{pmatrix}, \begin{pmatrix} P\_{k|k}^{1,1} & \cdots & P\_{k|k}^{1,S} \\ \vdots & \ddots & \vdots \\ P\_{k|k}^{S,1} & \cdots & P\_{k|k}^{S,S} \end{pmatrix}\right),\tag{7}$$

¼ I

¼ I

<sup>1</sup>:<sup>S</sup> � �<sup>T</sup> <sup>P</sup><sup>1</sup>:<sup>S</sup> k∣k � ��<sup>1</sup>

<sup>1</sup>:<sup>S</sup> � �<sup>T</sup> <sup>P</sup><sup>1</sup>:<sup>S</sup> k∣k � ��<sup>1</sup>

4. Naïve fusion

I

I

I <sup>1</sup>:<sup>S</sup> � �<sup>T</sup> <sup>P</sup><sup>1</sup>:<sup>S</sup> k∣k � ��<sup>1</sup>

:

Concluding the derivations from above equation, one can obtain:

xk∣<sup>k</sup> ¼ Pk∣<sup>k</sup> I

Pk∣<sup>k</sup> ¼ I

P<sup>1</sup>:<sup>S</sup> <sup>k</sup>∣<sup>k</sup> ¼

¼

0

BBB@

xk∣<sup>k</sup> ¼ Pk∣<sup>k</sup>

Pk∣<sup>k</sup> <sup>¼</sup> <sup>X</sup> S

P<sup>1</sup>:<sup>S</sup> k∣k � ��<sup>1</sup>

Filling into the maximum likelihood formulas directly yields.

The last equation holds due to the fact that E I

error covariance is given in block-diagonal form:

obtained in closed form solution:

E I

<sup>1</sup>:<sup>S</sup> � �xk � <sup>x</sup><sup>1</sup>:<sup>S</sup>

<sup>1</sup>:<sup>S</sup> � �<sup>T</sup> <sup>P</sup><sup>1</sup>:<sup>S</sup> k∣k � ��<sup>1</sup>

<sup>1</sup>:<sup>S</sup> � �<sup>T</sup> <sup>P</sup><sup>1</sup>:<sup>S</sup> k∣k � ��<sup>1</sup>

It is obvious that for the ML estimate, it is assumed that the cross-covariances Pi,j

P<sup>1</sup>, <sup>1</sup> k∣k

As a direct consequence of the matrix inversion lemma (see Appendix 12.1), the inverse can be

P<sup>1</sup>, <sup>1</sup> k∣k � ��<sup>1</sup>

> X S

Ps k∣k � ��<sup>1</sup>

Ps k∣k � ��<sup>1</sup>

Thus, by means of a simple approximation of the ML estimate, we have obtained a first practical fusion rule for the FC, which we call convex combination due to its structure for further

s¼1

s¼1

0 B@

⋱

PS,S k∣k

⋱

1

PS,S k∣k � ��<sup>1</sup>

xs k∣k, 1

CCCA

!�<sup>1</sup> (16)

f g 1;…; S are known. Since this might not be given in practical scenarios, a simple approximation is to assume them to be zero. This approach is called Naïve fusion. It implies that the joint

k∣k � �<sup>2</sup> � � <sup>P</sup><sup>1</sup>:<sup>S</sup>

<sup>1</sup>:<sup>S</sup> � �xk � <sup>x</sup><sup>1</sup>:<sup>S</sup>

k∣k � ��<sup>1</sup>

k∣k � �<sup>2</sup> � � <sup>¼</sup> <sup>P</sup><sup>1</sup>:<sup>S</sup>

> x<sup>1</sup>:<sup>S</sup> <sup>k</sup>∣<sup>k</sup> ,

I

I <sup>1</sup>:<sup>S</sup> � � I

<sup>1</sup>:<sup>S</sup> � � � ��<sup>1</sup> (13)

<sup>1</sup>:<sup>S</sup> � �<sup>T</sup> <sup>P</sup><sup>1</sup>:<sup>S</sup> k∣k � ��<sup>1</sup>

http://dx.doi.org/10.5772/intechopen.71941

CA (14)

I

257

<sup>k</sup>∣<sup>k</sup>, i, j∈

(15)

<sup>1</sup>:<sup>S</sup> � � � ��<sup>1</sup>

Distributed Kalman Filter

<sup>k</sup>∣<sup>k</sup> is the joint covariance.

<sup>1</sup>:<sup>S</sup> � � � ��<sup>1</sup>

<sup>1</sup>:<sup>S</sup> � � � ��<sup>1</sup>

where Ps,s <sup>k</sup>∣<sup>k</sup> <sup>¼</sup> Ps <sup>k</sup>∣<sup>k</sup> are the track covariances on the block-diagonal entries and Pi,j <sup>k</sup>∣<sup>k</sup> ≜cov xi k∣k, xj <sup>k</sup>∣<sup>k</sup>jxk h i <sup>¼</sup> Pj,i T <sup>k</sup>∣<sup>k</sup> are the cross-covariances on the off-diagonal entries of the joint error covariance.

Since the joint likelihood from above is proportional to an exponential function:

$$p\left(\mathbf{x}\_{k|k'}^{1},\ldots,\mathbf{x}\_{k|k}^{S}|\mathbf{x}\_{k}\right)\bullet\exp\left\{-\frac{1}{2}\left(\begin{pmatrix}\mathbf{x}\_{k|k}^{1}\\ \vdots\\ \mathbf{x}\_{k|k}^{S}\end{pmatrix}-\begin{pmatrix}\mathbf{x}\_{k}\\ \vdots\\ \mathbf{x}\_{k}\end{pmatrix}\right)^{T}\begin{pmatrix}P\_{lk}^{1,1}&\cdots&P\_{lk}^{1,S}\\ \vdots&\ddots&\vdots\\ P\_{lk}^{S,1}&\cdots&P\_{lk}^{S,S}\end{pmatrix}^{-1}\left(\begin{pmatrix}\mathbf{x}\_{lk}^{1}\\ \vdots\\ \mathbf{x}\_{lk}^{S}\end{pmatrix}-\begin{pmatrix}\mathbf{x}\_{k}\\ \vdots\\ \mathbf{x}\_{lk}^{S}\end{pmatrix}\right)\right\}\tag{8}$$

the maximum likelihood (ML) estimate can be computed in terms of the least squares:

$$\mathbf{x}\_{k|k} = \mathbf{argmin}\_{\mathbf{x}\_k} \left\{ \left( \mathbf{x}\_{k|k}^{1:S} - I^{1:S} \mathbf{x}\_k \right)^T \left( P\_{k|k}^{1:S} \right)^{-1} \left( \mathbf{x}\_{k|k}^{1:S} - I^{1:S} \mathbf{x}\_k \right) \right\},\tag{9}$$

where $\mathbf{x}\_{k|k}^{1:S} = \left(\mathbf{x}\_{k|k}^{1\,\mathrm{T}},\ldots,\mathbf{x}\_{k|k}^{S\,\mathrm{T}}\right)^{\mathrm{T}}$, $I^{1:S} = \left(I,\ldots,I\right)^{\mathrm{T}}$, and $P\_{k|k}^{1:S}$ for the joint error covariance have been introduced for notational simplicity. A closed-form solution of the ML estimate can be obtained by setting the gradient with respect to the state to zero:

$$0 = \nabla\_{\mathbf{x}\_{k}}\left(\mathbf{x}\_{k|k}^{1:S}-I^{1:S}\mathbf{x}\_{k}\right)^{\mathrm{T}}\left(P\_{k|k}^{1:S}\right)^{-1}\left(\mathbf{x}\_{k|k}^{1:S}-I^{1:S}\mathbf{x}\_{k}\right) = -2\left(I^{1:S}\right)^{\mathrm{T}}\left(P\_{k|k}^{1:S}\right)^{-1}\left(\mathbf{x}\_{k|k}^{1:S}-I^{1:S}\mathbf{x}\_{k}\right).\tag{10}$$

Therefore, the ML estimate is given by:

$$\mathbf{x}\_{k\mid k} = \left( \left( I^{1:S} \right)^{T} \left( P\_{k\mid k}^{1:S} \right)^{-1} \left( I^{1:S} \right) \right)^{-1} \left( I^{1:S} \right)^{T} \left( P\_{k\mid k}^{1:S} \right)^{-1} \mathbf{x}\_{k\mid k}^{1:S}.\tag{11}$$

For information fusion applications, it is also important to have a consistent estimate of the squared error, in other words, we need to compute the corresponding error covariance:

$$\begin{split} \mathrm{cov}\left[\mathbf{x}\_{k|k}|\mathbf{x}\_{k}\right] &= \mathrm{E}\left[\left(\mathbf{x}\_{k}-\mathbf{x}\_{k|k}\right)\left(\mathbf{x}\_{k}-\mathbf{x}\_{k|k}\right)^{\mathrm{T}}\right] \triangleq \mathrm{E}\left[\left(\mathbf{x}\_{k}-\mathbf{x}\_{k|k}\right)^{2}\right]\\ &= \mathrm{E}\left[\left(\mathbf{x}\_{k}-\left(\left(I^{1:S}\right)^{\mathrm{T}}\left(P\_{k|k}^{1:S}\right)^{-1}I^{1:S}\right)^{-1}\left(I^{1:S}\right)^{\mathrm{T}}\left(P\_{k|k}^{1:S}\right)^{-1}\mathbf{x}\_{k|k}^{1:S}\right)^{2}\right]\\ &= \mathrm{E}\left[\left(\left(\left(I^{1:S}\right)^{\mathrm{T}}\left(P\_{k|k}^{1:S}\right)^{-1}I^{1:S}\right)^{-1}\left(\left(I^{1:S}\right)^{\mathrm{T}}\left(P\_{k|k}^{1:S}\right)^{-1}I^{1:S}\,\mathbf{x}\_{k}-\left(I^{1:S}\right)^{\mathrm{T}}\left(P\_{k|k}^{1:S}\right)^{-1}\mathbf{x}\_{k|k}^{1:S}\right)\right)^{2}\right] \end{split}\tag{12}$$

$$\begin{split} &= \left(\left(I^{1:S}\right)^{\mathrm{T}}\left(P\_{k|k}^{1:S}\right)^{-1}I^{1:S}\right)^{-1}\left(I^{1:S}\right)^{\mathrm{T}}\left(P\_{k|k}^{1:S}\right)^{-1}\,\mathrm{E}\left[\left(I^{1:S}\mathbf{x}\_{k}-\mathbf{x}\_{k|k}^{1:S}\right)^{2}\right]\left(P\_{k|k}^{1:S}\right)^{-1}I^{1:S}\left(\left(I^{1:S}\right)^{\mathrm{T}}\left(P\_{k|k}^{1:S}\right)^{-1}I^{1:S}\right)^{-1}\\ &= \left(\left(I^{1:S}\right)^{\mathrm{T}}\left(P\_{k|k}^{1:S}\right)^{-1}I^{1:S}\right)^{-1}. \end{split}$$

The last equation holds due to the fact that $\mathrm{E}\left[\left(I^{1:S}\mathbf{x}\_{k}-\mathbf{x}\_{k|k}^{1:S}\right)^{2}\right] = P\_{k|k}^{1:S}$ is the joint covariance. Concluding the derivations from above, one can obtain:

$$\begin{aligned} \mathbf{x}\_{k|k} &= P\_{k|k}\left(I^{1:S}\right)^{\mathrm{T}}\left(P\_{k|k}^{1:S}\right)^{-1}\mathbf{x}\_{k|k}^{1:S},\\ P\_{k|k} &= \left(\left(I^{1:S}\right)^{\mathrm{T}}\left(P\_{k|k}^{1:S}\right)^{-1}\left(I^{1:S}\right)\right)^{-1} \end{aligned}\tag{13}$$
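As a quick numerical sketch of the fusion rule (13), the following NumPy snippet fuses two scalar tracks with a known cross-covariance; all numerical values are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

# Illustrative joint error covariance for S = 2 scalar tracks,
# including a nonzero cross-covariance P12 (assumed values).
P1, P2, P12 = 2.0, 3.0, 0.5
P_joint = np.array([[P1, P12],
                    [P12, P2]])
I_stack = np.array([[1.0],
                    [1.0]])            # I^{1:S} = (I, ..., I)^T for a scalar state
x_stack = np.array([[1.2],
                    [0.8]])            # stacked local estimates x^{1:S}_{k|k}

# Eq. (13): P_{k|k} = (I^T P^{-1} I)^{-1},  x_{k|k} = P_{k|k} I^T P^{-1} x^{1:S}
P_inv = np.linalg.inv(P_joint)
P_fused = np.linalg.inv(I_stack.T @ P_inv @ I_stack)
x_fused = P_fused @ I_stack.T @ P_inv @ x_stack
```

Setting the cross-covariance `P12` to zero reduces this to the block-diagonal case treated in the next section.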

### 4. Naïve fusion


It is obvious that for the ML estimate, it is assumed that the cross-covariances $P\_{k|k}^{i,j},\ i,j\in\{1,\ldots,S\}$, are known. Since this might not be given in practical scenarios, a simple approximation is to assume them to be zero. This approach is called Naïve fusion. It implies that the joint error covariance is given in block-diagonal form:

$$P\_{k|k}^{1:S} = \begin{pmatrix} P\_{k|k}^{1,1} & & \\ & \ddots & \\ & & P\_{k|k}^{S,S} \end{pmatrix} \tag{14}$$

As a direct consequence of the matrix inversion lemma (see Appendix 12.1), the inverse can be obtained in closed form solution:

$$\left(P\_{k|k}^{1:\mathbb{S}}\right)^{-1} = \begin{pmatrix} \left(P\_{k|k}^{1,1}\right)^{-1} & & & \\ & \ddots & & \\ & & \left(P\_{k|k}^{\mathbb{S},\mathbb{S}}\right)^{-1} \end{pmatrix} \tag{15}$$

Filling this into the maximum likelihood formulas directly yields:

$$\begin{aligned} \mathbf{x}\_{k|k} &= P\_{k|k} \sum\_{s=1}^{S} \left( P\_{k|k}^{s} \right)^{-1} \mathbf{x}\_{k|k'}^{s} \\ P\_{k|k} &= \left( \sum\_{s=1}^{S} \left( P\_{k|k}^{s} \right)^{-1} \right)^{-1} \end{aligned} \tag{16}$$
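Under the block-diagonal assumption, the convex combination (16) is a few lines of NumPy; the three local tracks below are illustrative assumptions.

```python
import numpy as np

# Naive fusion (convex combination) of S = 3 local tracks of a 2-D state.
x_local = [np.array([1.0, 0.0]),
           np.array([1.2, -0.1]),
           np.array([0.9, 0.1])]
P_local = [np.diag([1.0, 2.0]),
           np.diag([2.0, 1.0]),
           np.diag([4.0, 4.0])]

# Eq. (16): P_{k|k} = (sum_s (P^s)^{-1})^{-1},  x_{k|k} = P_{k|k} sum_s (P^s)^{-1} x^s
info_mat = sum(np.linalg.inv(P) for P in P_local)
info_vec = sum(np.linalg.inv(P) @ x for P, x in zip(P_local, x_local))
P_fused = np.linalg.inv(info_mat)
x_fused = P_fused @ info_vec
```

Note that tracks with smaller covariance contribute more information to the fused result.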

Thus, by means of a simple approximation of the ML estimate, we have obtained a first practical fusion rule for the FC, which we call convex combination due to its structure, for further reference.

In order to quantify the quality of this approximation, the cross-covariances between two local estimates can be computed recursively. For the prediction step, one obtains:

$$\begin{aligned} P\_{k|k-1}^{i,j} &= \mathrm{E}\left[\left(\mathbf{x}\_{k}-\mathbf{x}\_{k|k-1}^{i}\right)\left(\mathbf{x}\_{k}-\mathbf{x}\_{k|k-1}^{j}\right)^{\mathrm{T}}\right]\\ &= \mathrm{E}\left[\left(F\_{k|k-1}\mathbf{x}\_{k-1}+\mathbf{w}\_{k}-F\_{k|k-1}\mathbf{x}\_{k-1|k-1}^{i}\right)\left(F\_{k|k-1}\mathbf{x}\_{k-1}+\mathbf{w}\_{k}-F\_{k|k-1}\mathbf{x}\_{k-1|k-1}^{j}\right)^{\mathrm{T}}\right]\\ &= F\_{k|k-1}\,\mathrm{E}\left[\left(\mathbf{x}\_{k-1}-\mathbf{x}\_{k-1|k-1}^{i}\right)\left(\mathbf{x}\_{k-1}-\mathbf{x}\_{k-1|k-1}^{j}\right)^{\mathrm{T}}\right]F\_{k|k-1}^{\mathrm{T}} \end{aligned}\tag{17}$$

$$\begin{aligned} &\quad+F\_{k|k-1}\,\mathrm{E}\left[\left(\mathbf{x}\_{k-1}-\mathbf{x}\_{k-1|k-1}^{i}\right)\mathbf{w}\_{k}^{\mathrm{T}}\right]+\mathrm{E}\left[\mathbf{w}\_{k}\left(\mathbf{x}\_{k-1}-\mathbf{x}\_{k-1|k-1}^{j}\right)^{\mathrm{T}}\right]F\_{k|k-1}^{\mathrm{T}}+\mathrm{E}\left[\mathbf{w}\_{k}\mathbf{w}\_{k}^{\mathrm{T}}\right]\\ &= F\_{k|k-1}P\_{k-1|k-1}^{i,j}F\_{k|k-1}^{\mathrm{T}}+Q\_{k} \end{aligned}\tag{18}$$

Together with the filtering step of the local Kalman filters, the cross-covariances therefore follow the recursion:

$$\begin{aligned} P\_{0|0}^{i,j} &= 0\\ P\_{k|k-1}^{i,j} &= F\_{k|k-1}P\_{k-1|k-1}^{i,j}F\_{k|k-1}^{\mathrm{T}}+Q\_{k}\\ P\_{k|k}^{i,j} &= \left(I-W\_{k|k-1}^{i}H\_{k}^{i}\right)P\_{k|k-1}^{i,j}\left(I-W\_{k|k-1}^{j}H\_{k}^{j}\right)^{\mathrm{T}} \end{aligned}\tag{19}$$
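The recursion (19) can be simulated directly; the scalar model below, with fixed filter gains, is an illustrative assumption and only meant to show how the process noise level drives the cross-covariance.

```python
import numpy as np

# Cross-covariance recursion between two local Kalman filters (scalar state).
F = np.array([[1.0]])
Q = np.array([[0.1]])          # process noise; Q = 0 keeps P_ij at zero
H_i = H_j = np.array([[1.0]])
W_i = W_j = np.array([[0.5]])  # assumed (fixed) filter gains

P_ij = np.zeros((1, 1))        # P^{i,j}_{0|0} = 0
for _ in range(50):
    P_ij = F @ P_ij @ F.T + Q                                          # prediction
    P_ij = (np.eye(1) - W_i @ H_i) @ P_ij @ (np.eye(1) - W_j @ H_j).T  # filtering
```

With `Q = 0`, `P_ij` stays exactly zero, which is the regime in which Naïve fusion is exact.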

In particular, if the process noise vanishes, the cross-covariances remain zero and the Naïve fusion yields the exact result. If this is not the case, as in most practical applications, then the Naïve fusion method is an approximation and its degree of approximation depends primarily on the level of the process noise.


### 6. Gaussian product representation

The basic concept of the distributed Kalman filter is to make the local parameters stochastically independent, even if process noise is present. This is achieved by a product representation, which directly follows from the fact that:

$$\begin{aligned} &\exp\left\{-\frac{1}{2}\left(\begin{pmatrix}\mathbf{x}\_{k|k}^{1}\\ \vdots\\ \mathbf{x}\_{k|k}^{S}\end{pmatrix}-\begin{pmatrix}\mathbf{x}\_{k}\\ \vdots\\ \mathbf{x}\_{k}\end{pmatrix}\right)^{\mathrm{T}}\begin{pmatrix}\left(P\_{k|k}^{1}\right)^{-1}&&\\ &\ddots&\\ &&\left(P\_{k|k}^{S}\right)^{-1}\end{pmatrix}\left(\begin{pmatrix}\mathbf{x}\_{k|k}^{1}\\ \vdots\\ \mathbf{x}\_{k|k}^{S}\end{pmatrix}-\begin{pmatrix}\mathbf{x}\_{k}\\ \vdots\\ \mathbf{x}\_{k}\end{pmatrix}\right)\right\}\\ &= \exp\left\{-\frac{1}{2}\sum\_{s=1}^{S}\left(\mathbf{x}\_{k|k}^{s}-\mathbf{x}\_{k}\right)^{\mathrm{T}}\left(P\_{k|k}^{s}\right)^{-1}\left(\mathbf{x}\_{k|k}^{s}-\mathbf{x}\_{k}\right)\right\} \propto \prod\_{s=1}^{S}\mathcal{N}\left(\mathbf{x}\_{k};\mathbf{x}\_{k|k}^{s},P\_{k|k}^{s}\right) \end{aligned}\tag{20}$$

Thus, the Gaussian product representation is equivalent to uncorrelated track parameters for each processing node. It should be noted that the product representation is not normalized, that is, the integral for S > 1 is not unity. This, however, is not of practical relevance since the fused estimate density is a Gaussian whose parameters are provided by the convex combination.

### 7. Derivation of the distributed Kalman filter

For the DKF, we are going to modify the local processing scheme for each sensor in order to have the product representation hold at each instant of time. Then, when the fusion center receives the parameters from all sensors, the convex combination can be applied to compute the optimal global estimate. Note that the convex combination does not consider a local prior of the fusion center; therefore, the result will be independent of previous transmissions. This can be of great benefit if communication channels with unreliable links have to be considered, since the full information on the target state is distributed in the sensor network. However, for completeness, it should also be noted that the modified local parameters are no longer optimal in a local sense. One could say that local optimality is given up for the sake of global optimality [5].

In the following sections, the derivation of a prediction-filtering recursion of the DKF is discussed.

#### 7.1. DKF prediction


For the prediction, it is assumed that the previous posterior is given in product representation:

$$p\left(\mathbf{x}\_{k-1}|Z^{k-1}\right) \propto \prod\_{s=1}^{S}\mathcal{N}\left(\mathbf{x}\_{k-1};\mathbf{x}\_{k-1|k-1}^{s},P\_{k-1|k-1}^{s}\right)\tag{21}$$

For notational simplicity, we have conditioned the posterior on the full data set $Z^{k-1} = \left\{Z\_{1}^{k-1},\ldots,Z\_{S}^{k-1}\right\}$, where the $Z\_{s}^{k-1} = \left\{Z\_{1}^{s},\ldots,Z\_{k-1}^{s}\right\}$ comprise the measurements of all sensors and all time steps up to time $t\_{k-1}$, of which the local tracks are sufficient statistics. At the initialization phase, that is, $k-1=0$, the product representation is trivial, since the initial estimates can be based on first measurements, which are mutually independent.

To derive a closed-form solution for the prediction of the product representation, it is required to globalize the covariances $P\_{k-1|k-1}^{s}$ of the local processing nodes at first. This process changes the local parameters such that the same previous posterior density is factorized; however, the local covariances will be unified to a single $\widetilde{P}\_{k-1|k-1}$. Rigorously speaking, this matrix does not represent a meaningful covariance in the sense of an expected squared estimation error anymore. Still, the fused result will be optimal since the global density is not changed during this process. Thus, if we set

$$p\left(\mathbf{x}\_{k-1}|Z^{k-1}\right) \propto \prod\_{s=1}^{S} N\left(\mathbf{x}\_{k-1}; \widetilde{\mathbf{x}}\_{k-1|k-1}^{s}, \widetilde{P}\_{k-1|k-1}\right) \tag{22}$$

where

$$\begin{aligned} \widetilde{\mathbf{x}}\_{k-1|k-1}^{s} &= S\,P\_{k-1|k-1}\left(P\_{k-1|k-1}^{s}\right)^{-1}\mathbf{x}\_{k-1|k-1}^{s}\\ \widetilde{P}\_{k-1|k-1} &= S\,P\_{k-1|k-1}\\ P\_{k-1|k-1} &= \left(\sum\_{s=1}^{S}\left(P\_{k-1|k-1}^{s}\right)^{-1}\right)^{-1} \end{aligned}\tag{23}$$

then the same fused density will be obtained, which is easily verified by means of the convex combination. It should be noted that the remote error covariances $P\_{k-1|k-1}^{s}$ are required to compute the globalized covariance matrix $\widetilde{P}\_{k-1|k-1}$. Since Kalman filter conditions are assumed, they can be computed without data transmission, as they do not depend on the local sensor measurements. Therefore, it is sufficient to be aware of the remote sensor models.
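The globalization (23) can be checked numerically: the product representation with the common covariance $S\,P\_{k-1|k-1}$ reproduces the fused estimate under the convex combination. The two scalar tracks below are illustrative assumptions.

```python
import numpy as np

S = 2
x_loc = [np.array([1.0]), np.array([2.0])]
P_loc = [np.array([[1.0]]), np.array([[3.0]])]

# Eq. (23): globalized parameters
P_fused = np.linalg.inv(sum(np.linalg.inv(P) for P in P_loc))
x_glob = [S * P_fused @ np.linalg.inv(P_loc[s]) @ x_loc[s] for s in range(S)]
P_glob = S * P_fused                         # common covariance S * P_{k-1|k-1}

# Convex combination of the globalized tracks:
x_check = np.linalg.inv(S * np.linalg.inv(P_glob)) @ sum(
    np.linalg.inv(P_glob) @ x for x in x_glob)
```

The check below confirms that `x_check` coincides with the directly fused estimate of the original tracks.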

The prediction formulas can now be obtained by a marginalization of the joint density of the current and the last time step:

$$\begin{aligned} p\left(\mathbf{x}\_{k}|Z^{k-1}\right) &= \int \mathrm{d}\mathbf{x}\_{k-1}\, p\left(\mathbf{x}\_{k},\mathbf{x}\_{k-1}|Z^{k-1}\right)\\ &= \int \mathrm{d}\mathbf{x}\_{k-1}\, p\left(\mathbf{x}\_{k}|\mathbf{x}\_{k-1},Z^{k-1}\right)p\left(\mathbf{x}\_{k-1}|Z^{k-1}\right)\\ &= \int \mathrm{d}\mathbf{x}\_{k-1}\, p\left(\mathbf{x}\_{k}|\mathbf{x}\_{k-1}\right)p\left(\mathbf{x}\_{k-1}|Z^{k-1}\right) \end{aligned}\tag{24}$$

Inserting the Markov transition density and the product representation (22) yields:

$$p\left(\mathbf{x}\_{k}|Z^{k-1}\right) \propto \int \mathrm{d}\mathbf{x}\_{k-1}\,\mathcal{N}\left(\mathbf{x}\_{k};F\_{k|k-1}\mathbf{x}\_{k-1},Q\_{k}\right)\prod\_{s=1}^{S}\mathcal{N}\left(\mathbf{x}\_{k-1};\widetilde{\mathbf{x}}\_{k-1|k-1}^{s},\widetilde{P}\_{k-1|k-1}\right)\tag{25}$$

The transition density itself can be written as a product of $S$ identical Gaussian factors:

$$\begin{split} \mathcal{N}\left(\mathbf{x}\_{k};F\_{k|k-1}\mathbf{x}\_{k-1},Q\_{k}\right) &\propto \exp\left\{-\frac{1}{2}\left(F\_{k|k-1}\mathbf{x}\_{k-1}-\mathbf{x}\_{k}\right)^{\mathrm{T}}\left(Q\_{k}\right)^{-1}\left(F\_{k|k-1}\mathbf{x}\_{k-1}-\mathbf{x}\_{k}\right)\right\}\\ &= \exp\left\{-\frac{1}{2}\left(F\_{k|k-1}\mathbf{x}\_{k-1}-\mathbf{x}\_{k}\right)^{\mathrm{T}}S\left(SQ\_{k}\right)^{-1}\left(F\_{k|k-1}\mathbf{x}\_{k-1}-\mathbf{x}\_{k}\right)\right\}\\ &\propto \left[\mathcal{N}\left(\mathbf{x}\_{k};F\_{k|k-1}\mathbf{x}\_{k-1},SQ\_{k}\right)\right]^{S} \end{split}\tag{26}$$

$$p\left(\mathbf{x}\_{k}|\mathbf{Z}^{k-1}\right) \propto \int d\mathbf{x}\_{k-1} \prod\_{s=1}^{S} \mathcal{N}\left(\mathbf{x}\_{k}; F\_{k|k-1}\mathbf{x}\_{k-1}, \mathbf{S}Q\_{k}\right) \mathcal{N}\left(\mathbf{x}\_{k-1}; \widetilde{\mathbf{x}}\_{k-1|k-1}^{s}, \widetilde{P}\_{k-1|k-1}\right) \tag{27}$$

$$p\left(\mathbf{x}\_{k}|\mathbf{Z}^{k-1}\right) \propto \prod\_{s=1}^{S} \mathcal{N}\left(\mathbf{x}\_{k}; \mathbf{x}\_{k|k-1}^{s}, \widetilde{P}\_{k|k-1}^{s}\right) \int d\mathbf{x}\_{k-1} \prod\_{s=1}^{S} \mathcal{N}(\mathbf{x}\_{k-1}; \mathbf{y}^{s}, \mathbf{Y}), \tag{28}$$

where

$$\begin{aligned} \mathbf{x}\_{k|k-1}^{s} &= F\_{k|k-1}\widetilde{\mathbf{x}}\_{k-1|k-1}^{s}\\ \widetilde{P}\_{k|k-1}^{s} &= F\_{k|k-1}\widetilde{P}\_{k-1|k-1}F\_{k|k-1}^{\mathrm{T}}+SQ\_{k}\\ \mathbf{y}^{s} &= Y\left(\left(\widetilde{P}\_{k|k-1}^{s}\right)^{-1}\mathbf{x}\_{k|k-1}^{s}+F\_{k|k-1}^{\mathrm{T}}\left(SQ\_{k}\right)^{-1}\mathbf{x}\_{k}\right)\\ Y &= \left(\left(\widetilde{P}\_{k|k-1}^{s}\right)^{-1}+F\_{k|k-1}^{\mathrm{T}}\left(SQ\_{k}\right)^{-1}F\_{k|k-1}\right)^{-1}. \end{aligned}\tag{29}$$

$$\begin{aligned} &\int \mathrm{d}\mathbf{x}\_{k-1}\prod\_{s=1}^{S}\mathcal{N}\left(\mathbf{x}\_{k-1}-YF\_{k|k-1}^{\mathrm{T}}\left(SQ\_{k}\right)^{-1}\mathbf{x}\_{k};\,Y\left(\widetilde{P}\_{k|k-1}^{s}\right)^{-1}\mathbf{x}\_{k|k-1}^{s},\,Y\right)\\ &\quad\propto\int \mathrm{d}\mathbf{x}\_{k-1}\,\mathcal{N}\left(\mathbf{x}\_{k-1}-YF\_{k|k-1}^{\mathrm{T}}\left(SQ\_{k}\right)^{-1}\mathbf{x}\_{k};\,\overline{\mathbf{y}},\,\overline{Y}\right)=1, \end{aligned}\tag{30}$$

since the remaining integral over the Gaussian product is a constant with respect to $\mathbf{x}\_{k}$. Thus, the predicted density is again given in product representation with the parameters:

$$\begin{aligned} \widetilde{\mathbf{x}}\_{k-1|k-1}^{s} &= S\,P\_{k-1|k-1}\left(P\_{k-1|k-1}^{s}\right)^{-1}\mathbf{x}\_{k-1|k-1}^{s}\\ \widetilde{P}\_{k-1|k-1} &= S\,P\_{k-1|k-1}\\ \mathbf{x}\_{k|k-1}^{s} &= F\_{k|k-1}\widetilde{\mathbf{x}}\_{k-1|k-1}^{s}\\ \widetilde{P}\_{k|k-1}^{s} &= F\_{k|k-1}\widetilde{P}\_{k-1|k-1}F\_{k|k-1}^{\mathrm{T}}+S\,Q\_{k}. \end{aligned}$$
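One DKF prediction step per node, following the globalization (23) and the predicted parameters above, may be sketched as follows; the model matrices and local tracks are illustrative assumptions.

```python
import numpy as np

S = 2
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])
Q = 0.01 * np.eye(2)
x_loc = [np.array([0.0, 1.0]), np.array([0.2, 0.9])]
P_loc = [np.eye(2), 2.0 * np.eye(2)]

# Globalization of the previous posterior, Eq. (23)
P_fused = np.linalg.inv(sum(np.linalg.inv(P) for P in P_loc))
x_tilde = [S * P_fused @ np.linalg.inv(P_loc[s]) @ x_loc[s] for s in range(S)]
P_tilde = S * P_fused

# Local predicted parameters: x^s = F x~^s,  P~^s = F P~ F^T + S Q
x_pred = [F @ x for x in x_tilde]
P_pred = F @ P_tilde @ F.T + S * Q
```

The averaged local predictions recover the globally predicted fused state, as the check below confirms.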

#### 7.2. DKF filtering

For the filtering step, Bayes' rule is applied to the predicted density:

$$p\left(\mathbf{x}\_{k}|Z^{k}\right) = \frac{p\left(Z\_{k}|\mathbf{x}\_{k}\right)p\left(\mathbf{x}\_{k}|Z^{k-1}\right)}{p\left(Z\_{k}|Z^{k-1}\right)}\tag{31}$$

Since the sensor measurements are mutually independent, the joint likelihood factorizes:

$$p\left(Z\_{k}|\mathbf{x}\_{k}\right) = \prod\_{s=1}^{S} p\left(Z\_{k}^{s}|\mathbf{x}\_{k}\right)\tag{32}$$

Inserting the linear-Gaussian sensor models and the predicted product representation yields:

$$p\left(\mathbf{x}\_{k}|Z^{k}\right) \propto \prod\_{s=1}^{S}\mathcal{N}\left(\mathbf{z}\_{k}^{s};H\_{k}^{s}\mathbf{x}\_{k},R\_{k}^{s}\right)\mathcal{N}\left(\mathbf{x}\_{k};\mathbf{x}\_{k|k-1}^{s},\widetilde{P}\_{k|k-1}^{s}\right)\tag{33}$$

$$p\left(\mathbf{x}\_{k}|\mathbf{Z}^{k}\right) \propto \prod\_{s=1}^{S} \mathcal{N}\left(\mathbf{x}\_{k}; \mathbf{x}\_{k|k}^{s}, P\_{k|k}^{s}\right) \tag{34}$$

where

$$\begin{aligned} \mathbf{x}\_{k|k}^{s} &= \mathbf{x}\_{k|k-1}^{s}+W\_{k|k-1}^{s}\boldsymbol{\nu}\_{k}^{s}\\ P\_{k|k}^{s} &= \widetilde{P}\_{k|k-1}^{s}-W\_{k|k-1}^{s}S\_{k}^{s}W\_{k|k-1}^{s\,\mathrm{T}}\\ \boldsymbol{\nu}\_{k}^{s} &= \mathbf{z}\_{k}^{s}-H\_{k}^{s}\mathbf{x}\_{k|k-1}^{s}\\ W\_{k|k-1}^{s} &= \widetilde{P}\_{k|k-1}^{s}H\_{k}^{s\,\mathrm{T}}\left(S\_{k}^{s}\right)^{-1}\\ S\_{k}^{s} &= H\_{k}^{s}\widetilde{P}\_{k|k-1}^{s}H\_{k}^{s\,\mathrm{T}}+R\_{k}^{s}. \end{aligned}\tag{35}$$

Again, we have omitted the factors, which are independent of $\mathbf{x}\_{k}$.
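The local filtering step (35) is a standard Kalman update per node; the sensor model, predicted track, and measurement below are illustrative assumptions.

```python
import numpy as np

H = np.array([[1.0, 0.0]])                 # sensor observes the first component
R = np.array([[0.5]])
x_pred = np.array([1.0, 0.5])
P_pred = np.array([[2.0, 0.0],
                   [0.0, 1.0]])
z = np.array([1.4])

nu = z - H @ x_pred                        # innovation nu^s_k
S_inn = H @ P_pred @ H.T + R               # innovation covariance S^s_k
W = P_pred @ H.T @ np.linalg.inv(S_inn)    # gain W^s_{k|k-1}
x_upd = x_pred + W @ nu
P_upd = P_pred - W @ S_inn @ W.T
```

Only the observed component and its covariance change; the unobserved component is left untouched by this particular sensor.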


### 8. Information filter formulation of the DKF

In [6], an elegant derivation of the DKF formulas was published based on the information filter (IF). The IF uses information matrices, which are inverted covariances, and information states, which are information matrices multiplied with states. The optimal, centralized update formulas for $S$ sensors based on the predicted information parameters $\left(P\_{k|k-1}\right)^{-1}$ and $\left(P\_{k|k-1}\right)^{-1}\mathbf{x}\_{k|k-1}$ are given by:

$$\begin{aligned} \left(P\_{k|k}\right)^{-1}\mathbf{x}\_{k|k} &= \left(P\_{k|k-1}\right)^{-1}\mathbf{x}\_{k|k-1}+\sum\_{s=1}^{S}\mathbf{i}\_{k}^{s}\\ \left(P\_{k|k}\right)^{-1} &= \left(P\_{k|k-1}\right)^{-1}+\sum\_{s=1}^{S}I\_{k}^{s} \end{aligned}\tag{36}$$

where $\mathbf{i}\_{k}^{s} = H\_{k}^{s\,\mathrm{T}}\left(R\_{k}^{s}\right)^{-1}\mathbf{z}\_{k}^{s}$ and $I\_{k}^{s} = H\_{k}^{s\,\mathrm{T}}\left(R\_{k}^{s}\right)^{-1}H\_{k}^{s}$ are the local information contributions from the current measurements at time $t\_{k}$, which were received by the FC. If we want to distribute the computation to $S$ nodes, we will have them uncorrelated as in the DKF previously. Since the fused estimate is obtained via the convex combination, the local information parameters are summed up:

$$\begin{aligned} \left(P\_{k|k}\right)^{-1} \mathbf{x}\_{k|k} &= \sum\_{s=1}^{S} \left(P\_{k|k}^{s}\right)^{-1} \mathbf{x}\_{k|k}^{s} \\ \left(P\_{k|k}\right)^{-1} &= \sum\_{s=1}^{S} \left(P\_{k|k}^{s}\right)^{-1} \end{aligned} \tag{37}$$

This summation structure can be used to provide a closed prediction-filtering cycle.
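The centralized information update (36), with the contributions defined above, can be sketched as follows; the predicted track and the two sensor models are illustrative assumptions.

```python
import numpy as np

x_pred = np.array([1.0, 0.0])
P_pred = np.eye(2)
# Per-sensor (H, R, z) triples
sensors = [(np.array([[1.0, 0.0]]), np.array([[0.5]]), np.array([1.2])),
           (np.array([[0.0, 1.0]]), np.array([[1.0]]), np.array([0.3]))]

Y = np.linalg.inv(P_pred)          # predicted information matrix
y = Y @ x_pred                     # predicted information state
for H, R, z in sensors:
    R_inv = np.linalg.inv(R)
    y = y + H.T @ R_inv @ z        # add i^s_k = H^T R^{-1} z
    Y = Y + H.T @ R_inv @ H        # add I^s_k = H^T R^{-1} H
P_upd = np.linalg.inv(Y)
x_upd = P_upd @ y
```

Because the contributions enter as plain sums, the loop over sensors can be distributed over nodes and collected at the FC.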

#### 8.1. Information DKF prediction

The prediction of the state is easier than a direct transition of the information parameters. Based on the fused estimate, one can obtain:

$$\begin{aligned} \mathbf{x}\_{k|k-1} &= F\_{k|k-1} \mathbf{x}\_{k-1|k-1} \\ &= F\_{k|k-1} P\_{k-1|k-1} \sum\_{s=1}^{S} \left( P\_{k-1|k-1}^{s} \right)^{-1} \mathbf{x}\_{k-1|k-1}^{s} \\ &= \sum\_{s=1}^{S} F\_{k|k-1} P\_{k-1|k-1} \left( P\_{k-1|k-1}^{s} \right)^{-1} \mathbf{x}\_{k-1|k-1}^{s} . \end{aligned} \tag{38}$$

Thus, the local predicted state parameters are given as:

$$\mathbf{x}_{k|k-1}^{s} = F_{k|k-1}\, S P_{k-1|k-1} \left( P_{k-1|k-1}^{s} \right)^{-1} \mathbf{x}_{k-1|k-1}^{s}, \tag{39}$$

so that, together with the inflated local prior covariances below, the sum of the local information states reproduces the global predicted information state $\left(P_{k|k-1}\right)^{-1}\mathbf{x}_{k|k-1}$.

Analogously, one obtains for the prior covariance:

$$\begin{split} P_{k|k-1} &= F_{k|k-1} P_{k-1|k-1} F_{k|k-1}^{\mathrm{T}} + Q_k \\ &= \frac{1}{S}\left( F_{k|k-1}\, S P_{k-1|k-1}\, F_{k|k-1}^{\mathrm{T}} + S Q_k \right) \end{split} \tag{40}$$

Thus, if we set $P^s_{k|k-1} = S P_{k|k-1} = F_{k|k-1}\, S P_{k-1|k-1}\, F^{\mathrm{T}}_{k|k-1} + S Q_k$, then the convex combination yields the exact global fused covariance.
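This globalization step can be illustrated in a few lines (illustrative matrices; our own sketch, not code from the chapter):

```python
import numpy as np

# Quick check of the globalization trick of Eq. (40): each local prior
# covariance is set to S times the global one, so the convex combination
# of the S local information matrices returns the exact global prior.
S = 4
F = np.array([[1.0, 1.0], [0.0, 1.0]])
Q = np.diag([0.1, 0.2])
P_post = np.array([[1.5, 0.2], [0.2, 0.8]])    # fused P_{k-1|k-1}

P_prior = F @ P_post @ F.T + Q                 # global P_{k|k-1}
P_local = F @ (S * P_post) @ F.T + S * Q       # local P^s_{k|k-1} = S * P_{k|k-1}

Y = sum(np.linalg.inv(P_local) for _ in range(S))   # convex combination
print(np.allclose(np.linalg.inv(Y), P_prior))       # → True
```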

#### 8.2. Information DKF filtering

Kalman Filters - Theory for Advanced Applications


For the filtering, it is assumed that each sensor has computed its local information contribution parameters $\mathbf{i}_k^s$ and $I_k^s$ from its own sensor model and, in addition, the information matrix contributions $I_k^l$ from all remote sensors $l$ by using the individual sensor models, which are again assumed to be known. Then, the information state is updated via:

$$\begin{aligned} \left(\mathbf{P}\_{k|k}\right)^{-1}\mathbf{x}\_{k|k} &= \left(P\_{k|k-1}\right)^{-1}\mathbf{x}\_{k|k-1} + \sum\_{s=1}^{S} \mathbf{i}\_{k}^{s} \\ &= \sum\_{s=1}^{S} \left(P\_{k|k-1}^{s}\right)^{-1} \mathbf{x}\_{k|k-1}^{s} + \mathbf{i}\_{k}^{s}. \end{aligned} \tag{41}$$

As a direct consequence, the updated parameters of the local processors follow the standard IF filtering equations:

$$\left(P\_{k|k}^{s}\right)^{-1}\mathbf{x}\_{k|k}^{s} = \left(P\_{k|k-1}^{s}\right)^{-1}\mathbf{x}\_{k|k-1}^{s} + \mathbf{i}\_{k}^{s}.\tag{42}$$

For the globalized information matrix, the remote information parameters from the sensor models are used:

$$\begin{aligned} \left(P_{k|k}\right)^{-1} &= \left(P_{k|k-1}\right)^{-1} + \sum_{s=1}^{S} I_k^s \\ &= \sum_{s=1}^{S} \left\{ \left(P_{k|k-1}^s\right)^{-1} + I_k^s \right\}, \\ \left(P_{k|k}^s\right)^{-1} &= \left(P_{k|k-1}^s\right)^{-1} + I_k^s \end{aligned} \tag{43}$$

Distributed Kalman Filter

http://dx.doi.org/10.5772/intechopen.71941


It is important to note that the local processing nodes compute both the local pseudo information matrix $\left(P^s_{k|k}\right)^{-1}$ and the global fused error covariance $P_{k|k}$, which is required for the prediction to the next time step.
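To make the exactness of this scheme concrete, the following toy simulation (our own sketch with illustrative models, not code from the chapter) runs two local nodes that each process only their own measurement, track the global error covariance from the known sensor models, and are fused at the FC by the convex combination of Eq. (37); the result matches a centralized information filter:

```python
import numpy as np

# Toy end-to-end run of the information-form DKF (Sections 8.1-8.2).
# Each node s predicts with S-fold inflated covariance, updates with only
# its own measurement, and tracks the global covariance from the known
# sensor models. All numerical values are illustrative.
rng = np.random.default_rng(1)
n, S, steps = 2, 2, 5
F = np.array([[1.0, 1.0], [0.0, 1.0]])
Q = 0.05 * np.eye(n)
H = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]
R = [np.array([[0.4]]), np.array([[0.3]])]
I_all = lambda: sum(H[r].T @ np.linalg.solve(R[r], H[r]) for r in range(S))

x_true = np.array([0.0, 1.0])
x_c, P_c = np.zeros(n), 10.0 * np.eye(n)       # centralized reference filter
x_s = [x_c.copy() for _ in range(S)]           # local states
P_s = [S * P_c for _ in range(S)]              # inflated local covariances
P_g = P_c.copy()                               # global covariance, known to all nodes

for _ in range(steps):
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(n), Q)
    z = [H[s] @ x_true + np.sqrt(R[s][0, 0]) * rng.standard_normal(1) for s in range(S)]

    # centralized information filter over all sensors
    x_c, P_c = F @ x_c, F @ P_c @ F.T + Q
    Y = np.linalg.inv(P_c) + I_all()
    y = np.linalg.inv(P_c) @ x_c + sum(H[s].T @ np.linalg.solve(R[s], z[s]) for s in range(S))
    x_c, P_c = np.linalg.solve(Y, y), np.linalg.inv(Y)

    # local nodes: globalized prediction, own-measurement information update
    P_g_prior = F @ P_g @ F.T + Q
    for s in range(S):
        x_s[s] = F @ (S * P_g) @ np.linalg.solve(P_s[s], x_s[s])
        P_s[s] = S * P_g_prior
        Y_s = np.linalg.inv(P_s[s]) + H[s].T @ np.linalg.solve(R[s], H[s])
        y_s = np.linalg.inv(P_s[s]) @ x_s[s] + H[s].T @ np.linalg.solve(R[s], z[s])
        x_s[s], P_s[s] = np.linalg.solve(Y_s, y_s), np.linalg.inv(Y_s)
    P_g = np.linalg.inv(np.linalg.inv(P_g_prior) + I_all())

# FC fusion by convex combination (Eq. 37)
Y_f = sum(np.linalg.inv(P_s[s]) for s in range(S))
x_f = np.linalg.solve(Y_f, sum(np.linalg.inv(P_s[s]) @ x_s[s] for s in range(S)))
print(np.allclose(x_f, x_c), np.allclose(np.linalg.inv(Y_f), P_c))
```

Note that the local nodes never see each other's measurements; only the sensor models enter the globally tracked covariance, which is what limits this approach in practice.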

#### 9. Distributed accumulated state density filter

The DKF from Sections 7 and 8 can be considered a big step toward distributed state estimation, tracking, and information inference. However, in practical applications, the exact solution is often hindered by the fact that the exact remote sensor model parameters are unknown and can only be approximated based on local state estimates. The good news is that there is another exact solution based on the accumulated state density (ASD). The distributed ASD equations turn the spatial correlations into temporal correlations of successive states. Originally, the ASD equations were introduced to solve the out-of-sequence problem, that is, to handle delayed transmissions of measurements into an ongoing fusion process in an optimal manner. The temporal correlations can therefore be handled well by the ASD approach.

At first, let us introduce the ASD state $\mathbf{x}_{k:n}$ as

$$\mathbf{x}\_{k:n} = \begin{pmatrix} \mathbf{x}\_k^T, \dots, \mathbf{x}\_n^T \end{pmatrix}^T \tag{44}$$

where $t_k$ refers to the current time of the filtering process and $t_n$ refers to the initialization time. The ASD approach now considers the conditional joint density $p\left(\mathbf{x}_{k:n}|Z^k\right)$, that is, the posterior of the full trajectory of the target between $t_n$ and $t_k$. In particular, the individual state densities of a single instant of time can be obtained via marginalization. Also, it should be noted that the Rauch-Tung-Striebel (RTS) smoothing equations are inherently integrated in the ASD posterior, since all states are conditioned on the full set of measurements up to time $t_k$ [7].

A recursive computation of the ASD posterior can be achieved by using the Bayes theorem:

$$p\left(\mathbf{x}\_{k:n}|\mathbf{Z}^k\right) = \frac{p(\mathbf{Z}\_k|\mathbf{x}\_{k:n})p\left(\mathbf{x}\_{k:n}|\mathbf{Z}^{k-1}\right)}{p\left(\mathbf{Z}\_k|\mathbf{Z}^{k-1}\right)}\tag{45}$$

Since the measurements conditioned on the whole trajectory only depend on the state at time $t_k$, the joint likelihood function is given by:

$$p(Z\_k|\mathbf{x}\_{k:n}) = p(Z\_k|\mathbf{x}\_k) = \prod\_{s=1}^{S} p(z\_k^s|\mathbf{x}\_k) \tag{46}$$

The second factor can be reformulated as follows:


$$\begin{split} p\left(\mathbf{x}\_{k:n}|\mathbf{Z}^{k-1}\right) &= p\left(\mathbf{x}\_{k}|\mathbf{x}\_{k-1:n}, \mathbf{Z}^{k-1}\right) p\left(\mathbf{x}\_{k-1:n}|\mathbf{Z}^{k-1}\right) \\ &= p(\mathbf{x}\_{k}|\mathbf{x}\_{k-1}) p\left(\mathbf{x}\_{k-1:n}|\mathbf{Z}^{k-1}\right) \end{split} \tag{47}$$

where we have used the Markov property of the system in the last equation. This recursive representation can now be repeated on the term $p\left(\mathbf{x}_{k-1:n}|Z^{k-1}\right)$. A successive application of this procedure yields

$$\begin{split} p(\mathbf{x}\_{k:n}|\mathbf{Z}^{k}) & \propto \prod\_{s=1}^{S} \{ p(\mathbf{z}\_{k}^{s}|\mathbf{x}\_{k}) \} p(\mathbf{x}\_{k}|\mathbf{x}\_{k-1}) \prod\_{s=1}^{S} \{ p(\mathbf{z}\_{k-1}^{s}|\mathbf{x}\_{k-1}) \} p(\mathbf{x}\_{k-1}|\mathbf{x}\_{k-2}) \cdots \\ & \prod\_{s=1}^{S} \{ p(\mathbf{z}\_{n+1}^{s}|\mathbf{x}\_{n+1}) \} p(\mathbf{x}\_{n+1}|\mathbf{x}\_{n}) p(\mathbf{x}\_{n}|\mathbf{Z}^{n}) \end{split} \tag{48}$$

where we have neglected the normalization constant in the denominator. Filling in our Gaussian models and using the factorization of the transition model from the equation above yields

$$p(\mathbf{x}\_{k:n}|Z^k) \propto \prod\_{l=n+1}^{k} \prod\_{s=1}^{S} \left\{ N(\mathbf{z}\_l^s; H\_l^s \mathbf{x}\_l, R\_l^s) N(\mathbf{x}\_l; F\_{l|l-1} \mathbf{x}\_{l-1}, \mathbf{S} \mathbf{Q}\_l) \right\} \cdot p(\mathbf{x}\_n|Z^n) \tag{49}$$

Since the initial density is usually based on a first measurement, we can assume that it factorizes into independent local track starts:

$$p(\mathbf{x}\_n|Z^n) \propto \prod\_{s=1}^S N\left(\mathbf{x}\_n; \mathbf{x}\_{n|n}^s, P\_{n|n}^s\right) \tag{50}$$

When the posterior is fully factorized in the number of sensors and in the time steps, each processing node can compute the resulting ASD Gaussian with mean $\mathbf{x}^s_{k:n|k}$ and covariance matrix $P^s_{k:n|k}$ [7]:

$$p(\mathbf{x}\_{k:n}|z^k) \propto \prod\_{s=1}^{S} N\left(\mathbf{x}\_{k:n}; \mathbf{x}\_{k:n|k}^s, P\_{k:n|k}^s\right) \tag{51}$$

where the parameters are given by:

$$\begin{aligned} \mathbf{x}_{k:n|k}^{s} &= \left(\mathbf{x}_{k|k}^{s\,\mathrm{T}},\ldots,\mathbf{x}_{n|k}^{s\,\mathrm{T}}\right)^{\mathrm{T}}, \\ P_{k:n|k}^{s} &= \begin{pmatrix} P_{k|k}^{s} & P_{k|k}^{s}W_{k-1|k}^{s\,\mathrm{T}} & P_{k|k}^{s}W_{k-2|k}^{s\,\mathrm{T}} & \cdots & P_{k|k}^{s}W_{n|k}^{s\,\mathrm{T}} \\ W_{k-1|k}^{s}P_{k|k}^{s} & P_{k-1|k}^{s} & P_{k-1|k}^{s}W_{k-2|k-1}^{s\,\mathrm{T}} & \cdots & P_{k-1|k}^{s}W_{n|k-1}^{s\,\mathrm{T}} \\ W_{k-2|k}^{s}P_{k|k}^{s} & W_{k-2|k-1}^{s}P_{k-1|k}^{s} & P_{k-2|k}^{s} & \cdots & P_{k-2|k}^{s}W_{n|k-2}^{s\,\mathrm{T}} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ W_{n|k}^{s}P_{k|k}^{s} & W_{n|k-1}^{s}P_{k-1|k}^{s} & \cdots & W_{n|n+1}^{s}P_{n+1|k}^{s} & P_{n|k}^{s} \end{pmatrix}. \end{aligned}$$

We have used a short notation such that $\mathbf{x}_{l|k} = \mathrm{E}\left[\mathbf{x}_l|Z^k\right]$ are the smoothed state estimates and $P_{l|k} = \mathrm{cov}\left(\mathbf{x}_l|Z^k\right)$ are the corresponding covariances, which result from the Rauch-Tung-Striebel equations. Also, the combined retrodiction gain matrices are known from the RTS smoother:

$$\begin{aligned} W_{l|k} &= \prod_{i=l}^{k-1} W_{i|i+1} \\ W_{i|i+1} &= P_{i|i}\, F_{i+1|i}^{\mathrm{T}} \left(P_{i+1|i}\right)^{-1} \end{aligned} \tag{52}$$
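The retrodiction gains also explain the off-diagonal structure of the ASD covariance above: conditioning the joint density of a past state and the current state on a new measurement produces cross terms of the form $W P$. A small numerical sketch (illustrative numbers, not from the chapter):

```python
import numpy as np

# Check of the ASD cross-covariance structure: smoothing one step by batch
# Gaussian conditioning gives a cross term W_{k-1|k} P_{k|k}, with W the
# retrodiction gain of Eq. (52). All numbers are illustrative.
F = np.array([[1.0, 0.5], [0.0, 1.0]])
Q = 0.1 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.2]])
P = np.array([[1.2, 0.1], [0.1, 0.9]])               # P_{k-1|k-1}

P_pred = F @ P @ F.T + Q                              # P_{k|k-1}
W = P @ F.T @ np.linalg.inv(P_pred)                   # W_{k-1|k} (Eq. 52)

# joint covariance of (x_{k-1}, x_k) given Z^{k-1}, then condition on
# z_k = H x_k + v via a measurement matrix that picks the current state
joint = np.block([[P, P @ F.T], [F @ P, P_pred]])
H_asd = np.hstack([np.zeros((1, 2)), H])
S = H_asd @ joint @ H_asd.T + R
joint_post = joint - joint @ H_asd.T @ np.linalg.solve(S, H_asd @ joint)

# standard filtered covariance for comparison
P_post = P_pred - P_pred @ H.T @ np.linalg.solve(H @ P_pred @ H.T + R, H @ P_pred)
print(np.allclose(joint_post[:2, 2:], W @ P_post))    # cross block = W_{k-1|k} P_{k|k}
```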


Thus, when the FC receives the local ASD parameters, the optimal fused estimate can be obtained via the convex combination:

$$\begin{aligned} \mathbf{x}_{k:n|k} &= P_{k:n|k} \sum_{s=1}^{S} \left( P_{k:n|k}^{s} \right)^{-1} \mathbf{x}_{k:n|k}^{s}, \\ P_{k:n|k} &= \left( \sum_{s=1}^{S} \left( P_{k:n|k}^{s} \right)^{-1} \right)^{-1} \end{aligned} \tag{53}$$

For a continuous state estimation process, it is convenient to formulate the distributed ASD solution in terms of a prediction-filtering cycle.

#### 9.1. Distributed ASD prediction

For the prediction step, it is assumed that the local processing node $s$ has computed the previous filtering parameters $\mathbf{x}^s_{k-1:n|k-1}$ and $P^s_{k-1:n|k-1}$, which refer to time $t_{k-1}$. Then, the prior parameters are given by:

$$\begin{aligned} \mathbf{x}_{k:n|k-1}^{s} &= \left(\mathbf{x}_{k|k-1}^{s\,\mathrm{T}},\ \mathbf{x}_{k-1:n|k-1}^{s\,\mathrm{T}}\right)^{\mathrm{T}} \\ P_{k:n|k-1}^{s} &= \begin{pmatrix} P_{k|k-1}^{s} & P_{k|k-1}^{s}\,\mathcal{W}_{k-1:n}^{s\,\mathrm{T}} \\ \mathcal{W}_{k-1:n}^{s}\,P_{k|k-1}^{s} & P_{k-1:n|k-1}^{s} \end{pmatrix} \\ \mathbf{x}_{k|k-1}^{s} &= F_{k|k-1}\,\mathbf{x}_{k-1|k-1}^{s} \end{aligned} \tag{54}$$

$$\begin{aligned} P_{k|k-1}^{s} &= F_{k|k-1}\, P_{k-1|k-1}^{s}\, F_{k|k-1}^{\mathrm{T}} + S Q_k \\ \mathcal{W}_{k-1:n}^{s} &= \begin{pmatrix} W_{k-1|k}^{s} \\ W_{k-2|k}^{s} \\ \vdots \\ W_{n|k}^{s} \end{pmatrix} \end{aligned}$$

#### 9.2. Distributed ASD filtering


Since the prior is factorized in form of a product representation and the current measurements from time $t_k$ are mutually uncorrelated, local Kalman filters can be applied to obtain the posterior parameters. Here, $\Pi_k$ denotes the projection that selects the current state from the ASD vector, that is, $\mathbf{x}_k = \Pi_k \mathbf{x}_{k:n}$:

$$\begin{aligned} \mathbf{x}_{k:n|k}^{s} &= \mathbf{x}_{k:n|k-1}^{s} + W_{k:n|k}^{s}\left(\mathbf{z}_k^s - H_k^s \Pi_k \mathbf{x}_{k:n|k-1}^s\right) \\ P_{k:n|k}^{s} &= P_{k:n|k-1}^{s} - W_{k:n|k}^{s}\, S_k^s\, W_{k:n|k}^{s\,\mathrm{T}} \\ S_k^s &= H_k^s \Pi_k P_{k:n|k-1}^s \Pi_k^{\mathrm{T}} H_k^{s\,\mathrm{T}} + R_k^s \\ W_{k:n|k}^s &= P_{k:n|k-1}^s \Pi_k^{\mathrm{T}} H_k^{s\,\mathrm{T}} \left(S_k^s\right)^{-1} \end{aligned} \tag{55}$$
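A minimal sketch of one such local update, with an explicit projection matrix $\Pi_k$ selecting the current state out of the stacked ASD vector (dimensions and numerical values are illustrative assumptions, not from the chapter):

```python
import numpy as np

# One ASD filtering step in the style of Eq. (55). Pi selects the current
# state x_k from the stacked vector (x_k^T, ..., x_n^T)^T.
d, N = 2, 3                                # state dim, stacked time steps
Pi = np.hstack([np.eye(d), np.zeros((d, (N - 1) * d))])

rng = np.random.default_rng(7)
A = rng.standard_normal((d * N, d * N))
P_prior = A @ A.T + d * N * np.eye(d * N)  # some SPD ASD prior covariance
x_prior = rng.standard_normal(d * N)
H = np.array([[1.0, 0.0]])                 # measures first component of x_k
R = np.array([[0.3]])
z = np.array([0.5])

S = H @ Pi @ P_prior @ Pi.T @ H.T + R      # innovation covariance
W = P_prior @ Pi.T @ H.T @ np.linalg.inv(S)
x_post = x_prior + W @ (z - H @ Pi @ x_prior)
P_post = P_prior - W @ S @ W.T

# The gain W has a row block per stacked state: filtering the current state
# and RTS-smoothing all past states happen in a single update.
print(x_post.shape, np.all(np.linalg.eigvalsh(P_post) > 0))
```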

### 10. Conclusion

In this chapter, we have introduced the least squares solution to the track-to-track fusion problem, where cross-covariances of the track estimation errors are required. Neglecting the cross-covariances has led us to the Naïve fusion, a simple but powerful fusion algorithm for practical applications. By recursive computation of the cross-covariances, we have seen that they primarily depend on the process noise of the state transition kernel. Since a centralized computation of the cross-covariances is infeasible in practical applications, more sophisticated solutions are required for optimal fusion results. The distributed Kalman filter, which uses the product representation to keep the local parameters decorrelated, achieves this. However, this approach only works if the local processors know all measurement models at each time step. The distributed accumulated state density filter, in turn, uses the temporal correlations to factorize the global posterior density. This approach does not require remote sensor models and is therefore well suited for extensions with measurement ambiguity or nonlinear measurement functions.

In the literature, one can find further extensions of the distributed Kalman filter that overcome the lack of knowledge of the remote sensor models. In [8] and the references therein, a debiasing matrix is introduced to compensate for globally biased gain matrices of the local filters. An application of tracklet fusion based on the distributed accumulated state density filter can be found in [9]. Finally, in [10], the information filter formulation of the distributed Kalman filter was also extended to scenarios with input information on the transition process.

### Appendix

For the derivations in this chapter, the following inversion formula for block matrices is used:

$$
\begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} \left( A - BD^{-1}C \right)^{-1} & -\left( A - BD^{-1}C \right)^{-1} BD^{-1} \\ -D^{-1}C \left( A - BD^{-1}C \right)^{-1} & D^{-1} + D^{-1}C \left( A - BD^{-1}C \right)^{-1} BD^{-1} \end{pmatrix}
$$

$$
= \begin{pmatrix} A^{-1} + A^{-1} B \left( D - C A^{-1} B \right)^{-1} C A^{-1} & -A^{-1} B \left( D - C A^{-1} B \right)^{-1} \\ -\left( D - C A^{-1} B \right)^{-1} C A^{-1} & \left( D - C A^{-1} B \right)^{-1} \end{pmatrix}. \tag{56}
$$

Comparing the two representations block by block yields the identities:

$$\begin{aligned} \left(A - BD^{-1}C\right)^{-1} &= A^{-1} + A^{-1}B\left(D - CA^{-1}B\right)^{-1}CA^{-1} \\ -\left(A - BD^{-1}C\right)^{-1}BD^{-1} &= -A^{-1}B\left(D - CA^{-1}B\right)^{-1} \\ -D^{-1}C\left(A - BD^{-1}C\right)^{-1} &= -\left(D - CA^{-1}B\right)^{-1}CA^{-1} \\ D^{-1} + D^{-1}C\left(A - BD^{-1}C\right)^{-1}BD^{-1} &= \left(D - CA^{-1}B\right)^{-1} \end{aligned} \tag{57}$$
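Both representations of the block inverse in Eq. (56) can be spot-checked numerically (random SPD test matrix; our own sketch):

```python
import numpy as np

# Numeric spot-check of the two block inversion forms of Eq. (56).
rng = np.random.default_rng(3)
M = rng.standard_normal((5, 5))
M = M @ M.T + 5 * np.eye(5)        # SPD, so all required sub-blocks are invertible
A, B, C, D = M[:3, :3], M[:3, 3:], M[3:, :3], M[3:, 3:]

SA = np.linalg.inv(A - B @ np.linalg.inv(D) @ C)   # inverse Schur complement of D
SD = np.linalg.inv(D - C @ np.linalg.inv(A) @ B)   # inverse Schur complement of A

form1 = np.block([[SA, -SA @ B @ np.linalg.inv(D)],
                  [-np.linalg.inv(D) @ C @ SA,
                   np.linalg.inv(D) + np.linalg.inv(D) @ C @ SA @ B @ np.linalg.inv(D)]])
form2 = np.block([[np.linalg.inv(A) + np.linalg.inv(A) @ B @ SD @ C @ np.linalg.inv(A),
                   -np.linalg.inv(A) @ B @ SD],
                  [-SD @ C @ np.linalg.inv(A), SD]])

print(np.allclose(form1, np.linalg.inv(M)), np.allclose(form1, form2))
```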

These identities can be verified directly: writing the inverse as a block matrix with entries $E$, $F$, $G$, and $H$, both products with the original matrix must equal the identity,

$$
\begin{pmatrix} A & B \\ C & D \end{pmatrix} \begin{pmatrix} E & F \\ G & H \end{pmatrix} = \begin{pmatrix} I & O \\ O & I \end{pmatrix}
\quad\text{and}\quad
\begin{pmatrix} E & F \\ G & H \end{pmatrix} \begin{pmatrix} A & B \\ C & D \end{pmatrix} = \begin{pmatrix} I & O \\ O & I \end{pmatrix},
$$

which gives the block equations:

$$\begin{aligned} AE + BG &= I \\ AF + BH &= O \\ CE + DG &= O \\ CF + DH &= I \end{aligned} \tag{58}$$

$$\begin{aligned} EA + FC &= I \\ EB + FD &= O \\ GA + HC &= O \\ GB + HD &= I \end{aligned} \tag{59}$$

Finally, the following product formula for two Gaussian densities is used:

$$\mathcal{N}(\mathbf{x}; \mathbf{y}, P)\,\mathcal{N}(\mathbf{z}; H\mathbf{x}, R) = \mathcal{N}(\mathbf{x}; \overline{\mathbf{y}}, \overline{P})\,\mathcal{N}(\mathbf{z}; \overline{\mathbf{z}}, S)\tag{60}$$
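Eq. (60) can be checked numerically; the parameters $\overline{\mathbf{y}}$, $\overline{P}$, $\overline{\mathbf{z}}$, and $S$ below are taken to be the standard Kalman update quantities (an assumption here, since their definitions are not part of this excerpt):

```python
import numpy as np

# Check of the Gaussian product formula (Eq. 60) at a random point, with
# S = H P H^T + R, K = P H^T S^-1, ybar = y + K (z - H y),
# Pbar = P - K S K^T, zbar = H y (assumed standard Kalman relations).
def gauss(x, m, C):
    d = x - m
    k = len(m)
    return np.exp(-0.5 * d @ np.linalg.solve(C, d)) / np.sqrt((2 * np.pi) ** k * np.linalg.det(C))

rng = np.random.default_rng(4)
y = np.array([1.0, -0.5]); P = np.array([[1.0, 0.2], [0.2, 0.5]])
H = np.array([[1.0, 1.0]]); R = np.array([[0.3]])
x = rng.standard_normal(2); z = np.array([0.7])

S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
ybar = y + K @ (z - H @ y)
Pbar = P - K @ S @ K.T

lhs = gauss(x, y, P) * gauss(z, H @ x, R)
rhs = gauss(x, ybar, Pbar) * gauss(z, H @ y, S)
print(np.isclose(lhs, rhs))  # → True
```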


[6] Chong CY, Mori S. Optimal Fusion For Non-Zero Process Noise. Proceedings of the 16th International Conference on Information Fusion, Istanbul. 2013

**Chapter 14**

**Consensus-Based Distributed Filtering for GNSS**

Amir Khodabandeh, Peter J.G. Teunissen and Safoora Zaminpardaz

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/intechopen.71138

Abstract

Kalman filtering in its distributed information form is reviewed and applied to a network of receivers tracking Global Navigation Satellite Systems (GNSS). We show, by employing consensus-based data-fusion rules between GNSS receivers, how the consensus-based Kalman filter (CKF) of individual receivers can deliver GNSS parameter solutions with a precision performance comparable to that of their network-derived, fusion-center-dependent counterparts. This is relevant as, in the near future, the proliferation of low-cost receivers will give rise to a significant increase in the number of GNSS users. With the CKF or other distributed filtering techniques, GNSS users can therefore achieve high-precision solutions without the need to rely on a centralized computing center.

Keywords: distributed filtering, consensus-based Kalman filter (CKF), global navigation satellite systems (GNSS), GNSS networks, GNSS ionospheric observables

### 1. Introduction

Kalman filtering in its decentralized and distributed forms has received increasing attention in the sensor network community and has been extensively studied in recent years, see e.g. [1–8]. While in the traditional centralized Kalman filter setup all sensor nodes have to send their measurements to a computing (fusion) center to obtain the state estimate, in the distributed filtering schemes the nodes only share limited information with their neighboring nodes (i.e. a subset of all other nodes) and yet obtain state estimates that are comparable to that of the centralized filter in a minimum-mean-squared-error sense. This particular feature of the distributed filters would potentially make the data communication between the nodes cost-effective and develop the nodes' capacity to perform parallel computations.

Next to sensor networks, distributed filtering can therefore benefit several other applications such as formation flying of aerial vehicles [9], cooperative robotics [10] and disciplines that concern the Global Navigation Satellite Systems (GNSS). The latter is the topic of this present

© 2018 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.