118 Nonlinear Systems - Design, Analysis, Estimation and Control

**Keywords:** Kalman filter, sigma point, state estimation, parameter estimation, nonlinear system

**1. Introduction**

Dynamic state‐space models [1–3] are useful for describing data in many different areas, such as engineering [4–8], biology [9, 10], chemistry [11, 12], and environmental science [8, 13–15]. Estimating the state and model parameters from measurements of the observation process is an essential task when analyzing data with state‐space models. Bayesian filtering represents a solution of considerable importance for this type of problem, as demonstrated by the many existing algorithms based on it [16–25]. The Kalman filter (KF) [26–29] has been extensively utilized in several scientific applications, such as control, machine learning, and neuroscience. The KF provides the optimal solution [28] when the model describing the system is linear and Gaussian. However, the KF is limited when the model is nonlinear or involves non‐Gaussian assumptions. In order to relax these assumptions, the extended Kalman filter (EKF) [26, 27, 30–32], the unscented Kalman filter (UKF) [33–36], the central difference Kalman filter (CDKF) [37, 38], the square‐root unscented Kalman filter (SRUKF) [39, 40], the square‐root central difference Kalman filter (SRCDKF) [41], the iterated unscented Kalman filter (IUKF) [42, 43], the iterated central difference Kalman filter (ICDKF) [44, 45], the iterated square‐root unscented Kalman filter (ISRUKF) [46], and the iterated square‐root central difference Kalman filter (ISRCDKF) [47] have been developed. The EKF [26] linearizes the model describing the system to approximate the covariance matrix of the state vector. However, the EKF does not always perform well, especially for highly nonlinear or complex models. Instead of linearizing the model, a class of filters called the sigma‐point Kalman filters (SPKFs) [48] uses a statistical linearization technique, which linearizes a nonlinear function of a random variable via a linear regression.

This regression is carried out between *n* points drawn from the prior distribution of the random variable and the nonlinear functional evaluations of those points. The sigma‐point family of filters addresses the issues of the EKF by making use of a deterministic sampling approach. In this approach, the state distribution is approximated and represented by a set of deterministically chosen, weighted sample points which capture the true mean and covariance of the state vector. These points are propagated through the true nonlinear system and capture the posterior mean and covariance matrix of the state vector accurately to the second order (of the Taylor series expansion) for any nonlinearity, and to the third order for Gaussian inputs. As part of the SPKF family, the UKF [26, 27, 33] has been developed. It uses the unscented transformation, in which a set of samples (sigma points) is selected and propagated through the nonlinear model, providing accurate approximations of the mean and covariance matrix of the state vector. However, the UKF uses a relatively small number of sigma points, which may not be able to represent complicated distributions. Another filter in the SPKF family is the central difference Kalman filter (CDKF) [37, 38], which uses the Stirling polynomial interpolation formula. This filter has the benefit over the UKF of using only one parameter when generating the sigma points.

**2. State estimation problem**

Next, we present the formulation of the state estimation problem.

## **2.1. Problem description and formulation**

The state estimation problem for a complex nonlinear system is described as follows:

$$\begin{aligned} \dot{x} &= g(x, u, \theta, w) \\ y &= l(x, u, \theta, v) \end{aligned} \tag{1}$$

where $x \in \mathbb{R}^n$ is the state vector, $y \in \mathbb{R}^m$ is the measurement vector, $\theta \in \mathbb{R}^q$ is the unknown parameter vector, $u \in \mathbb{R}^p$ is the input vector, $w \in \mathbb{R}^n$ and $v \in \mathbb{R}^m$ are respectively the process and measurement noise vectors, and $g$ and $l$ are nonlinear differentiable functions. The discretization of the model (1) is presented as follows:

$$\begin{aligned} x\_k &= f(x\_{k-1}, u\_{k-1}, \theta\_{k-1}, w\_{k-1}) \\ y\_k &= h(x\_k, u\_k, \theta\_k, v\_k) \end{aligned} \tag{2}$$

which describes the state variables at a time step $k$ in terms of their values at the previous time step $k-1$. Since we are interested in estimating the state vector $x\_k$ as well as the parameter vector $\theta\_k$, the parameter vector is assumed to evolve as follows:

$$\theta\_k = \theta\_{k-1} + \gamma\_{k-1} \tag{3}$$

This means that the parameter vector corresponds to a stationary process with an identity transition matrix, driven by the white noise $\gamma\_{k-1}$. In order to include the parameter vector in the state estimation problem, let us define a new state vector $z\_k$ that augments the state vector and the parameter vector as follows:

$$z\_{k} = \begin{bmatrix} x\_{k} \\ \theta\_{k} \end{bmatrix} = \begin{bmatrix} f(x\_{k-1}, u\_{k-1}, \theta\_{k-1}, w\_{k-1}) \\ \theta\_{k-1} + \gamma\_{k-1} \end{bmatrix} \tag{4}$$

where $z\_k \in \mathbb{R}^{n+q}$. Also, the augmented noise vector is defined as:

$$\varepsilon\_{k-1} = \begin{bmatrix} w\_{k-1} \\ \gamma\_{k-1} \end{bmatrix} \tag{5}$$

The model (2) can be written as:

$$z\_k = \mathcal{F}(z\_{k-1}, u\_{k-1}, \varepsilon\_{k-1}) \tag{6}$$

$$y\_k = \mathcal{R}(z\_k, u\_k, v\_k) \tag{7}$$

where $\mathcal{F}$ and $\mathcal{R}$ are differentiable nonlinear functions. Thus, the objective here is to estimate the augmented state vector $z\_k$, given the measurement vector $y\_k$.
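As a concrete illustration of the augmentation in Eqs. (3)–(7), the following sketch builds $\mathcal{F}$ and $\mathcal{R}$ for a hypothetical scalar system (the model $x\_k = \theta x\_{k-1} + w\_{k-1}$, $y\_k = x\_k^2 + v\_k$ is our own toy example, not one from the chapter):

```python
import numpy as np

# Hypothetical scalar system: x_k = theta * x_{k-1} + w_{k-1}, y_k = x_k^2 + v_k.
# The augmented state z = [x, theta] follows Eq. (6): z_k = F(z_{k-1}, eps_{k-1}).

def F(z, eps):
    """Augmented transition: state update plus random-walk parameter, Eqs. (3)-(4)."""
    x, theta = z
    w, gamma = eps          # eps = [w, gamma] is the augmented noise of Eq. (5)
    return np.array([theta * x + w,    # f(x, theta, w)
                     theta + gamma])   # theta random walk, Eq. (3)

def R(z, v):
    """Augmented observation, Eq. (7): the measurement sees the state only."""
    x, _theta = z
    return x**2 + v

z = np.array([1.0, 0.9])            # [x_0, theta_0]
z_next = F(z, np.zeros(2))          # noise-free step: x -> 0.9, theta unchanged
y = R(z_next, 0.0)                  # noise-free measurement of the new state
```

Running the filters of the next section on $z\_k$ then estimates the state and the parameter jointly.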

### **3. Description of state estimation methods**

#### **3.1. UKF**

The UKF is an SPKF that uses the unscented transformation. This transformation is a method for calculating the statistics of a random variable that undergoes a nonlinear mapping. It is built on the premise that "it is easier to approximate a probability distribution than an arbitrary nonlinear function".

The state distribution is represented by a Gaussian random variable (GRV), approximated by a set of deterministically chosen points. These points capture the true mean and covariance of the GRV, and they capture the posterior mean and covariance accurately to the second order for any nonlinearity and to the third order for Gaussian inputs. Suppose that a GRV $z$, characterized by a mean $\bar{z}$ and a covariance $P\_z$, is transformed by a nonlinear function $y = f(z)$. To obtain the statistics of $y$, a set of $2L + 1$ sigma vectors is defined as follows:

$$\begin{aligned} Z\_0 &= \bar{z} \\ Z\_i &= \bar{z} + \left(\sqrt{(L+\lambda)P\_z}\right)\_i, \qquad &i = 1, \dots, L \\ Z\_i &= \bar{z} - \left(\sqrt{(L+\lambda)P\_z}\right)\_{i-L}, \qquad &i = L+1, \dots, 2L \end{aligned} \tag{8}$$

where $L$ is the dimension of the state $z$, $\lambda = \alpha^2 (L + \kappa) - L$ is a scaling parameter, and $\left(\sqrt{(L+\lambda)P\_z}\right)\_i$ denotes the $i$th column of the matrix square root. The constant $\alpha$ ($10^{-4} \leq \alpha \leq 1$) defines the spread of the sigma points around $\bar{z}$, and the constant $\kappa$ is a secondary scaling parameter, usually set to zero or to $3 - L$ [30].

Then, these sigma‐points are propagated through the nonlinear function,

$$Y\_i = f(Z\_i), \quad i = 0, \dots, 2L \tag{9}$$

Then the mean and covariance matrix of $y$ can be approximated by the weighted sample mean and covariance of the transformed sigma points:

$$\bar{y} = \sum\_{i=0}^{2L} W\_i^{(m)} Y\_i \quad \text{and} \quad P\_{\tilde{y}\_k} = \sum\_{i=0}^{2L} W\_i^{(c)} (Y\_i - \bar{y})(Y\_i - \bar{y})^T \tag{10}$$

where the weights are given by


$$\begin{aligned} W\_0^{(m)} &= \frac{\lambda}{\lambda + L} \\ W\_0^{(c)} &= \frac{\lambda}{\lambda + L} + (1 - \alpha^2 + \beta) \\ W\_i^{(m)} &= W\_i^{(c)} = \frac{1}{2(\lambda + L)}, \qquad i = 1, \dots, 2L \end{aligned} \tag{11}$$

The parameter $\beta$ is used to incorporate prior knowledge about the distribution of $z$.

The algorithm of the UKF includes two steps: prediction and update. In the prediction step, we calculate the predicted state estimate $\hat{z}\_k^-$ and the predicted estimate covariance $P\_k^-$. In the update step, we calculate the updated state estimate $\hat{z}\_k$ and the updated estimate covariance $P\_k$, after computing the innovation residual and the optimal Kalman gain $K\_k$.

The UKF technique is summarized in Algorithm 1.
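The unscented transformation of Eqs. (8)–(11) can be sketched in a few lines of NumPy. This is a minimal illustration, not the chapter's full UKF; the linear sanity check at the end uses a toy matrix `A` of our own choosing, for which the transform is exact:

```python
import numpy as np

def unscented_transform(f, z_bar, Pz, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a mean and covariance through f using Eqs. (8)-(11)."""
    L = z_bar.size
    lam = alpha**2 * (L + kappa) - L
    S = np.linalg.cholesky((L + lam) * Pz)        # columns = scaled matrix square root
    # Eq. (8): the 2L+1 sigma points.
    Z = np.column_stack([z_bar, z_bar[:, None] + S, z_bar[:, None] - S])
    # Eq. (11): weights for the mean and the covariance.
    Wm = np.full(2 * L + 1, 1.0 / (2 * (L + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (L + lam)
    Wc[0] = lam / (L + lam) + (1 - alpha**2 + beta)
    # Eq. (9): propagate; Eq. (10): weighted mean and covariance.
    Y = np.array([f(Z[:, i]) for i in range(2 * L + 1)]).T
    y_bar = Y @ Wm
    D = Y - y_bar[:, None]
    Py = (Wc * D) @ D.T
    return y_bar, Py

# Sanity check with a linear map A: the unscented transform is then exact.
A = np.array([[1.0, 0.5], [0.0, 1.0]])
z_bar = np.array([1.0, 2.0])
Pz = np.eye(2)
y_bar, Py = unscented_transform(lambda z: A @ z, z_bar, Pz)
```

For a linear map the result reduces to $\bar{y} = A\bar{z}$ and $P\_y = A P\_z A^T$, which is a convenient unit test for any implementation.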

#### **3.2. CDKF method**

The CDKF is another filter from the SPKF family. This filter is based on the Stirling polynomial interpolation formula instead of the unscented transformation used in the UKF. The CDKF performs similarly to, or better than, the UKF. However, it has an advantage over the UKF: it uses only one parameter instead of the three parameters of the UKF. The CDKF uses a symmetric set of $2L + 1$ sigma points, calculated as follows:

$$\begin{aligned} Z\_0 &= \hat{z} \\ Z\_i &= \hat{z} + \left(h\sqrt{P\_z}\right)\_i, \qquad &i = 1, \dots, L \\ Z\_i &= \hat{z} - \left(h\sqrt{P\_z}\right)\_{i-L}, \qquad &i = L+1, \dots, 2L \end{aligned} \tag{12}$$

where $L$ is the dimension of the state $z$, $h$ is a scaling parameter (the optimal value for Gaussian priors is $h = \sqrt{3}$), and $(\cdot)\_i$ indicates the $i$th column of the matrix.

These sigma‐points are propagated through the nonlinear function to form the set of the posterior sigma‐point,

$$Y\_i = f(Z\_i), \quad i = 0, \dots, 2L \tag{13}$$

From the above results, the Stirling approximation estimates of the mean $\hat{y}\_k$, covariance $P\_{\tilde{y}\_k}$, and cross‐covariance $P\_{z\_k y\_k}$ are obtained through a linear regression of the weighted points:

$$\hat{y}\_{k} = \sum\_{i=0}^{2L} W\_{i}^{(m)} Y\_{i} \tag{14}$$

$$P\_{\tilde{y}\_k} = \sum\_{i=1}^{L} \left[ W\_i^{(c\_1)} \left(Y\_{i,k|k-1} - Y\_{L+i,k|k-1}\right)^2 + W\_i^{(c\_2)} \left(Y\_{i,k|k-1} + Y\_{L+i,k|k-1} - 2Y\_{0,k|k-1}\right)^2 \right] \tag{15}$$

$$P\_{z\_k y\_k} = \sqrt{W\_1^{(c\_1)} P\_z} \left[ Y\_{1:L,k\mid k-1} - Y\_{L+1:2L,k\mid k-1} \right]^T \tag{16}$$

The set of corresponding weights $W\_i^{(m)}$ used to compute the posterior mean is defined as:

$$W\_0^{(m)} = \frac{h^2 - L}{h^2}, \qquad W\_i^{(m)} = \frac{1}{2h^2} \tag{17}$$

and the sets of corresponding weights $W\_i^{(c\_1)}$ and $W\_i^{(c\_2)}$, which are used to recover the covariance and the cross‐covariance, are defined as:

Nonlinear State and Parameter Estimation Using Iterated Sigma Point Kalman Filter: Comparative Studies http://dx.doi.org/10.5772/63728 123

$$W\_{i}^{(c\_1)} = \frac{1}{4h^2}, \qquad W\_{i}^{(c\_2)} = \frac{h^2 - 1}{4h^4}, \qquad i = 1, \dots, 2L \tag{18}$$

The CDKF technique is summarized in Algorithm 2.
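The sigma-point generation of Eq. (12) and the weights of Eqs. (17)–(18) can be sketched as follows; this is a minimal illustration with toy values for the prior mean and covariance, not the chapter's full CDKF:

```python
import numpy as np

def cdkf_sigma_points(z_hat, Pz, h=np.sqrt(3.0)):
    """Generate the 2L+1 CDKF sigma points of Eq. (12) and the
    weights of Eqs. (17)-(18); h = sqrt(3) is optimal for Gaussian priors."""
    L = z_hat.size
    S = h * np.linalg.cholesky(Pz)               # h times the matrix square root
    Z = np.column_stack([z_hat, z_hat[:, None] + S, z_hat[:, None] - S])
    Wm = np.full(2 * L + 1, 1.0 / (2 * h**2))    # Eq. (17), i >= 1
    Wm[0] = (h**2 - L) / h**2
    Wc1 = 1.0 / (4 * h**2)                       # Eq. (18)
    Wc2 = (h**2 - 1.0) / (4 * h**4)
    return Z, Wm, Wc1, Wc2

z_hat = np.array([0.0, 1.0])                     # toy prior mean
Pz = np.diag([1.0, 4.0])                         # toy prior covariance
Z, Wm, Wc1, Wc2 = cdkf_sigma_points(z_hat, Pz)

mean = Z @ Wm    # Eq. (14) with the identity map recovers the prior mean
```

Note that the mean weights sum to one, so applying Eq. (14) to the untransformed points returns the prior mean exactly, which is a useful sanity check.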

#### **3.3. SRUKF method**


One drawback of the UKF is that it requires the calculation of the matrix square root $S\_k = \sqrt{P\_k}$ at each time step. That is why a square‐root form of the UKF has been developed to reduce the computational complexity. In this method, the square‐root covariance matrix is propagated directly, avoiding the need to refactorize the covariance at each time step [34].

The SRUKF is initialized as follows:

$$\hat{z}\_0 = E\{z\_0\} \quad \text{and} \quad S\_0 = \text{chol}\left\{E\left[(z\_0 - \hat{z}\_0)(z\_0 - \hat{z}\_0)^T\right]\right\} \tag{19}$$

$$\Psi\_{k-1} = \begin{bmatrix} \hat{z}\_{k-1} & \hat{z}\_{k-1} + hS\_{k-1} & \hat{z}\_{k-1} - hS\_{k-1} \end{bmatrix} \tag{20}$$

#### Algorithm 1: UKF algorithm

• *Initialization step:*

$$\hat{z}\_0 = E\{z\_0\} \quad \text{and} \quad P\_{z\_0} = E\left[(z\_0 - \hat{z}\_0)(z\_0 - \hat{z}\_0)^T\right]$$

• *Prediction step:*

$$\begin{aligned} \Psi\_{k-1} &= \begin{bmatrix} \hat{z}\_{k-1} & \hat{z}\_{k-1} + \sqrt{(L+\lambda)P\_z} & \hat{z}\_{k-1} - \sqrt{(L+\lambda)P\_z} \end{bmatrix} \\ \Psi\_{k|k-1} &= f\left(\Psi\_{k-1}\right) \\ \hat{z}\_k^- &= \sum\_{i=0}^{2L} W\_i^{(m)} \Psi\_{i,k|k-1} \\ P\_k^- &= \sum\_{i=0}^{2L} W\_i^{(c)} \left(\Psi\_{i,k|k-1} - \hat{z}\_k^-\right)\left(\Psi\_{i,k|k-1} - \hat{z}\_k^-\right)^T \\ Y\_{k|k-1} &= h\left(\Psi\_{k|k-1}\right) \\ \hat{y}\_k^- &= \sum\_{i=0}^{2L} W\_i^{(m)} Y\_{i,k|k-1} \end{aligned}$$

• *Estimation (update) step:*

$$\begin{aligned} P\_{\tilde{y}\_k} &= \sum\_{i=0}^{2L} W\_i^{(c)} \left[Y\_{i,k|k-1} - \hat{y}\_k^-\right]\left[Y\_{i,k|k-1} - \hat{y}\_k^-\right]^T \\ P\_{z\_k y\_k} &= \sum\_{i=0}^{2L} W\_i^{(c)} \left[\Psi\_{i,k|k-1} - \hat{z}\_k^-\right]\left[Y\_{i,k|k-1} - \hat{y}\_k^-\right]^T \\ K\_k &= P\_{z\_k y\_k} P\_{\tilde{y}\_k}^{-1} \\ \hat{z}\_k &= \hat{z}\_k^- + K\_k \left(y\_k - \hat{y}\_k^-\right) \\ P\_k &= P\_k^- - K\_k P\_{\tilde{y}\_k} K\_k^T \end{aligned}$$

*Return the augmented state estimation* $\hat{z}\_k$.

#### Algorithm 2: CDKF algorithm

• *Initialization step:*

$$\hat{z}\_0 = E\{z\_0\} \quad \text{and} \quad P\_{z\_0} = E\left[(z\_0 - \hat{z}\_0)(z\_0 - \hat{z}\_0)^T\right]$$

• *Prediction step:*

$$\begin{aligned} \Psi\_{k-1} &= \begin{bmatrix} \hat{z}\_{k-1} & \hat{z}\_{k-1} + h\sqrt{P\_z} & \hat{z}\_{k-1} - h\sqrt{P\_z} \end{bmatrix} \\ \Psi\_{k|k-1} &= f\left(\Psi\_{k-1}\right) \\ \hat{z}\_k^- &= \sum\_{i=0}^{2L} W\_i^{(m)} \Psi\_{i,k|k-1} \\ P\_k^- &= \sum\_{i=1}^{L} \left[ W\_i^{(c\_1)} \left(\Psi\_{i,k|k-1} - \Psi\_{L+i,k|k-1}\right)^2 + W\_i^{(c\_2)} \left(\Psi\_{i,k|k-1} + \Psi\_{L+i,k|k-1} - 2\Psi\_{0,k|k-1}\right)^2 \right] \\ Y\_{k|k-1} &= h\left(\Psi\_{k|k-1}\right) \\ \hat{y}\_k^- &= \sum\_{i=0}^{2L} W\_i^{(m)} Y\_{i,k|k-1} \end{aligned}$$

• *Estimation (update) step:*

$$\begin{aligned} P\_{\tilde{y}\_k} &= \sum\_{i=1}^{L} \left[ W\_i^{(c\_1)} \left(Y\_{i,k|k-1} - Y\_{L+i,k|k-1}\right)^2 + W\_i^{(c\_2)} \left(Y\_{i,k|k-1} + Y\_{L+i,k|k-1} - 2Y\_{0,k|k-1}\right)^2 \right] \\ P\_{z\_k y\_k} &= \sqrt{W\_1^{(c\_1)} P\_k^-} \left[ Y\_{1:L,k|k-1} - Y\_{L+1:2L,k|k-1} \right]^T \\ K\_k &= P\_{z\_k y\_k} P\_{\tilde{y}\_k}^{-1} \\ \hat{z}\_k &= \hat{z}\_k^- + K\_k \left(y\_k - \hat{y}\_k^-\right) \\ P\_k &= P\_k^- - K\_k P\_{\tilde{y}\_k} K\_k^T \end{aligned}$$

*Return the augmented state estimation* $\hat{z}\_k$.

The Cholesky factorization decomposes a symmetric, positive‐definite matrix into the product of a triangular matrix and its transpose. This factor is utilized directly to obtain the sigma points. The scaling constant $h$ is expressed in terms of a tunable parameter $\alpha$, which is less than one.

*k*

In order to predict the current state based on each sigma point, these sigma points are transformed through the nonlinear process model:

$$\Psi\_{k|k-1} = f\left[\Psi\_{k-1}\right] \tag{21}$$

Then, the state mean and the square‐root covariance are estimated and calculated through the transformed sigma‐point as follows:

$$\hat{z}\_k^- = \sum\_{i=0}^{2L} W\_i^{(m)} \Psi\_{i,k\mid k-1} \tag{22}$$

$$S\_k^- = qr\left\{\left[ \sqrt{W\_1^{(c)}}\left(\Psi\_{1:2L,k|k-1} - \hat{z}\_k^-\right) \quad \sqrt{R^w} \right]\right\} \tag{23}$$

$$S\_k^- = \text{cholupdate}\left\{S\_k^-,\ \Psi\_{0,k|k-1} - \hat{z}\_k^-,\ W\_0^{(c)}\right\} \tag{24}$$

where the weights $W\_i^{(m)}$ and $W\_i^{(c)}$ are those defined in Eq. (11), and $\beta$ is a tunable parameter used to incorporate prior knowledge of the distribution. The transformed sigma‐point vector is then used to predict the measurements using the measurement model:

$$Y\_{k|k-1} = h[\Psi\_{k|k-1}] \tag{25}$$

The expected measurement $\hat{y}\_k^-$ and the square‐root covariance of $\tilde{y}\_k = y\_k - \hat{y}\_k^-$ (called the innovation) are given by the unscented transform expressions, just as for the process model:

$$\hat{y}\_k^- = \sum\_{i=0}^{2L} W\_i^{(m)} Y\_{i,k\mid k-1} \tag{26}$$

$$S\_{\tilde{y}\_k} = qr\left\{\left[ \sqrt{W\_1^{(c)}}\left(Y\_{1:2L,k|k-1} - \hat{y}\_k^-\right) \quad \sqrt{R\_k^v} \right]\right\} \tag{27}$$

$$S\_{\tilde{y}\_k} = \text{cholupdate}\left\{S\_{\tilde{y}\_k},\ Y\_{0,k|k-1} - \hat{y}\_k^-,\ W\_0^{(c)}\right\} \tag{28}$$

In an attempt to find out how much to adjust the predicted state mean and covariance based on the actual measurement, the Kalman gain matrix is calculated as follows:

$$P\_{z\_k y\_k} = \sum\_{i=0}^{2L} W\_i^{(c)} \left[\Psi\_{i,k|k-1} - \hat{z}\_k^-\right]\left[Y\_{i,k|k-1} - \hat{y}\_k^-\right]^T \tag{29}$$

$$K\_k = P\_{z\_k y\_k} / S\_{\tilde{y}\_k}^T / S\_{\tilde{y}\_k} \tag{30}$$

Finally, the state mean and covariance are updated using the actual measurement and the Kalman gain matrix:

$$\hat{z}\_{k} = \hat{z}\_{k}^{-} + K\_{k}\left(y\_{k} - \hat{y}\_{k}^{-}\right) \tag{31}$$

$$U = K\_k S\_{\tilde{y}\_k} \tag{32}$$

$$S\_{k} = \text{cholupdate}\left\{S\_{k}^{-},\ U,\ -1\right\} \tag{33}$$

where $R^w$ is the process noise covariance, $R\_k^v$ is the measurement noise covariance, chol denotes the Cholesky matrix factorization, $qr$ denotes the QR matrix decomposition, and cholupdate denotes a rank‐1 Cholesky factor update.

The SRUKF technique is summarized in Algorithm 3.
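The qr and cholupdate operations behind Eqs. (23)–(24) can be sketched as follows. This is a minimal illustration assuming MATLAB-style upper-triangular factors ($S^T S = P$); NumPy has no built-in cholupdate, so a small rank-1 update routine is written out, and the deviation matrix `D` and vector `d0` are toy data rather than actual sigma-point deviations:

```python
import numpy as np

def cholupdate(R, x, sign=1.0):
    """Rank-1 update of an upper-triangular factor R with R^T R = A:
    returns the factor of A + sign * x x^T (sign=-1 downdates)."""
    R, x = R.copy(), x.copy()
    n = x.size
    for k in range(n):
        r = np.sqrt(R[k, k]**2 + sign * x[k]**2)
        c, s = r / R[k, k], x[k] / R[k, k]
        R[k, k] = r
        R[k, k+1:] = (R[k, k+1:] + sign * s * x[k+1:]) / c
        x[k+1:] = c * x[k+1:] - s * R[k, k+1:]
    return R

# Eq. (23)-(24) pattern: square root of an outer-product sum via QR,
# then a rank-1 Cholesky update for the zeroth sigma-point deviation.
rng = np.random.default_rng(0)
D = rng.standard_normal((3, 7))       # toy stand-in for weighted deviations
S = np.linalg.qr(D.T, mode='r')       # upper triangular with S^T S = D D^T
d0 = rng.standard_normal(3)
S_up = cholupdate(S, d0)              # factor of D D^T + d0 d0^T
```

The point of the square-root form is that the QR step and the $O(n^2)$ rank-1 update replace rebuilding and refactorizing the full covariance at every time step.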

#### **3.4. SRCDKF method**


Like the SRUKF, the SRCDKF propagates the matrix square root $S\_k$ directly, avoiding the computational cost of refactorizing the covariance at each time step of the CDKF. The SRCDKF is initialized with a state mean vector and the square root of a covariance matrix:

$$\hat{z}\_0 = E\{z\_0\} \quad \text{and} \quad S\_0 = \text{chol}\left\{E\left[(z\_0 - \hat{z}\_0)(z\_0 - \hat{z}\_0)^T\right]\right\} \tag{34}$$

After the Cholesky factorization we obtain the sigma‐point:

$$\Psi\_{k-1} = \begin{bmatrix} \hat{z}\_{k-1} & \hat{z}\_{k-1} + hS\_{k-1} & \hat{z}\_{k-1} - hS\_{k-1} \end{bmatrix} \tag{35}$$

The sigma‐point vector is then passed through the nonlinear process model, which predicts the current state based on each sigma point:

$$\Psi\_{k|k-1} = f\left[\Psi\_{k-1}\right] \tag{36}$$

The estimated state mean and square‐root covariance are calculated from the transformed sigma‐point using,

$$\hat{z}\_k^- = \sum\_{i=0}^{2L} W\_i^{(m)} \Psi\_{i,k\mid k-1} \tag{37}$$

$$S\_k^- = qr\left\{\left[ \sqrt{W\_1^{(c\_1)}}\left(\Psi\_{1:L,k|k-1} - \Psi\_{L+1:2L,k|k-1}\right) \quad \sqrt{W\_1^{(c\_2)}}\left(\Psi\_{1:L,k|k-1} + \Psi\_{L+1:2L,k|k-1} - 2\Psi\_{0,k|k-1}\right) \right]\right\} \tag{38}$$

where $W\_1^{(c\_1)} = \frac{1}{4h^2}$, $W\_1^{(c\_2)} = \frac{h^2 - 1}{4h^4}$, $W\_0^{(m)} = \frac{h^2 - L}{h^2}$, and $W\_i^{(m)} = \frac{1}{2h^2}$. In the next step, the sigma points for the measurement update are calculated as:

$$\Psi\_{k|k-1} = \begin{bmatrix} \hat{z}\_{k|k-1} & \hat{z}\_{k|k-1} + hS\_{k|k-1} & \hat{z}\_{k|k-1} - hS\_{k|k-1} \end{bmatrix} \tag{39}$$

The transformed sigma‐point vector is then used to predict the measurements using the measurement model:

$$Y\_{k|k-1} = h\left[\Psi\_{k|k-1}\right] \tag{40}$$

The expected measurement $\hat{y}\_k^-$ and the square‐root covariance of $\tilde{y}\_k = y\_k - \hat{y}\_k^-$ (called the innovation) are given by expressions just as for the process model:

$$\hat{y}\_{k}^- = \sum\_{i=0}^{2L} W\_{i}^{(m)} Y\_{i,k\mid k-1} \tag{41}$$

$$S\_{\tilde{y}\_k} = qr\left\{\left[ \sqrt{W\_1^{(c\_1)}}\left(Y\_{1:L,k|k-1} - Y\_{L+1:2L,k|k-1}\right) \quad \sqrt{W\_1^{(c\_2)}}\left(Y\_{1:L,k|k-1} + Y\_{L+1:2L,k|k-1} - 2Y\_{0,k|k-1}\right) \right]\right\} \tag{42}$$

In an attempt to find out how much to adjust the predicted state mean and covariance based on the actual measurement, the Kalman gain matrix is calculated as follows:

$$P\_{z\_k y\_k} = \sqrt{W\_1^{(c\_1)}}\, S\_k^- \left[ Y\_{1:L,k\mid k-1} - Y\_{L+1:2L,k\mid k-1} \right]^T \tag{43}$$

$$K\_k = P\_{\mathbf{z}\_k \mathcal{Y}\_k} / S\_{\tilde{\mathcal{Y}}\_k}^T / S\_{\tilde{\mathcal{Y}}\_k} \tag{44}$$

Then, the state mean and covariance are updated using the actual measurement and the Kalman gain matrix:

$$\hat{z}\_{k} = \hat{z}\_{k}^{-} + K\_{k}\left(y\_{k} - \hat{y}\_{k}^{-}\right) \tag{45}$$

$$U = K\_k S\_{\tilde{y}\_k} \tag{46}$$

$$S\_k = \text{cholupdate}\left\{S\_k^-,\ U,\ -1\right\} \tag{47}$$

The SRCDKF technique is summarized in Algorithm 4.
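The two divisions in Eq. (44) stand for two triangular solves rather than an explicit matrix inversion. The sketch below illustrates this with toy matrices, assuming the innovation covariance factors as $P\_{\tilde{y}\_k} = S\_{\tilde{y}\_k} S\_{\tilde{y}\_k}^T$ (conventions differ between implementations):

```python
import numpy as np

# Eq. (44): K = P_zy / S_y^T / S_y, i.e. K = P_zy (S_y S_y^T)^{-1}, computed
# with two triangular solves instead of inverting the innovation covariance.
rng = np.random.default_rng(1)
S_y = np.triu(rng.standard_normal((2, 2))) + 3 * np.eye(2)  # toy innovation factor
P_zy = rng.standard_normal((2, 2))                          # toy cross-covariance

# First division by S_y^T: find X with X S_y^T = P_zy.
X = np.linalg.solve(S_y, P_zy.T).T
# Second division by S_y: find K with K S_y = X.
K = np.linalg.solve(S_y.T, X.T).T
```

Because the factors are triangular, each solve is a back-substitution, which is the efficiency gain the square-root formulation is after (`np.linalg.solve` is used here for brevity; a dedicated triangular solver would exploit the structure fully).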

#### **3.5. ISPKF**

In order to achieve superior performance of the statistical linearization methods in terms of efficiency and accuracy, the ISPKFs have been developed. These filters include the IUKF, ICDKF, ISRUKF, and ISRCDKF. The major difference between the ISPKFs and the noniterated SPKFs lies in the step where the updated state estimate is calculated from the predicted state and the observation. Instead of relying on the predicted state alone, the observation equation is relinearized a number of times by iterating an approximate maximum a posteriori estimate, so the state estimate will be more accurate.

#### *3.5.1. IUKF*


The difference between the UKF and the IUKF consists in the iteration strategy.

After completing the prediction and update steps, and obtaining both the state estimate $\hat{z}\_k$ and the covariance matrix $P\_k$, an iteration loop is set up with the following initializations:

$$\hat{z}\_{k,0} = \hat{z}\_k^-, \quad P\_{k,0} = P\_k^-, \quad \hat{z}\_{k,1} = \hat{z}\_k, \quad P\_{k,1} = P\_k, \quad \text{and } j = 2,$$ where $j$ denotes the $j$th iterate.

In this loop, for each *j*, new sigma points are generated in the same way as in the standard UKF:

$$\Psi\_{k,j} = \begin{bmatrix} \hat{z}\_{k,j-1} & \hat{z}\_{k,j-1} + \sqrt{(L+\lambda)P\_{k,j-1}} & \hat{z}\_{k,j-1} - \sqrt{(L+\lambda)P\_{k,j-1}} \end{bmatrix} \tag{48}$$
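The sigma-point regeneration of Eq. (48) can be sketched as follows; the iterate `z_iter` and covariance `P_iter` are toy values, and the full IUKF would redo the measurement update with these points on each pass:

```python
import numpy as np

def regenerate_sigma_points(z_j, P_j, alpha=1.0, kappa=0.0):
    """Eq. (48): rebuild the sigma-point set around the current (j-th) iterate."""
    L = z_j.size
    lam = alpha**2 * (L + kappa) - L
    S = np.linalg.cholesky((L + lam) * P_j)   # columns of the scaled square root
    return np.column_stack([z_j, z_j[:, None] + S, z_j[:, None] - S])

# Iteration bookkeeping from the text: z_{k,0} = z_k^-, z_{k,1} = z_k, j = 2, ...
z_iter = np.array([1.0, 0.0])     # toy current iterate z_{k,j-1}
P_iter = 0.25 * np.eye(2)         # toy current covariance P_{k,j-1}
Psi = regenerate_sigma_points(z_iter, P_iter)
```

Because the points are rebuilt around the latest iterate rather than the one-shot prediction, each pass relinearizes the observation model where the estimate currently sits, which is the mechanism behind the accuracy gain claimed for the ISPKFs.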

#### Algorithm 3: SRUKF algorithm

• *Initialization step:* 0 <sup>=</sup> 0

$$\operatorname{And} \operatorname{S}\_{\operatorname{o}} = \operatorname{chol} \left( E \left[ \left( z\_{\operatorname{o}} - \hat{z}\_{\operatorname{o}} \right) \left( z\_{\operatorname{o}} - \hat{z}\_{\operatorname{o}} \right)^{\ast} \right] \right)$$

• *Prediction step:*

$$\begin{aligned} \Psi\_{k-1}^{\boldsymbol{\Psi}} &= \begin{bmatrix} \hat{\boldsymbol{z}}\_{k-1} & \hat{\boldsymbol{z}}\_{k-1} + \boldsymbol{h} \boldsymbol{S}\_{k-1} & \hat{\boldsymbol{z}}\_{k-1} - \boldsymbol{h} \boldsymbol{S}\_{k-1} \end{bmatrix} \\\\ \Psi\_{k/k-1}^{\boldsymbol{\Psi}} &= \boldsymbol{f} \left[ \Psi\_{k-1}^{\boldsymbol{\Psi}} \right] \\\\ \hat{\boldsymbol{Z}}\_{k}^{-} &= \sum\_{i=0}^{2L} \boldsymbol{W}\_{i}^{(\boldsymbol{w})} \Psi\_{i,k|k-1} \\\\ \boldsymbol{S}\_{k}^{-} &= \boldsymbol{q} \boldsymbol{r} \left\{ \sqrt{\boldsymbol{W}\_{1}^{(\boldsymbol{c})}} \left( \boldsymbol{\Psi}\_{1:2L,k/k-1} - \hat{\boldsymbol{Z}}\_{k}^{-} \right) \sqrt{\boldsymbol{R}^{w}} \right\} \\\\ \boldsymbol{S}\_{k}^{-} &= \boldsymbol{c} \boldsymbol{h} \boldsymbol{q} \boldsymbol{p} \text{d} \boldsymbol{\theta} \big{(} \boldsymbol{S}\_{k}^{-}, \boldsymbol{\Psi}\_{0:k} - \hat{\boldsymbol{Z}}\_{k}^{-}, \boldsymbol{W}\_{0}^{(\boldsymbol{c})} \big{)} \\\\ \boldsymbol{Y}\_{k|k-1} &= \boldsymbol{h} \boldsymbol{\left[ \boldsymbol{\Psi}\_{k|k-1} \right]} \end{aligned}$$

$$
\hat{\jmath}\_k^- = \sum\_{i=0}^{(m)} W\_i^{(m)} Y\_{i,k|k-1},
$$

• *Estimation (update) step:*

$$\begin{aligned} S\_{\hat{\boldsymbol{\mu}}\_{k}} &= qr \left\{ \left[ \sqrt{W\_{1}^{(c)}} \left( Y\_{\hat{\boldsymbol{1}}2L, k|k-1} - \hat{\boldsymbol{\boldsymbol{\mu}}\_{k}} \right) \sqrt{R\_{k}^{\*}} \right] \right\} \\\\ S\_{\hat{\boldsymbol{\mu}}\_{k}} &= \operatorname{cohupleft} \left\{ S\_{\hat{\boldsymbol{\mu}}\_{k}}, Y\_{0,k} - \hat{\boldsymbol{\boldsymbol{\mu}}\_{k}}, W\_{0}^{(c)} \right\} \\\\ P\_{2\hat{\boldsymbol{\mu}}\_{k}} &= \sum\_{i=0}^{2L} W\_{i}^{(c)} [\![\![\![\boldsymbol{\mu}\_{i,k|k-1} - \hat{\boldsymbol{Z}}\_{k}^{-}] \!] \!] \![\![\boldsymbol{Y}\_{i,k|k-1} - \hat{\boldsymbol{\boldsymbol{\mu}}\_{k}^{-}}] \!]] \\\\ K\_{k} &= P\_{2\hat{\boldsymbol{\mu}}\_{k}} / S\_{\hat{\boldsymbol{\mu}}\_{k}}^{T} / S\_{\hat{\boldsymbol{\mu}}\_{k}} \\\\ \hat{\boldsymbol{z}}\_{k} &= \hat{\boldsymbol{z}}\_{k}^{-} + K\_{k} \left( \boldsymbol{\mu}\_{k} - \hat{\boldsymbol{\mu}}\_{k}^{-} \right) \\\\ U &= K\_{k} S\_{\hat{\boldsymbol{\mu}}\_{k}} \end{aligned}$$

$$S\_k = \text{cholupdate}\left\{S\_k^-, U, -1\right\}$$

*Return the augmented state estimation*
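The square-root filters above lean on two matrix primitives that MATLAB provides but NumPy/SciPy do not bundle directly: the rank-1 Cholesky update/downdate cholupdate{·} and the double triangular solve behind $K = P\_{z\_k y\_k} / S\_{\tilde{y}\_k}^T / S\_{\tilde{y}\_k}$. A minimal sketch, assuming upper-triangular factors (the rotation scheme is the standard LINPACK-style one; this particular implementation is only illustrative):

```python
import numpy as np
from scipy.linalg import solve_triangular

def cholupdate(S, v, sign=1.0):
    """Rank-1 update (sign=+1) or downdate (sign=-1) of an upper-triangular
    Cholesky factor S, so that the result S1 satisfies
    S1.T @ S1 = S.T @ S + sign * outer(v, v)."""
    S, v = S.astype(float).copy(), v.astype(float).copy()
    for i in range(v.size):
        r = np.sqrt(S[i, i] ** 2 + sign * v[i] ** 2)
        c, s = r / S[i, i], v[i] / S[i, i]
        S[i, i] = r
        S[i, i + 1:] = (S[i, i + 1:] + sign * s * v[i + 1:]) / c
        v[i + 1:] = c * v[i + 1:] - s * S[i, i + 1:]   # uses the updated row
    return S

def gain(P_zy, S_y):
    """K = P_zy / S_y^T / S_y, i.e. K (S_y^T S_y) = P_zy, via two triangular solves."""
    w = solve_triangular(S_y, P_zy.T, trans='T', lower=False)  # S_y^T w = P_zy^T
    return solve_triangular(S_y, w, lower=False).T             # S_y K^T = w

# demo: update then downdate recovers the original factor
A = np.array([[2.0, 0.4], [0.4, 1.5]])
v = np.array([0.3, -0.2])
S = np.linalg.cholesky(A).T            # upper-triangular square-root factor
S_up = cholupdate(S, v, +1.0)
S_back = cholupdate(S_up, v, -1.0)
P_zy = np.array([[0.5, 0.1]])
K = gain(P_zy, S)
```

The weighted cholupdate{S, u, W} of the filters is obtained by passing √|W|·u with sign(W).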

#### Algorithm 4: SRCDKF algorithm

• *Initialization step:* $\hat{z}\_0 = E[z\_0]$

*And* $S\_0 = \text{chol}\left\{E\left[\left(z\_0 - \hat{z}\_0\right)\left(z\_0 - \hat{z}\_0\right)^T\right]\right\}$

• *Prediction step:*

$$
\Psi\_{k-1} = \begin{bmatrix} \hat{\boldsymbol{z}}\_{k-1} & \hat{\boldsymbol{z}}\_{k-1} + h\mathbf{S}\_{k-1} & \hat{\boldsymbol{z}}\_{k-1} - h\mathbf{S}\_{k-1} \end{bmatrix},
$$

$$\Psi\_{k/k-1} = f\left[\Psi\_{k-1}\right]$$

Nonlinear State and Parameter Estimation Using Iterated Sigma Point Kalman Filter: Comparative Studies http://dx.doi.org/10.5772/63728 131

$$z\_k^- = \sum\_{i=0}^{2L} W\_i^{(m)} \Psi\_{i,k|k-1}$$

$$\begin{aligned} S\_k^- &= qr\left\{\left[\sqrt{W\_1^{(c\_1)}}\left(\Psi\_{1:L,k|k-1} - \Psi\_{L+1:2L,k|k-1}\right) \quad \sqrt{W\_1^{(c\_2)}}\left(\Psi\_{1:L,k|k-1} + \Psi\_{L+1:2L,k|k-1} - 2\Psi\_{0,k|k-1}\right)\right]\right\} \\\\ \Psi\_{k|k-1} &= \begin{bmatrix} \hat{z}\_{k|k-1} & \hat{z}\_{k|k-1} + hS\_k^- & \hat{z}\_{k|k-1} - hS\_k^- \end{bmatrix} \\\\ Y\_{k|k-1} &= h\left(\Psi\_{k|k-1}\right) \\\\ \hat{y}\_k^- &= \sum\_{i=0}^{2L} W\_i^{(m)} Y\_{i,k|k-1} \end{aligned}$$
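The sigma-point matrix used above is just $2L+1$ columns built from $\hat{z}$ and the columns of $hS$. A small sketch, using the standard central-difference mean weights $W\_0^{(m)} = (h^2 - L)/h^2$, $W\_i^{(m)} = 1/(2h^2)$; these weight formulas come from the CDKF literature and are assumed here, since this part of the chapter does not restate them:

```python
import numpy as np

def cdkf_sigma_points(z, S, h=np.sqrt(3.0)):
    """Columns [z, z + h*S, z - h*S]; S is a square-root factor with P = S @ S.T."""
    z = z.reshape(-1, 1)
    return np.hstack([z, z + h * S, z - h * S])

def cdkf_mean_weights(L, h=np.sqrt(3.0)):
    W = np.full(2 * L + 1, 1.0 / (2.0 * h * h))
    W[0] = (h * h - L) / (h * h)   # centre-point weight
    return W

z = np.array([1.0, -2.0])
P = np.array([[1.0, 0.2], [0.2, 0.5]])
S = np.linalg.cholesky(P)
Z = cdkf_sigma_points(z, S)        # shape (2, 5): 2L+1 = 5 points for L = 2
Wm = cdkf_mean_weights(L=2)
z_rec = Z @ Wm                     # weighted mean recovers z (symmetric spread)
```

Because the points are placed symmetrically and the weights sum to one, the weighted mean reproduces $\hat{z}$ exactly; the same weighted sum applied to $f[\Psi]$ gives the predicted mean.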

• *Estimation (update) step:*

$$\begin{aligned} S\_{\tilde{y}\_k} &= qr\left\{\left[\sqrt{W\_1^{(c\_1)}}\left(Y\_{1:L,k|k-1} - Y\_{L+1:2L,k|k-1}\right) \quad \sqrt{W\_1^{(c\_2)}}\left(Y\_{1:L,k|k-1} + Y\_{L+1:2L,k|k-1} - 2Y\_{0,k|k-1}\right)\right]\right\} \\\\ P\_{z\_k y\_k} &= \sqrt{W\_1^{(c\_1)}}\, S\_k^- \left[Y\_{1:L,k|k-1} - Y\_{L+1:2L,k|k-1}\right]^T \\\\ K\_k &= P\_{z\_k y\_k} / S\_{\tilde{y}\_k}^T / S\_{\tilde{y}\_k} \\\\ \hat{z}\_k &= \hat{z}\_k^- + K\_k\left(y\_k - \hat{y}\_k^-\right) \\\\ U &= K\_k S\_{\tilde{y}\_k} \\\\ S\_k &= \text{cholupdate}\left\{S\_k^-, U, -1\right\} \end{aligned}$$

130 Nonlinear Systems - Design, Analysis, Estimation and Control

*Return the augmented state estimation*

#### *3.5.1. IUKF*

In the IUKF, the prediction and update steps are first executed as in the standard UKF, and an iteration loop over the measurement update is then started. Within each iteration $j$, the prediction step and the update step are executed as follows:

$$\hat{z}\_{k,j}^- = \sum\_{i=0}^{2L} W\_i^{(m)} \Psi\_{i,j} \tag{49}$$

$$Y\_{k,j} = h\left[\Psi\_{k,j}\right] \tag{50}$$

where $Y\_{i,j}$ represents the $i$th component of $Y\_{k,j}$.

$$\hat{y}\_{k,j}^- = \sum\_{i=0}^{2L} W\_i^{(m)} Y\_{i,j} \tag{51}$$

$$P\_{\tilde{y}\_{k,j}} = \sum\_{i=0}^{2L} W\_i^{(c)} \left[Y\_{i,j} - \hat{y}\_{k,j}^-\right]\left[Y\_{i,j} - \hat{y}\_{k,j}^-\right]^T \tag{52}$$

$$P\_{z\_{k,j} y\_{k,j}} = \sum\_{i=0}^{2L} W\_i^{(c)} \left[\Psi\_{i,j} - \hat{z}\_{k,j}^-\right]\left[Y\_{i,j} - \hat{y}\_{k,j}^-\right]^T \tag{53}$$

$$K\_{k,j} = P\_{z\_{k,j} y\_{k,j}} P\_{\tilde{y}\_{k,j}}^{-1} \tag{54}$$

$$\hat{z}\_{k,j} = \hat{z}\_{k,j}^- + g \cdot K\_{k,j}\left(y\_k - \hat{y}\_{k,j}^-\right) \tag{55}$$

$$P\_{k,j} = P\_{k,j}^- - K\_{k,j} P\_{\tilde{y}\_{k,j}} K\_{k,j}^T \tag{56}$$

These steps are repeated until the following inequality is no longer satisfied:

$$\tilde{z}\_{k,j}^T P\_{k,j-1}^{-1} \tilde{z}\_{k,j} + \tilde{y}\_{k,j}^T R\_k^{-1} \tilde{y}\_{k,j} < \tilde{y}\_{k,j-1}^T R\_k^{-1} \tilde{y}\_{k,j-1} \text{ or } j < N \tag{57}$$

The IUKF is summarized in Algorithm 5.
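A schematic sketch of the iteration (49)-(57) for a scalar state with the toy measurement $y = z^2 +$ noise. The unscented weights ($\lambda = 2$) and the damping factor $g$ are illustrative choices rather than the chapter's tuning, and for simplicity the loop runs a fixed $m$ iterations instead of testing inequality (57):

```python
import numpy as np

def sigma_1d(z, P, lam=2.0):
    """Sigma points and mean weights for a scalar state (L = 1)."""
    d = np.sqrt((1.0 + lam) * P)
    pts = np.array([z, z + d, z - d])
    W = np.array([lam, 0.5, 0.5]) / (1.0 + lam)
    return pts, W

def iterated_update(z0, P0, y, R, h, m=3, g=1.0):
    """Repeat the sigma-point measurement update m times, relinearizing
    around the latest estimate (eqs. (49)-(56), with damping g)."""
    z, P = z0, P0
    for _ in range(m):
        pts, W = sigma_1d(z, P)
        Y = h(pts)
        y_hat = W @ Y                           # eq. (51)
        Py = W @ (Y - y_hat) ** 2 + R           # eq. (52) plus noise variance
        Pzy = W @ ((pts - z) * (Y - y_hat))     # eq. (53)
        K = Pzy / Py                            # eq. (54)
        z = z + g * K * (y - y_hat)             # eq. (55)
        P = P - K * Py * K                      # eq. (56)
    return z, P

# toy run: true state 1.2 observed through y = z^2 with noise variance 0.1
z_est, P_est = iterated_update(z0=1.0, P0=0.5, y=1.44, R=0.1,
                               h=lambda t: t ** 2, m=3)
```

Each pass regenerates the sigma points around the latest $\hat{z}\_{k,j-1}$, which is exactly the relinearization that distinguishes the iterated filters from their single-pass counterparts.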

#### *3.5.2. ICDKF*

The iterated sigma‐point methods can provide better accuracy than the other estimation methods, since they relinearize the measurement equation by iterating an approximate maximum a posteriori estimate around the updated state, instead of relying on the predicted state.

In the ICDKF, the prediction step is calculated as in the standard CDKF, yielding $\hat{z}\_k^-$ and $P\_k^-$.

Then the sigma‐point in measurement updating is calculated as follows:

$$
\Psi\_{(k|k-1)} = \begin{bmatrix}
\hat{z}\_{(k|k-1)} & \hat{z}\_{(k|k-1)} + h\sqrt{P\_k^-} & \hat{z}\_{(k|k-1)} - h\sqrt{P\_k^-}
\end{bmatrix} \tag{58}
$$

After that, the initialization $z\_0 = \hat{z}\_k^-$ is set up and then the iteration step is executed, so the following equations are repeated $m$ times.


$$\Psi\_j = \begin{bmatrix} z\_j & z\_j + h\sqrt{P\_k^-} & z\_j - h\sqrt{P\_k^-} \end{bmatrix} \tag{59}$$

$$Y\_j = h[\Psi\_j] \tag{60}$$

$$\hat{y}\_{k,j}^- = \sum\_{i=0}^{2L} W\_i^{(m)} Y\_{i,j} \tag{61}$$

$$P\_{\tilde{y}\_k} = \sum\_{i=1}^{L} W\_i^{(c\_1)}\left(Y\_{i,j} - Y\_{L+i,j}\right)^2 + W\_i^{(c\_2)}\left(Y\_{i,j} + Y\_{L+i,j} - 2Y\_{0,j}\right)^2 \tag{62}$$

$$P\_{z\_k y\_k} = \sqrt{W\_1^{(c\_1)} P\_k^-}\left[Y\_{1:L,j} - Y\_{L+1:2L,j}\right]^T \tag{63}$$

$$K\_{k,j} = P\_{z\_k y\_k} P\_{\tilde{y}\_k}^{-1} \tag{64}$$

$$\hat{z}\_{k,j} = \hat{z}\_k^- + K\_{k,j}\left(y\_k - \hat{y}\_{k,j}^-\right) \tag{65}$$

$$P\_k = P\_k^- - K\_{k,j} P\_{\tilde{y}\_k} K\_{k,j}^T \tag{66}$$

The algorithm of the ICDKF is summarized in Algorithm 6.
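The $(c\_1)$/$(c\_2)$ split in (62) comes from a second-order central-difference expansion. A quick sanity check, using the weights $W\_1^{(c\_1)} = 1/(4h^2)$ and $W\_1^{(c\_2)} = (h^2-1)/(4h^4)$ (taken from the CDKF literature and assumed here, since this part of the chapter does not restate them): for a linear observation, the first term alone reproduces the exact covariance and the curvature term vanishes.

```python
import numpy as np

L = 1
h = np.sqrt(3.0)                       # h = sqrt(3) is the usual choice for Gaussian priors
Wc1 = 1.0 / (4.0 * h ** 2)             # first-order (difference) weight
Wc2 = (h ** 2 - 1.0) / (4.0 * h ** 4)  # second-order (curvature) weight

mu, sigma, a = 1.0, 0.7, 2.5           # prior N(mu, sigma^2), linear map y = a*z
pts = np.array([mu, mu + h * sigma, mu - h * sigma])   # the 2L+1 sigma points
Y = a * pts
term1 = Wc1 * (Y[1] - Y[2]) ** 2               # eq. (62), first term (L = 1)
term2 = Wc2 * (Y[1] + Y[2] - 2.0 * Y[0]) ** 2  # eq. (62), second term
Py = term1 + term2
```

For the linear map the exact output variance is $(a\sigma)^2$; the curvature term is nonzero only when the measurement function is genuinely nonlinear.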

#### *3.5.3. ISRUKF*


The ISRUKF has the same principle as the IUKF. After executing the standard SRUKF, an iteration loop is started. The predicted state estimate $\hat{z}\_k^-$, and the predicted and estimated square-root covariance matrices ($S\_k^-$, $S\_k$) obtained through the prediction and the update steps will be the initialization inputs for the iteration loop ($\hat{z}\_{k,0} = \hat{z}\_k^-$, $S\_{k,0} = S\_k^-$ and $\hat{z}\_{k,1} = \hat{z}\_k$, $S\_{k,1} = S\_k$). Also let *j*=2, where *j* is the *j*th iteration.

In the iteration loop, and for each *j*, the new sigma‐point vector is generated as follows:

$$\Psi\_{k,j} = \begin{bmatrix} \hat{z}\_{k,j-1} & \hat{z}\_{k,j-1} + hS\_{k,j-1} & \hat{z}\_{k,j-1} - hS\_{k,j-1} \end{bmatrix} \tag{67}$$

Then, the prediction and the update steps are executed as follows:

$$
\Psi\_f = f(\Psi\_{k,f}) \tag{68}
$$

$$\hat{z}\_{k,j}^- = \sum\_{i=0}^{2L} W\_i^{(m)} \Psi\_{i,j} \tag{69}$$

$$S\_k^- = qr\left\{\left[\sqrt{W\_1^{(c)}}\left(\Psi\_{1:2L,j} - \hat{z}\_{k,j}^-\right) \quad \sqrt{R^w}\right]\right\} \tag{70}$$

$$S\_k^- = \text{cholupdate}\left\{S\_k^-,\ \Psi\_{0,j} - \hat{z}\_{k,j}^-,\ W\_0^{(c)}\right\} \tag{71}$$

$$Y\_{k,j} = h\left[\Psi\_{k,j}\right] \tag{72}$$

$$\hat{y}\_{k,j}^- = \sum\_{i=0}^{2L} W\_i^{(m)} Y\_{i,j} \tag{73}$$

$$S\_{\tilde{y}\_{k,j}} = qr\left\{\left[\sqrt{W\_1^{(c)}}\left(Y\_{1:2L,j} - \hat{y}\_{k,j}^-\right) \quad \sqrt{R\_k^v}\right]\right\} \tag{74}$$

$$S\_{\tilde{y}\_{k,j}} = \text{cholupdate}\left\{S\_{\tilde{y}\_{k,j}},\ Y\_{0,j} - \hat{y}\_{k,j}^-,\ W\_0^{(c)}\right\} \tag{75}$$

$$P\_{z\_k y\_k} = \sum\_{i=0}^{2L} W\_i^{(c)}\left[\Psi\_{i,j} - \hat{z}\_{k,j}^-\right]\left[Y\_{i,j} - \hat{y}\_{k,j}^-\right]^T \tag{76}$$

$$K\_{k,j} = P\_{z\_k y\_k} / S\_{\tilde{y}\_{k,j}}^T / S\_{\tilde{y}\_{k,j}} \tag{77}$$

$$\hat{z}\_{k,j} = \hat{z}\_{k,j}^- + K\_{k,j}\left(y\_k - \hat{y}\_{k,j}^-\right) \tag{78}$$

$$U = K\_{k,j} S\_{\tilde{y}\_{k,j}} \tag{79}$$

$$S\_{k,j} = \text{cholupdate}\left\{S\_{k,j-1}, U, -1\right\} \tag{80}$$

The equations in the iterative loop are repeated m times.

The ISRUKF algorithm is summarized in Algorithm 7.

#### *3.5.4. ISRCDKF*

The ISRCDKF can provide better accuracy than the standard SRCDKF, since it relinearizes the measurement equation by iterating an approximate maximum a posteriori estimate around the updated state, instead of relying on the predicted state.

The algorithm of the ISRCDKF consists of generating the prediction step as the standard SRCDKF, then applying *m* iterations over the update step described as follows:

$$\Psi\_j = \begin{bmatrix} z\_j & z\_j + hS\_k^- & z\_j - hS\_k^- \end{bmatrix} \tag{81}$$

$$Y\_j = h\left(\Psi\_j\right) \tag{82}$$


$$\hat{y}\_{k,j}^- = \sum\_{i=0}^{2L} W\_i^{(m)} Y\_{i,j} \tag{83}$$

$$S\_{\tilde{y}\_k} = qr\left\{\left[\sqrt{W\_1^{(c\_1)}}\left(Y\_{1:L,j} - Y\_{L+1:2L,j}\right) \quad \sqrt{W\_1^{(c\_2)}}\left(Y\_{1:L,j} + Y\_{L+1:2L,j} - 2Y\_{0,j}\right)\right]\right\} \tag{84}$$

$$P\_{z\_k y\_k} = \sqrt{W\_1^{(c\_1)}}\, S\_k^-\left[Y\_{1:L,j} - Y\_{L+1:2L,j}\right]^T \tag{85}$$

$$K\_{k,j} = P\_{z\_k y\_k} / S\_{\tilde{y}\_k}^T / S\_{\tilde{y}\_k} \tag{86}$$

$$\hat{z}\_{k,j} = \hat{z}\_k^- + K\_{k,j}\left(y\_k - \hat{y}\_{k,j}^-\right) \tag{87}$$

$$U = K\_{k,j} S\_{\tilde{y}\_k} \tag{88}$$

$$S\_k = \text{cholupdate}\left\{S\_k^-, U, -1\right\} \tag{89}$$

The ISRCDKF algorithm is summarized in Algorithm 8.

In the next section, the performances of the SPKF methods are assessed and compared with those of the ISPKF methods. The UKF, IUKF, CDKF, ICDKF, SRUKF, ISRUKF, SRCDKF and ISRCDKF methods are evaluated through a simulation example with two comparative studies in terms of estimation accuracy, convergence and execution time.
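The estimation accuracy referred to above is measured in the tables that follow as the root mean square error (RMSE) of each estimate against the noise-free state; a one-line sketch of the metric:

```python
import numpy as np

def rmse(x_true, x_est):
    """Root mean square error between the noise-free state and its estimate."""
    x_true = np.asarray(x_true, dtype=float)
    x_est = np.asarray(x_est, dtype=float)
    return float(np.sqrt(np.mean((x_true - x_est) ** 2)))

err = rmse([0.0, 0.0], [3.0, 4.0])   # sqrt((9 + 16) / 2)
```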

## **4. Simulation results**


#### **4.1. State and parameter estimations for a second‐order LTI system**

Consider a second‐order LTI system described by the following state equation,

$$
x\_{k+1} = A x\_k + B U\_1(k) + \nu\_k \tag{90}
$$

where $\nu\_k$ is a Gaussian process noise $N(\nu; 0, 10^{-1})$, $A = \begin{bmatrix} 1.9223 & -0.9604 \\\\ 1 & 0 \end{bmatrix}$, and $B = \begin{bmatrix} 0.2 & 0.2 \end{bmatrix}^T$ is a matrix of scalar parameters.

#### Algorithm 5: IUKF algorithm

• *Initialization step:*

$$\hat{z}\_0 = E\left[z\_0\right]$$

$$P\_{z\_0} = E\left[\left(z\_0 - \hat{z}\_0\right)\left(z\_0 - \hat{z}\_0\right)^T\right]$$

• *Prediction step:*

*Generate the UKF prediction step and return* $\hat{z}\_k^-$ *and* $P\_k^-$

• *Estimation (update) step:*

*Generate the UKF update step and return* $\hat{z}\_k$ *and* $P\_k$

• *Iteration: Let* $\hat{z}\_{k,0} = \hat{z}\_k^-$, $P\_{k,0} = P\_k^-$

*And* $\hat{z}\_{k,1} = \hat{z}\_k$, $P\_{k,1} = P\_k$

*Also let* $g = 1$ *and* $j = 2$.

$$\begin{aligned} \Psi\_{k,j} &= \begin{bmatrix} \hat{z}\_{k,j-1} & \hat{z}\_{k,j-1} + \sqrt{(L+\lambda)P\_{k,j-1}} & \hat{z}\_{k,j-1} - \sqrt{(L+\lambda)P\_{k,j-1}} \end{bmatrix} \\\\ \hat{z}\_{k,j}^- &= \sum\_{i=0}^{2L} W\_i^{(m)} \Psi\_{i,j} \\\\ Y\_{k,j} &= h\left[\Psi\_{k,j}\right] \\\\ \hat{y}\_{k,j}^- &= \sum\_{i=0}^{2L} W\_i^{(m)} Y\_{i,j} \\\\ P\_{\tilde{y}\_{k,j}} &= \sum\_{i=0}^{2L} W\_i^{(c)}\left[Y\_{i,j} - \hat{y}\_{k,j}^-\right]\left[Y\_{i,j} - \hat{y}\_{k,j}^-\right]^T \\\\ P\_{z\_{k,j} y\_{k,j}} &= \sum\_{i=0}^{2L} W\_i^{(c)}\left[\Psi\_{i,j} - \hat{z}\_{k,j}^-\right]\left[Y\_{i,j} - \hat{y}\_{k,j}^-\right]^T \\\\ K\_{k,j} &= P\_{z\_{k,j} y\_{k,j}} P\_{\tilde{y}\_{k,j}}^{-1} \\\\ \hat{z}\_{k,j} &= \hat{z}\_{k,j}^- + g \cdot K\_{k,j}\left(y\_k - \hat{y}\_{k,j}^-\right) \\\\ P\_{k,j} &= P\_{k,j}^- - K\_{k,j} P\_{\tilde{y}\_{k,j}} K\_{k,j}^T \end{aligned}$$


*Define these equations:*


$$
\hat{\mathcal{Y}}\_{k,j} = h(\hat{\boldsymbol{z}}\_{k,j})
$$

$$
\tilde{\boldsymbol{z}}\_{k,j} = \hat{\boldsymbol{z}}\_{k,j} - \hat{\boldsymbol{z}}\_{k,j-1}
$$

$$
\tilde{\mathcal{Y}}\_{k,j} = \mathcal{Y}\_k - \hat{\mathcal{Y}}\_{k,j}
$$

*If the inequality is fulfilled*

$$
\tilde{z}\_{k,j}^T P\_{k,j-1}^{-1} \tilde{z}\_{k,j} + \tilde{y}\_{k,j}^T R\_k^{-1} \tilde{y}\_{k,j} < \tilde{y}\_{k,j-1}^T R\_k^{-1} \tilde{y}\_{k,j-1}
$$

*And* $j < N$, *then set* $j = j + 1$ *and return to the iterated loop*.

*Otherwise set*

$$\hat{\mathbf{z}}\_{\mathbf{k}} = \hat{\mathbf{z}}\_{\mathbf{k},j} \text{ and } \mathbf{P}\_{\mathbf{k}} = \mathbf{P}\_{\mathbf{k},j} \text{ Return the augmented state estimation } \hat{\mathbf{z}}\_{\mathbf{k}}$$

#### Algorithm 6: ICDKF algorithm

• *Initialization step:*

$$\begin{aligned} \hat{z}\_0 &= E\left[z\_0\right] \\\\ P\_{z\_0} &= E\left[\left(z\_0 - \hat{z}\_0\right)\left(z\_0 - \hat{z}\_0\right)^T\right] \end{aligned}$$

• *Prediction step:*

$$\begin{aligned} \Psi\_{k-1} &= \begin{bmatrix} \hat{z}\_{k-1} & \hat{z}\_{k-1} + h\sqrt{P\_{k-1}} & \hat{z}\_{k-1} - h\sqrt{P\_{k-1}} \end{bmatrix} \\\\ \Psi\_{k|k-1} &= f\left(\Psi\_{k-1}\right) \\\\ \hat{z}\_k^- &= \sum\_{i=0}^{2L} W\_i^{(m)} \Psi\_{i,k|k-1} \end{aligned}$$

$$\begin{aligned} P\_k^- &= \sum\_{i=1}^{L} W\_i^{(c\_1)}\left(\Psi\_{i,k|k-1} - \Psi\_{L+i,k|k-1}\right)^2 + W\_i^{(c\_2)}\left(\Psi\_{i,k|k-1} + \Psi\_{L+i,k|k-1} - 2\Psi\_{0,k|k-1}\right)^2 \\\\ \Psi\_{(k|k-1)} &= \begin{bmatrix} \hat{z}\_{(k|k-1)} & \hat{z}\_{(k|k-1)} + h\sqrt{P\_k^-} & \hat{z}\_{(k|k-1)} - h\sqrt{P\_k^-} \end{bmatrix} \end{aligned}$$

• *Estimation (update) step:*

$$z\_0 = \hat{z}\_k^-$$

• *Iteration: for j=0,.., m*

$$\begin{aligned} \Psi\_j &= \begin{bmatrix} z\_j & z\_j + h\sqrt{P\_k^-} & z\_j - h\sqrt{P\_k^-} \end{bmatrix} \\\\ Y\_j &= h\left[\Psi\_j\right] \\\\ \hat{y}\_{k,j}^- &= \sum\_{i=0}^{2L} W\_i^{(m)} Y\_{i,j} \\\\ P\_{\tilde{y}\_k} &= \sum\_{i=1}^{L} W\_i^{(c\_1)}\left(Y\_{i,j} - Y\_{L+i,j}\right)^2 + W\_i^{(c\_2)}\left(Y\_{i,j} + Y\_{L+i,j} - 2Y\_{0,j}\right)^2 \\\\ P\_{z\_k y\_k} &= \sqrt{W\_1^{(c\_1)} P\_k^-}\left[Y\_{1:L,j} - Y\_{L+1:2L,j}\right]^T \\\\ K\_{k,j} &= P\_{z\_k y\_k} P\_{\tilde{y}\_k}^{-1} \end{aligned}$$

*End*

$$
\hat{\boldsymbol{z}}\_{k,j} = \hat{\boldsymbol{z}}\_k^- + \boldsymbol{K}\_{k,j} (\boldsymbol{y}\_k - \hat{\boldsymbol{y}}\_{k,j}^-)
$$

$$
\boldsymbol{P}\_k = \boldsymbol{P}\_k^- - \boldsymbol{K}\_{k,j} \boldsymbol{P}\_{\mathfrak{T}\_k} \boldsymbol{K}\_{k,j}^T
$$

*Return the augmented state estimation*

#### Algorithm 7: ISRUKF algorithm

• *Initialization step:* $\hat{z}\_0 = E[z\_0]$

$$S\_0 = \text{chol}\left\{E\left[\left(z\_0 - \hat{z}\_0\right)\left(z\_0 - \hat{z}\_0\right)^T\right]\right\}$$

• *Prediction step:*

*Generate the SRUKF prediction step and return* $\hat{z}\_k^-$ *and* $S\_k^-$

• *Estimation (update) step:*

*Generate the SRUKF update step and return* $\hat{z}\_k$ *and* $S\_k$

• *Iteration: Let* $\hat{z}\_{k,0} = \hat{z}\_k^-$, $S\_{k,0} = S\_k^-$ *and* $\hat{z}\_{k,1} = \hat{z}\_k$, $S\_{k,1} = S\_k$. *Also let* $j = 2$.

$$\begin{aligned} \Psi\_{k,j} &= \begin{bmatrix} \hat{z}\_{k,j-1} & \hat{z}\_{k,j-1} + hS\_{k,j-1} & \hat{z}\_{k,j-1} - hS\_{k,j-1} \end{bmatrix} \\\\ \Psi\_j &= f\left(\Psi\_{k,j}\right) \\\\ \hat{z}\_{k,j}^- &= \sum\_{i=0}^{2L} W\_i^{(m)} \Psi\_{i,j} \\\\ S\_k^- &= qr\left\{\left[\sqrt{W\_1^{(c)}}\left(\Psi\_{1:2L,j} - \hat{z}\_{k,j}^-\right) \quad \sqrt{R^w}\right]\right\} \\\\ S\_k^- &= \text{cholupdate}\left\{S\_k^-,\ \Psi\_{0,j} - \hat{z}\_{k,j}^-,\ W\_0^{(c)}\right\} \\\\ Y\_{k,j} &= h\left[\Psi\_{k,j}\right] \\\\ \hat{y}\_{k,j}^- &= \sum\_{i=0}^{2L} W\_i^{(m)} Y\_{i,j} \\\\ S\_{\tilde{y}\_{k,j}} &= qr\left\{\left[\sqrt{W\_1^{(c)}}\left(Y\_{1:2L,j} - \hat{y}\_{k,j}^-\right) \quad \sqrt{R\_k^v}\right]\right\} \end{aligned}$$

$$\begin{aligned} S\_{\tilde{y}\_{k,j}} &= \text{cholupdate}\left\{S\_{\tilde{y}\_{k,j}},\ Y\_{0,j} - \hat{y}\_{k,j}^-,\ W\_0^{(c)}\right\} \\\\ P\_{z\_k y\_k} &= \sum\_{i=0}^{2L} W\_i^{(c)}\left[\Psi\_{i,j} - \hat{z}\_{k,j}^-\right]\left[Y\_{i,j} - \hat{y}\_{k,j}^-\right]^T \\\\ K\_{k,j} &= P\_{z\_k y\_k} / S\_{\tilde{y}\_{k,j}}^T / S\_{\tilde{y}\_{k,j}} \\\\ \hat{z}\_{k,j} &= \hat{z}\_{k,j}^- + K\_{k,j}\left(y\_k - \hat{y}\_{k,j}^-\right) \\\\ U &= K\_{k,j} S\_{\tilde{y}\_{k,j}} \\\\ S\_{k,j} &= \text{cholupdate}\left\{S\_{k,j-1}, U, -1\right\} \end{aligned}$$

*End*

$$U = K\_{k,m} S\_{\tilde{y}\_{k,m}}$$


$$S\_k = \text{cholupdate}\left\{S\_{k,m}, U, -1\right\}$$

*Return the augmented state estimation*

#### Algorithm 8: ISRCDKF algorithm

• *Initialization step:* $\hat{z}\_0 = E[z\_0]$

$$S\_0 = \text{chol}\left\{E\left[\left(z\_0 - \hat{z}\_0\right)\left(z\_0 - \hat{z}\_0\right)^T\right]\right\}$$

• *Prediction step:*

$$
\Psi\_{k-1} = \begin{bmatrix} \hat{z}\_{k-1} & \hat{z}\_{k-1} + hS\_{k-1} & \hat{z}\_{k-1} - hS\_{k-1} \end{bmatrix},
$$

$$\Psi\_{k/k-1} = f\left[\Psi\_{k-1}\right]$$


$$z\_k^- = \sum\_{i=0}^{2L} W\_i^{(m)} \Psi\_{i,k|k-1}$$

$$\begin{aligned} S\_k^- &= qr\left\{\left[\sqrt{W\_1^{(c\_1)}}\left(\Psi\_{1:L,k|k-1} - \Psi\_{L+1:2L,k|k-1}\right) \quad \sqrt{W\_1^{(c\_2)}}\left(\Psi\_{1:L,k|k-1} + \Psi\_{L+1:2L,k|k-1} - 2\Psi\_{0,k|k-1}\right)\right]\right\} \\\\ \Psi\_{(k|k-1)} &= \begin{bmatrix} \hat{z}\_{(k|k-1)} & \hat{z}\_{(k|k-1)} + hS\_k^- & \hat{z}\_{(k|k-1)} - hS\_k^- \end{bmatrix} \end{aligned}$$

• *Estimation (update) step:*


$$z\_0 = \hat{z}\_k^-$$

• *Iteration: for j=0: m*

$$\begin{aligned} \Psi\_j &= \begin{bmatrix} z\_j & z\_j + hS\_k^- & z\_j - hS\_k^- \end{bmatrix} \\\\ Y\_j &= h\left(\Psi\_j\right) \\\\ \hat{y}\_{k,j}^- &= \sum\_{i=0}^{2L} W\_i^{(m)} Y\_{i,j} \\\\ S\_{\tilde{y}\_k} &= qr\left\{\left[\sqrt{W\_1^{(c\_1)}}\left(Y\_{1:L,j} - Y\_{L+1:2L,j}\right) \quad \sqrt{W\_1^{(c\_2)}}\left(Y\_{1:L,j} + Y\_{L+1:2L,j} - 2Y\_{0,j}\right)\right]\right\} \\\\ P\_{z\_k y\_k} &= \sqrt{W\_1^{(c\_1)}}\, S\_k^-\left[Y\_{1:L,j} - Y\_{L+1:2L,j}\right]^T \\\\ K\_{k,j} &= P\_{z\_k y\_k} / S\_{\tilde{y}\_k}^T / S\_{\tilde{y}\_k} \\\\ \hat{z}\_{k,j} &= \hat{z}\_k^- + K\_{k,j}\left(y\_k - \hat{y}\_{k,j}^-\right) \\\\ U &= K\_{k,j} S\_{\tilde{y}\_k} \end{aligned}$$

$$S\_{k,j} = \text{cholupdate}\left\{S\_k^-, U, -1\right\}$$

*End*

$$U = K\_{k,m} S\_{\tilde{y}\_k}$$

$$S\_k = \text{cholupdate}\left\{S\_{k,m}, U, -1\right\}$$

*Return the augmented state estimation*

The nonstationary observation model is given by,

$$y\_k = C x\_k + D U\_2\left(k\right) + \eta\_k \tag{91}$$

where $C = \begin{bmatrix} 1 & 0 \end{bmatrix}$ and $D = \begin{bmatrix} 0 & 0.2 \end{bmatrix}$. The observation noise $\eta\_k$ is a Gaussian noise $N(\eta; 0, 3 \cdot 10^{-1})$. Given only the noisy observations $y\_k$, the different filters were used to estimate the underlying clean state sequence for $k = 1, \ldots, 100$.

#### *4.1.1. Generation of dynamic data*

It must be noted that the simulated states are assumed to be noise‐free; the observations are contaminated with Gaussian noise. Given the noisy observations $y\_k$, the various Kalman filters were used to estimate the clean state sequence $x\_k = \begin{bmatrix} x\_1 & x\_2 \end{bmatrix}^T$ for $k = 1, \ldots, 100$. **Figure 1** shows the changes in the state variable $x\_1$.

**Figure 1.** Simulated data used in estimation: state variable (x1).

Here, the number of sigma points is fixed to 9 for all the techniques (*L* = 4). The process noise $N(\nu; 0, 10^{-1})$ was added, and the observation noise is $N(\eta; 0, 3 \cdot 10^{-1})$. The initial value of the state vector is $x\_0 = \begin{bmatrix} 1 & 0 \end{bmatrix}^T$.
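The simulated data of Figure 1 can be regenerated along the following lines. The chapter does not specify the inputs $U\_1(k)$ and $U\_2(k)$, so unit steps are assumed here, and the input term $D\,U\_2(k)$ is treated as a scalar offset of 0.2; the matrices and noise variances are those quoted above:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.9223, -0.9604],
              [1.0,     0.0]])
B = np.array([0.2, 0.2])
C = np.array([1.0, 0.0])
d = 0.2                            # assumed scalar observation input gain
q, r = 1e-1, 3e-1                  # process / observation noise variances
N = 100

x = np.zeros((N + 1, 2))
x[0] = [1.0, 0.0]                  # initial state x0
y = np.zeros(N)
for k in range(N):
    u1 = u2 = 1.0                  # assumed unit-step inputs (not given in the text)
    y[k] = C @ x[k] + d * u2 + rng.normal(0.0, np.sqrt(r))                # eq. (91)
    x[k + 1] = A @ x[k] + B * u1 + rng.normal(0.0, np.sqrt(q), size=2)    # eq. (90)
```

The eigenvalues of $A$ have magnitude $\sqrt{0.9604} \approx 0.98$, so the simulated trajectory is stable but lightly damped, which is what gives Figure 1 its oscillatory look.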

#### *4.1.2. Comparative study: estimation of state variables from noisy measurements*


The purpose of this study is to compare the estimation accuracy of the UKF, IUKF, CDKF, ICDKF, SRUKF, ISRUKF, SRCDKF and ISRCDKF methods when they are utilized to estimate the state variables of the system. Hence, it is considered that the state vector to be estimated is $z\_k = x\_k$, and the model parameters $P\_1$ and $P\_2$ are assumed to be known. The simulation results for the estimation of the state variable $x\_k$ using the UKF, CDKF, SRUKF and SRCDKF methods and the IUKF, ICDKF, ISRUKF and ISRCDKF methods are shown in **Figures 2** and **3**, respectively. Also, the performance comparison of the state estimation techniques in terms of RMSE and execution time is presented in **Table 1**.

**Figure 2.** Estimation of state variables using various state estimation techniques (UKF, CDKF, SRUKF and SRCDKF).

**Figure 3.** Estimation of state variables using various state estimation techniques (IUKF, ICDKF, ISRUKF and ISRCDKF).


| Technique | *x*1 (RMSE) | *x*2 (RMSE) | Execution time (s) |
|---|---|---|---|
| UKF | 0.3539 | 0.4658 | 0.3577 |
| IUKF | 0.3342 | 0.4341 | 0.5952 |
| CDKF | 0.3512 | 0.4583 | 0.3367 |
| ICDKF | 0.3265 | 0.4315 | 0.4351 |
| SRUKF | 0.3495 | 0.4590 | 0.3354 |
| ISRUKF | 0.3254 | 0.4256 | 0.5803 |
| SRCDKF | 0.3324 | 0.4593 | 0.2586 |
| ISRCDKF | 0.3121 | 0.4213 | 0.4229 |

**Table 1.** Comparison of state estimation techniques.

It is easily observed from **Figures 2** and **3** as well as **Table 1** that the ISRCDKF method achieves a better accuracy than the other methods.

#### *4.1.3. Comparative study: simultaneous estimation of state variables and model parameters*

The state variables and parameters are estimated and performed using the state estimation techniques UKF, IUKF, CDKF, ICDKF, SRUKF, ISRUKF, SRCDKF and ISRCDKF. The results of estimation for the model parameters using the estimation techniques (UKF, IUKF, CDKF, ICDKF, SRUKF, ISRUKF, SRCDKF and ISRCDKF) are shown in **Figures 4** and **5**, respectively. It can be seen from the results presented in **Figures 4** and **5** that the IUKF, CDKF, ICDKF, SRUKF, ISRUKF, SRCDKF and ISRCDKF methods outperform the UKF method, and that the ISRCDKF shows relative improvement over all other techniques. These results confirm the results obtained in the first comparative study, where only the state variable is estimated. The advantages of the ISRCDKF over the other techniques can also be seen through their abilities to estimate the model parameters.

**Figure 4.** Estimation of the model parameters (*P*1, *P*2) using UKF, CDKF, SRUKF and SRCDKF.


**Figure 5.** Estimation of the model parameters (*P*1, *P*2) using IUKF, ICDKF, ISRUKF and ISRCDKF.

#### *4.1.3.1. Root Mean Square Error analysis*


The effects of the practical challenges on the performances of the UKF, IUKF, CDKF, ICDKF, SRUKF, ISRUKF, SRCDKF and ISRCDKF for state and parameter estimation are investigated in the next section.

#### *4.1.3.1.1. Effect of the number of states and parameters to estimate on the estimation RMSE*

To study the effect of the number of states and parameters to be estimated on the estimation performances of UKF, IUKF, CDKF, ICDKF, SRUKF, ISRUKF, SRCDKF and ISRCDKF, the estimation performance is analyzed for different numbers of estimated states and parameters. Here, we will consider two cases, which are summarized below. In all cases, it is assumed that the state is measured.

Case 1: the state along with the first parameter $P\_1$ will be estimated.

Case 2: the state along with the two parameters 1 and 2 will be estimated.
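In joint state-parameter estimation, a standard construction is to append the unknown parameters to the state vector and let them follow a random walk, so that any of the filters above can estimate them alongside the states. A minimal sketch of this augmentation (the dynamics `f_demo` is a hypothetical placeholder, not the chapter's model):

```python
import numpy as np

def augment(f, n_states):
    """Wrap dynamics f(x, p) into dynamics on the augmented vector z = [x; p].

    The parameters are held constant (random-walk model); their uncertainty is
    injected through the augmented process-noise covariance in the filter.
    """
    def f_aug(z):
        x, p = z[:n_states], z[n_states:]
        return np.concatenate([f(x, p), p])   # propagate states, keep parameters
    return f_aug

# Hypothetical two-state dynamics with parameter(s) p, used only for illustration.
def f_demo(x, p):
    return x + 0.1 * p[0] * x

# Case 1: augmented vector [x1, x2, P1]; case 2 would append P2 as well.
f1 = augment(f_demo, n_states=2)
z_next = f1(np.array([1.0, 2.0, 1.9]))        # states advance, parameter is held
```

The filter's process-noise covariance for the parameter entries then controls how quickly the parameter estimates are allowed to move before settling at their steady-state means.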

The estimation of the state variables and parameter(s) for these two cases is performed using the UKF, IUKF, CDKF, ICDKF, SRUKF, ISRUKF, SRCDKF and ISRCDKF, and the simulation results for the state variables and the model parameters for the two cases are shown in **Tables 2** and **3**, respectively. For case 1, **Table 2** compares the estimation RMSEs of the two state variables (with respect to the noise‐free data) and the mean of the estimated parameter *P*1 at steady state (i.e., after convergence of the parameter) for the various estimation methods; **Table 3** presents a similar comparison for case 2.

| **Technique** | *x*1 **(RMSE)** | *x*2 **(RMSE)** | *P*1 **(mean)** | **Execution time (s)** |
|---|---|---|---|---|
| UKF | 0.4221 | 0.5418 | 2.2453 | 0.3906 |
| IUKF | 0.3854 | 0.5093 | 1.9826 | 0.6963 |
| CDKF | 0.4192 | 0.5205 | 2.2232 | 0.3696 |
| ICDKF | 0.3827 | 0.4920 | 1.9786 | 0.5160 |
| SRUKF | 0.4063 | 0.4978 | 2.2228 | 0.3835 |
| ISRUKF | 0.3757 | 0.4748 | 1.9661 | 0.6798 |
| SRCDKF | 0.3970 | 0.4943 | 2.1858 | 0.3420 |
| ISRCDKF | 0.3737 | 0.4720 | 1.9297 | 0.5154 |

**Table 2.** Root mean square errors of estimated state variables and mean of estimated parameter: case 1.

The results also show that the number of parameters to be estimated affects the estimation accuracy of the state variables. In other words, for all the techniques, the estimation RMSE of *x*1 increases from the first comparative study (where only the state variables are estimated) to case 1 (where the states and one parameter, *P*1, are estimated) to case 2 (where the states and two parameters, *P*1 and *P*2, are estimated). For example, the RMSEs obtained using the ISRCDKF for *x*1 in the first comparative study and in cases 1–2 of the second comparative study are 0.3121, 0.3737 and 0.3846, respectively, which increase as the number of estimated parameters increases (see **Tables 2** and **3**). This observation holds for the other state estimation techniques as well.

It can also be seen from **Tables 2** and **3** that, for all the techniques, estimating more model parameters degrades the estimation accuracy. The ISRCDKF method, however, still provides advantages over the other methods in terms of the estimation accuracy.

| **Technique** | *x*1 **(RMSE)** | *x*2 **(RMSE)** | *P*1 **(mean)** | *P*2 **(mean)** |
|---|---|---|---|---|
| UKF | 0.1962 | 0.6590 | 1.9484 | −0.9798 |
| IUKF | 0.4056 | 0.4927 | 1.9408 | −0.9721 |
| CDKF | 0.4170 | 0.4932 | 1.9482 | −0.9786 |
| ICDKF | 0.4012 | 0.4908 | 1.9389 | −0.9720 |
| SRUKF | 0.4133 | 0.4977 | 1.9481 | −0.9776 |
| ISRUKF | 0.3989 | 0.4843 | 1.9342 | −0.9677 |
| SRCDKF | 0.4090 | 0.4956 | 1.9436 | −0.9741 |
| ISRCDKF | 0.3846 | 0.4875 | 1.9305 | −0.9486 |

**Table 3.** Root mean square errors of estimated state variables and mean of estimated parameter: case 2.

#### *4.1.3.1.2. Effect of noise content on the estimation RMSE*

It is assumed that measurement noise is added to the state variables. In order to assess the performance of the estimation algorithms in the presence of noise, three different measurement noise variances, 10<sup>−1</sup>, 10<sup>−2</sup> and 10<sup>−3</sup>, are considered. The simulation results of estimating the states using the UKF, IUKF, CDKF, ICDKF, SRUKF, ISRUKF, SRCDKF and ISRCDKF methods for these three noise variances are shown in **Table 4**.

Nonlinear State and Parameter Estimation Using Iterated Sigma Point Kalman Filter: Comparative Studies http://dx.doi.org/10.5772/63728 147

**Table 4.** Root mean square errors (RMSEs) of the estimated states for different noise levels.

For all the estimation techniques, the estimation RMSEs of the states increase as the measurement noise variance increases from 10<sup>−3</sup> to 10<sup>−1</sup>. For example, the RMSEs obtained using the ISRCDKF for *x*1 with noise variances of 10<sup>−1</sup>, 10<sup>−2</sup> and 10<sup>−3</sup> are 0.3121, 0.1066 and 0.0376, respectively (refer to **Table 4**).

## **5. Conclusions**

In this chapter, various SPKF‐based methods are used to estimate nonlinear state variables and model parameters, and their estimation performances are compared in two comparative studies. In the first comparative study, the state variables are estimated from noisy measurements of these variables, and the estimation methods are compared through the RMSE with respect to the noise‐free data. In the second comparative study, the state variables as well as the model parameters are estimated. In comparing the performances of the several state estimation extensions, the impact of the number of estimated model parameters on the convergence and accuracy of these methods is also evaluated. The results of the second comparative study show that, for all the techniques, estimating more model parameters affects the estimation accuracy as well as the convergence of the estimated states and parameters. The iterated square‐root central difference Kalman filter, however, still provides advantages over the other methods in terms of estimation accuracy, convergence and execution times.

## **Acknowledgements**

This work was made possible by NPRP grant NPRP7‐1172‐2‐439 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
