**Optimal Solution to Matrix Riccati Equation – For Kalman Filter Implementation**

Bhar K. Aliyu, Charles A. Osheku, Lanre M.A. Adetoro and Aliyu A. Funmilayo

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/46456

## **1. Introduction**

96 MATLAB – A Fundamental Tool for Scientific Computing and Engineering Applications – Volume 3


Matrix Riccati Equations arise frequently in applied mathematics, science, and engineering. These nonlinear matrix equations are particularly significant in optimal control, filtering, and estimation problems; indeed, solving a Riccati equation is a central issue in optimal control theory. Such equations are common in the analysis and synthesis of Linear Quadratic Gaussian (LQG) control problems. In one form or another, Riccati Equations play significant roles in the optimal control of multivariable and large-scale systems, scattering theory, estimation, and detection processes. Closed-form solutions of Riccati Equations are intractable for two reasons: first, they are nonlinear, and second, they are in matrix form. In the past, a number of unconventional numerical methods were employed for the solution of time-invariant Riccati Differential Equations (RDEs). Despite their peculiar structure, no unconventional methods suitable for time-varying RDEs have been constructed, except for carefully re-designed conventional linear multistep and Runge-Kutta (RK) methods.

Implicit conventional methods, which are preferred to explicit ones for stiff systems, require the solution of nonlinear systems of equations of higher dimension than the original problem at each step, for example in implicit Runge-Kutta methods. Such procedures not only pose implementation difficulties but are also very expensive, because they require solving nonlinear matrix equations.

In this Chapter, we focus our attention on the numerical solution of Riccati Differential Equations (RDEs) for computer-aided control systems design, using the adaptive-step numerical algorithm of *Dormand-Prince*. This is a key step in many computational methods for model reduction, filtering, and controller design for linear systems. We shall limit our investigation to the optimality of the numeric solution to the Riccati equation as it affects the design of a Kalman-Bucy filter state estimator, for LQG control of an

© 2012 Aliyu et al., licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Expendable Launch Vehicle (ELV) in the pitch plane during atmospheric ascent. Furthermore, the approach in the paper by Aliyu Kisabo Bhar et al. will be fully employed, from a comparative standpoint, for the solution of a differential Riccati equation and an Algebraic Riccati Equation for Kalman-Bucy filter implementation.


## **2. Kalman filter**

Theoretically, the Kalman Filter is an estimator for the linear-quadratic problem: a technique for estimating the instantaneous 'state' of a linear dynamic system perturbed by white noise, using measurements that are linearly related to the state but corrupted by white noise. The resulting estimator is statistically optimal with respect to any quadratic function of the estimation error.

In estimation theory, Kalman introduced stochastic notions applicable to non-stationary, time-varying systems, via a recursive solution. The underlying least-squares approach dates back to C.F. Gauss (1777-1855), who used it in planetary orbit problems. The Kalman filter is the natural extension of the Wiener filter to non-stationary stochastic systems. The theory of Kalman has also provided optimal solutions for control systems with guaranteed performance, computed by solving formal matrix design equations that generally have unique solutions. By way of reference, the U.S. space program blossomed with a Kalman filter providing navigational data for the first lunar landing.

Practically, it is one of the celebrated discoveries in the history of statistical estimation theory in the twentieth century. It has enabled humankind to do many things; one obvious advantage is its indispensability, as silicon integration, in the makeup of electronic systems. Its most dependable application is in the control of complex dynamic systems such as continuous manufacturing processes, aircraft, ships, or spacecraft. To control a dynamic system, you must first know what it is doing. For these applications, it is not always possible or desirable to measure every variable that you want to control, and the Kalman filter provides a means of inferring the missing information from indirect (and noisy) measurements. The Kalman filter is also very useful for predicting the likely future course of dynamic systems that people are not likely to control, such as the flow of rivers during flood, the trajectories of celestial bodies, or the prices of traded commodities.

The Kalman filter is ideally suited to digital computer implementation, arising as it does from a finite representation of the estimation problem by a finite number of variables. Usually, these variables are assumed to be real numbers with infinite precision. Some of the problems encountered in its use arise from the distinction between 'finite' and 'manageable' problem sizes. These are significant issues on the practical side of Kalman filtering that must be considered in conjunction with the theory. The filter is also more than an estimator: it is a complete statistical characterization of an estimation problem, because it propagates the entire probability distribution of the variables it is tasked to estimate. This is a complete characterization of the current state of knowledge of the dynamic system, including the influence of all past measurements. These probability distributions are also useful for statistical analysis and predictive design of sensor systems.

The applications of Kalman filtering encompass many fields, but its use as a tool is almost exclusively for two purposes: estimation and performance analysis of estimators. Figure 1 depicts the foundational concepts of Kalman filtering theory.

Despite the indicated position of Kalman filtering at the apex of the pyramid, it is an integral part of the foundation of another discipline, *modern control theory*, and a proper subset of statistical decision theory.

**Figure 1.** Foundation concept in Kalman filtering.

The Kalman filter analyses a dynamic system's behavior in the presence of process and measurement noise. In general, the output *y* is affected by measurement noise; in addition, the process dynamics are affected by disturbances, such as atmospheric turbulence. We shall now present the model for the LQG control of the ELV via equation (1). The essential assumptions are: the dynamic system is linear, the performance cost function is quadratic, and the random processes are Gaussian.

The continuous-time process and measurement model is defined as follows:

$$\begin{aligned} \dot{\mathbf{x}} &= A\mathbf{x} + Bu + G\mathbf{w}, \\ \mathbf{y} &= \mathbf{C}\mathbf{x} + \mathbf{v}. \end{aligned} \tag{1}$$

where *w* and *v* are zero-mean Gaussian noise processes, uncorrelated with each other. The following process and measurement covariance matrices hold:

$$\begin{cases} Ev(t)v^T(s) = R\_N \delta(t-s) \\ Ev(t)w^T(s) = 0 \\ Ew(t)w^T(s) = Q\_N \delta(t-s) \end{cases} \quad t, s \in \mathfrak{R} \tag{2}$$

From the foregoing, the Kalman filter equation admits the form

$$
\dot{\hat{\chi}} = A\hat{\chi} + Bu + L(y - \hat{y}),
\tag{3}
$$

where *L* is the Kalman gain represented as

$$L = \mathbf{P} \mathbf{C}^T \mathbf{R}\_N^{-1}.\tag{4}$$

The covariance matrix *P,* in equation (4) is the solution to a Riccati Differential Equation (RDE) or an Algebraic Riccati Equation (ARE).
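In the steady-state case, *P* solves the ARE, and the gain *L* of equation (4) follows directly. A minimal Python/NumPy sketch of this computation (the chapter itself works in MATLAB, and the 2-state system matrices below are illustrative assumptions, not the ELV model):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative 2-state system (hypothetical numbers, not the ELV model)
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
C = np.array([[1.0, 0.0]])
G = np.array([[0.0], [1.0]])
Q_N = np.array([[0.1]])   # process-noise intensity
R_N = np.array([[0.01]])  # measurement-noise intensity

# Steady-state covariance P solves the filter ARE:
#   A P + P A^T - P C^T R_N^{-1} C P + G Q_N G^T = 0
# (solve_continuous_are solves a^T X + X a - X b r^{-1} b^T X + q = 0,
#  so pass a = A^T, b = C^T for the filter/dual form.)
P = solve_continuous_are(A.T, C.T, G @ Q_N @ G.T, R_N)

# Kalman gain, eq. (4): L = P C^T R_N^{-1}
L = P @ C.T @ np.linalg.inv(R_N)

# Residual of the ARE should be numerically zero
residual = A @ P + P @ A.T - P @ C.T @ np.linalg.inv(R_N) @ C @ P + G @ Q_N @ G.T
print(np.max(np.abs(residual)))
```

This mirrors the MATLAB route via `care`/`lqe`; the duality substitution (passing transposed matrices) is what adapts the controller-form solver to the filter ARE.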

## **3. Riccati equation**

In mathematics, a Riccati equation is any ordinary differential equation that is quadratic in the unknown function. In other words, it is an equation of the form

$$y'(\mathbf{x}) = q\_0(\mathbf{x}) + q\_1(\mathbf{x})y(\mathbf{x}) + q\_2(\mathbf{x})y^2(\mathbf{x}),\tag{5}$$


where *q0(x)≠0* and *q2(x)≠0* (if *q0(x)=0* the equation reduces to a Bernoulli equation, and if *q2(x)=0* to a first-order linear ordinary differential equation). It is named after Count Jacopo Francesco Riccati (1676-1754).

More generally, "Riccati equations" refer to matrix equations with analogous quadratic terms both in continuous-time and in discrete-time Linear-Quadratic-Gaussian Control. The steady-state (non-dynamic) versions of these equations are classified as algebraic Riccati equations.

#### **3.1. Riccati differential equation (RDE)**

The Riccati differential equation was first studied in the eighteenth century as a nonlinear scalar differential equation, and a method was derived for transforming it into linear form. This same method works when the dependent variable of the original Riccati differential equation is a matrix.

The statistical performance of the Kalman filter estimator can be predicted a priori by solving the Riccati equations for computing the optimal feedback gain of the estimator. Also, the behaviors of their solutions can be shown analytically for the most trivial cases. These equations also provide a means for verifying the proper performance of the actual estimator when it is running.

For the LQG problem, the associated Riccati Differential Equation which provides the covariance *P(t*) needed for the solution of Kalman gain is of the form,

$$\dot{P}(t) = A\_{\phi}(t)P(t) + P(t)A\_{\phi}^{\top}(t) + G(t)Q\_{N}(t)G^{\top}(t) - P(t)\mathbb{C}^{\top}(t)R\_{N}^{-1}(t)\mathbb{C}(t)P(t) \tag{6}$$

where *Aϕ* is the state dynamics matrix of the process model, defined by

$$\frac{d}{dt}\mathbf{x}(t) = A(t)\mathbf{x}(t) + G(t)\mathbf{w}(t). \tag{7}$$

The Riccati Differential Equation in (6) can be solved by using a technique called Matrix Fraction Decomposition. A matrix product of the sort *AB*<sup>-1</sup> is called a *matrix fraction*, and we consider a representation of a matrix *M* in the form

$$M = AB^{-1} \tag{8}$$

A fractional decomposition of the covariance matrix results in linear differential equations for the numerator and denominator matrices. These matrices are propagated as functions of time such that the product *A(t)B*<sup>-1</sup>*(t)* satisfies the matrix Riccati equation and its boundary conditions. We take the derivative of the matrix fraction *A(t)B*<sup>-1</sup>*(t)* with respect to *t*, using the fact that

$$\frac{d}{dt}B^{-1}(t) = -B^{-1}(t)\dot{B}(t)B^{-1}(t),\tag{9}$$

Now let us represent the covariance matrix *P(t)* by

$$P(t) = A(t)B^{-1}(t),\tag{10}$$

and on applying equations (9-10) yields


$$\frac{dP(t)}{dt} = \dot{A}(t)B^{-1}(t) - A(t)B^{-1}(t)\dot{B}(t)B^{-1}(t) \tag{11}$$

Substituting *A(t)B*<sup>-1</sup>*(t)* for *P(t)* in the right-hand side of the Riccati equation (6) leads to the following:

$$\begin{aligned}\frac{dP(t)}{dt} = {} & A\_{\phi}(t)A(t)B^{-1}(t) + A(t)B^{-1}(t)A\_{\phi}^{T}(t) + G(t)Q\_{N}(t)G^{T}(t) \\ & - A(t)B^{-1}(t)C^{T}(t)R\_{N}^{-1}(t)C(t)A(t)B^{-1}(t). \end{aligned} \tag{12}$$

Equating (11) and (12) and multiplying through on the right by *B(t)* yields

$$\begin{aligned}\dot{A}(t) - A(t)B^{-1}(t)\dot{B}(t) = {} & \left[ A\_{\phi}(t)A(t) + G(t)Q\_{N}(t)G^{T}(t)B(t) \right] \\ & - A(t)B^{-1}(t)\left[ C^{T}(t)R\_{N}^{-1}(t)C(t)A(t) - A\_{\phi}^{T}(t)B(t) \right]. \end{aligned} \tag{13}$$

Therefore, if we find *A(t)* and *B(t)* that satisfy:

$$\dot{A}(t) = A\_{\phi}(t)A(t) + G(t)Q\_{N}(t)G^{T}(t)B(t),\tag{14}$$

$$\dot{B}(t) = \mathbb{C}^{T}(t)R\_{N}^{-1}(t)\mathbb{C}(t)A(t) - A\_{\phi}^{T}(t)B(t),\tag{15}$$

then *P(t)=A(t)B*<sup>-1</sup>*(t)* satisfies the Riccati differential equation. Note that equations (14) and (15) are linear differential equations in the matrices *A(t)* and *B(t)*. The foregoing can be arranged as follows:

$$\begin{pmatrix} \dot{A}(t) \\ \dot{B}(t) \end{pmatrix} = \begin{bmatrix} A\_{\phi}(t) & G(t)Q\_{N}(t)G^{T}(t) \\ C^{T}(t)R\_{N}^{-1}(t)C(t) & -A\_{\phi}^{T}(t) \end{bmatrix} \begin{pmatrix} A(t) \\ B(t) \end{pmatrix} \tag{16}$$

The coefficient matrix in this representation is the Hamiltonian matrix associated with the matrix Riccati differential equation:

$$\Psi(t) = \begin{bmatrix} A\_{\phi}(t) & G(t)Q\_{N}(t)G^{T}(t) \\ C^{T}(t)R\_{N}^{-1}(t)C(t) & -A\_{\phi}^{T}(t) \end{bmatrix} \tag{17}$$
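The equivalence just derived can be checked numerically: propagating the linear system (14)-(15) and forming *P(t)=A(t)B*<sup>-1</sup>*(t)* should reproduce the solution of the RDE (6) integrated directly. A Python/NumPy sketch with illustrative 2-state time-invariant data (hypothetical numbers, not the chapter's model):

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 2
# Illustrative time-invariant data (assumed for this check)
Aphi = np.array([[0.0, 1.0], [-1.0, -0.3]])
C = np.array([[1.0, 0.0]])
G = np.array([[0.0], [1.0]])
QN = np.array([[0.5]])
RN = np.array([[0.1]])
P0 = np.eye(n)

GQG = G @ QN @ G.T                 # G Q_N G^T
CRC = C.T @ np.linalg.inv(RN) @ C  # C^T R_N^{-1} C

def linear_sys(t, y):
    # eqs (14)-(15): Adot = Aphi A + G Q G^T B,  Bdot = C^T R^-1 C A - Aphi^T B
    A = y[:n * n].reshape(n, n)
    B = y[n * n:].reshape(n, n)
    return np.concatenate([(Aphi @ A + GQG @ B).ravel(),
                           (CRC @ A - Aphi.T @ B).ravel()])

def rde(t, p):
    # eq (6), time-invariant form
    P = p.reshape(n, n)
    return (Aphi @ P + P @ Aphi.T + GQG - P @ CRC @ P).ravel()

y0 = np.concatenate([P0.ravel(), np.eye(n).ravel()])  # A(0)=P0, B(0)=I
lin = solve_ivp(linear_sys, (0.0, 2.0), y0, rtol=1e-10, atol=1e-12)
A_f = lin.y[:n * n, -1].reshape(n, n)
B_f = lin.y[n * n:, -1].reshape(n, n)
P_fraction = A_f @ np.linalg.inv(B_f)  # P(t) = A(t) B^{-1}(t)

direct = solve_ivp(rde, (0.0, 2.0), P0.ravel(), rtol=1e-10, atol=1e-12)
P_direct = direct.y[:, -1].reshape(n, n)

print(np.max(np.abs(P_fraction - P_direct)))  # agreement to integration tolerance
```

The payoff of the matrix-fraction route is that the propagated system is linear, so stiff nonlinear solvers are not required for it.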


The initial values of *A(t)* and *B(t)* must be constrained by the initial value of *P(t)*. This is easily satisfied by taking *A(0)=P0* and *B(0)=I*, an identity matrix.

In the time-invariant case, the Hamiltonian matrix Ψ is also time-invariant. As a consequence, the solution for the numerator *A(t)*and denominator *B(t)*of the matrix fraction can be represented in matrix form as the product

$$
\begin{bmatrix} A(t) \\ B(t) \end{bmatrix} = e^{\Psi t} \begin{bmatrix} P(0) \\ I \end{bmatrix} \tag{18}
$$

where *e*<sup>Ψ*t*</sup> is a *2n × 2n* matrix.
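Equation (18) can be evaluated directly with a matrix exponential. A scalar sketch with illustrative numbers (assumptions, not the chapter's data), checked against the positive steady-state root of the corresponding algebraic equation:

```python
import numpy as np
from scipy.linalg import expm

# Scalar time-invariant example (illustrative values)
A_phi, G, C, Q, R = -1.0, 1.0, 1.0, 2.0, 0.5
P0 = 1.0

# Hamiltonian matrix of eq. (17) (2x2 in the scalar case)
Psi = np.array([[A_phi, G * Q * G],
                [C * C / R, -A_phi]])

def P_of_t(t):
    # eq. (18): [A(t); B(t)] = e^{Psi t} [P(0); 1], then P(t) = A(t)/B(t)
    ab = expm(Psi * t) @ np.array([P0, 1.0])
    return ab[0] / ab[1]

# For large t, P(t) approaches the positive root of the scalar ARE
# 2*A_phi*P - (C^2/R)*P^2 + G^2*Q = 0, cf. eq. (22)
lam = np.sqrt(A_phi**2 + (Q / R) * G**2 * C**2)
P_inf = (A_phi + lam) * R / C**2
print(P_of_t(20.0), P_inf)  # should agree closely
```

The exponential of the Hamiltonian does all the work; no nonlinear equation is ever solved.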

*Convergence Properties of the Scalar Time-Invariant Case.* In this case, the numerator *A(t)* and the denominator *B(t)* of the 'matrix fraction' *A(t)B*<sup>-1</sup>*(t)* are scalars, but Ψ is still a matrix (here *2 × 2*). Considering the case *A(t)→a(t)* and *B(t)→b(t)*, the process and measurement model parameters become

$$\begin{aligned} A\_{\phi}(t) &= A\_{\phi}, \\ G(t) &= G, \\ Q(t) &= Q, \\ R(t) &= R, \\ C(t) &= C. \end{aligned} \tag{19}$$

The scalar time-invariant Riccati differential equation is

$$\dot{P}(t) = A\_{\phi}P(t) + P(t)A\_{\phi} - P(t)CR\_{N}^{-1}CP(t) + GQG^{T} \,. \tag{20}$$

Hence, equation (16) reduces to

$$
\begin{pmatrix} \dot{a} \\ \dot{b} \end{pmatrix} = \begin{bmatrix} A\_{\phi} & GQG^{T} \\ CR\_{N}^{-1}C & -A\_{\phi} \end{bmatrix} \begin{pmatrix} a \\ b \end{pmatrix} \tag{21}
$$

with initial conditions *a(0)=P0* and *b(0)=1*. The eigenvalues of the Hamiltonian matrix are

$$
\lambda\_1, \lambda\_2 = \pm \sqrt{A\_{\phi}^2 + \frac{Q}{R} G^2 C^2} \,. \tag{22}
$$

Using *λ = λ1 = −λ2* and writing *q = G*<sup>2</sup>*Q*, we can write the following:


$$a(t) = \frac{1}{2\lambda} \{ [P\_0(\lambda + A\_\phi) + q]e^{\lambda t} + [P\_0(\lambda - A\_\phi) - q]e^{-\lambda t} \} \tag{23}$$

$$b(t) = \frac{1}{2\lambda q} \{ (\lambda - A\_{\phi}) [P\_0(\lambda + A\_{\phi}) + q]e^{\lambda t} - (\lambda + A\_{\phi}) [P\_0(\lambda - A\_{\phi}) - q]e^{-\lambda t} \}. \tag{24}$$

Consequently, the covariance follows as;

$$P(t) = \frac{a(t)}{b(t)} = q \frac{[P\_0(\lambda + A\_\phi) + q] + [P\_0(\lambda - A\_\phi) - q]e^{-2\lambda t}}{(\lambda - A\_\phi)[P\_0(\lambda + A\_\phi) + q] - (\lambda + A\_\phi)[P\_0(\lambda - A\_\phi) - q]e^{-2\lambda t}}. \tag{25}$$

If the system is *observable*, i.e. *(A,C)* is an observable pair, then the RDE has a positive-definite, symmetric solution for an arbitrary positive-definite initial value of the matrix *P0 > 0*:

$$P(t) > 0 \quad p.d., \quad P(t) = P^T(t) \in R^{n \times n}, \quad \forall t > 0. \tag{26}$$
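The closed form (25) can be checked against direct numerical integration of the scalar RDE (20). A Python sketch with illustrative scalar values (assumptions, not taken from the chapter):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative scalar parameters
A_phi, G, C, Q, R = -0.5, 1.0, 1.0, 1.0, 1.0
P0 = 2.0
q = G**2 * Q
lam = np.sqrt(A_phi**2 + (Q / R) * G**2 * C**2)  # eq. (22)

def P_closed(t):
    # eq. (25)
    num = (P0*(lam + A_phi) + q) + (P0*(lam - A_phi) - q)*np.exp(-2*lam*t)
    den = ((lam - A_phi)*(P0*(lam + A_phi) + q)
           - (lam + A_phi)*(P0*(lam - A_phi) - q)*np.exp(-2*lam*t))
    return q * num / den

# Direct integration of the scalar RDE, eq. (20)
rhs = lambda t, P: 2*A_phi*P - (C**2/R)*P**2 + q
sol = solve_ivp(rhs, (0.0, 5.0), [P0], rtol=1e-10, atol=1e-12,
                dense_output=True)

for t in (0.5, 1.0, 3.0):
    print(t, P_closed(t), sol.sol(t)[0])  # the two columns should match
```

Note how the transient enters only through *e*<sup>−2λ*t*</sup>, so *P(t)* converges monotonically to the steady-state value *q/(λ − Aϕ)* regardless of *P0 > 0*.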

The need to solve the Riccati equation is perhaps the greatest single cause of anxiety and agony on the part of people faced with implementing a Kalman filter. Because there is no general formula for solving higher-order polynomial equations (i.e., beyond quartic), finding closed-form solutions to algebraic Riccati equations by purely algebraic means is very difficult, and it is necessary to employ numerical solution methods. Numbers do not always provide as much insight into the characteristics of the solution as formulas do, but numerical methods are readily applicable to most problems of practical significance.

#### **3.2. Numerical example – An expendable launch vehicle (ELV) autopilot**

This problem is taken from Aliyu et al. It models and simulates an ELV autopilot problem in Matlab/Simulink® by solving the symmetric RDE:

$$
\dot{P}(t) = AP(t) + P(t)A^{T} - P(t)C^{T}R\_{N}^{-1}CP(t) + GQ\_{N}G^{T}, \quad P\_0 = I. \tag{27}
$$

Where,


$$A = \begin{bmatrix} 0 & 1 & 0 \\ 14.7805 & 0 & 0.01958 \\ -100.858 & 0 & -0.1256 \end{bmatrix}, \quad Q_N = \begin{bmatrix} 1.1 \times 10^{-3} & 0 & 0 \\ 0 & 1.1 \times 10^{-3} & 0 \\ 0 & 0 & 1.1 \times 10^{-3} \end{bmatrix},$$

$$R_N = \begin{bmatrix} 4.0 \times 10^{-6} & 0 & 0 \\ 0 & 4.0 \times 10^{-6} & 0 \\ 0 & 0 & 4.0 \times 10^{-6} \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix},$$

and

$$G = \begin{bmatrix} 0 & 14.7805 & -94.8557 \end{bmatrix}^T.$$
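With these matrices, equation (27) can be integrated directly by a general-purpose ODE solver. The sketch below (Python/NumPy is used here purely for illustration, mirroring what the chapter's Simulink model computes) integrates the RDE from *t = 0* to *t = 1* with *P₀ = I*. Since *G* is 3 × 1, the process noise is taken as the scalar intensity *q* = 1.1 × 10⁻³ so that the *GQ_NGᵀ* term conforms with *P*; this is an assumption, as the chapter states a 3 × 3 *Q_N*.

```python
import numpy as np
from scipy.integrate import solve_ivp

# System matrices of the ELV example, as given for equation (27).
A = np.array([[0.0, 1.0, 0.0],
              [14.7805, 0.0, 0.01958],
              [-100.858, 0.0, -0.1256]])
G = np.array([[0.0], [14.7805], [-94.8557]])
C = np.diag([1.0, 1.0, 0.0])
q = 1.1e-3                        # assumed scalar process-noise intensity
RN_inv = np.eye(3) / 4.0e-6       # R_N^{-1}
GQG = q * (G @ G.T)               # G Q_N G^T term of (27)
CRC = C.T @ RN_inv @ C            # C^T R_N^{-1} C term of (27)

def rde(t, p):
    # Right-hand side of the matrix Riccati differential equation (27),
    # with P flattened to a vector for the ODE solver.
    P = p.reshape(3, 3)
    dP = A @ P + P @ A.T - P @ CRC @ P + GQG
    return dP.ravel()

# Tolerances mirror ode45's defaults (RelTol = 1e-3, AbsTol = 1e-6).
sol = solve_ivp(rde, (0.0, 1.0), np.eye(3).ravel(), rtol=1e-3, atol=1e-6)
P1 = sol.y[:, -1].reshape(3, 3)   # covariance matrix P(1)
L1 = P1 @ C.T @ RN_inv            # Kalman gain at t = 1
```

The diagonal of *P(1)* then gives the error variances of the three state estimates, and `L1` the corresponding Kalman gain; the third column of the gain is zero because the third state is unmeasured.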

## **3.3. Numerical Methods and Problem Solving Environment (PSE), for Ordinary Differential Equations**

In the last decade, two distinct directions have emerged in the way Ordinary Differential Equation (ODE) solver software is utilized: Large Scale Scientific Computation and the PSE. Most practicing engineers and scientists, as well as engineering and science students, use numerical software for problem solving, while only a few very specific research applications require large scale computing.

MATLAB® provides several powerful approaches to integrating sets of initial-value ordinary differential equations. We have carried out an extensive study of the requirements. For the simulation of the autopilot problem, we have used the mathematical development environment Matlab/Simulink®. Matlab/Simulink® was chosen as it is widely used in the field of numerical mathematics and supports solving initial-value ordinary differential equations of the type we have in (27) with ease.

From the humble beginnings of Euler's method, numerical solvers started relatively simple and have evolved into the more complex higher-order Taylor methods and the efficient Runge-Kutta methods. The search for more efficient and accurate methods has led to the more complicated variable-step solvers.

#### *3.3.1. One step solver*

For solving an initial value problem

$$y' = f(x, y), \quad y(x_0) = y_0, \tag{28}$$


a numerical method is needed. One step solvers are defined by a function *Ф(x,y,h;f)* which gives approximate values *yi*≔*y(xi)* for the exact solution *y(x)*:

$$y_{i+1} := y_i + h\Phi(x_i, y_i, h; f), \tag{29}$$

$$x_{i+1} := x_i + h, \tag{30}$$

where, *h* denotes the step size. In the following let *x* and *y* be arbitrary but fixed, and *z(t)* is the exact solution of the initial value problem

$$z'(t) = f(t, z(t)), \quad z(x) = y, \tag{31}$$

with the initial values *x* and *y*. Then the function

$$\Delta(x, y, h; f) := \begin{cases} \dfrac{z(x + h) - y}{h} & h \neq 0 \\ f(x, y) & h = 0 \end{cases} \tag{32}$$

describes the differential quotient of the exact solution *z(t)* with step size *h*, whereas *Ф(x,y,h;f)* is the differential quotient of the approximated solution with step size *h*. The difference *τ = ∆ − Ф* is the measure of quality of the approximation method and is denoted as the local discretization error.

In the following, *FN(a,b)* is defined as the set of all functions *f* for which all partial derivatives up to order *N* exist on the region

$$S = \left\{ (x, y) \mid a \le x \le b,\ y \in \mathbb{R}^n \right\}, \quad a, b \text{ finite},\tag{33}$$

where they are continuous and bounded.

One step solvers must fulfill

$$\lim\_{h \to 0} \tau(\mathbf{x}, y, h; f) = 0. \tag{34}$$

This is equivalent to


$$\lim\_{h \to 0} \Phi(\mathbf{x}, y, h; f) = f(\mathbf{x}, y). \tag{35}$$

If this condition holds for all *x ∈ [a,b]*, *f ∈ F1(a,b)*, then *Ф* and the corresponding one step method are called *consistent.* Thus, the one step method is of *order p* if

$$
\tau(x, y, h; f) = \mathcal{O}(h^p),
\tag{36}
$$

holds for all *x ∈ [a,b]*, *y ∈ ℝⁿ*, *f ∈ Fp(a,b)*. The global discretization error

$$e_n(X) := y(X) - y_n, \quad X = x_n \text{ fixed},\ n \text{ variable}, \tag{37}$$

is the difference between exact solution and the approximated solution. The one step method is denoted as *convergent*, if:

$$\lim\_{n \to \infty} \left\| e\_n(X) \right\| = 0. \tag{38}$$

**Theorem:** Methods of order *p > 0* are convergent and it holds

$$e_n(X) = \mathcal{O}(h^{p}).\tag{39}$$

This means that the order of the global discretization error is equal to the order of the local discretization error. The crucial problem concerning one-step methods is the choice of the step size *h*. If the step size is too small, the computational effort of the method is unnecessarily high, but if the step size is too large, the global discretization error increases. For initial values *x0, y0*, a step size as large as possible would be chosen so that the global discretization error stays below a bound *ε* after each step. Therefore, a step size control is necessary.

#### *3.3.2. Explicit Euler*

The most elementary method of solving initial value problems is the explicit Euler. The value of *yi+1* can be calculated the following way:

$$y_{i+1} = y_i + h \cdot f(x_i, y_i). \tag{40}$$


The explicit Euler calculates the new value *y*i+1 by following the tangent at the old value for a distance of *h*. The slope of the tangent is given by the value of *f (xi,yi)*. The explicit Euler uses no step size control; the step size *h* is fixed. Therefore, it is only useful in special cases, where the function to integrate is pretty flat. Nevertheless, it is very easy to implement and calculates very fast, so it can be a good choice.
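The Euler step (40) and the order statement *e_n(X) = O(h^p)* with *p = 1* can be checked empirically; a minimal sketch, using the illustrative test problem *y' = −2y*, *y(0) = 1* (not taken from the chapter), whose exact solution is *y(X) = e^{−2X}*:

```python
import numpy as np

def euler(f, x0, y0, h, X):
    # Fixed-step explicit Euler from x0 to X, as in equation (40).
    n = int(round((X - x0) / h))
    x, y = x0, y0
    for _ in range(n):
        y = y + h * f(x, y)
        x = x + h
    return y

f = lambda x, y: -2.0 * y
X, exact = 1.0, np.exp(-2.0)
errors = [abs(euler(f, 0.0, 1.0, h, X) - exact) for h in (0.1, 0.05, 0.025)]
# Halving h should roughly halve the global error (order p = 1).
orders = [np.log2(errors[i] / errors[i + 1]) for i in range(2)]
```

The estimated orders come out close to 1, as the theorem of the previous section predicts for the explicit Euler method.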

#### *3.3.3. Runge-Kutta method*

The Runge-Kutta methods are a special kind of one-step solvers, which evaluate the right side in each step several times. The intermediate results are combined linearly. The general discretization schema for one-step of a Runge-Kutta method is

$$y_1 = y_0 + h(b_1K_1 + b_2K_2 + \cdots + b_sK_s),\tag{41}$$

with corrections

$$K_i = f\Bigl(x_0 + c_i h,\ y_0 + h\sum_{j=1}^{i-1} a_{ij}K_j\Bigr), \quad i = 1,\dots,s. \tag{42}$$

The coefficients are summarized in a tableau, the so-called Butcher-tableau, see figure 2.

$$\begin{array}{c|cccc} c_1 & 0 & & & \\ c_2 & a_{21} & 0 & & \\ \vdots & \vdots & \ddots & \ddots & \\ c_s & a_{s1} & \cdots & a_{s,s-1} & 0 \\ \hline & b_1 & b_2 & \cdots & b_s \end{array}$$

**Figure 2.** Butcher-tableau.
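The schema (41)-(42) can be written as one short routine that accepts any explicit Butcher tableau; a sketch, instantiated with the classical fourth-order tableau (the tableau values are the standard RK4 coefficients, not from this chapter):

```python
import numpy as np

def rk_step(f, x0, y0, h, c, A, b):
    # One step of a general explicit Runge-Kutta method, equations (41)-(42):
    # each correction K_i uses only the previously computed K_1..K_{i-1}.
    s = len(b)
    K = np.zeros(s)
    for i in range(s):
        K[i] = f(x0 + c[i] * h, y0 + h * np.dot(A[i, :i], K[:i]))
    return y0 + h * np.dot(b, K)

# Classical RK4 tableau as a familiar instance.
c = np.array([0.0, 0.5, 0.5, 1.0])
A = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
b = np.array([1.0, 2.0, 2.0, 1.0]) / 6.0

f = lambda x, y: -2.0 * y
y1 = rk_step(f, 0.0, 1.0, 0.1, c, A, b)   # one step on y' = -2y
```

A single RK4 step with *h = 0.1* already reproduces *e^{−0.2}* to about six digits, illustrating the gain over the explicit Euler method.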

#### **3.4. Step size control**

The Runge-Kutta methods use an equidistant grid, but this is inefficient for most applications. A better solution is to use an adaptive step size control. The grid has to be chosen so that a given accuracy of the numerical solution is reached while the needed computational effort is minimized.


As the characteristics of the solution are a priori unknown, a good grid structure cannot be chosen before the numerical integration. Instead, the grid points have to be adapted during the computation of the solution. Trying to apply this to Runge-Kutta methods lead to the following technique:

To create a method of order *p* (for *yi+1*), it is combined with a method of order *p+1* (for *ŷi+1*).

This method for *yi+1* is called the embedded method. The idea of embedding was developed by Fehlberg and methods using this technique therefore are called *Runge-Kutta-Fehlberg* methods. This leads to a modified Butcher-tableau (see figure 3). The new step size is calculated with

$$h_{new} = h \sqrt[p+1]{\frac{\varepsilon}{\lVert y - \hat{y} \rVert}}, \tag{43}$$

where *ε* denotes the tolerance.
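A minimal sketch of the update rule (43), using the simplest conceivable embedded pair, explicit Euler (order 1) inside Heun's method (order 2), rather than Fehlberg's coefficients; the pair is chosen only for brevity of illustration:

```python
def embedded_step(f, x, y, h, eps):
    # Embedded pair: Euler (order p = 1) shares the stage k1 with
    # Heun's method (order p + 1 = 2); their difference estimates the error.
    k1 = f(x, y)
    k2 = f(x + h, y + h * k1)
    y_low = y + h * k1                      # order-1 (Euler) estimate
    y_high = y + 0.5 * h * (k1 + k2)        # order-2 (Heun) estimate
    err = max(abs(y_high - y_low), 1e-15)   # ||y - y_hat|| in (43), guarded
    h_new = h * (eps / err) ** (1.0 / (1 + 1))   # equation (43) with p = 1
    return y_high, h_new, err

f = lambda x, y: -2.0 * y
y1, h_new, err = embedded_step(f, 0.0, 1.0, 0.1, 1e-4)
```

Here the error estimate exceeds the tolerance, so (43) shrinks the step; a full solver would additionally reject the step and retry with the reduced *h*.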


$$\begin{array}{c|c} c & A \\ \hline & b^{T} \\ & \hat{b}^{T} \end{array} \qquad \begin{aligned} y_1 &= y_0 + h\sum_{i=1}^{s} b_i K_i \\ \hat{y}_1 &= y_0 + h\sum_{i=1}^{s} \hat{b}_i K_i \end{aligned}$$

**Figure 3.** Modified Butcher-tableau for embedded Runge-Kutta-methods.

#### *3.4.1. Error control and variable step size*

The main concern with numerical solvers is the error made when they approximate a solution. The second concern is the number of computations that must be performed. Both of these can be addressed by creating solvers that use a variable step size in order to keep the error within a specified tolerance. By using the largest step size allowable while keeping the error within a tolerance, the error made is reduced.

The way to keep the error under control is to determine the error made at each step. A common way to do this is to use two solvers of orders *p* and *p+1*, as earlier explained*.* Any approximations made of order *p* will have an error no larger than the value of the *p + 1* term. This leads us to take the difference of the two solvers to find the value of the error term.

Since the downside of using two distinct methods is a dramatic increase in computations, another approach is typically used. An example is the Runge-Kutta-Fehlberg algorithm, which combines Runge-Kutta methods of order four and order five into one algorithm, sharing function evaluations between the two. Doing this reduces the number of computations made while returning the same result.

#### *3.4.2. Dormand-Prince method*

The *Dormand-Prince* method is a member of the *Runge-Kutta-Fehlberg* class with order 4(5). It means that the method has order 5 and the embedded method has order 4. This is described by the following equations:

$$\begin{aligned} y(x_0 + h) &= y_0 + h \sum_{k=0}^{4} b_k f_k(x_0, y_0; h), \\ \hat{y}(x_0 + h) &= y_0 + h \sum_{k=0}^{5} \hat{b}_k f_k(x_0, y_0; h), \\ f_k &= f\Bigl(x_0 + c_k h,\ y_0 + h\sum_{i=0}^{k-1} a_{ki} f_i\Bigr) \end{aligned} \tag{44}$$


The coefficients from Dormand and Prince can be seen in figure 4.

**Figure 4.** Butcher-tableau for Dormand-Prince-method.
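SciPy's `RK45` implements the same Dormand-Prince 4(5) pair as MATLAB's *ode45*, so the adaptive behaviour can also be observed outside Simulink; a sketch on an illustrative test problem (*y' = −2y*, not from the chapter), with tolerances mirroring ode45's defaults:

```python
import numpy as np
from scipy.integrate import solve_ivp

# RK45 = Dormand-Prince 4(5): fifth-order propagation with an embedded
# fourth-order solution supplying the error estimate, as in equation (44).
sol = solve_ivp(lambda t, y: -2.0 * y, (0.0, 1.0), [1.0],
                method="RK45", rtol=1e-3, atol=1e-6)
y_end = sol.y[0, -1]        # approximates exp(-2)
n_steps = sol.t.size - 1    # number of accepted, adaptively chosen steps
```

Inspecting `np.diff(sol.t)` shows the step sizes chosen by the embedded error estimate, exactly the mechanism described for *ode45* below.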

For solving an initial value problem like the one we have in (27), Matlab was chosen, as it is widely used in the field of numerical mathematics and supports solving ordinary differential equations. Moreover, it is possible to visualize the simulation results of the autopilot. In our program we used *ode45*, a standard solver included in Matlab/Simulink. The solver *ode45* implements the method of *Dormand-Prince*, a member of the class of *Runge-Kutta-Fehlberg* methods. More specifically, the *Dormand-Prince* method uses six function evaluations to calculate fourth- and fifth-order accurate solutions. The difference between these solutions is then taken to be the error of the (fourth-order) solution. This error estimate is very convenient for adaptive step size integration algorithms: the step size control monitors the estimate of the integration error and reduces or increases the step size of the integration in order to keep the error below a specified threshold. The accuracy requested is that both the relative and absolute (maximal) errors be less than the truncation error tolerance. In MATLAB, both relative (*RelTol*) and absolute (*AbsTol*) tolerances can be specified; the default values (which were used in solving the problem) are *RelTol* = 0.001 and *AbsTol* = 10⁻⁶. We intend to integrate the DRE from *t = 0* up to *t = 1*. The Simulink model for the DRE is as shown in Figure 5.

**Figure 5.** Simulink model for matrix Differential Riccati Equation.

After successfully simulating the above model, it was used to design a Linear Quadratic Gaussian (LQG) autopilot with the following Linear Quadratic Regulator (LQR) characteristics:

$$Q = \begin{bmatrix} 2.5 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 173.6 \end{bmatrix} \text{ and } \ R = 0.1. \tag{45}$$

For this autopilot, the Kalman-Bucy filter model was implemented as shown in Fig. 6. It should be noted that if for any reason the initial condition *P0* in (27) is taken as a matrix with entries less than 1, then the initial covariance matrix could be used as a tuning parameter to meet specific *time response* characteristics, as was presented in Aliyu et al.

#### **4. Filter performance**


The dependent variable in the Riccati differential equation of the Kalman-Bucy filter is the covariance matrix of the *estimation error,* defined as the difference between the estimated state vector *x̂* and the true state vector *x*. Matrix *P* is the covariance matrix of the error in the estimate of the state vector *x*, whose initial value is chosen as

$$P_0 = E\{ [x(t_0) - \hat{x}(t_0)][x(t_0) - \hat{x}(t_0)]^T \}. \tag{46}$$


The state error covariance matrix is *n × n* and symmetric, and must remain positive definite to retain filter stability. Diagonal elements of this matrix are variances of errors of the estimations for corresponding components of the state vector. These also serve as a definition of accuracy for the estimation.

**Figure 6.** Simulink model for Kalman-Bucy filter with a RDE.

The solution of the matrix Riccati equation was found to provide a quantitative measure of how well the state variables can be estimated in terms of mean-squared estimation errors. Therefore, the matrix Riccati equation from the Kalman filter was soon recognized as a practical model for predicting the performance of sensor systems, and it became the standard model for designing aerospace sensor systems to meet specified performance requirements. More importantly, covariance analysis is crucial in exploring what-if scenarios with new measurement sources.

Note that in (27) we increase estimation uncertainty by adding in process noise, and we decrease estimation uncertainty by the amount of information (*R⁻¹*) inherent in the measurement.

For an initial guess of the covariance, a very large value could be selected if one is using a very poor sensor for measurement; this makes the filter very conservative. The converse is the case if very good sensors are used for measurement.

The LQG autopilot simulation for tracking a pitch angle of 3 degrees (0.05 rad) with the observer as designed in Fig. 6 gave the result presented in Fig. 7. It is interesting to note that the time response characteristics of the simulated autopilot in Fig. 7 meet all the design specifications: *percentage overshoot* less than 10 percent, *settling time* less than 4 seconds, *rise time* less than 1 second, and *steady state error* less than 2 percent.


**Figure 7.** Set-point command tracking of LQG autopilot for a RDE solution to Kalman gain.

The numerical result at *t=1* for the covariance matrix with respect to Fig.5 is given in (47) and hence the associated Kalman gain is given in (48). The Kalman gain harnessed at this point urged us to re-design the Kalman filter model as shown in Fig. 8 for our LQG autopilot.

$$P = \begin{bmatrix} 5.228197174 \times 10^{-6} & 4.261615461 \times 10^{-6} & -0.000394471297 \\ 4.261615461 \times 10^{-6} & 0.0009807093782 & 0.0006722035747 \\ -0.0003944771297 & 0.0006722035747 & 8.750905161 \end{bmatrix} \tag{47}$$
 
$$L = \begin{bmatrix} 1.30704929362596 & 1.06540386549228 & 0 \\ 1.06540386549228 & 245.177344558508 & 0 \\ -98.6192824463156 & 168.050893692856 & 0 \end{bmatrix} \tag{48}$$
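The gain (48) is consistent with the covariance (47) through the relation *L = PCᵀR_N⁻¹*: here *CᵀR_N⁻¹* amounts to scaling the first two columns of *P* by 1/(4.0 × 10⁻⁶) and zeroing the third. A quick numerical check (Python used for illustration):

```python
import numpy as np

# Covariance P(1) from equation (47), transcribed as printed.
P = np.array([[5.228197174e-6, 4.261615461e-6, -0.000394471297],
              [4.261615461e-6, 0.0009807093782, 0.0006722035747],
              [-0.0003944771297, 0.0006722035747, 8.750905161]])
C = np.diag([1.0, 1.0, 0.0])
RN_inv = np.eye(3) / 4.0e-6

L = P @ C.T @ RN_inv          # Kalman gain L = P C' R_N^{-1}
L_reported = np.array([[1.30704929362596, 1.06540386549228, 0.0],
                       [1.06540386549228, 245.177344558508, 0.0],
                       [-98.6192824463156, 168.050893692856, 0.0]])
```

The computed `L` reproduces (48) entry by entry, confirming that the gain in the chapter was obtained from the integrated covariance in exactly this way.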

A further investigation was carried out: the single-point value of the Kalman filter gain given in (48) was used to implement the Kalman filter algorithm as it is popularly represented in most textbooks, as a constant gain. For this, the Kalman-Bucy model in Fig. 8 was developed.

Optimal Solution to Matrix Riccati Equation – For Kalman Filter Implementation 113


**Figure 8.** Simulink model of the Kalman filter for a constant value of the Kalman gain.

Hence, the same LQG autopilot was simulated with the Kalman filter-based observer shown in Fig. 8; the simulation result is shown in Fig. 9. It is interesting to note that all the time-response characteristics mentioned earlier were met, though the *LQR* controller could only bring the ELV to a *settling time* of about 2 seconds.

The MATLAB built-in command *[K,P,E]=lqr(A,B,Q,R)* was used to obtain the solution for the design of the LQR controller, where *K* is the controller gain, *P* the associated solution to the Algebraic Riccati Equation of the controller design, and *E* the closed-loop eigenvalues of the plant dynamics. Note that for LQR design the pair *(A,B)* must be *controllable*; then a state-feedback control law can be constructed to arbitrarily place the closed-loop eigenvalues.
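A minimal design sketch of this call (the weighting matrices below are illustrative placeholders, not the chapter's values):

```matlab
% LQR design sketch: A, B are the plant matrices defined earlier in the
% chapter; Q and R below are illustrative weights, not the chapter's values.
Q = eye(size(A,1));            % state weighting (assumed)
R = 1;                         % control weighting (assumed)
[K, P, E] = lqr(A, B, Q, R);   % K: gain, P: ARE solution, E: closed-loop eigenvalues
% Sanity check: eig(A - B*K) should match E and lie in the open left half-plane.
```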

**Figure 9.** Set-point command tracking of ELV autopilot for a Kalman gain obtained by evaluating the covariance-matrix solution of the Riccati Differential Equation at *t=1 sec.*

#### **5. Algebraic Riccati equation**


Assume that the Riccati differential equation has an asymptotically stable solution for *P(t)*:

$$\lim_{t \to \infty} P(t) = P_{\infty}. \tag{49}$$

Then the time derivative vanishes

$$\lim_{t \to \infty} \frac{dP(t)}{dt} = 0. \tag{50}$$

Substituting this into (6) yields

$$
\mathbf{A}\mathbf{P}_{\infty} + \mathbf{P}_{\infty}\mathbf{A}^T + \mathbf{G}\mathbf{Q}\mathbf{G}^T - \mathbf{P}_{\infty}\mathbf{C}^T\mathbf{R}^{-1}\mathbf{C}\mathbf{P}_{\infty} = \mathbf{0}. \tag{51}
$$

This is called the Algebraic Riccati Equation. It is a nonlinear matrix equation and in general needs a numerical solver to obtain a solution for *P∞*.

Consider the scalar case: $P_\infty, A_\phi, C, Q, R, G \in \mathbb{R}^{1\times 1}$. The Algebraic Riccati Equation can then be solved analytically:

$$
\frac{C^2}{R} P_{\infty}^2 - 2A_{\phi} P_{\infty} - G^2 Q = 0,
$$

$$
P_{\infty} = \frac{R}{C^2} \left\{ A_{\phi} \pm \sqrt{A_{\phi}^2 + \frac{Q}{R} C^2 G^2} \right\}. \tag{52}
$$

From (52) there exist two solutions, one non-negative and the other non-positive, corresponding to the two signs (±). This is no cause for alarm: since the variance *P* of the uncertainty is, by definition, non-negative, we are only interested in the non-negative solution. Thus, taking the positive root,

$$\lim\_{t \to \infty} P(t) = P\_{\infty} = \frac{R}{C^2} \left\{ A\_{\phi} + \sqrt{A\_{\phi}^2 + \frac{Q}{R}C^2G^2} \right\} \tag{53}$$
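The positive root can be verified numerically; the scalar values below are illustrative, not the chapter's model:

```matlab
% Check that the positive root of (52)-(53) satisfies the scalar ARE.
% Illustrative scalar values (not the chapter's model).
Aphi = -1;  C = 1;  Q = 2;  R = 0.5;  G = 1;
Pinf = (R/C^2)*(Aphi + sqrt(Aphi^2 + (Q/R)*C^2*G^2));   % eq. (53)
residual = 2*Aphi*Pinf + G^2*Q - (C^2/R)*Pinf^2;        % ~0 up to round-off
```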

For the numerical example at hand, the Kalman gain for this problem is easily obtained with the MATLAB command *[L,P,E]=lqe(A,G,C,Q,R)*, which solves the continuous-time Algebraic Riccati Equation associated with the described model. Here *L* is the Kalman gain, *P* the covariance matrix, and *E* the closed-loop eigenvalues of the observer. The numeric values are, respectively:

$$L = \begin{bmatrix} 0.999999834557877 & 1.0005722536001 & 0\\ 1.00057522536001 & 23530.07308334456 & 0\\ -12.5039701153711 & -150868.724947992 & 0 \end{bmatrix} \tag{54}$$

$$P = \begin{bmatrix} 3.99999938 \times 10^{-6} & 4.0023 \times 10^{-6} & -5.0016 \times 10^{-5} \\ 4.0023 \times 10^{-6} & 0.0941 \times 10^{-6} & -0.6035 \\ -5.0016 \times 10^{-5} & -0.6035 & 673.4535 \end{bmatrix} \tag{55}$$

and

$$E = \begin{bmatrix} -23530.19862489\\ -1.00000017144017\\ -5.81187094955956 \times 10^{-5} \end{bmatrix} \tag{56}$$
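As a minimal sketch of the call that produces these values (assuming the chapter's *A, G, C, Q, R*; the `icare` alternative in the comment is for newer MATLAB releases):

```matlab
% Steady-state Kalman gain from the continuous-time ARE.
% A, G, C, Q, R are the model matrices defined earlier in the chapter.
[L, P, E] = lqe(A, G, C, Q, R);  % L: Kalman gain, P: covariance, E: observer poles
% Equivalent route on newer releases (dual ARE via the controller solver):
%   P = icare(A.', C.', G*Q*G.', R);  L = P*C.'/R;
```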


Based on the result in (54), the LQG autopilot was simulated with the Kalman-Bucy filter state observer as modeled in Fig. 8. The result obtained from this was used in designing an LQG controller for the case of an ELV during atmospheric ascent, which seeks to track and control a pitch angle of 3 degrees (0.05 rad); the result is shown in Figure 10. In this case the controller could only bring the ELV to a *settling time* of 3 seconds, a second more than that obtained in Fig. 9. In both cases a negligible difference in *percentage overshoot* was observed.

**Figure 10.** Set-point command tracking of LQG autopilot for an Algebraic Riccati Equation's solution to Kalman gain.

#### **6. Comparative analysis**

It can be clearly seen from Fig. 7 that the Kalman gain for the LQG problem of an ELV is best obtained by solving the associated Riccati Equation in its differential form (RDE): all time-response characteristics were met within a second and with zero *percent overshoot*! Issues might, however, arise when hardware implementation is carried out, basically due to the computational demand placed on the selected micro-controller. In view of that, one may choose to design the Kalman filter in the traditional manner, still solving the associated Riccati equation not as an algebraic one but as a differential one as done earlier, harvesting the value of the Kalman gain at some specific point (in this research, we chose 1 second), and then applying that Kalman gain as a constant throughout the simulation regime (RDE@1sec). It is obvious that for this example this result still outperforms that obtained from solving an Algebraic Riccati Equation (ARE): as can be seen, the *settling time* in Fig. 9 is 2 seconds while in Fig. 10 it is 3 seconds. Figure 11 gives a holistic view of the three cases considered in this research.

One might be tempted to regard the 1-second difference in *settling time* between harvesting the Kalman gain after solving the RDE at *t=1 sec* and using the ARE as negligible or insignificant. On the contrary, considering an aerospace vehicle like the ELV, moving in a speed range of *Mach 0.8-1.2* (transonic), one will be forced to rethink; let alone when compared with the *settling time* of 0.7 s obtained when the Kalman gain comes from solving the RDE.

It is of paramount interest to add that the plant and observer dynamics for all cases explored in this research gave a dynamic system with stable poles (*separation principle*), contrary to that obtained in the paper by Aliyu et al. and in the book by Aliyu Bhar Kisabo.

**Figure 11.** Compared results of the three approaches to the solution of Kalman gain as applied to an autopilot.

### **7. Conclusion**


It can be clearly seen in Figure 6 that the synthesized LQG autopilot with a Kalman gain obtained by solving an Algebraic Riccati Equation (ARE) has 6 *percent overshoot* and a *settling time* of 3 seconds, while that in Fig. 3 is the most preferred in all *time-domain* characteristics: zero *percentage overshoot* and a *settling time* of 0.7 second. The latter is the result of implementing the Kalman gain as a solution to a Riccati Differential Equation (RDE). Thus, the solution to the *Riccati Differential Equation* for the implementation of the Kalman filter in LQG controller design is the most optimal for pitch-plane control of an ELV in the boost phase.

It is required that, after designing the Kalman filter, the accuracy of estimation also be assessed from the covariance matrix. It could be seen that both cases gave a very good estimation (very small covariance), though that of the ARE gave a much smaller value. This has little significance for our research, since we are mainly interested in the time-response characteristics of the controlled plant. MATLAB 2010a was used for all the simulations in this chapter.

## **Author details**

Bhar K. Aliyu, Charles A. Osheku, Lanre M.A. Adetoro and Aliyu A. Funmilayo
*Federal Ministry of Science and Technology (FMST), National Space Research & Development Agency (NASRDA), Center For Space Transport & Propulsion (CSTP) Epe, Lagos, Nigeria*

## **Acknowledgement**

The authors would like to specially appreciate the Honourable *Minister* of Science and Technology, Prof. Ita Okon Bassey Ewe, for his special interest in the Research and Development activities at the National Space Research and Development Agency (NASRDA). His support has been invaluable. Also appreciated is the *Director-General* of NASRDA, Dr. S. O. Mohammed, for making CSTP a priority among the Centres of NASRDA. We also thank Dr. Femi A. Agboola, *Director*, Engineering and Space Systems (ESS) at NASRDA, for his meaningful suggestions and discussions.

## **8. References**

**Chapter 5**

**Numerical Simulation of the Frank-Kamenetskii PDE: GPU vs. CPU Computing**

Charis Harley

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/46463

> ©2012 Harley, licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## **1. Introduction**

The efficient solution of the Frank-Kamenetskii partial differential equation through the implementation of parallelized numerical algorithms on GPUs (Graphics Processing Units) in MATLAB is a natural progression of the work which has been conducted in an area of practical import. There is an on-going interest in the mathematics describing thermal explosions due to the significance of the applications of such models; one example is the chemical processes which occur in grain silos. Solutions which pertain to the different geometries of such a physical process have different physical interpretations; however, in this chapter we will consider the Frank-Kamenetskii partial differential equation within the context of the mathematical theory of combustion, which according to Frank-Kamenetskii [16] deals with the combined systems of equations of chemical kinetics and of heat transfer and diffusion. A physical explanation of such a system is often a gas confined within a vessel which then reacts chemically, heating up until it either attains a steady state or explodes.

The focus of this chapter is to investigate the performance of the parallelization power of the GPU vs. the computing power of the CPU within the context of the solution of the Frank-Kamenetskii partial differential equation. GPU computing is the use of a GPU as a co-processor to accelerate CPUs (Central Processing Units) for general-purpose scientific and engineering computing. The GPU accelerates applications running on the CPU by offloading some of the compute-intensive and time-consuming portions of the code; the rest of the application still runs on the CPU. The application runs faster because it uses the extreme parallel processing power of the GPU to boost performance: a CPU consists of 4 to 8 cores, while a GPU consists of hundreds of smaller cores. Together they operate to crunch through the data in the application, and it is this massive parallel architecture which gives the GPU its high compute performance.

The methods which will be investigated in this research are implicit methods, such as the Crank-Nicolson method (*CN*) and the Crank-Nicolson method incorporating the Newton method (*CNN*) [26]. These algorithms pose a serious challenge to the implementation of