**2. Kalman filter**

#### **2.1 Description of the system dynamics**

Kalman filtering theory and its applications are well treated in the literature (Jazwinski 1970; Gelb 1974; Kailath 1981; Maybeck 1982; Brown 1983; Sorenson 1985; Mendel 1987; Grewal and Andrews 2001; Simon 2006). In order to apply Kalman filtering to robot movement, the system dynamics must be described by a set of differential equations in state-space form; in general,

$$\frac{d\mathbf{x}}{dt} = F\mathbf{x} + \mathbf{G}\boldsymbol{\mu} + \mathbf{w} \tag{1}$$

Data Sensor Fusion for Autonomous Robotics 377


where **x** is a column vector of the system states, *F* is the system dynamics matrix, *G* is the control input matrix, **u** is the control vector, and **w** is a white process-noise vector. The process noise matrix *Q* is related to the process-noise vector according to

$$Q = \mathbb{E}[\mathbf{w}\mathbf{w}^\top] \tag{2}$$

The measurements are linearly related to the states according to

$$\mathbf{z} = H\mathbf{x} + \mathbf{v} \tag{3}$$

where *z* is the measurement vector, *H* is the measurement matrix, and *v* is the measurement noise vector, which is element-wise white. The measurement noise matrix *R* is related to the measurement noise vector *v* according to

$$R = \mathrm{E}[\mathbf{v}\mathbf{v}^{T}] \tag{4}$$

In discrete form, the Kalman filtering equations become

$$\begin{aligned} \mathbf{x}_k &= \Phi_k \mathbf{x}_{k-1} + \mathbf{G}_k \mathbf{u}_{k-1} + \mathbf{K}_k \left(\mathbf{z}_k - \mathbf{H}\Phi_k \mathbf{x}_{k-1} - \mathbf{H}\mathbf{G}_k \mathbf{u}_{k-1}\right) \\ \mathbf{z}_k &= \mathbf{H}\mathbf{x}_k + \mathbf{v}_k \end{aligned} \tag{5}$$

where *Φk* is the system transition matrix, *Kk* is the Kalman gain matrix, and *Gk* is obtained from

$$\mathbf{G}\_k = \int\_0^T \Phi(\tau) \mathbf{G}(\tau) \, d\tau \tag{6}$$

where *T* is the sampling time interval; the computation of *Φ(t)* is given shortly afterwards in (13). This research is concerned with processing information from the sensors for estimation; the control signal *u* is not involved in the filtering operation. Hence the Kalman filtering equation for this case becomes

$$\mathbf{x}_{k} = \Phi_{k}\mathbf{x}_{k-1} + \mathbf{K}_{k}\left(\mathbf{z}_{k} - \mathbf{H}\Phi_{k}\mathbf{x}_{k-1}\right) \tag{7}$$
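The predict–correct structure of (7), with the gain computed from the Riccati recursion given below in (8), can be sketched as follows; all numerical values (sampling interval, covariances, the measurement) are illustrative assumptions:

```python
import numpy as np

T = 0.1
Phi = np.array([[1.0, T], [0.0, 1.0]])     # transition matrix, cf. (13)-(14)
H = np.array([[1.0, 0.0]])                 # position-only measurement
Q = np.array([[0.0, 0.0], [0.0, 1e-3]])    # discrete process noise (assumed)
R = np.array([[0.01]])                     # measurement noise covariance (assumed)

x = np.array([0.0, 1.0])                   # x_{k-1}: position 0, speed 1
P = np.eye(2)                              # P_{k-1}: prior error covariance

# Riccati recursion (8): a priori covariance, Kalman gain, a posteriori covariance
M = Phi @ P @ Phi.T + Q
K = M @ H.T @ np.linalg.inv(H @ M @ H.T + R)
P = (np.eye(2) - K @ H) @ M

z = np.array([0.12])                       # measured position at step k (assumed)
x = Phi @ x + K @ (z - H @ (Phi @ x))      # state update, eq. (7)
print(x)
```

The correction term weights the innovation `z - H Phi x` by the gain, so a near-unity gain (small *R*) pulls the position estimate toward the measurement while the velocity is adjusted only slightly.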

While the filter is operating, the Kalman gains are computed from the matrix Riccati equations:

$$\begin{aligned} M_k &= \Phi_k P_{k-1} \Phi_k^T + Q_k \\ K_k &= M_k H^T \left[ H M_k H^T + R_k \right]^{-1} \\ P_k &= \left[ I - K_k H \right] M_k \end{aligned} \tag{8}$$

where *Pk* is a covariance matrix representing errors in the state estimates after an update and *Mk* is the covariance matrix representing errors in the state estimates before an update. The discrete process noise matrix *Qk* can be found from the continuous process-noise matrix *Q* according to


$$Q\_k = \int\_0^T \Phi(\tau) \mathbf{Q} \Phi^T(\tau) d\tau \tag{9}$$

where *Φ(t)* is given in (13). When the robot moves along a straight line with a constant speed *vx*, the *x* component of the system dynamic model is given by

$$x = x_o + v_x t \tag{10}$$

Note that the angular speed *ω = 0*. The system dynamics in state-space form is given by

$$\begin{bmatrix} \dot{x} \\ \ddot{x} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} x \\ \dot{x} \end{bmatrix} \tag{11}$$

where the system dynamics matrix *F* is given by

$$F = \begin{bmatrix} \mathbf{0} & \mathbf{1} \\ \mathbf{0} & \mathbf{0} \end{bmatrix} \tag{12}$$

The system transition matrix is computed from the inverse Laplace transform of the form

$$\Phi(t) = L^{-1} \left\{ \left( sI - F \right)^{-1} \right\} = e^{Ft} = \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix} \tag{13}$$

The discrete fundamental matrix, i.e., the system transition matrix, can be found from the preceding expression by simply replacing time with the sampling time interval *T* of the perception measurements, or

$$\Phi\_k = \Phi(T) = \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix} \tag{14}$$

In two dimensional navigation space, i.e., *xy* plane, the system transition matrix becomes

$$\Phi_k = \begin{bmatrix} 1 & T & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & T \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{15}$$

and the corresponding state vector *X* is given by

$$X = \begin{bmatrix} x & v_x & y & v_y \end{bmatrix}^T \tag{16}$$
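A minimal check of (15)-(16): propagating a state with the transition matrix advances each position by *v·T* per sampling interval while the velocities stay constant. The sampling interval and initial state below are assumed for illustration:

```python
import numpy as np

T = 0.5
Phi = np.array([[1, T, 0, 0],
                [0, 1, 0, 0],
                [0, 0, 1, T],
                [0, 0, 0, 1]], dtype=float)   # transition matrix, eq. (15)
X = np.array([0.0, 1.0, 0.0, 2.0])            # [x, v_x, y, v_y], eq. (16)

for _ in range(4):                            # four sampling intervals
    X = Phi @ X
print(X)   # x = 4*T*v_x = 2.0, y = 4*T*v_y = 4.0, velocities unchanged
```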

In the case of *ω ≠ 0*, i.e., circular movement with an angular velocity *ω*, we consider the geometry shown in Figure 1.



Fig. 1. Geometry with respect to angular deviation during robot navigation

At the local coordinate system, the state variables are

$$\begin{aligned} x_1 &= r\cos(\omega t) \\ \dot{x}_1 &= -r\omega\sin(\omega t) \\ x_2 &= r\sin(\omega t) \\ \dot{x}_2 &= r\omega\cos(\omega t) \end{aligned} \tag{17}$$

So that the system dynamics in state-space form in continuous time is given by

$$\begin{bmatrix} \dot{x}_1 \\ \ddot{x}_1 \\ \dot{x}_2 \\ \ddot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & -\omega \\ 0 & 0 & 0 & 1 \\ 0 & \omega & 0 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ \dot{x}_1 \\ x_2 \\ \dot{x}_2 \end{bmatrix} \tag{18}$$
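As a consistency check, the analytic circular trajectory (17) can be differentiated and compared against *F x* with the dynamics matrix of (18); the radius, angular velocity, and time instant below are assumed sample values:

```python
import numpy as np

w, r, t = 0.8, 2.0, 0.6        # angular velocity, radius, a time instant (assumed)

# State (17): [x1, x1_dot, x2, x2_dot]
x = np.array([ r * np.cos(w * t), -r * w * np.sin(w * t),
               r * np.sin(w * t),  r * w * np.cos(w * t)])

F = np.array([[0, 1, 0,  0],
              [0, 0, 0, -w],
              [0, 0, 0,  1],
              [0, w, 0,  0]], dtype=float)     # dynamics matrix of (18)

# Analytic time-derivative of (17)
x_dot = np.array([-r * w * np.sin(w * t), -r * w**2 * np.cos(w * t),
                   r * w * np.cos(w * t), -r * w**2 * np.sin(w * t)])

print(np.allclose(F @ x, x_dot))   # True: (18) reproduces the derivative of (17)
```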

The system transition matrix *Φω(t)* is computed from the inverse Laplace transform of the form

$$\Phi_\omega(t) = L^{-1} \left\{ (sI - F_\omega)^{-1} \right\} = e^{F_\omega t} \tag{19}$$

where (*sI − Fω*) is given by

$$sI - F_\omega = \begin{bmatrix} s & -1 & 0 & 0 \\ 0 & s & 0 & \omega \\ 0 & 0 & s & -1 \\ 0 & -\omega & 0 & s \end{bmatrix} \tag{20}$$

The inverse of (20) yields


$$(sI - F_\omega)^{-1} = \begin{bmatrix} \dfrac{1}{s} & \dfrac{1}{s^2+\omega^2} & 0 & -\dfrac{\omega}{s(s^2+\omega^2)} \\ 0 & \dfrac{s}{s^2+\omega^2} & 0 & -\dfrac{\omega}{s^2+\omega^2} \\ 0 & \dfrac{\omega}{s(s^2+\omega^2)} & \dfrac{1}{s} & \dfrac{1}{s^2+\omega^2} \\ 0 & \dfrac{\omega}{s^2+\omega^2} & 0 & \dfrac{s}{s^2+\omega^2} \end{bmatrix} \tag{21}$$

and the inverse Laplace transform of (21) gives the system transition matrix, for *t=T*, as

$$\Phi_k = \begin{bmatrix} 1 & \dfrac{\sin(\omega T)}{\omega} & 0 & \dfrac{\cos(\omega T) - 1}{\omega} \\ 0 & \cos(\omega T) & 0 & -\sin(\omega T) \\ 0 & -\dfrac{\cos(\omega T) - 1}{\omega} & 1 & \dfrac{\sin(\omega T)}{\omega} \\ 0 & \sin(\omega T) & 0 & \cos(\omega T) \end{bmatrix} \tag{22}$$

During robot navigation we have obtained two system dynamics models, namely the rectilinear straight-ahead and the angular rotation cases. For the robot to be autonomous, the angular velocity should be computed during navigation. If the perception measurements yield a significant angular velocity, the system dynamics model should switch from linear to non-linear. It is interesting to note that if *ω = 0* in (22), then it reduces to (15), which is the transition matrix for the linear case. In other words, (22) inherently represents the linear case as well as rotational robot navigation. If the angular velocity is computed at each step of navigation and is non-zero, the robot moves along a non-linear trajectory, each time with a deviation from the linear trajectory. The linear and non-linear cases are illustrated in figure 2 and figure 3, where the measurements are from sensory visual perception (Ciftcioglu, Bittermann et al. 2007; Ciftcioglu 2008).
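The reduction of (22) to (15) as *ω → 0* can be verified numerically; the sampling interval below is an illustrative assumption:

```python
import numpy as np

def phi_rot(w, T):
    """System transition matrix of eq. (22) for angular velocity w."""
    s, c = np.sin(w * T), np.cos(w * T)
    return np.array([[1,  s / w,       0, (c - 1) / w],
                     [0,  c,           0, -s         ],
                     [0, -(c - 1) / w, 1,  s / w     ],
                     [0,  s,           0,  c         ]])

T = 0.1
almost_linear = phi_rot(1e-8, T)                 # w -> 0 limit of (22)
linear = np.array([[1, T, 0, 0],
                   [0, 1, 0, 0],
                   [0, 0, 1, T],
                   [0, 0, 0, 1]], dtype=float)   # transition matrix (15)
print(np.abs(almost_linear - linear).max())      # ~0: (22) reduces to (15)
```

Note also that, since Φ = e^{Ft} with tr F = 0, the determinant of (22) is exactly 1 for any ω.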

Fig. 2. Measurements along a linear move



Fig. 3. Robot navigation deviating from a linear move

To compute the angular velocity *ω* at each step, it is also selected as a state variable, so that the state variables vector given by (16) is modified to

$$X' = \begin{bmatrix} x & v_x & y & v_y & \omega \end{bmatrix}^T \tag{23}$$

However, in this case the system transition matrix in (22) becomes non-linear with respect to *ω*, and the Kalman filtering equations should be linearized. This form is known as extended Kalman filtering.

#### **2.2 Extended Kalman filtering (EKF)**

The non-linear state-space form as a set of first-order non-linear differential equations is given by

$$\dot{\mathbf{x}} = f(\mathbf{x}) + w \tag{24}$$

where **x** is a vector of the system states, *f(x)* is a non-linear function of those states, and *w* is a zero-mean random process. The measurement equation is considered to be a non-linear function of the states according to

$$z = h(\mathbf{x}) + \upsilon \tag{25}$$

where *h(x)* is a non-linear measurement function and *v* is a zero-mean random process. We assume that an *approximate* trajectory *xo* is available. This is referred to as the reference trajectory. The actual trajectory **x** may then be written as

$$\mathbf{x} = \mathbf{x}_o + \Delta\mathbf{x} \tag{26}$$

Hence, (24) and (25) become

$$\begin{aligned} \dot{\mathbf{x}}_o + \Delta\dot{\mathbf{x}} &= f(\mathbf{x}_o + \Delta\mathbf{x}) + w \\ z &= h(\mathbf{x}_o + \Delta\mathbf{x}) + v \end{aligned} \tag{27}$$

The Taylor series expansion yields the linearized model

$$\begin{aligned} \dot{\mathbf{x}}_o + \Delta\dot{\mathbf{x}} &\approx f(\mathbf{x}_o) + \left[ \frac{\partial f}{\partial \mathbf{x}} \right]_{\mathbf{x}=\mathbf{x}_o} \Delta\mathbf{x} + w \\ z &= h(\mathbf{x}_o) + \left[ \frac{\partial h}{\partial \mathbf{x}} \right]_{\mathbf{x}=\mathbf{x}_o} \Delta\mathbf{x} + v \end{aligned} \tag{28}$$

where

380 Serial and Parallel Robot Manipulators – Kinematics, Dynamics, Control and Optimization


$$\left.\frac{\partial f}{\partial \mathbf{x}}\right|_{\mathbf{x}=\mathbf{x}_o} = \mathbf{F} = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \dfrac{\partial f_1}{\partial x_2} & \cdots \\ \dfrac{\partial f_2}{\partial x_1} & \dfrac{\partial f_2}{\partial x_2} & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix} \qquad \left.\frac{\partial h}{\partial \mathbf{x}}\right|_{\mathbf{x}=\mathbf{x}_o} = \mathbf{H} = \begin{bmatrix} \dfrac{\partial h_1}{\partial x_1} & \dfrac{\partial h_1}{\partial x_2} & \cdots \\ \dfrac{\partial h_2}{\partial x_1} & \dfrac{\partial h_2}{\partial x_2} & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix} \tag{29}$$

If the reference trajectory *xo* is chosen to satisfy the differential equation

$$\dot{\mathbf{x}}_o = f(\mathbf{x}_o) \tag{30}$$

then, in view of (29) and (30), the system dynamics matrix in discrete form for extended Kalman filtering becomes

$$\Phi_k = \begin{bmatrix} 1 & \dfrac{\sin(\omega T)}{\omega} & 0 & \dfrac{\cos(\omega T) - 1}{\omega} & \varphi_x \\ 0 & \cos(\omega T) & 0 & -\sin(\omega T) & \varphi_{\dot{x}} \\ 0 & -\dfrac{\cos(\omega T) - 1}{\omega} & 1 & \dfrac{\sin(\omega T)}{\omega} & \varphi_y \\ 0 & \sin(\omega T) & 0 & \cos(\omega T) & \varphi_{\dot{y}} \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} \tag{31}$$

Above, *φx*, *φẋ*, *φy*, *φẏ* are the partial derivatives of the corresponding rows of (22), applied to the state, with respect to *ω*; they are given by

$$\varphi_x = \frac{1}{\omega}\left[\left(T\cos\omega T - \frac{\sin\omega T}{\omega}\right)\dot{x} + \left(-T\sin\omega T - \frac{\cos\omega T - 1}{\omega}\right)\dot{y}\right] \tag{32}$$

$$\varphi_{\dot{x}} = \left[-\sin\omega T\,\dot{x} - \cos\omega T\,\dot{y}\right]T \tag{33}$$

$$\varphi_y = \frac{1}{\omega}\left[\left(T\sin\omega T - \frac{1-\cos\omega T}{\omega}\right)\dot{x} + \left(T\cos\omega T - \frac{\sin\omega T}{\omega}\right)\dot{y}\right] \tag{34}$$

$$\varphi_{\dot{y}} = \left[\cos\omega T\,\dot{x} - \sin\omega T\,\dot{y}\right]T \tag{35}$$
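Since *φx* in (32) is the partial derivative with respect to *ω* of the first row of (22) applied to the state, it can be checked against a central finite difference; the values of *T*, *ω*, *ẋ*, *ẏ* below are assumed samples:

```python
import numpy as np

T, w = 0.2, 0.9
xd, yd = 1.3, -0.7        # x_dot, y_dot (assumed sample values)

def row1(w):
    # First-row propagation of eq. (22) applied to the state; the x term is
    # dropped since it does not depend on w.
    return np.sin(w * T) / w * xd + (np.cos(w * T) - 1) / w * yd

# phi_x of eq. (32): analytic partial derivative of row1 w.r.t. w
phi_x = (1 / w) * ((T * np.cos(w * T) - np.sin(w * T) / w) * xd
                   + (-T * np.sin(w * T) - (np.cos(w * T) - 1) / w) * yd)

eps = 1e-6
numeric = (row1(w + eps) - row1(w - eps)) / (2 * eps)   # central difference
print(abs(phi_x - numeric))    # ~0: (32) matches the numerical derivative
```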



In the extended Kalman filter operation, three measurements are considered. Referring to Figure 3 these are

$$\begin{aligned} \theta &= \operatorname{arctg}(y/x) = \operatorname{arctg}(x_3/x_1) && \text{angle} \\ r &= \sqrt{x^2 + y^2} = \sqrt{x_1^2 + x_3^2} && \text{distance} \\ v &= \sqrt{v_x^2 + v_y^2} = \sqrt{x_2^2 + x_4^2} && \text{velocity} \end{aligned} \tag{36}$$

From (36), the linearized measurement matrix in terms of state variables becomes

$$\mathbf{H} = \begin{bmatrix} -\dfrac{x_3}{x_1^2 + x_3^2} & 0 & \dfrac{x_1}{x_1^2 + x_3^2} & 0 & 0 \\ \dfrac{x_1}{\sqrt{x_1^2 + x_3^2}} & 0 & \dfrac{x_3}{\sqrt{x_1^2 + x_3^2}} & 0 & 0 \\ 0 & \dfrac{x_2}{\sqrt{x_2^2 + x_4^2}} & 0 & \dfrac{x_4}{\sqrt{x_2^2 + x_4^2}} & 0 \end{bmatrix} \tag{37}$$

Above, *x1, x2, x3, x4* are the state variables, which, following (16), are defined as *x1=x*, *x2=vx*, *x3=y*, and *x4=vy*, respectively.
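The linearized measurement matrix (37) is the Jacobian of the measurement functions (36); a numerical central-difference Jacobian at an assumed sample state should agree with it. The sketch uses `arctan2` in place of arctg for quadrant safety:

```python
import numpy as np

def h(x):
    # Measurement functions of eq. (36): angle, distance, speed
    x1, x2, x3, x4, _ = x
    return np.array([np.arctan2(x3, x1),
                     np.hypot(x1, x3),
                     np.hypot(x2, x4)])

def H(x):
    # Linearized measurement matrix of eq. (37)
    x1, x2, x3, x4, _ = x
    r2, v = x1**2 + x3**2, np.hypot(x2, x4)
    return np.array([[-x3 / r2,          0,      x1 / r2,          0,      0],
                     [ x1 / np.sqrt(r2), 0,      x3 / np.sqrt(r2), 0,      0],
                     [ 0,                x2 / v, 0,                x4 / v, 0]])

x0 = np.array([2.0, 0.5, 1.0, -0.3, 0.1])      # sample state (assumed)
eps = 1e-6
J = np.zeros((3, 5))
for j in range(5):                             # central-difference Jacobian
    d = np.zeros(5); d[j] = eps
    J[:, j] = (h(x0 + d) - h(x0 - d)) / (2 * eps)
print(np.abs(H(x0) - J).max())                 # ~0: (37) is dh/dx of (36)
```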

#### **2.3 Estimation**

In this work, the Kalman filter is an estimator of the states of a dynamic system with minimal error (innovation) variance, and in this sense it is optimal. In order to explain the estimation in detail, the filter equations, taking the discrete time point *k* as reference, are briefly given below. A general dynamic system given in the form

$$\mathbf{x}(k+1) = A(k)\mathbf{x}(k) + B(k)w(k) \tag{38}$$

$$\mathbf{z}(k) = \mathbf{C}(k)\mathbf{x}(k) + \mathbf{v}(k) \tag{39}$$

is terminologically referred to as *state-space*. Above, *A* is the system matrix, *B* is the process noise input matrix, and *C* is the measurement matrix. Further, *w(k)* and *v(k)* are Gaussian process noise and measurement noise, respectively, with the properties

$$\begin{aligned} E\{w(k)\} &= 0 \\ E\{w(k)w(l)^T\} &= Q(k) \quad \text{for } k = l \\ &= 0 \qquad\;\;\; \text{otherwise} \end{aligned} \tag{40}$$

$$\begin{aligned} E\{v(k)\} &= 0 \\ E\{v(k)v(l)^T\} &= R(k) \quad \text{for } k = l \\ &= 0 \qquad\;\;\; \text{otherwise} \end{aligned} \tag{41}$$


The estimation in the Kalman filter is accomplished recursively; with the notation that has become standard in the literature, the matrix equations read

$$\mathbf{x}(k+1\mid k) = A(k)\mathbf{x}(k\mid k) \tag{42}$$

$$P(k+1 \mid k) = A(k)P(k \mid k)A(k)^T + B(k)Q(k)B(k)^T \tag{43}$$

$$K(k+1) = P(k+1 \mid k)\,C(k+1)^T \left[ C(k+1)P(k+1 \mid k)\,C(k+1)^T + R(k+1) \right]^{-1} \tag{44}$$

As the measurements *z* become available, the updated state variables and covariance matrix are

$$\mathbf{x}(k+1 \mid k+1) = \mathbf{x}(k+1 \mid k) + K(k+1)\left[\mathbf{z}(k+1) - C(k+1)\,\mathbf{x}(k+1 \mid k)\right] \tag{45}$$

where the term $\mathbf{z}(k+1) - C(k+1)\,\mathbf{x}(k+1 \mid k)$ is the *innovation*, and

$$P(k+1 \mid k+1) = [I - K(k+1)C(k+1)]P(k+1 \mid k) \tag{46}$$
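The recursion (42)-(46) can be exercised end-to-end on a simulated constant-velocity track with noisy position measurements; the model dimensions and noise levels below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1.0
A = np.array([[1, T], [0, 1]])     # system matrix, cf. eq. (14)
C = np.array([[1.0, 0.0]])         # position measurement
Q = 1e-4 * np.eye(2)               # process noise covariance (assumed)
R = np.array([[0.25]])             # measurement noise covariance (assumed)

x_true = np.array([0.0, 1.0])
x_est, P = np.zeros(2), 10.0 * np.eye(2)
for _ in range(50):
    x_true = A @ x_true
    z = C @ x_true + rng.normal(0, 0.5, 1)          # noisy measurement
    x_pred = A @ x_est                               # eq. (42)
    M = A @ P @ A.T + Q                              # eq. (43)
    K = M @ C.T @ np.linalg.inv(C @ M @ C.T + R)     # eq. (44)
    x_est = x_pred + K @ (z - C @ x_pred)            # eq. (45), innovation update
    P = (np.eye(2) - K @ C) @ M                      # eq. (46)

print(np.abs(x_est - x_true))   # estimation errors after 50 steps
```

Even though only the position is measured, the velocity estimate converges as well, since the model (42)-(43) couples the two states.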

An N-level multiresolutional dynamic system can be described in vector form by

$$\begin{aligned} \mathbf{x}^{[N]}(k_N + 1) &= A^{[N]}(k_N)\,\mathbf{x}^{[N]}(k_N) + w^{[N]}(k_N) \\ \mathbf{z}^{[i]}(k_i) &= C^{[i]}(k_i)\,\mathbf{x}^{[i]}(k_i) + v^{[i]}(k_i) \qquad i = 1, \ldots, N \end{aligned} \tag{47}$$

where i=N is the highest resolution level, so that

$$\begin{aligned} E\left[w^{\left[N\right]}(k\_N)\right] &= 0, \\ E\left[w^{\left[N\right]}(k\_N)w^{\left[N\right]}(l\_N)^T\right] &= Q^{\left[N\right]}(k\_N), \quad k = l \\ &= 0 \quad k \neq l \end{aligned} \tag{48}$$

Referring to the measurements $z^{[i]}(k_i)$ at different resolution levels, we write

$$\begin{aligned} E\left[w^{[i]}(k\_i)\right] &= 0, \\ E\left[w^{[i]}(k\_i)w^{[i]}(l\_i)^T\right] &= \mathcal{Q}^{[i]}(k\_i), \quad k = l \\ &= 0 \quad \quad k \neq l \end{aligned} \tag{49}$$

and

$$\begin{aligned} E\left[v^{[i]}(k_i)\right] &= 0, \\ E\left[v^{[i]}(k_i)\,v^{[i]}(l_i)^T\right] &= R^{[i]}(k_i), \quad k = l \\ &= 0 \qquad\qquad k \neq l \end{aligned} \tag{50}$$



The Kalman filter is used to combine the information from the measurements at different resolution levels and enhance the state estimation, rather than employing a single measurement at each time step.

### **3. Wavelets**

#### **3.1 Multiresolutional decomposition**

Wavelet analysis is basically the projection of data onto a set of basis functions in order to separate different scale information. In particular, in the discrete wavelet transform (DWT) data are separated into wavelet detail coefficients (detail-scale information) and approximation coefficients (approximation-scale information) by the projection of the data onto an orthogonal dyadic basis system (Mallat 1989). In the DWT framework, a signal f(x) is decomposed into approximation and detail components to form a multiresolution analysis of the signal as

$$f(x) = \sum_{k} a_{j_o,k}\,\phi_{j_o,k}(x) + \sum_{j=j_o}^{j_o+J-1} \sum_{k} d_{j,k}\,\psi_{j,k}(x) \qquad j_o, j, k \in \mathbb{Z} \tag{51}$$

where *ajo,k* denotes the approximation coefficient at resolution *jo*; *dj,k* denotes the wavelet coefficient at resolution *j*; *φjo,k(x)* is a scaling function; *ψj,k(x)* is a wavelet function at resolution *j*; and *J* is the number of decomposition levels. The coefficients are given by

$$\begin{aligned} a_{j_o,k} &= \langle f(x), \phi_{j_o,k} \rangle \\ d_{j,k} &= \langle f(x), \psi_{j,k} \rangle \end{aligned} \qquad j_o, j, k \in \mathbb{Z} \tag{52}$$

Above, ⟨·,·⟩ denotes the inner product in the space of square-integrable functions *L²(ℝ)*. Specifically, the dyadic DWT assumes the scaling functions have the property

$$\phi_{j_o,k}(x) = 2^{j_o/2}\,\phi(2^{j_o}x - k) \tag{53}$$

and the wavelet functions

$$\psi_{j,k}(x) = 2^{j/2}\,\psi(2^{j}x - k) \tag{54}$$
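For the Haar basis, a single level of the decomposition (51)-(52) reduces to inner products with two-tap filters, and summing the two projections reconstructs the signal exactly; the sample values below are assumptions:

```python
import numpy as np

f = np.array([4.0, 2.0, 5.0, 7.0])     # signal samples (assumed)
g = np.array([1, 1]) / np.sqrt(2)      # Haar scaling (low-pass) filter
h = np.array([1, -1]) / np.sqrt(2)     # Haar wavelet (high-pass) filter

a = np.array([g @ f[0:2], g @ f[2:4]])   # approximation coefficients <f, phi>
d = np.array([h @ f[0:2], h @ f[2:4]])   # detail coefficients <f, psi>

# Perfect reconstruction: f = sum_k a_k*phi_k + sum_k d_k*psi_k
rec = np.empty(4)
rec[0:2] = a[0] * g + d[0] * h
rec[2:4] = a[1] * g + d[1] * h
print(np.allclose(rec, f))    # True
```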

The novel feature of wavelets is that they are localized in both time and frequency. This behaviour makes them convenient for the analysis of non-stationary signals. An elementary introduction to wavelets starts by introducing a scaling function such that

$$
\phi(t) = \sqrt{2} \sum\_{k} g\_k \phi(2t - k) \tag{55}
$$

A counterpart of this function, called the mother wavelet function, is obtained from

$$
\psi(t) = \sqrt{2} \sum\_{k} h\_{k} \phi(2t - k) \tag{56}
$$

where *gk* and *hk* are related via the equation

$$h\_k = (-1)^k g\_{1-k} \tag{57}$$

The coefficients *gk* and *hk* appear in the quadrature mirror filters used to compute the wavelet transform. *φ(t)* and *ψ(t)* form orthogonal functions which constitute low-pass and high-pass filters, respectively; they span orthogonal subspaces of *L²(ℝ)*, the space of functions with finite energy in which the inner product is defined. The orthogonal spaces satisfy the property

$$\operatorname{span}\{\phi_{m,k}(t)\} \oplus \operatorname{span}\{\psi_{m,k}(t)\} = \operatorname{span}\{\phi_{m+1,k}(t)\} \tag{58}$$

where



$$
\phi\_{m,k}(t) = 2^{m/2} \phi(2^m t - k) \tag{59}
$$

$$
\psi\_{m,k}(t) = 2^{m/2} \psi(2^m t - k) \tag{60}
$$

*m=0* constitutes the coarsest scale. The simplest filter coefficients are known as Haar filter and given by

$$\begin{aligned} h\_h &= [h\_1 \ \ h\_2] \\ &= \frac{1}{\sqrt{2}} [1 \quad 1] \end{aligned} \tag{61}$$

$$\begin{aligned} g\_h &= [g\_1 \ \ g\_2] \\ &= \frac{1}{\sqrt{2}} [1 \ \ {-1}] \end{aligned} \tag{62}$$
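As an illustrative sketch (not from the chapter), the Haar pair (61)-(62) can be applied to a block of samples to split it into approximation and detail channels and then recover it exactly; this assumes NumPy, and the sample values are arbitrary:

```python
import numpy as np

# Haar low-pass h_h (61) and high-pass g_h (62) coefficients
h = np.array([1.0, 1.0]) / np.sqrt(2)
g = np.array([1.0, -1.0]) / np.sqrt(2)

# One-level decomposition of a 4-sample data block (as in figure 5)
x = np.array([4.0, 2.0, 5.0, 7.0])
approx = h[0] * x[0::2] + h[1] * x[1::2]   # approximation (low-pass) channel
detail = g[0] * x[0::2] + g[1] * x[1::2]   # detail (high-pass) channel

# The filter pair is orthonormal ...
assert np.isclose(h @ h, 1) and np.isclose(g @ g, 1) and np.isclose(h @ g, 0)

# ... so the block is perfectly reconstructed from the two channels
rec = np.empty_like(x)
rec[0::2] = h[0] * approx + g[0] * detail
rec[1::2] = h[1] * approx + g[1] * detail
assert np.allclose(rec, x)
```

The same two-tap structure is reused throughout the chapter, both in the uniform (Haar) and the non-uniform sampling case.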

If one sensor is used at the highest resolutional level, i.e., *i*=3, the measurements at different resolution levels can be obtained by the decomposition scheme shown in figure 4.

Fig. 4. Measurements at different resolution levels

In this scheme each data block at the highest resolution level (*i*=3) contains 4 samples. Wavelet decomposition of this block of samples is shown in figure 5.

Fig. 5. Wavelet decomposition of state variables in a data block

In figure 5, the measurements are uniform. This means that the measurement time points at a lower resolution level are exactly at the midpoint of the two measurement time points at the higher resolutional level, as indicated in figure 4. Within a data block, the state variables at resolution level *i* are designated as

$$\mathbf{X}\_m^{[i]} = \begin{bmatrix} \mathbf{x}^{[i]}(k\_i) \\ \mathbf{x}^{[i]}(k\_i+1) \\ \vdots \\ \mathbf{x}^{[i]}(k\_i+2^{i-1}-1) \end{bmatrix} \tag{63}$$

$$\mathbf{x}^{[i]}(k\_i) = \begin{bmatrix} x\_{k,1}^{[i]} \\ x\_{k,2}^{[i]} \\ \vdots \\ x\_{k,p}^{[i]} \end{bmatrix} \tag{64}$$

where *m* is the data block index and *p* is the number of state components. In a data block, there are 2<sup>i-1</sup> state variables. Each state variable has *p* state components. A lower resolution state variable is computed from

$$\mathbf{x}^{\left[i\right]}(k\_i) = h\_1 \mathbf{x}^{\left[i+1\right]}(k\_{i+1}) + h\_2 \mathbf{x}^{\left[i+1\right]}(k\_{i+1} + 1) \tag{65}$$

where h1 and h2 are the Haar low-pass filter coefficients. The detail component, i.e., the high-frequency part after the decomposition, is computed via

$$\mathbf{y}^{[i]}(k\_i) = \mathbf{g}\_1 \mathbf{x}^{[i+1]}(k\_{i+1}) + \mathbf{g}\_2 \mathbf{x}^{[i+1]}(k\_{i+1} + 1) \tag{66}$$

where g1 and g2 are the Haar high-pass filter coefficients. The reconstruction of the states is carried out by combining (65) and (66) in a matrix equation form as given below.

$$\begin{bmatrix} \mathbf{x}^{[i+1]}(k\_{i+1})\\ \mathbf{x}^{[i+1]}(k\_{i+1}+1) \end{bmatrix} = \mathbf{h}^{\ast T}\mathbf{x}^{[i]}(k\_i) + \mathbf{g}^{\ast T}\mathbf{y}^{[i]}(k\_i) \tag{67}$$

where *h\** and *g\** are mirror filters of *h* and *g* counterparts; wavelet decomposition and reconstruction is carried out according to the scheme shown in figures 6 and 7.
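Equations (65)-(67) operate on whole state vectors with *p* components. A minimal sketch of one decomposition/reconstruction step, assuming NumPy; the state values are randomly generated for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 4  # number of state components per state variable, as in (64)

# A pair of state variables at resolution level i+1
x_a = rng.standard_normal(p)  # x^[i+1](k_{i+1})
x_b = rng.standard_normal(p)  # x^[i+1](k_{i+1} + 1)

# Haar low-pass and high-pass coefficients
h1 = h2 = 1 / np.sqrt(2)
g1, g2 = 1 / np.sqrt(2), -1 / np.sqrt(2)

# Decomposition to level i, per (65) and (66)
x_low = h1 * x_a + h2 * x_b   # coarse state variable x^[i](k_i)
y_det = g1 * x_a + g2 * x_b   # detail component y^[i](k_i)

# Reconstruction per (67): for Haar the mirror filters equal the analysis taps
h_star = np.array([h1, h2])   # h*
g_star = np.array([g1, g2])   # g*
x_rec_a = h_star[0] * x_low + g_star[0] * y_det
x_rec_b = h_star[1] * x_low + g_star[1] * y_det

assert np.allclose(x_rec_a, x_a) and np.allclose(x_rec_b, x_b)
```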

Fig. 6. Wavelet decomposition of state variables in a data block

Fig. 7. Wavelet reconstruction of state variables in a data block

The wavelet matrix operator G and the scaling matrix operator H in the decomposition, and their counterparts G\* and H\* in the reconstruction, contain two-tap Haar filters and they are related by

$$\mathbf{G}^\* = \mathbf{G}^T, \qquad \mathbf{H}^\* = \mathbf{H}^T \tag{68}$$

The operators G and H are called Quadrature Mirror Filters (QMF) for wavelet decomposition and G\* and H\* QMF for reconstruction. A QMF has the following properties.

$$\begin{aligned} \mathbf{H}^\*\mathbf{H} + \mathbf{G}^\*\mathbf{G} &= \mathbf{I} \\\\ \begin{bmatrix} \mathbf{H}\mathbf{H}^\* & \mathbf{H}\mathbf{G}^\* \\ \mathbf{G}\mathbf{H}^\* & \mathbf{G}\mathbf{G}^\* \end{bmatrix} &= \begin{bmatrix} \mathbf{I} & \mathbf{0} \\ \mathbf{0} & \mathbf{I} \end{bmatrix} \end{aligned} \tag{69}$$

where I is the identity matrix and (68) implies that the filter impulse responses form an orthonormal set. Note that the state variable estimations are carried out at the respective resolution levels, as follows.

$$H^{i+1 \to i} = \text{diag}\{ (\mathbf{1} \,/\sqrt{2})h\_{\mathbf{h}}^{(1)}, (\mathbf{1} \,/\sqrt{2})h\_{\mathbf{h}}^{(2)}, \dots, (\mathbf{1} \,/\sqrt{2})h\_{\mathbf{h}}^{(K)} \} \tag{70}$$

where the *h<sub>h</sub><sup>(k)</sup>* are the scaled two-tap Haar lowpass filters on the diagonal; *K* is the number of filters involved in a decomposition from resolution level *i+1* to *i*. For instance, for a decomposition from level 2 to level 1, *i*=1, and *K* is given by

$$K = 2^{i-1} = 1 \tag{71}$$

as this is seen in figure 5, where 2<sup>i-1</sup> pairs of state variables in *X<sub>m</sub><sup>[i+1]</sup>* are transformed to 2<sup>i-1</sup> lower resolution variables in *X<sub>m</sub><sup>[i]</sup>*. For the *p* components in a state variable, as seen in (64), the scaling matrix operator *H* is composed of *p* *H<sup>i+1→i</sup>* matrices on the diagonal as

$$H = \text{diag}\{H\_{[1]}^{i+1 \to i}, H\_{[2]}^{\quad i+1 \to i}, \dots, H\_{[p]}^{\quad i+1 \to i}\}\tag{72}$$

where each *H<sub>[i]</sub><sup>i+1→i</sup>* is given by (70). Similarly, for the reconstruction filter, we write

$$\mathbf{G}^{i+1 \to i} = \text{diag}\{ (\sqrt{2}) \mathbf{g}\_{\hbar}^{(1)}, (\sqrt{2}) \mathbf{g}\_{\hbar}^{(2)}, \dots, (\sqrt{2}) \mathbf{g}\_{\hbar}^{(K)} \}\tag{73}$$

The wavelet matrix operator *G* for the wavelet coefficients at resolution level *i* from resolution level *i+1* is

$$\mathbf{G} = \text{diag}\{\mathbf{G}\_{[1]}^{i+1 \to i}, \mathbf{G}\_{[2]}^{i+1 \to i}, \dots, \mathbf{G}\_{[p]}^{i+1 \to i}\} \tag{74}$$

where *K* is given by (71). For the inverse transform scheme given by figure 7, we write

$$\mathbf{H}^\* = \text{diag}\{ \mathbf{H}\_{[1]}^{\,i \to i+1}, \mathbf{H}\_{[2]}^{\,i \to i+1}, \dots, \mathbf{H}\_{[p]}^{\,i \to i+1} \} \tag{75}$$

and

$$\mathbf{G}^\* = \operatorname{diag} \{ \mathbf{G}\_{\left[1\right]}^{i \to i+1}, \mathbf{G}\_{\left[2\right]}^{i \to i+1}, \dots, \mathbf{G}\_{\left[p\right]}^{i \to i+1} \} \tag{76}$$

where each *H<sub>[i]</sub><sup>i→i+1</sup>* and *G<sub>[i]</sub><sup>i→i+1</sup>* is given by

$$\mathbf{H}^{i \to i+1} = \text{diag}\{ (\sqrt{2})\mathbf{h}\_h^{T(1)}, (\sqrt{2})\mathbf{h}\_h^{T(2)}, \dots, (\sqrt{2})\mathbf{h}\_h^{T(K)} \}\tag{77}$$

$$\mathbf{G}^{i \to i+1} = \text{diag}\{ (\mathbf{1} / \sqrt{2}) \mathbf{g}\_h^{T(1)}, (\mathbf{1} / \sqrt{2}) \mathbf{g}\_h^{T(2)}, \dots, (\mathbf{1} / \sqrt{2}) \mathbf{g}\_h^{T(K)} \}\tag{78}$$

Above *T* indicates transpose.
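The operator constructions (70), (73), (77), (78) and the QMF properties (69) can be checked numerically. A sketch assuming NumPy; `block_diag_rows` is a hypothetical helper, not from the chapter:

```python
import numpy as np

K = 2  # number of two-tap filters for a decomposition from level i+1 to i
hh = np.array([1.0, 1.0]) / np.sqrt(2)   # Haar low-pass h_h, eq (61)
gh = np.array([1.0, -1.0]) / np.sqrt(2)  # Haar high-pass g_h, eq (62)

def block_diag_rows(row, K):
    """Stack K copies of a 1x2 row filter on the block diagonal -> K x 2K."""
    M = np.zeros((K, 2 * K))
    for k in range(K):
        M[k, 2 * k:2 * k + 2] = row
    return M

H = block_diag_rows(hh / np.sqrt(2), K)         # (70): (1/sqrt(2)) h_h blocks
G = block_diag_rows(gh * np.sqrt(2), K)         # (73): (sqrt(2)) g_h blocks
H_star = block_diag_rows(hh * np.sqrt(2), K).T  # (77): transposed, (sqrt(2)) h_h
G_star = block_diag_rows(gh / np.sqrt(2), K).T  # (78): transposed, (1/sqrt(2)) g_h

# QMF properties, eq (69)
assert np.allclose(H_star @ H + G_star @ G, np.eye(2 * K))
assert np.allclose(H @ H_star, np.eye(K)) and np.allclose(G @ G_star, np.eye(K))
assert np.allclose(H @ G_star, np.zeros((K, K)))
```

Note that the asymmetric scalings of (70)/(77) and (73)/(78) cancel each other, which is what makes the properties in (69) hold.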

#### **3.2 Multiresolution by sensory measurements**

In subsection 3.1 a wavelet decomposition is presented as an N-level decomposition scheme in which lower-level measurements are obtained basically by means of wavelet decomposition. This implies that for a state variable all measurements are obtained by a single sensor associated with that state variable. However, in the present case we consider


multiple sensors for the same state variable while the sensors are operated at different resolutions. Referring to figure 4, 3 sensors of different resolutions are considered. Since the sensors are operated independently, the measurements are non-uniform in general. This means measurements at the lower resolution are not necessarily at the midpoint of the two measurement times at the higher resolutional level. This is depicted in figure 8. Uniform sampling is seen in figure 4.

Fig. 8. Non-uniform sampling in a data block of three resolutions

Before explaining the fusion process, the wavelet decomposition for the non-uniform case will be explained in detail since this is central to this study. Decomposing the state variables at time indices *0* and *1* at resolution level *i=3* into a single state variable at time index *0* at resolution level *i=2* can be achieved by lowpass filter *h[2](1)* as follows.

$$h^{[2]}(1) = \begin{bmatrix} \dfrac{b\_1^{[3]}}{a\_1^{[3]} + b\_1^{[3]}} & \dfrac{a\_1^{[3]}}{a\_1^{[3]} + b\_1^{[3]}} \end{bmatrix} \tag{79}$$

which can be written in general form

$$h^{[i]}(k\_i) = \begin{bmatrix} \dfrac{b\_{k\_i}^{[i+1]}}{a\_{k\_i}^{[i+1]} + b\_{k\_i}^{[i+1]}} & \dfrac{a\_{k\_i}^{[i+1]}}{a\_{k\_i}^{[i+1]} + b\_{k\_i}^{[i+1]}} \end{bmatrix} \tag{80}$$

where index *i* denotes the resolution level; *ki* is the time index at the resolution level *i*; *aki[i+1]* and *bki[i+1]* are the relative time intervals. The lowpass filter *h[i](ki)* for deriving a coarsened state variable estimate at time index *ki* and at resolution level *i* is based on the appropriate pair of estimated state variables at resolution level *i+1*. Once the lowpass filter is determined, the highpass filter and the inverse filters can be determined by the filterbank implementation of the Quadrature Mirror Filter (QMF) shown in figures 6 and 7. Hence from the lowpass filter *h[i](ki)*, the highpass filter *g[i](ki)* and the inverse filters *hinv[i](ki)* and *ginv[i](ki)* are determined as given below such that they satisfy the constraints given by (69).

$$g^{[i]}(k\_i) = \begin{bmatrix} \dfrac{2b\_{k\_i}^{[i+1]}}{a\_{k\_i}^{[i+1]} + b\_{k\_i}^{[i+1]}} & \dfrac{-2a\_{k\_i}^{[i+1]}}{a\_{k\_i}^{[i+1]} + b\_{k\_i}^{[i+1]}} \end{bmatrix} \tag{81}$$

$$h\_{inv}^{[i]}(k\_i) = \begin{bmatrix} \dfrac{0.5(a\_{k\_i}^{[i+1]} + b\_{k\_i}^{[i+1]})}{b\_{k\_i}^{[i+1]}} & \dfrac{0.5(a\_{k\_i}^{[i+1]} + b\_{k\_i}^{[i+1]})}{a\_{k\_i}^{[i+1]}} \end{bmatrix} \tag{82}$$

and

$$g\_{inv}^{[i]}(k\_i) = \begin{bmatrix} \dfrac{0.5(a\_{k\_i}^{[i+1]} + b\_{k\_i}^{[i+1]})}{2b\_{k\_i}^{[i+1]}} & \dfrac{-0.5(a\_{k\_i}^{[i+1]} + b\_{k\_i}^{[i+1]})}{2a\_{k\_i}^{[i+1]}} \end{bmatrix} \tag{83}$$
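A sketch of the non-uniform filters (80)-(83), assuming NumPy; `nonuniform_filters` is a hypothetical helper name, and the interval values a=0.3, b=0.7 are arbitrary:

```python
import numpy as np

def nonuniform_filters(a, b):
    """Filters (80)-(83) for relative time intervals a, b at level i+1."""
    s = a + b
    h = np.array([b / s, a / s])                               # low-pass, eq (80)
    g = np.array([2 * b / s, -2 * a / s])                      # high-pass, eq (81)
    h_inv = np.array([0.5 * s / b, 0.5 * s / a])               # eq (82)
    g_inv = np.array([0.5 * s / (2 * b), -0.5 * s / (2 * a)])  # eq (83)
    return h, g, h_inv, g_inv

h, g, h_inv, g_inv = nonuniform_filters(a=0.3, b=0.7)

# Constraint (69): reconstruction followed by decomposition is the identity
A = np.outer(h_inv, h) + np.outer(g_inv, g)
assert np.allclose(A, np.eye(2))

# The uniform case a = b reduces to the scaled Haar filters of (70) and (73)
h_u, g_u, _, _ = nonuniform_filters(a=0.5, b=0.5)
assert np.allclose(h_u, [0.5, 0.5]) and np.allclose(g_u, [1.0, -1.0])
```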


For *aki[i+1]*=*bki[i+1]*, the filters reduce to the Haar filters given by (61) and (62) and shown in figures 6 and 7.

In this implementation the scaling and wavelet operators H and G for the decomposition *i+1→i* are given by

$$\begin{aligned} \mathbf{H}^{i+1 \to i} &= \text{diag}\left\{ h^{[i]}(k\_i), h^{[i]}(k\_i+1), \dots, h^{[i]}(k\_i+2^{i-1}-1) \right\} \\ \mathbf{G}^{i+1 \to i} &= \text{diag}\left\{ g^{[i]}(k\_i), g^{[i]}(k\_i+1), \dots, g^{[i]}(k\_i+2^{i-1}-1) \right\} \end{aligned} \tag{84}$$

For *aki[i+1]*=*bki[i+1]*, (84) reduces to (70) and (73).

The inverse scaling and wavelet operators H and G for the reconstruction *i→i+1* are given by

$$\begin{aligned} \mathbf{H}^{i \rightarrow i+1} &= \text{diag}\left\{ h\_{inv}^{\ [i]} \left( k\_i \right)^T, h\_{inv}^{\ [i]} \left( k\_i + 1 \right)^T, \dots, h\_{inv}^{\ [i]} \left( k\_i + 2^{i-1} - 1 \right)^T \right\} \\ \mathbf{G}^{i \rightarrow i+1} &= \text{diag}\left\{ g\_{inv}^{\ [i]} \left( k\_i \right)^T, g\_{inv}^{\ [i]} \left( k\_i + 1 \right)^T, \dots, g\_{inv}^{\ [i]} \left( k\_i + 2^{i-1} - 1 \right)^T \right\} \end{aligned} \tag{85}$$

For *aki[i+1]*=*bki[i+1]*, (85) reduces to (77) and (78).

## **4. Fusion process as multiresolutional dynamic filtering (MDF)**

The fusion of information is central to this research. Therefore, in the preceding section wavelet decomposition and reconstruction are presented in vector form for the sake of explaining the fusion process in detail. However, the wavelet decomposition itself is not used in this work. This is simply because lower resolution level sensory measurements are obtained from associated sensors, and not from wavelet decomposition of the highest resolution level sensory measurements. Therefore only the wavelet reconstruction is relevant. The lower resolution level measurements are used to update the estimated information at that very level. Afterwards, this information is transformed to higher resolution level information by the inverse wavelet transform, where the wavelet coefficients, that is the detail coefficients, are not involved in this process as they are all zero. For this reason the transformed information at a higher resolution level is the same as the information lying in the preceding lower level, but this very information at the higher resolution level coincides in time with the sensory information of that level. This is achieved by the non-uniform formulation of the wavelet transform. By doing so, the independent operation of multiresolutional sensors is aimed at making the information fusion effective. The actual implementation in this work is explicitly as follows. Referring to figure 8, a data block has four sensory measurement samples at the highest resolution *(i=3)* and one sensory sample at the lowest resolution *(i=1)*. The resolution level between the highest and lowest contains two sensory measurements. By means of the inverse wavelet transform the updated estimations at levels *i=1* and *i=2* are transformed to the highest level separately, providing estimates of the signals of resolution indices *i=1* and *i=2* at the highest level *(i=3)*. From level one a single updated estimation, and from level two, two updated estimations are projected to the highest level. In the

*i k k k k inv i i i*

For *aki[i+1]*=*bki[i+1]*, the filters reduce to Haar filters given (61) and (62) and shown in figures 6

In this implementation the scaling and wavelet operators H and G for decomposition i+1i

1 [ ] [ ] [ ] 1

*ii i i i i*

*H diag h k h k h k*

*G diag g k g k g k*

*H diag h k h k h k*

*G diag g k g k g k*

**4. Fusion process as multiresolutinal dynamic filtering (MDF)** 

*ii i i i i*

The inverse scaling and wavelet operators H and G for construction ii+1 are given by

1 [ ] [ ] [ ] 1

*i i iT i T i i T inv i inv i inv i i i iT i T i i T inv i inv i inv i*

1 [ ] [ ] [ ] 1

The fusion of information is central to this research. Therefore, in the preceding section wavelet decomposition and reconstruction is presented in vector form for the sake of explaining the fusion process in detail. However the wavelet decomposition in this work is not used. This is simply because lower resolution level sensory measurements are obtained from associated sensors and not from wavelet decomposition of the highest resolution level sensory measurements. Therefore only the wavelet reconstruction is relevant. The lower resolution level measurements are used to update the estimated information at this very level. Afterwards, this information is transformed to higher resolution level information by inverse wavelet transform where the inverse transformation wavelet coefficients, that is the detail coefficients, are not involved in this process as they are all zero. Because of this reason the transformed information at a higher resolution level is the same as the information lying in the preceding lower level. But this very information at the higher resolution level timely coincides with the sensory information of this level. This is achieved by non-uniform formulation of wavelet transform. By doing so, the independent operation of multiresolutional sensors is aimed to make the information fusion effective. The actual implementation in this work is explicitly as follows. Referring to figure 8, a data block has four sensory measurement samples at the highest resolution *(i=3)* and one sensory sample in the lowest resolution *(i=1)*. The resolution level between the highest and lowest contains two sensory measurements. By means of inverse wavelet transform the updated estimations at levels *i=1* and *i=2* are transformed to highest level separately providing the estimate of the signal of the resolution index *i=1* and *i=2* and the highest level *(i=3)*. 
In the level one, a single estimation, in the level two, two updated estimations are projected to highest level. In the

1 [ ] [ ] [ ] 1

*g k b a*

[ 1] [ 1] [ 1] [ 1]

(83)

(84)

(85)

 

*ii ii*

[ 1] [ 1] 0.5( ) 0.5( ) ( ) 2 2 *i i i i i i*

*ii i*

*ii i*

( ), ( 1),..., ( 2 1)

( ), ( 1),..., ( 2 1)

( ) , ( 1) ,..., ( 2 1)

( ) , ( 1) ,..., ( 2 1)

*k k*

*ab ab*

and

and 7.

are given by

[ ]

For *aki[i+1]*= *bki[i+1]* (84) reduces to (70) and (73).

For *aki[i+1]*= *bki[i+1]* (84) and (85) reduces to (77) and (78).

level three, the estimations are updated for four samples. At all levels the estimations are updated by Kalman filtering, for *i=1,2,* and *3*. Signals from different resolutional levels are projected to the highest resolution level so that they all have four samples in a data block. The basic update scheme for dynamic multiresolutional filtering is shown in Fig. 9, where at each resolutional level, when the measurement *Z* is available, the state variables are updated, and when the block *m* is complete the inverse wavelet transform and fusion are performed. During the inverse transformation the wavelet coefficients are all zero because no wavelet decomposition has been performed.


Fig. 9. Wavelet decomposition of state variables in a data block *m*

Explicitly, the basic update scheme is as follows.

$$X\_{m+1\mid m+1}{}^{\left[i\right]} = X\_{m+1\mid m}{}^{\left[i\right]} + K\_{m+1}{}^{\left[i\right]} \left(Z\_{m+1}{}^{\left[i\right]} - C\_{m+1}{}^{\left[i\right]} X\_{m+1\mid m}{}^{\left[i\right]}\right)}\tag{86}$$

and

$$P\_{XX\,m+1\mid m+1}^{[i]} = (I - K\_{m+1}^{[i]} C\_{m+1}^{[i]})\, P\_{XX\,m+1\mid m}^{[i]} \tag{87}$$

The minimum variance Kalman gain matrix *K<sub>m+1</sub><sup>[i]</sup>* at each level is determined by

$$K\_{m+1}^{[i]} = P\_{XX\,m+1\mid m}^{[i]}\, C\_{m+1}^{[i]\,T} \left( C\_{m+1}^{[i]}\, P\_{XX\,m+1\mid m}^{[i]}\, C\_{m+1}^{[i]\,T} + R\_{m+1}^{[i]} \right)^{-1} \tag{88}$$
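A minimal sketch of the per-level update (86)-(88), assuming NumPy; `mdf_update` and the numerical values are illustrative, not from the chapter:

```python
import numpy as np

def mdf_update(x_pred, P_pred, z, C, R):
    """Measurement update (86)-(88) at one resolution level i."""
    S = C @ P_pred @ C.T + R                         # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)              # gain, eq (88)
    x_upd = x_pred + K @ (z - C @ x_pred)            # state update, eq (86)
    P_upd = (np.eye(len(x_pred)) - K @ C) @ P_pred   # covariance update, eq (87)
    return x_upd, P_upd

# Position-only measurement of a [position, velocity] state (illustrative values)
x_pred = np.array([1.0, 0.5])
P_pred = np.diag([0.04, 0.01])
C = np.array([[1.0, 0.0]])
R = np.array([[0.02]])
x_upd, P_upd = mdf_update(x_pred, P_pred, np.array([1.1]), C, R)

assert P_upd[0, 0] < P_pred[0, 0]  # the update reduces position uncertainty
```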

where the measurement matrix *C<sub>m+1</sub><sup>[i]</sup>* and *R<sub>m+1</sub><sup>[i]</sup>* are given by

$$C\_{m+1}^{[i]} = \text{diag}\left\{ C^{[i]}[(m+1)2^{i-1} - 2^{i-1} + 1],\ \dots,\ C^{[i]}[(m+1)2^{i-1}] \right\} \tag{89}$$

$$R\_{m+1}^{[i]} = \text{diag}\left\{ R^{[i]}[(m+1)2^{i-1} - 2^{i-1} + 1],\ \dots,\ R^{[i]}[(m+1)2^{i-1}] \right\} \tag{90}$$


coordinates and the respective velocities remain subject to estimation. The overall robot trajectory is shown in figure 10, where there are four lines plotted, but they are all close to each other due to the scale involved. Broadly one can see approximately a linear trajectory, followed by a curved trajectory, and approximately another linear trajectory afterwards. The line marked by the **\*** sign represents the measurement data at the highest resolution level, *i=3*. The line marked by the **.** sign is the estimation by sensor fusion. The line marked by the **+** sign is the estimation by extended Kalman filtering for the data obtained from the sensor of the highest resolution level *(i=3)*. The line indicated by the **o** sign is the reference trajectory. These lines are not explicitly seen in this figure. For explicit illustration of the experimental outcomes, the same figure with different zooming ranges and zooming powers is given in figures 11-18.

Fig. 10. The overall *measurement*, *reference*, *extended Kalman filtering (EKF)* and *multiresolutional distributed filtering* (*MDF)* estimation of robot trajectory. The **\*** sign is for measurement; **+** for EKF; **o** for reference; **.** for *MDF* estimated trajectory.

Figures 11 and 12 show the estimations in a linear mode. Figure 12 is the enlarged form of figure 11. From these figures it is seen that the Kalman filtering is effective at the first linear part of the trajectory; namely, relative to the Kalman filtering estimation, the estimation by sensor fusion by MDF is inferior. In this mode the angular velocity is zero, the system matrix is linear, and the linear Kalman filter is accurate enough to describe the dynamic system. During this period, the Kalman filter estimations are carried out in the smallest sampling time intervals. In the same period, MDF estimations made at lower resolution levels are extended to the highest resolution level. However, during this extension the x and y coordinates do not match exactly the estimates at the highest resolutional level because of the time difference between the estimations. Explicitly, in figure 8 the estimation at the level *i=0* is extended to estimation number 3 at the resolutional level *i=3*, where there is a time difference of more than one sampling time interval. The result is a higher estimation error in the MDF, and this error appears as a systematic error in estimation in the form of hangoff error, i.e., the error does

Once the sequences of updated state variables and error covariances $X_{m+1|m+1}^{[N,i]}$ and $P_{m+1|m+1}^{[N,i]}$, for *i=1,2,..,N*, are determined, they must be fused to generate an optimal $X_{m+1|m+1}^{[NF]}$ and $P_{m+1|m+1}^{[NF]}$. For the minimum fusion error covariance $P_{m+1|m+1}^{[NF]}$ derived in (Hong 1991; Hong 1992), the fused estimate $X_{m+1|m+1}^{[NF]}$ is calculated as

$$\begin{aligned} &X\_{m+1|m+1} ^{[NF]} \\ &= P\_{m+1|m+1} ^{[NF]} \left[ \sum\_{i=1}^{N} \left( P\_{m+1|m+1} ^{[N,i]} \right)^{-1} X\_{m+1|m+1} ^{[N,i]} - (N-1) \left( P\_{m+1|m} ^{[N]} \right)^{-1} X\_{m+1|m} ^{[N]} \right] \end{aligned} \tag{91}$$

where the minimum fusion error covariance $P_{m+1|m+1}^{[NF]}$ is given by

$$\left(P\_{m+1|m+1} \, ^{[NF]}\right)^{-1} = \sum\_{i=1}^{N} \left(P\_{m+1|m+1} \, ^{[N,i]}\right)^{-1} - (N-1) \left(P\_{m+1|m} \, ^{[N]}\right)^{-1} . \tag{92}$$

The fused estimate $X_{m+1|m+1}^{[NF]}$ is a weighted summation of both the predicted $X_{m+1|m}^{[N]}$ and the updated $X_{m+1|m+1}^{[N,i]}$, for *i=1,2,..,N*. The weight factors sum to the identity *I*, as can be seen by substituting $P_{m+1|m+1}^{[NF]}$ given above into the expression for $X_{m+1|m+1}^{[NF]}$ in (91). The estimations at the different levels of resolution, together with the fusion of these level-wise estimations into a unified estimate, form the multiresolutional distributed filtering (MDF).
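In code, the fusion step (91)-(92) is an information-form (inverse-covariance) combination of the level-wise estimates with the shared block prediction. The following sketch is illustrative only (the function and variable names are hypothetical, NumPy assumed), not the chapter's implementation:

```python
import numpy as np

def fuse_estimates(X_upd, P_upd, X_pred, P_pred):
    """Fuse N level-wise updated estimates with the common block prediction.

    X_upd, P_upd : lists of the N updated states/covariances (terms of Eq. 91)
    X_pred, P_pred : block-predicted state/covariance at the fusion point
    """
    N = len(X_upd)
    P_pred_inv = np.linalg.inv(P_pred)
    # Eq. (92): fused information = sum of the updated informations
    # minus (N-1) copies of the shared predicted information.
    info = sum(np.linalg.inv(P) for P in P_upd) - (N - 1) * P_pred_inv
    P_fused = np.linalg.inv(info)
    # Eq. (91): information-weighted combination of the estimates.
    weighted = sum(np.linalg.inv(P) @ x for P, x in zip(P_upd, X_upd))
    X_fused = P_fused @ (weighted - (N - 1) * P_pred_inv @ X_pred)
    return X_fused, P_fused
```

By construction, the weight matrices applied to the predicted and updated estimates sum to the identity, which is the property noted above.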

#### **5. Experiments with the autonomous robot**

The computer experiments were carried out with simulated robot navigation. The state vector is given by (93), where *N*=*i* represents a general resolutional level.

$$\mathbf{x}_k^{[N]} = \begin{bmatrix} x_{k,1}^{[N]} \\ x_{k,2}^{[N]} \\ \vdots \\ x_{k,p}^{[N]} \end{bmatrix} \tag{93}$$

Explicitly,

$$\mathbf{x}_k^{[N]} = [\,x,\ \dot{x},\ y,\ \dot{y},\ \omega\,]^\top$$

where $\omega$ is the angular rate, which is estimated during the move. When the robot moves in a straight line, the angular rate becomes zero, and the other state variables, namely the *x* and *y* coordinates and the respective velocities, remain subject to estimation.

The overall robot trajectory is shown in figure 10. Four lines are plotted, but they all lie close to each other due to the scale involved. Broadly, one can see an approximately linear trajectory, followed by a curved trajectory, and another approximately linear trajectory afterwards. The line marked by the **\*** sign represents the measurements at the highest resolution level (*i=3*). The line marked by the **.** sign is the estimation by sensor fusion. The line marked by the **+** sign is the estimation by extended Kalman filtering applied to the data from the sensor of the highest resolution level (*i=3*). The line indicated by the **o** sign is the reference trajectory. These lines cannot be distinguished in this figure; for an explicit illustration of the experimental outcomes, the same figure is shown with different zooming ranges and zooming powers in figures 11-18.

Fig. 10. The overall *measurement*, *reference*, *extended Kalman filtering (EKF)* and *multiresolutional distributed filtering* (*MDF*) estimation of the robot trajectory. The **\*** sign is for measurement; **+** for EKF; **o** for reference; **.** for the *MDF* estimated trajectory.
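In the linear mode discussed next, the filtering reduces to an ordinary Kalman filter with the constant-velocity transition matrix derived earlier (Eqs. (12) and (15)). The following is a minimal sketch of one predict/update cycle per coordinate; the white-noise-acceleration process-noise model and the numerical values are illustrative assumptions, not the chapter's settings:

```python
import numpy as np

def kf_step(x, P, z, T, q, r):
    """One predict/update cycle of a constant-velocity Kalman filter
    for the state [position, velocity] and a position-only measurement z."""
    Phi = np.array([[1.0, T], [0.0, 1.0]])   # transition matrix, as in Eq. (12)
    H = np.array([[1.0, 0.0]])               # position is measured
    Q = q * np.array([[T**3 / 3, T**2 / 2],  # white-noise-acceleration noise (assumed model)
                      [T**2 / 2, T]])
    # predict
    x = Phi @ x
    P = Phi @ P @ Phi.T + Q
    # update
    S = H @ P @ H.T + r                      # innovation covariance
    K = P @ H.T / S                          # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Running this per coordinate at the smallest sampling interval corresponds to the highest-resolution filtering described in the text.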

Figures 11 and 12 show the estimations in the linear mode; figure 12 is an enlarged form of figure 11. From these figures it is seen that Kalman filtering is effective in the first linear part of the trajectory; that is, relative to the Kalman filtering estimation, the estimation by MDF sensor fusion is inferior. In this mode the angular velocity is zero, the system matrix is linear, and the linear Kalman filter describes the dynamic system accurately enough. During this period the Kalman filter estimations are carried out at the smallest sampling time intervals. In the same period, MDF estimations made at the lower resolution levels are extended to the highest resolution level. During this extension, however, the x and y coordinates do not exactly match the estimates at the highest resolutional level, because of the time difference between the estimations. Explicitly, in figure 8 the estimation at level *i=0* is extended to estimation number 3 at resolutional level *i=3*, where there is a time difference of more than one sampling time interval. The result is a higher estimation error in the MDF, which appears as a systematic error in the form of a hangoff error, i.e., an error that does


not go to zero. In a sensor fusion process such sensor delays are inevitable, and thus delay-related effects are inevitable as well. This research clearly illustrates the extent of such effects, which are eventually to be categorized as errors. Consequently, one can conclude that the fusion of sensors from different resolutional levels has an inherent latency that turns out to be an undesirable outcome in some cases, as in the present situation, although this is not general, as the following figures (13-18) indicate.











Fig. 11. Enlarged *measurement*, *reference*, *extended Kalman filtering (EKF)* and *multiresolutional distributed filtering* (*MDF*) estimation of the robot trajectory in the first linear period. The **\*** sign is for measurement; **+** for EKF; **o** for reference; **.** for the *MDF* estimated trajectory.

Fig. 12. Enlarged *measurement*, *reference*, *extended Kalman filtering (EKF)* and *multiresolutional distributed filtering* (*MDF*) estimation of the robot trajectory in the first linear period. The **\*** sign is for measurement; **+** for EKF; **o** for reference; **.** for the *MDF* estimated trajectory.















Figures 13 and 14 show the estimations in the bending mode; figure 14 is an enlarged form of figure 13. In this case the estimations by sensor fusion are superior to the estimations by extended Kalman filtering. This can be explained by noting that the system matrix involves the angular velocity, which makes the system dynamics non-linear. The result is a marked separation between the reference trajectory and the estimated trajectory, due to the approximation error caused by the Taylor-series expansion and ensuing linearization in the extended Kalman filtering (EKF) at the highest resolution level. One should note that the effect of this approximation error propagates four times within a data block, up to the time point where fusion and predictions are made for the following data block, as seen in figure 4. In this non-linear period, sensor fusion is very effective, and the difference between the estimated outcomes and the reference trajectory appears negligibly small. However, this is not exactly so: because of the delay of the data from the lower resolutional levels to the highest resolutional level, as described in the preceding paragraph, there is some difference between the true position and the estimated position. Nevertheless, the true trajectory is almost perfectly identified. The effectiveness of the lower resolution levels is due to more effective linearization and therefore better state estimations. Although at the lower resolution levels the linearization error of the EKF is greater than that at the higher resolutional level, such modeling errors are accounted for in the process noise, and therefore the effect of the linearization errors becomes less important. It should be pointed out, however, that at all resolutional levels the sensor quality plays an essential role in the estimation errors.

Fig. 13. Enlarged *measurement*, *reference*, *extended Kalman filtering (EKF)* and *multiresolutional distributed filtering* (*MDF*) estimation of the robot trajectory in the bending period. The **\*** sign is for measurement; **+** for EKF; **o** for reference; **.** for the *MDF* estimated trajectory.

Fig. 14. Enlarged *measurement*, *reference*, *extended Kalman filtering (EKF)* and *multiresolutional distributed filtering* (*MDF*) estimation of the robot trajectory in the bending period. The **\*** sign is for measurement; **+** for EKF; **o** for reference; **.** for the *MDF* estimated trajectory.

Figure 15 shows the trajectory where the EKF estimation crosses the reference trajectory. This can be explained as follows. The EKF estimations at the highest resolutional level, without multiresolutional information, start to deviate from the reference trajectory in the bending mode, as seen in figures 13 and 14. The Kalman filter then tends to make estimations that compensate this deviation error, and the deviation starts to become smaller. At the same time the bending information, namely the angular frequency $\omega$, becomes effective, and these two joint corrective actions in this turbulent transition period make the estimation error minimal, until the estimated trajectory finally crosses the reference trajectory. Note that at this resolutional level the Kalman filter bandwidth is relatively wide, matching the sampling rate, which is the highest. As seen in (31), the system matrix is strongly coupled to the angular frequency, and even a small estimation error in $\omega$ can cause relatively large effects in this non-linear environment. After the crossing, the deviation starts to increase, and after the bending is over it remains constant in the second linear mode of the trajectory, as seen in figures 16 and 17. During this period, on the other hand, the multiresolutional distributed filtering (MDF) estimations improve due to the incoming bending information; the deviations become smaller, and the MDF estimate finally crosses the reference trajectory, as shown in figure 16. The estimations at the lower resolution levels are much more accurate than those at the highest resolution level, and by means of the sensor fusion process the fused estimations are quite accurate in the bending mode and afterwards. This is seen in figures 13 through 18. The effect of the information latency on the position estimation is also clearly observed in these figures.

Fig. 15. Enlarged *measurement*, *reference*, *extended Kalman filtering (EKF)* and *multiresolutional distributed filtering* (*MDF*) estimation of the robot trajectory in the bending period. The **\*** sign is for measurement; **+** for EKF; **o** for reference; **.** for the *MDF* estimated trajectory.

Fig. 16. Enlarged *measurement*, *reference*, *extended Kalman filtering (EKF)* and *multiresolutional distributed filtering* (*MDF*) estimation of the robot trajectory in the transition between the bending and second linear periods. The **\*** sign is for measurement; **+** for EKF; **o** for reference; **.** for the *MDF* estimated trajectory.
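The turning-mode dynamics referred to in (31) involve the angular rate $\omega$, which is what makes the system non-linear in the state. Since (31) itself lies outside this excerpt, the following sketch uses a commonly used coordinated-turn transition for the state $[x, \dot{x}, y, \dot{y}, \omega]$, assumed here to be of the same type; it illustrates why small errors in the estimated $\omega$ propagate into the position estimates:

```python
import numpy as np

def turn_transition(T, w):
    """Coordinated-turn transition matrix over sampling interval T for the
    state [x, vx, y, vy, omega]. (Assumed form; not necessarily Eq. (31).)"""
    if abs(w) < 1e-9:
        # Straight-line limit: the constant-velocity transition matrix.
        return np.array([[1, T, 0, 0, 0],
                         [0, 1, 0, 0, 0],
                         [0, 0, 1, T, 0],
                         [0, 0, 0, 1, 0],
                         [0, 0, 0, 0, 1]], dtype=float)
    s, c = np.sin(w * T), np.cos(w * T)
    # The velocity components are rotated by the angle w*T; the positions
    # integrate the rotating velocity over the interval.
    return np.array([[1, s / w,       0, -(1 - c) / w, 0],
                     [0, c,           0, -s,           0],
                     [0, (1 - c) / w, 1, s / w,        0],
                     [0, s,           0, c,            0],
                     [0, 0,           0, 0,            1]])
```

An EKF linearizes this transition about the current $\omega$ estimate, so the resulting system matrix depends strongly on $\omega$, consistent with the behaviour discussed above.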


Figures 17 and 18 show the estimations in the second linear trajectory; they are enlarged forms of figure 10 in this very period. Next to the satisfactory MDF estimations, the figures show the estimations of the extended Kalman filtering at the highest resolutional level. These estimations have relatively large errors, which are due to the system dynamics matrix given by (31), where $\omega$ is expected to be zero in this linear period; however, it is only approximately zero as calculated by the Kalman filtering. The small non-zero terms in the matrix cause errors in the estimations, which are interpreted as model errors in the Kalman filtering operation. Such errors further cause round-off errors in the covariance matrix in (43), and finally poor estimation. When the same operation is repeated by forcefully switching the matrix from the bending mode to the linear mode by putting $\omega=0$ in (31), the Kalman filtering estimation in the last linear period becomes comparable, as illustrated in figures 11 and 12. Since the highest resolution level estimations have large errors, they make small contributions to the sensor-data fusion process, and therefore the fusion results remain accurate. Figures 13 through 18 represent a firm indication of the effectiveness and robustness of the sensor fusion in this research.

Fig. 17. Enlarged *measurement*, *reference*, *extended Kalman filtering (EKF)* and *multiresolutional distributed filtering* (*MDF*) estimation of the robot trajectory in the second linear period. The **\*** sign is for measurement; **+** for EKF; **o** for reference; **.** for the *MDF* estimated trajectory.

Fig. 18. Enlarged *measurement*, *reference*, *extended Kalman filtering (EKF)* and *multiresolutional distributed filtering* (*MDF*) estimation of the robot trajectory in the second linear period. The **\*** sign is for measurement; **+** for EKF; **o** for reference; **.** for the *MDF* estimated trajectory.

#### **6. Discussion**

In this work the effectiveness of multisensor-based multiresolutional fusion is investigated by means of the estimation errors in mobile robot position determination. The comparison is made off-line, not in real time. By doing so, a clear view is presented of the conditions under which the multiresolutional multi-sensor fusion process is effective, and also of the circumstances in which the fusion process may have shortcomings, and why. However, the implementation can be carried out in real time, in the form of one-block-ahead prediction forming the data-sensor fusion, and one-step-ahead prediction at the highest resolutional level, i.e., for *i=3*, without the fusion process. These are illustrated in figure 4. In both cases, i.e., real-time and off-line operation, the merits of the multiresolutional multisensor fusion remain robust, although some unfavorable deviation from the existing results may occur in real time due to block prediction compared with one-step-ahead prediction. Investigations of real-time operation for the assessment of this robustness are of interest, since the mobile robot is especially meant for this type of operation.

#### **7. Conclusions**

Autonomous mobile robot navigation is a challenging issue in which the robot should be provided with accurate and reliable position information. Although reliable information can be provided by adding redundant sensors, enhanced accuracy and precision can be obtained by synergistically coordinating the information from these sensors. In this respect, the present research introduced a novel information fusion concept based on the inverse wavelet transform using independent multiresolutional sensors. In the linear system description, the highest resolutional sensor provides enough information for optimal information processing by Kalman filtering, where the residual variance is minimal, so that the information delivered by the multiresolutional sensors can be redundant, depending on the sensors' qualities and associated noises. In situations where the system dynamics is non-linear, the Kalman filter is still optimal in its extended formulation; however, the estimation errors in this case depend on the degree of non-linearity of the system dynamics. The multiresolutional sensor fusion becomes quite effective in the non-linear case, since partial non-linearity information of the system at different resolutional scales is available. Sensor quality is always an important factor in the estimation. These features are demonstrated, and the fusion process presented can easily be extended to real-time operation as well as to some cases of a probabilistic nature, such as missing measurements, sensor failures and other probabilistic occurrences.
