Kalman Filters and Analysis

#### **Chapter 3**

## Kalman Filter Estimation and Its Implementation

*Erick Ulin-Avila and Juan Ponce-Hernandez*

#### **Abstract**

In this chapter, we use the Kalman filter to estimate the future state of a system. We present the theory, design, simulation, and implementation of the Kalman filter. As a case example we use the estimation of temperature with a resistance temperature detector (RTD), which has not been reported before. After a brief literature review, the theoretical analysis of the Kalman filter is presented along with that of the RTD. The dynamics of the RTD system are analytically derived and identified using MATLAB. Then, the design of a time-varying Kalman filter using MATLAB is presented. The solution to the Riccati equation is used to estimate the future state. We then implement the design in C code for an ATmega328 microcontroller. We show under what conditions the system may be simplified. In our case, we reduced the order of the system to that of a system with a first-order response, that of an RC circuit, which gave satisfactory results. Furthermore, we can find two first-order systems whose responses define two boundaries inside which the evolution of a second-order system remains.

**Keywords:** Kalman filter, prediction, Riccati equation

#### **1. Introduction**

A deterministic system is a system whose governing physical laws are specified such that, if the state of the system at some time is known, one can precisely predict the state at a later time. Nondeterministic systems are divided into two categories: stochastic and random. A stochastic system has governing physical laws such that, even if the state at some point in time is known precisely, it is impossible to determine the state of the system at a later time precisely; one can only determine the probability of a state, rather than the state itself. A random system is one which has no apparent governing physical laws. Practically, we treat all unpredictable systems, stochastic or random, as stochastic systems, since we employ the same methods to study them. While we are unable to predict the state of a random process, we can evolve a strategy to deal with such processes. Such a strategy is based on the branch of mathematics dealing with unpredictable systems, called statistics.

Estimation is the process of extracting information from data which can be used to predict the behavior of state variables in a system. Estimation uses statistical criteria to infer the actual value of unknown variables. Estimation models are used to process noisy measurements, filter them, and detect inaccuracies. When random signals are passed through a deterministic system, their statistical properties are modified. A deterministic system to which random signals are input, such that the output is a random signal with desired statistical properties, is called a filter. Filters can be linear or nonlinear, time-invariant or time-varying. However, for simplicity we will usually consider linear, time-invariant filters. Linear, time-invariant filters are commonly employed in control systems to reduce the effect of measurement noise on the control system. In such systems, the output is usually a superposition of a deterministic signal and random measurement noise.

The output of a filter not only has a frequency content different from that of the input signal, but also reflects other characteristics of the filter, such as a phase shift or a change in magnitude. In other words, the signal passing through a filter is also distorted by the filter, which is undesirable. A filter produces an output signal according to its characteristics, described by the transfer function, frequency or impulse response, or a state-space representation of the filter. However, a filter can be designed to achieve a desired set of performance objectives, i.e. the numerator and denominator polynomials of the filter's transfer function, or the coefficient matrices of the filter's state-space model, can be selected by a design process to balance the conflicting requirements of maximum noise attenuation and minimum signal distortion.

There are several prediction models to infer the system state; however, it can be shown that, of all estimation tools, the Kalman filter (KF) is the one that minimizes the variance of the estimation error, which enables accurate estimation of the process.

#### **1.1 Literature review**

The first application of state estimation was in the aerospace field to solve problems related to the prediction of position in aerospace vehicles. Nowadays, estimation has been applied in several fields of engineering and control systems. One common application is in data acquisition, to solve the problem of predicting the state of a system that cannot be measured directly due to the characteristics and complexity of the environment.

The KF is an estimator proposed by Rudolf E. Kalman in 1960. It is an algorithm to estimate the evolution of a dynamic system, especially when the data are very noisy. The principle of the filter is to form a predicted-state hypothesis and use the measurement data to correct it, improving the estimate at each time step. It is a suitable algorithm to apply to dynamic systems, linking real-time measurements to predictions of the system parameters through time. The KF has been implemented in several fields, such as navigation systems [1–4], financial models [5–7], vehicle tracking [8, 9] and image processing [10–12], to mention only some of them. This statistical tool is useful for two main purposes: estimation, and performance analysis of estimators.

In the field of IC technology, it has been implemented for thermal estimation. Multicore processors use a dynamic thermal management mechanism that relies on embedded thermal sensors to monitor the real-time thermal behavior of the processor; these sensors are susceptible to a variety of noise sources, which causes discrepancies between actual temperatures and those observed by on-chip thermal sensors. Therefore, to correct the discrepancies in sensing, Kalman prediction is used to estimate real values from noisy sensor readings [13]. Another novel application of the KF is in the electric vehicle industry: the state of charge of a lithium-ion battery is an important parameter for guaranteeing its safe operation. Battery performance is influenced by aging, which makes it difficult to predict the battery state; to overcome this issue, the application of the KF in combination with other methods is a suitable methodology [14–17].

*Kalman Filter Estimation and Its Implementation DOI: http://dx.doi.org/10.5772/intechopen.97406*

Recently, the KF has been applied in several industrial applications. With the development of manufacturing processes, welding automation has emerged as an important tool to speed up the production rate in the assembly line with stronger and higher-quality welds. Nevertheless, several factors can influence weld quality, the most important being the arc length, which can be affected by an irregular workpiece surface and by loss of the tungsten electrode. To enhance quality during the Gas Tungsten Arc Welding (GTAW) process, the KF is applied to keep the arc length stable and minimize external noise [18]. In the field of sensorless control, the KF has been used in intelligent electrical drives. Controlling induction motor drives without a mechanical speed sensor at the motor shaft reduces hardware complexity and cost. Additionally, the use of induction motors without a position sensor is useful for applications with abrasive and hard surfaces. Thus, an estimation method is necessary to predict the position and velocity of the shaft [19–21].

In applications related to radio astronomy, the KF has been applied to the analysis of Very-Long-Baseline Interferometry (VLBI) data, to analyze parameters such as baseline lengths, Earth orientation parameters, radio source coordinates and tropospheric delays. Nowadays, modern antennas are being constructed and equipped with highly accurate broadband receiving systems. Besides the accurate observations obtained by astronomical instruments, it is necessary to implement estimation methods to optimize the models applied in data analysis [22, 23]. In power systems, one of the main difficulties is power quality, owing to total harmonic distortion (THD), which is mainly caused by nonlinear loads. THD effects are strongly correlated with issues such as device heating, breakdown of electronic components, network interference, etc. Several filters have been applied to decrease the effect of harmonics; nevertheless, the application of the KF has shown an important reduction in the effect of harmonics [24–26]. In the field of biomedicine, the KF is widely used over other estimation methodologies to overcome the different sources of noise. Specifically, the KF has been used to smooth and predict Electroencephalogram and Electrocardiogram signals [27, 28]. Recently, there have been reports in the literature of a new methodology to protect the confidentiality of transmitted data based on a Kalman filter. This strategy proposes an encryption algorithm using the KF and is suggested for use in industrial cyber-physical systems (ICPSs) to protect data privacy [29, 30].

As mentioned above, the KF has been used in diverse fields of science and technology to predict specific parameters of interest according to the application. Temperature evolution is an important parameter to measure and predict, in order to study or control the temperature in an environment [31, 32], device [13, 33, 34] or process [18]. It is well known that RTDs are commercial devices that are very useful for monitoring temperature due to their stability and accuracy. However, RTDs are subject to self-heating, which causes noisy readings, making the RTD a suitable example for implementing a KF for temperature estimation. Importantly, we searched the literature and found no evidence of previous work reporting the use of a KF to filter the noise and predict the temperature behavior from RTD readings.

#### **2. Theoretical analysis of a Kalman Filter**

The final objective of this study is to obtain the specification of a linear dynamic system (Wiener filter [35]) which accomplishes the prediction, separation, or detection of a random signal [36]. With the state-transition method, a single derivation covers a large variety of problems: growing and infinite memory filters, stationary and non-stationary statistics, etc. Having guessed the "state" of the estimation (i.e., filtering or prediction) problem correctly, one is led to a nonlinear difference (or differential) equation for the covariance matrix of the optimal estimation error. From the solution of the equation for the covariance matrix we obtain the coefficients characterizing the optimal linear filter [36]. The following is a simplified derivation described previously in the references [37, 38].

#### **2.1 Defining statistical quantities of use**

The initial state, **x**(0), of a stochastic system is insufficient to determine its future state, x(t). Thus, based upon a statistical analysis of similar systems, and taking the average of their future states at a given time, t, we can calculate the mean state-vector as follows:

$$\mathbf{x}_m(t) = \frac{1}{N}\sum_{i=1}^{N} \mathbf{x}_i(t) \tag{1}$$

Thus $\mathbf{x}_m(t)$ is the expected state-vector after studying N systems. It is also called the expected value of the state-vector, $\mathbf{x}_m(t) = E[\mathbf{x}(t)]$. Another statistical quantity of use is the correlation matrix of the state-vector:

$$\mathbf{P}_x(t,\tau) = \frac{1}{N}\sum_{i=1}^{N} \mathbf{x}_i(t)\,\mathbf{x}_i^T(\tau) \tag{2}$$

The correlation matrix, $\mathbf{P}_x(t,\tau)$, is a measure of correlation, a statistical property among the different state variables, and between the same state variable at two different times. Two scalar variables, $x_1(t)$ and $x_2(\tau)$, are said to be uncorrelated if the expected value of $x_1(t)\,x_2(\tau)$ vanishes, i.e. $E[x_1(t)\,x_2(\tau)] = 0$, where $\tau$ is different from $t$.

The correlation matrix is the expected value of the matrix $\mathbf{x}_i(t)\,\mathbf{x}_i^T(\tau)$, or $\mathbf{P}_x(t,\tau) = E\big[\mathbf{x}_i(t)\,\mathbf{x}_i^T(\tau)\big]$. When $t = \tau$, the correlation matrix $\mathbf{P}_x(t,t) = E\big[\mathbf{x}_i(t)\,\mathbf{x}_i^T(t)\big]$ is called the covariance matrix. The covariance matrix, $\mathbf{P}_x(t,t)$, is symmetric. If $\mathbf{P}_x(t,\tau)$ is a diagonal matrix, i.e. $E\big[x_i(t)\,x_j(\tau)\big] = 0$ for $i \neq j$, all the state variables are uncorrelated.
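As a numerical illustration of Eqs. (1) and (2), the following C sketch (not from the chapter; the ensemble size and values in the check are made-up assumptions) computes the sample mean state-vector and the covariance matrix at $t = \tau$ for a small ensemble of two-state systems:

```c
#include <assert.h>
#include <math.h>

#define N  4   /* number of sample systems (hypothetical ensemble) */
#define NX 2   /* number of state variables                        */

/* Sample mean of the state-vector, Eq. (1): x_m = (1/N) sum_i x_i */
void mean_state(double x[N][NX], double xm[NX]) {
    for (int j = 0; j < NX; j++) {
        xm[j] = 0.0;
        for (int i = 0; i < N; i++)
            xm[j] += x[i][j];
        xm[j] /= N;
    }
}

/* Correlation matrix at t = tau (the covariance matrix), Eq. (2):
   P = (1/N) sum_i x_i x_i^T, symmetric by construction.           */
void covariance(double x[N][NX], double P[NX][NX]) {
    for (int r = 0; r < NX; r++)
        for (int c = 0; c < NX; c++) {
            P[r][c] = 0.0;
            for (int i = 0; i < N; i++)
                P[r][c] += x[i][r] * x[i][c];
            P[r][c] /= N;
        }
}
```

For the ensemble {(1,2), (3,4), (5,6), (7,8)} this yields $\mathbf{x}_m = (4, 5)$ and a symmetric matrix with $P_{01} = P_{10} = 25$, in line with the symmetry property noted above.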

#### **2.2 Defining the filter in state space - discrete domain**

Consider a plant which we cannot model accurately using only a deterministic model, because of the presence of uncertainties called process noise and measurement noise:

$$\mathbf{x}_{k+1} = \mathbf{A}\mathbf{x}_k + \mathbf{B}u_k + \mathbf{w}_k \tag{3}$$

$$\mathbf{y}_k = \mathbf{C}\mathbf{x}_k + \mathbf{v}_k \tag{4}$$

In the discrete-time state-space representation above, **w** is the process noise vector, which may arise due to modeling errors such as neglecting nonlinear dynamics, and **v** is the measurement noise vector. The random noises **w** and **v** are assumed to be stationary white noises, with covariance matrices expressed as follows:

$$\mathbf{Q} = E\left[\mathbf{w}_k \mathbf{w}_k^T\right] \tag{5}$$

$$\mathbf{R} = E\left[\mathbf{v}_k \mathbf{v}_k^T\right] \tag{6}$$


Since we cannot predict the state-vector $\mathbf{x}$ of a stochastic plant, an observer is required to estimate it, based upon a measurement of the output $\mathbf{y}$ and a known input $\mathbf{u}$. We need an observer that calculates the estimated state-vector, $\hat{\mathbf{x}}$, optimally, based upon a statistical description of the output and the plant state. Such an observer is the Kalman filter, which minimizes a statistical measure of the estimation error, $\mathbf{e}_k = \mathbf{x}_k - \hat{\mathbf{x}}_k$. This statistical measure is the covariance of the estimation error:

$$\mathbf{P}_k = E\left[\mathbf{e}_k \mathbf{e}_k^T\right] = E\left[\left(\mathbf{x}_k - \hat{\mathbf{x}}_k\right)\left(\mathbf{x}_k - \hat{\mathbf{x}}_k\right)^T\right] \tag{7}$$

Since the state-vector $\mathbf{x}$ is a random vector, and the estimated state $\hat{\mathbf{x}}$ is based on a measurement of the output $\mathbf{y}$ over only a finite record, a true statistical average of $\mathbf{x}$ would require measuring the output for an infinite time. Let T denote the time at which the state is to be estimated, given the output record up to time t. If T < t, this is a data-smoothing (interpolation) problem; if T = t, it is called filtering; if T > t, we have a prediction problem. Since the original treatment is general enough, the collective term estimation is used [36].

Hence, the best estimate to obtain for **x** is not the true mean, but a conditional mean, *xm*, based on only a finite time record of the output, **y**:

$$\mathbf{x}_m = E\left[\mathbf{x} : \mathbf{y},\ T \le t\right] \tag{8}$$

Taking into consideration the deviation of the estimated state-vector, $\hat{\mathbf{x}}$, from the conditional mean, $\mathbf{x}_m$, we can write the estimated state-vector as:

$$\hat{\mathbf{x}} = \mathbf{x}_m + \Delta\mathbf{x} \tag{9}$$

where $\Delta\mathbf{x}$ is the deviation from the conditional mean. The conditional covariance matrix of the estimation error based on a finite record of the output is then:

$$\mathbf{P}_k = E\left[\mathbf{e}_k \mathbf{e}_k^T : \mathbf{y},\ T \le t\right] = E\left[\mathbf{x}\mathbf{x}^T\right] - \mathbf{x}_m \mathbf{x}_m^T + \Delta\mathbf{x}\,\Delta\mathbf{x}^T \tag{10}$$

The best estimate of the state-vector occurs when $\Delta\mathbf{x} = 0$, i.e. $\hat{\mathbf{x}} = \mathbf{x}_m$, which minimizes the conditional covariance matrix, or error covariance matrix, $\mathbf{P}_k$. In other words, minimization of $\mathbf{P}_k$ yields the optimal observer, which is the Kalman filter.

#### **2.3 Defining the Kalman gain**

The state-equation of the Kalman filter is that of a time-varying observer, and can be written as follows:

$$
\hat{\mathbf{x}}\_{k+1} = \mathbf{A}\hat{\mathbf{x}}\_k + \mathbf{B}u\_k + \mathbf{K}\_k \left[ \mathbf{y}\_k - \mathbf{C}\hat{\mathbf{x}}\_k \right] \tag{11}
$$

$\mathbf{K}_k$ is the gain matrix of the Kalman filter. Assume the prior estimate of $\hat{\mathbf{x}}_k$, denoted $\hat{\mathbf{x}}'_k$, is obtained from knowledge of the system: $\hat{\mathbf{x}}'_k = \mathbf{A}\hat{\mathbf{x}}_{k-1} + \mathbf{B}u_{k-1}$. We write an update equation for the new estimate, combining the prior estimate with the measurement data:

$$
\hat{\mathbf{x}}\_k = \hat{\mathbf{x}}\_k' + \mathbf{K}\_k \left[ \mathbf{y}\_k - \mathbf{C} \hat{\mathbf{x}}\_k' \right] \tag{12}
$$

If we substitute Eq. (4) into Eq. (12) we get:

*Adaptive Filtering - Recent Advances and Practical Implementation*

$$
\hat{\mathbf{x}}\_{k} = \hat{\mathbf{x}}\_{k}^{\prime} + \mathbf{K}\_{k} \left[ \mathbf{C} \mathbf{x}\_{k} + \boldsymbol{\nu}\_{k} - \mathbf{C} \hat{\mathbf{x}}\_{k}^{\prime} \right] \tag{13}
$$

Substituting Eq. (13) into Eq. (7) gives:

$$\mathbf{P}\_{k} = E\left[\left(\left(\mathbf{I} - \mathbf{K}\_{k}\mathbf{C}\right)\left(\mathbf{x}\_{k} - \hat{\mathbf{x}}\_{k}^{\prime}\right) - \mathbf{K}\_{k}\mathbf{v}\_{k}\right)\left(\left(\mathbf{I} - \mathbf{K}\_{k}\mathbf{C}\right)\left(\mathbf{x}\_{k} - \hat{\mathbf{x}}\_{k}^{\prime}\right) - \mathbf{K}\_{k}\mathbf{v}\_{k}\right)^{T}\right] \tag{14}$$

Here $\left(\mathbf{x}_k - \hat{\mathbf{x}}'_k\right)$ is the error of the prior estimate. Since there is no correlation among the input, the process noise and the measurement noise, the expectation may be rewritten as:

$$\mathbf{P}\_{k} = (I - \mathbf{K}\_{k}\mathbf{C})E\left[\left(\mathbf{x}\_{k} - \hat{\mathbf{x}}\_{k}^{\prime}\right)\left(\mathbf{x}\_{k} - \hat{\mathbf{x}}\_{k}^{\prime}\right)^{T}\right](I - \mathbf{K}\_{k}\mathbf{C})^{T} + \mathbf{K}\_{k}\mathbf{E}\left[\boldsymbol{\nu}\_{k}\boldsymbol{\nu}\_{k}^{T}\right]\mathbf{K}\_{k}^{T} \tag{15}$$

Using Eqs. (6) and (7), we obtain:

$$\mathbf{P}\_k = (\mathbf{I} - \mathbf{K}\_k \mathbf{C}) \mathbf{P}\_k' (\mathbf{I} - \mathbf{K}\_k \mathbf{C})^T + \mathbf{K}\_k \mathbf{R} \mathbf{K}\_k^T \tag{16}$$

Eq. (16) is the error covariance update equation, where $\mathbf{P}'_k$ is the prior estimate of $\mathbf{P}_k$.

The trace of the error covariance matrix is the sum of the mean squared errors. The mean squared error may therefore be reduced by minimizing the trace of $\mathbf{P}_k$. This requires differentiating the trace of $\mathbf{P}_k$ with respect to $\mathbf{K}_k$ and setting the result to zero to find the $\mathbf{K}_k$ that minimizes the trace of $\mathbf{P}_k$.

We rewrite Eq. (16);

$$\mathbf{P}\_k = \mathbf{P}'\_k - \mathbf{P}'\_k \mathbf{C}^T \mathbf{K}^T\_k - \mathbf{K}\_k \mathbf{C} \mathbf{P}'\_k + \mathbf{K}\_k \mathbf{C} \mathbf{P}'\_k \mathbf{C}^T \mathbf{K}^T\_k + \mathbf{K}\_k \mathbf{R} \mathbf{K}^T\_k \tag{17}$$

Taking the trace of this expression gives:

$$T[\mathbf{P}_k] = T[\mathbf{P}'_k] - 2\,T[\mathbf{K}_k\mathbf{C}\mathbf{P}'_k] + T\left[\mathbf{K}_k\left(\mathbf{C}\mathbf{P}'_k\mathbf{C}^T + \mathbf{R}\right)\mathbf{K}_k^T\right] \tag{18}$$

Then, we differentiate with respect to *Kk*;

$$\frac{dT[\mathbf{P}_k]}{d\mathbf{K}_k} = -2\left(\mathbf{C}\mathbf{P}'_k\right)^T + 2\,\mathbf{K}_k\left(\mathbf{C}\mathbf{P}'_k\mathbf{C}^T + \mathbf{R}\right) \tag{19}$$

Setting to zero and solving for *Kk* we obtain the Kalman gain equation:

$$\mathbf{K}\_k = \mathbf{P}\_k' \mathbf{C}^T \left(\mathbf{C} \mathbf{P}\_k' \mathbf{C}^T + \mathbf{R}\right)^{-1} \tag{20}$$

Substitution of Eq. (20) into Eq. (17) gives:

$$\mathbf{P}\_k = (I - K\_k \mathbf{C}) \mathbf{P}'\_k \tag{21}$$

Eq. (21) is the update equation for the error covariance matrix with optimal gain.
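For the scalar case (one state, one measurement) the minimization above can be checked numerically. The following C sketch, with illustrative values only, evaluates the posterior variance of Eq. (16) and the optimal gain of Eq. (20); perturbing the gain in either direction increases the variance:

```c
#include <assert.h>
#include <math.h>

/* Posterior error variance, Eq. (16), in the scalar case:
   P = (1 - K c)^2 P' + K^2 R                                       */
double posterior_var(double K, double Pprior, double c, double R) {
    double a = 1.0 - K * c;
    return a * a * Pprior + K * K * R;
}

/* Optimal Kalman gain, Eq. (20), in the scalar case:
   K = P' c / (c^2 P' + R)                                          */
double kalman_gain(double Pprior, double c, double R) {
    return Pprior * c / (c * c * Pprior + R);
}
```

With P' = 2, c = 1, R = 0.5 the optimal gain is K = 0.8, and the resulting variance 0.4 equals (1 − Kc)P', the simplified form of Eq. (21).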

The state is projected to the next step using:

$$\hat{\mathbf{x}}'_{k+1} = \mathbf{A}\hat{\mathbf{x}}_k + \mathbf{B}u_k \tag{22}$$

Note that the process noise $\mathbf{w}_k$ is unknown and therefore does not appear in the projection.

To project the error covariance matrix into the next time interval, k + 1, we first find an expression for the error based on the prior error:

$$\mathbf{e}'_{k+1} = \mathbf{x}_{k+1} - \hat{\mathbf{x}}'_{k+1} = \left(\mathbf{A}\mathbf{x}_k + \mathbf{B}u_k + \mathbf{w}_k\right) - \left(\mathbf{A}\hat{\mathbf{x}}_k + \mathbf{B}u_k\right) = \mathbf{A}\mathbf{e}_k + \mathbf{w}_k \tag{23}$$

Eq. (7) at time k + 1 is:

$$\mathbf{P}'_{k+1} = E\left[\mathbf{e}'_{k+1}\mathbf{e}'^{\,T}_{k+1}\right] = E\left[\left(\mathbf{A}\mathbf{e}_k + \mathbf{w}_k\right)\left(\mathbf{A}\mathbf{e}_k + \mathbf{w}_k\right)^T\right] \tag{24}$$

Assuming that $\mathbf{e}_k$ and $\mathbf{w}_k$ have zero cross-correlation:

$$\mathbf{P}'_{k+1} = E\left[\mathbf{A}\mathbf{e}_k\mathbf{e}_k^T\mathbf{A}^T + \mathbf{w}_k\mathbf{w}_k^T\right] = \mathbf{A}\mathbf{P}_k\mathbf{A}^T + \mathbf{Q} \tag{25}$$

This completes the description of the filter.

#### **2.4 Algorithm loop**

An algorithm loop is required to implement the program in MATLAB and in C code for the microcontroller. The loop is summarized in **Figure 1**.

The KF assumes that the system model is linear and known, the system and measurement noises are white, and the states have initial conditions with known means and variances. The power spectral densities used can be treated as tuning parameters to design an observer with excellent performance and robustness. The linear Kalman filter can also be used to design observers for nonlinear plants, by treating nonlinearities as process noise with appropriate power spectral density matrix.

**Figure 1.** *Recursive algorithm for the Kalman filter.*
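The recursive loop of Figure 1 can be sketched in C for the scalar (first-order) case used later in this chapter. This is an illustrative sketch, not the chapter's ATmega328 code; the model and noise values in the usage note are assumptions:

```c
#include <assert.h>
#include <math.h>

/* One pass of the recursive Kalman loop (Figure 1) for a scalar plant
   x_{k+1} = a x_k + b u_k + w_k,  y_k = c x_k + v_k.
   On entry, *xhat and *p hold the prior estimate and prior error
   variance; on exit they hold the projections for step k+1.
   Returns the filtered (posterior) estimate at step k.                */
double kalman_step(double *xhat, double *p, double y, double u,
                   double a, double b, double c, double q, double r) {
    double K     = (*p) * c / (c * c * (*p) + r);  /* gain, Eq. (20)       */
    double xpost = *xhat + K * (y - c * (*xhat));  /* update, Eq. (12)     */
    double ppost = (1.0 - K * c) * (*p);           /* covariance, Eq. (21) */
    *xhat = a * xpost + b * u;                     /* project state        */
    *p    = a * a * ppost + q;                     /* project covariance,
                                                      Eq. (25)            */
    return xpost;
}
```

For example, with a = 1, b = 0, c = 1, q = 0, r = 1, initial estimate 0 and initial variance 1, the first measurement y = 1 gives gain K = 0.5 and filtered estimate 0.5.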

#### **2.5 Derivation of the Riccati equation**

Since the Kalman filter is an optimal observer, the appearance of a matrix Riccati equation is not surprising. We are interested in the steady-state Kalman filter, i.e. the Kalman filter for which the covariance matrix converges to a constant in the limit $t \to \infty$. This happens when the plant is time-invariant. The derivation goes as follows [39, 40]:

In the limit $k \to \infty$, the projection and update equations become:

$$\hat{\mathbf{x}}'_{\infty+1} = \mathbf{A}\hat{\mathbf{x}}_{\infty} \tag{26}$$

$$\mathbf{P}'_{\infty+1} = \mathbf{A}\mathbf{P}_{\infty}\mathbf{A}^T + \mathbf{Q} \tag{27}$$

$$\mathbf{P}_{\infty} = \left(\mathbf{I} - \mathbf{K}_{\infty}\mathbf{C}\right)\mathbf{P}'_{\infty} \tag{28}$$

$$\hat{\mathbf{x}}_{\infty} = \hat{\mathbf{x}}'_{\infty} + \mathbf{K}_{\infty}\left[\mathbf{y}_{\infty} - \mathbf{C}\hat{\mathbf{x}}'_{\infty}\right] \tag{29}$$

$$\mathbf{K}_{\infty} = \mathbf{P}'_{\infty}\mathbf{C}^T\left(\mathbf{C}\mathbf{P}'_{\infty}\mathbf{C}^T + \mathbf{R}\right)^{-1} \tag{30}$$

Combining Eqs. (26)–(29) we get:

$$\hat{\mathbf{x}}'_{\infty+1} = \mathbf{A}\hat{\mathbf{x}}'_{\infty} + \mathbf{A}\mathbf{K}_{\infty}\left[\mathbf{y}_{\infty} - \mathbf{C}\hat{\mathbf{x}}'_{\infty}\right] \tag{31}$$

$$\mathbf{P}'_{\infty+1} = \mathbf{A}\left(\mathbf{I} - \mathbf{K}_{\infty}\mathbf{C}\right)\mathbf{P}'_{\infty}\mathbf{A}^T + \mathbf{Q} \tag{32}$$

Using Eq. 30 in Eqs. 31 and 32 we get:

$$\hat{\mathbf{x}}'_{\infty+1} = \mathbf{A}\hat{\mathbf{x}}'_{\infty} + \mathbf{A}\mathbf{P}'_{\infty}\mathbf{C}^T\left(\mathbf{C}\mathbf{P}'_{\infty}\mathbf{C}^T + \mathbf{R}\right)^{-1}\left[\mathbf{y}_{\infty} - \mathbf{C}\hat{\mathbf{x}}'_{\infty}\right] \tag{33}$$

$$\mathbf{P}'_{\infty+1} = \mathbf{A}\left(\mathbf{I} - \mathbf{P}'_{\infty}\mathbf{C}^T\left(\mathbf{C}\mathbf{P}'_{\infty}\mathbf{C}^T + \mathbf{R}\right)^{-1}\mathbf{C}\right)\mathbf{P}'_{\infty}\mathbf{A}^T + \mathbf{Q} \tag{34}$$

Rewriting Eq. 34 we get:

$$\mathbf{P}'_{\infty+1} = \mathbf{A}\mathbf{P}'_{\infty}\mathbf{A}^T - \mathbf{A}\mathbf{P}'_{\infty}\mathbf{C}^T\left(\mathbf{C}\mathbf{P}'_{\infty}\mathbf{C}^T + \mathbf{R}\right)^{-1}\mathbf{C}\mathbf{P}'_{\infty}\mathbf{A}^T + \mathbf{Q} \tag{35}$$

When in steady state:

$$\mathbf{P}'_{\infty+1} = \mathbf{P}'_{\infty} \equiv \mathbf{P}_{\infty} \tag{36}$$

Then we arrive at the Riccati equation:

$$\mathbf{A}\mathbf{P}_{\infty}\mathbf{A}^T - \mathbf{A}\mathbf{P}_{\infty}\mathbf{C}^T\left(\mathbf{C}\mathbf{P}_{\infty}\mathbf{C}^T + \mathbf{R}\right)^{-1}\mathbf{C}\mathbf{P}_{\infty}\mathbf{A}^T - \mathbf{P}_{\infty} + \mathbf{Q} = \mathbf{0} \tag{37}$$

The iterative solution of the Riccati equation is not required in real time; the observer gain is calculated off-line for predictive control applications [40]. Riccati equations are used mainly in the control of large-scale systems, in estimation, and in detection processes.
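For a scalar plant the off-line computation reduces to iterating the covariance recursion, Eq. (35), until it stops changing; the fixed point then satisfies Eq. (37). A minimal C sketch (illustrative, scalar case only):

```c
#include <assert.h>
#include <math.h>

/* Iterate Eq. (35) for a scalar plant until the prior covariance
   converges; the limit is the steady-state Riccati solution.        */
double riccati_steady_state(double a, double c, double q, double r) {
    double p = q > 0.0 ? q : 1.0;   /* any positive starting value   */
    for (int k = 0; k < 10000; k++) {
        double pn = a * a * p
                  - (a * c * p) * (a * c * p) / (c * c * p + r) + q;
        if (fabs(pn - p) < 1e-13) return pn;
        p = pn;
    }
    return p;
}

/* Residual of the algebraic Riccati equation, Eq. (37);
   it should vanish at the steady-state solution.                    */
double riccati_residual(double p, double a, double c, double q, double r) {
    return a * a * p
         - (a * c * p) * (a * c * p) / (c * c * p + r) - p + q;
}
```

As a check, for a = c = q = r = 1 the fixed point is the positive root of $p^2 - p - 1 = 0$, i.e. $p = (1 + \sqrt{5})/2 \approx 1.618$.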

#### **2.6 Solution to the Riccati equation using MATLAB**

In this work, the discrete-time algebraic Riccati equation (DARE) was solved to obtain the covariance matrix P of the Kalman gain. The discrete-time algebraic Riccati equation takes the following form [41]:


$$\mathbf{X} = \mathbf{A}^T \mathbf{X} \mathbf{A} + \mathbf{Q} - \left(\mathbf{A}^T \mathbf{X} \mathbf{B}\right) \left(\mathbf{R} + \mathbf{B}^T \mathbf{X} \mathbf{B}\right)^{-1} \left(\mathbf{B}^T \mathbf{X} \mathbf{A}\right) \tag{38}$$

where $\mathbf{A}, \mathbf{X}, \mathbf{Q} \in \mathbb{R}^{n \times n}$ with $\mathbf{Q} = \mathbf{Q}^T$, $\mathbf{B} \in \mathbb{R}^{n \times m}$, $\mathbf{R} \in \mathbb{R}^{m \times m}$ ($m \le n$), and $\mathbf{R} = \mathbf{R}^T > 0$. Eq. (38) can be written in the short form:

$$\mathbf{A}^{\mathrm{T}}\mathbf{X}(\mathrm{I}+\mathbf{S}\mathbf{X})^{-1}\mathbf{A}-\mathbf{X}+\mathbf{Q}=\mathbf{0}\tag{39}$$

Where:

$$\mathbf{S} = \mathbf{B} \mathbf{R}^{-1} \mathbf{B}^{T} \tag{40}$$

The application of the Kalman filter implies solving the DARE, which admits several solution methods. Computational methods for solving Riccati equations can be categorized into three classes: invariant subspace methods, deflating subspace methods, and Newton's methods. Here, the generalized Schur method, classified as a deflating subspace method, is used to solve the DARE. The generalized Schur algorithm is a powerful algebraic tool for computing classical matrix decompositions, such as the QR and LU factorizations [42]. The following algorithm was used to solve the DARE [43]:

Input arguments:

A: an n × n matrix
B: an n × m matrix
Q: an n × n symmetric matrix
R: an m × m symmetric matrix

Output arguments: X: the DARE solution

1. Form the pencil $P_{DARE} - \lambda N_{DARE}$, where

$$\mathbf{P}\_{\text{DARE}} = \begin{pmatrix} \mathbf{A} & \mathbf{0} \\ -\mathbf{Q} & \mathbf{I} \end{pmatrix},\tag{41}$$

$$\mathbf{N}\_{\text{DARE}} = \begin{pmatrix} \mathbf{I} & \mathbf{S} \\ \mathbf{0} & \mathbf{A}^{\text{T}} \end{pmatrix} \tag{42}$$

2. Transform the pencil $P_{DARE} - \lambda N_{DARE}$ to generalized real Schur form by applying the QZ algorithm; that is, find orthogonal matrices $Q_1$ and $Z_1$ such that:

$$Q\_1 P\_{DARE} Z\_1 = P\_1 = \begin{pmatrix} P\_{11} & P\_{12} \\ 0 & P\_{22} \end{pmatrix},\tag{43}$$

$$\mathbf{Q\_1N\_{\rm DARE}Z\_1} = \mathbf{N\_1} = \begin{pmatrix} \mathbf{N\_{11}} & \mathbf{N\_{12}} \\ \mathbf{0} & \mathbf{N\_{22}} \end{pmatrix} \tag{44}$$

3. Using an orthogonal transformation, reorder the generalized real Schur form so that the pencil $P_{11} - \lambda N_{11}$ contains all the eigenvalues with moduli less than 1; that is, find orthogonal matrices $Q_2$ and $Z_2$ such that:

$$Q_2 Q_1 P_{DARE} Z_1 Z_2 = \text{quasi-upper triangular} \tag{45}$$

$$Q_2 Q_1 N_{DARE} Z_1 Z_2 = \text{upper triangular} \tag{46}$$

4. Form the matrix:

$$\mathbf{Z} = \mathbf{Z}\_1 \mathbf{Z}\_2 = \begin{pmatrix} \mathbf{Z}\_{11} & \mathbf{Z}\_{12} \\ \mathbf{Z}\_{21} & \mathbf{Z}\_{22} \end{pmatrix} \tag{47}$$

5. Compute $X = Z_{21} Z_{11}^{-1}$.

#### **3. Application example: resistive temperature detectors (RTD)**

Resistance temperature detectors (RTDs) have attracted attention for use as thermal health monitors. As clinical thermometers they are stable and reliable, presenting high accuracy and resolution [44]. One of the most widely used RTDs is the emerging thin-film resistor, which has minimal impact on complex circuits due to its small size and negligible mass.

The basic function of the sensor is a proportional increment of resistance as the temperature rises. RTDs can be fabricated on a rigid or flexible substrate [45–47]; the combination of a metal with a flexible or rigid substrate can cover conformal applications. RTDs can be fabricated from metals such as Pt [48–50], Cu [51], Ag [52], and Ni [53], among other materials. Nickel is a suitable option for RTD fabrication due to its wide linear temperature range of operation and its relatively low price.

Clinical thermometers require high resolution and reliability, because a difference of less than 1°C can indicate a health problem. The thermometer signal can be amplified by electronic means, but it is desirable to filter such readings. This work focuses on the filtering and prediction of a highly sensitive nickel-based thin-film RTD (range, 273–325 K) to be incorporated into complex circuits [54]; we present a theoretical analysis of the sensitivity–resistance relation that matches the experimental results.

#### **3.1 Theoretical analysis of an RTD**

All metals exhibit an increase in resistance with an increase in temperature; over a specified range, the resistance change is approximately proportional to the temperature change. This dependence between electrical resistance and temperature is the principle of operation of a resistance temperature detector (RTD). The relation between temperature and resistance for a Pt-wire RTD is described by the Callendar–Van Dusen equation, Eq. (48) [50]:

$$R(T) = R_{0\,^{\circ}\mathrm{C}}\left(1 + \alpha T + \beta T^2\right) \tag{48}$$

where $R_{0\,^{\circ}\mathrm{C}}$ is the resistance at 0°C, α and β are temperature coefficients, and T is the temperature; the temperature coefficients depend only on material properties. In addition, the RTD resistance depends on its geometrical design, according to Eq. (49):

$$R = \frac{\sigma L}{A} = \frac{\sigma L}{w\,t} \tag{49}$$


where σ is the resistivity, L the length, A the cross-sectional area, w the channel width, and t the film thickness. The resistance can be increased only by increasing the length L or by decreasing the area A, that is, by reducing the film thickness t or the channel width w.
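Eqs. (48) and (49) translate directly into C. The sketch below is illustrative; the Pt coefficient values mentioned in the usage note are typical handbook numbers assumed here, not values measured in this work:

```c
#include <assert.h>
#include <math.h>

/* Callendar-Van Dusen relation, Eq. (48):
   R(T) = R0 (1 + alpha*T + beta*T^2), with T in degrees Celsius.   */
double rtd_resistance(double R0, double alpha, double beta, double T) {
    return R0 * (1.0 + alpha * T + beta * T * T);
}

/* Geometric film resistance, Eq. (49): R = sigma * L / (w * t),
   with sigma the resistivity, L the length, w the channel width
   and t the film thickness.                                        */
double film_resistance(double sigma, double L, double w, double t) {
    return sigma * L / (w * t);
}
```

With typical Pt100 coefficients (R0 = 100 Ω, α ≈ 3.9083e−3 °C⁻¹, β ≈ −5.775e−7 °C⁻²), this gives R(0 °C) = 100 Ω and R(100 °C) ≈ 138.51 Ω; halving the film thickness t doubles the geometric resistance, as stated above.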

#### *3.1.1 State-space description of an RTD*

The estimation of the thermal system is represented by the linear stochastic state-space description $x_k = Ax_{k-1} + Bu_{k-1} + w_{k-1}$ and $y_k = Cx_k + v_k$, where A is an $n \times n$ state-transition matrix applied to the previous state vector $x_{k-1}$, B is the control-input matrix applied to the control vector $u_{k-1}$, and $w_{k-1}$ is the process-noise vector. The linear combination of the measurement noise and the signal value is represented by $y_k$, where C is the measurement matrix and $v_k$ is the measurement-noise vector. The noises are assumed to be independent, with covariance matrices $Q = E\left[w_k w_k^T\right]$ and $R = E\left[v_k v_k^T\right]$.

Generally, the RTD system is modeled as an RLC circuit, which consists of a resistor, a capacitor, and an inductor in series with an input voltage. The output we analyze is the voltage across the resistor, which is related to the temperature change. The RLC circuit is represented by the second-order differential equation

$$L\frac{d^2 i(t)}{dt^2} + R\frac{di(t)}{dt} + \frac{1}{C}\, i(t) = 0$$

To solve this equation we implement the following matrix system:

$$\mathbf{A} = \begin{pmatrix} \mathbf{0} & \mathbf{1} \\ -\mathbf{1}/\mathbf{L}\cdot\mathbf{C} & -\mathbf{R}/\mathbf{L} \end{pmatrix} \qquad \mathbf{B} = \begin{pmatrix} \mathbf{0} \\ \mathbf{1}/\mathbf{L}\cdot\mathbf{C} \end{pmatrix} \tag{50}$$

$$\mathbf{C} = \begin{pmatrix} \mathbf{1} & \mathbf{0} \end{pmatrix} \qquad \qquad \mathbf{D} = \mathbf{0}$$

Also, we may simplify the response of the system to that of a first-order RC circuit. This amounts to solving the first-order ordinary differential equation $RC\frac{dq}{dt} + q = V_C$. The dynamic model is defined by the following system:

$$\mathbf{A} = \left(-\frac{\mathbf{1}}{\mathbf{R} \cdot \mathbf{C}}\right) \qquad \qquad \mathbf{B} = \left(\frac{\mathbf{1}}{\mathbf{R} \cdot \mathbf{C}}\right) \tag{51}$$

$$\mathbf{C} = \mathbf{1} \qquad \qquad \mathbf{D} = \mathbf{0}$$

#### **4. Design and simulations**

#### **4.1 Kalman filter in resistance thermal detectors (RTD)**

In this work, a Kalman filter is proposed to decrease the response time, improving the speed of the feedback, and to filter perturbations caused by signal noise in physical signals such as those of thermal detectors. In some instances, a reduced model is advisable in an embedded system because of its easy implementation and low computational complexity [2].

A Kalman filter can be embedded in a temperature system based on resistance temperature detectors (RTDs). RTDs are robust elements that allow relatively easy measurement and, as a consequence, are useful thermal sensors for industrial and medical applications. Nevertheless, these devices are exposed to vibration, electrical noise, and measurement errors generated by the thermoelectric effect caused by the temperature difference between electrical contacts, all of which affect the response time of the sensor. The implementation of the Kalman filter in a temperature system produces an optimal estimate of the thermal behavior and decreases the uncertainty in the prediction of the temperature.

In order to describe the system in the state space, it is necessary to apply system identification methods using MATLAB. Then, after obtaining the system's state space model we are able to use the Kalman filter algorithm to estimate the future output of the system.

To study the dynamics of our system, we used the MATLAB functions **etfe** and **spa**, first to estimate the empirical transfer function and then to estimate the frequency response with fixed frequency resolution using spectral analysis. The continuous-time identified transfer function is:

$$\frac{2.278\,\mathrm{s} + 0.1711}{\mathrm{s}^2 + 2.488\,\mathrm{s} + 0.1695} \tag{52}$$

Using MATLAB we are able to acquire the Discrete-time identified state-space model:

$$x(t + T_s) = A\,x(t) + B\,u(t) + K\,e(t) \tag{53}$$

$$y(t) = C\,x(t) + D\,u(t) + e(t) \tag{54}$$

with:

$$A = \begin{bmatrix} 0.8342 & -0.08908 \\ 0.0942 & -0.9716 \end{bmatrix}, \quad B = \begin{bmatrix} 0.01966 \\ -0.03341 \end{bmatrix},$$

$$C = \begin{bmatrix} 7.966 & 0.3005 \end{bmatrix}, \quad D = 0, \quad K = \begin{bmatrix} 0.006289 \\ -0.2834 \end{bmatrix}$$

Estimated using **N4SID** on time-domain data. Fit to estimation data: 90.27% (prediction focus), with FPE: 0.4532 and MSE: 0.2292. **Figure 2** shows the input-output model, for which the input was set to a constant value of 38°C.

**Figure 2.** *System ID using MATLAB. Input output model for a step response defined problem.*

The output, the step response, is that of a second-order system, as can be seen in the Bode plot shown in **Figure 3**. **Figure 4** shows the evolution of the measured versus the modeled step responses.

**Figure 3.** *Bode diagram indicating the system is a second order system as described by the system transfer function.*

**Figure 4.** *Systems model and measured evolutions in time. Fit to estimation data: 90.27%.*

#### **4.2 Kalman filter**

We modify the MATLAB example for the time-varying case found in [55], and we code our own function to solve the discrete algebraic Riccati equation (DARE). MATLAB functions such as predict or forecast were useful to understand the problem at hand; however, they were not used in the code we present here.

#### *4.2.1 Time-varying Kalman filter using MATLAB*

```
w(1:n) = sqrt(Q)*randn(n,1);            % process noise
v(1:n) = sqrt(R)*randn(n,1);            % measurement noise
systv = ss(A,B,C,0,Ts);
ytv(1:n) = lsim(systv,U(1:n) + w(1:n)); % true response
yvtv(1:n) = ytv(1:n) + v(1:n);          % noisy measurements
order = 2;
Ptv = B*Q*B';                           % initial error covariance
x = zeros(order,1);                     % initial condition on the state
yetv(1:n) = zeros(n,1);
errcov(1:n) = zeros(n,1);
for i = 1:n
% Measurement update.
    Mn = Ptv*C'/(C*Ptv*C' + R);
    x = x + Mn*(yvtv(i) - C*x);         % x[n|n]
    Ptv = (eye(order) - Mn*C)*Ptv;      % P[n|n]
    yetv(i) = C*x;
    errcov(i) = C*Ptv*C';
% Time update.
    x = A*x + B*U(i);                   % x[n+1|n]
    Ptv = A*Ptv*A' + B*Q*B';            % P[n+1|n]
end
%% DARE. We coded our own function SD = sdare(A,B,Q,R).
P_inf = sdare(A,C',Q,R);                % steady-state prediction covariance
M_inf = P_inf*C'/(C*P_inf*C' + R);      % steady-state Kalman gain
for i = 1:n
% Measurement update.
    x = x + M_inf*(yvtv(i) - C*x);      % x[n|n]
    yetv_inf(i) = C*x;
    errcov_inf(i) = C*P_inf*C';
% Time update.
    x = A*x + B*U(i);                   % x[n+1|n]
    % P_inf solves the DARE, so the covariance stays constant.
end
function SD = sdare(A,B,Q,R)
At = transpose(A);
Bt = transpose(B);
S1 = size(A);
E = eye(S1);
Z = zeros(S1);
S = B*inv(R)*Bt;
Pdare = [A Z; Q E];
Ndare = [E S; Z At];
[AA,BB,L,W] = qz(Pdare,Ndare);                % generalized Schur (QZ) decomposition
[AAS1,BBS1,QS1,ZS1] = ordqz(AA,BB,L,W,'udi'); % reorder eigenvalues inside the unit disk
O = ZS1(1:2,1:2);
P = ZS1(3:4,1:2);
SD = P/O;                                     % DARE solution: P*inv(O)
end
```

#### **5. Simulations**

MATLAB was used to simulate the response of an RTD modeled as a second-order system. In **Figure 5(A)** we show the plot of the true response y (cyan line) and the filtered response (red line). In **Figure 5(B)** the plot compares the measurement error with the estimation error. As can be seen in **Figure 5(C)**, the time-varying filter also estimates the covariance errcov of the estimation error at each sample, which shows when the filter reaches steady state. As can be seen, we have the possibility to predict the state after approximately 8 seconds. We also show the evolution of the estimated temperature response, with an error of ±0.0948°C in the best case and less than 1°C in the worst case after 45 seconds.

#### **6. Implementation**

The unit step response depends on the roots of the characteristic equation. If both roots are real-valued, the second-order system behaves like a chain of two first-order systems, and the step response has two exponential components. If the roots are complex, the step response is a harmonic oscillation with an exponentially decaying amplitude [56]. In our case, the roots of the characteristic polynomial $s^2 + 2.488\,s + 0.1695$ are −2.4179 and −0.0701. Thus our system behaves like two first-order systems in series.

The state description for an RC system is given above. From there, we know that the dynamics depend only on the RC constant. In addition, there is an amplifier in the system electronics with a gain of 260. To solve for the RC constant of the system we use the least-squares method (chi-square minimization). The system has a solution of the form $y = Ae^{Bx}$, and we take n data points to form the vectors $x_i$ and $Y_i$. The problem is to minimize the error function $err = \sum_{i=1}^{n}\left(Y_i - Ae^{Bx_i}\right)^2$. The trick in the algorithm goes as follows:

$$\ln Y_i = \ln\left(Ae^{Bx_i}\right) = \ln A + \ln\left(e^{Bx_i}\right) = c + Bx_i \tag{55}$$

This is a linear equation. Using a linear fitting program:

$$B = \frac{n\sum x_i y_i - \sum x_i \sum y_i}{n\sum x_i^2 - \left(\sum x_i\right)^2} \quad \text{and} \quad c = \frac{\sum x_i^2 \sum y_i - \sum x_i \sum x_i y_i}{n\sum x_i^2 - \left(\sum x_i\right)^2} \tag{56}$$

where $y_i = \ln Y_i$.

We obtain *B* and $A = \exp(c)$. We coded two Kalman filters with different calibration parameters (Q and R), as written below. The KF was implemented in an ATmega328 microprocessor. Code for an Arduino was generated in C, which is available in the following section. Only the relevant portion is shown.

#### **Figure 5.**

*(A) Evolution of the estimated temperature response, showing an error of ±0.0948°C in the best case and less than 1°C in the worst case. (B) Evolution of the measurement and estimation errors. (C) Evolution of the covariance of the error, showing the possibility to predict the state after approximately 8 seconds.*

#### **6.1 Arduino code**

```
readings[readIndex] = analogRead(inputPin); // read from the sensor.
total = total + readings[readIndex];        // add the reading to the total.
readIndex = readIndex + 1;                  // advance to the next position in the array.
time_equis_readings[time_equis_readIndex] = time_equis_readIndex;
time_equis_readIndex = time_equis_readIndex + 1;
if (readIndex >= numReadings) // if we're at the end of the array.
{
    for (i = 0; i <= numReadings - 1; i++)
    {
        Y[i] = log(readings[i]);
        time1[i] = time_equis_readings[i];
        sumx = sumx + time_equis_readings[i];
        sumx2 = sumx2 + time_equis_readings[i] * time_equis_readings[i];
        sumy = sumy + Y[i];
        sumxy = sumxy + time_equis_readings[i] * Y[i];
    }
    den = (numReadings * sumx2 - sumx * sumx);
    a = (sumx2 * sumy - sumx * sumxy) / den;
    Bc = (numReadings * sumxy - sumx * sumy) / den;
    // State description.
    A = -Bc; B = Bc; C = 260; D = 0;
    // Wrap around to the beginning:
    readIndex = 0; time_equis_readIndex = 0;
}
// KALMAN.
errcov = C * P * C;
for (i = 0; i <= numReadings - 1; i++)
{
    Mn = P * C / (C * P * C + R);       // Kalman gain.
    X = X + Mn * (readings[i] - C * X); // update estimate.
    P = (1 - Mn * C) * P;               // update covariance.
    y_e[i] = C * X;
    errcov = C * P * C;
    X = A * X + B * U;                  // project into k + 1.
    P = A * P * A + B * Q * B;          // project into k + 1.
}
timer0_millis = millis();
// Solution to the Riccati equation (scalar closed form).
F = -Bc; H = 260;
SQ = sqrt((H * H * Q * R) + (F * F * R * R));
SR = F * R;
P_inf = (SQ + SR) / (H * H);
M_inf = P_inf * C / (C * P_inf * C + R);
for (i = 0; i <= numReadings - 1; i++)
{
    // Measurement update.
    x_inf = x_inf + M_inf * (readings[i] - C * x_inf); // x[n|n].
    y_e_inf = C * x_inf;
    errcov_inf = C * P_inf * C;
    // Time update.
    x_inf = A * x_inf + B * U;          // x[n + 1|n].
    P_inf = A * P_inf * A + B * Q * B;  // P[n + 1|n].
}
}
```
#### **7. Results**

The experiments were performed in a thermal bath, applying step responses to the desired setpoint temperature. **Figure 6** depicts the upward and downward evolution of the temperature, the Kalman filter, and the two predictors (using two different Q and R settings). As can be seen, the predictors follow the temperature of the sensor closely, especially on the way up, while the Kalman filter lags behind.

#### **Figure 6.**

*Implemented system. Step response for the upward and downward evolution. Two different Kalman filters were used to predict (by solving the DARE) the evolution of the future state, with different Q and R to calibrate the desired response. In blue, the evolution of the RTD sensor analog input; in yellow and red, the two Kalman predictors; and in cyan, the Kalman estimate.*

#### **8. Boundary layer**

Sliding control [57] offers an additional tool for predicting the behavior of a second-order system by smoothing the dynamics with boundary layers. The state trajectory is predicted using an uncertain model of the system. The subspace representing the amount of uncertainty in the prediction drives a switching gain that forces the estimated state trajectory to converge to within a boundary of the true state values. To predict the state trajectory of our RLC system, its gain can be switched within the subspace represented by a first-order RC model. The estimated state trajectory is forced to switch back and forth

#### **Figure 7.**

*Ascending and descending step responses of the Kalman filter and two predictors operating in real time. In blue, the RTD sensor response; in cyan, the estimator response; in yellow and red, the two differently calibrated Kalman predictors.*


within the boundary layer, represented in our case by an RC model. By creating a boundary layer, the system is further constrained to a solution lying between two RC-model solutions.

In **Figure 7** it can clearly be seen that the use of two estimators may help predict the behavior of the RTD much better. The system first needs to be calibrated so that the two Kalman filters envelope the required solution. As can be seen, in the upward direction both predictors (yellow and red) envelope the desired response (blue), that of the RTD sensor, improving on the response of the Kalman filter without boundaries (cyan). Unfortunately, this is not the case in the downward evolution. From the nonlinear control systems point of view, these two evolutions demarcate a region within which the RTD response lies, thus making it possible to program a better estimator. It is left as an outlook to program a third estimator using this boundary layer in order to obtain a better predictor, especially for the downward evolution.

#### **9. Conclusions**

As has been shown, the implementation of the Kalman filter brings the opportunity to estimate, in real time, the forecast of a second-order system, first using MATLAB and then that of two first-order systems using a simple RC model coded in C for a microprocessor. It has been shown that the program is able to predict the evolution of temperature for an RTD system. Even when the system is implemented as a first-order system, the evolving solutions for our estimation and prediction are good enough. We predict the state after approximately 8 seconds, with an error of ±0.0948°C in the best case. In addition, a boundary layer may be programmed using two first-order Kalman predictors, which may be tuned by setting Q and R properly. We believe this is the first report on the use of a Kalman filter to predict the evolution of temperature from an RTD.

#### **Author details**

Erick Ulin-Avila\* and Juan Ponce-Hernandez Center for Engineering and Industrial Development, Querétaro, México

\*Address all correspondence to: ulinavilaerick@gmail.com

© 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **References**

[1] Xincun, Y.; Yongzhong, O.; Fuping, S.; Hui, F. Kalman Filter Applied in Underwater Integrated Navigation System. Geod. Geodyn. **2013**, *4* (1), 46–50. https://doi.org/10.3724/SP.J.1246.2013.01046.

[2] Popov, I.; Koschorrek, P.; Haghani, A.; Jeinsch, T. Adaptive Kalman Filtering for Dynamic Positioning of Marine Vessels. IFAC-PapersOnLine **2017**, *50* (1), 1121–1126. https://doi.org/ 10.1016/j.ifacol.2017.08.394.

[3] Zhao, Y. Performance Evaluation of Cubature Kalman Filter in a GPS/IMU Tightly-Coupled Navigation System. Signal Processing **2016**, *119*, 67–79. https://doi.org/10.1016/j. sigpro.2015.07.014.

[4] Allotta, B.; Caiti, A.; Costanzi, R.; Fanelli, F.; Fenucci, D.; Meli, E.; Ridolfi, A. A New AUV Navigation System Exploiting Unscented Kalman Filter. Ocean Eng. **2016**, *113*, 121–132. https:// doi.org/10.1016/j.oceaneng.2015.12.058.

[5] Khan, N.; Bacha, S. A.; Khan, S. A. Improvement of Compensated Closed-Loop Kalman Filtering Using Autoregressive Moving Average Model. Measurement **2019**, *134*, 266–279. https://doi.org/10.1016/j. measurement.2018.10.063.

[6] Yuan, J.; Wang, Y.; Ji, Z. A Differentially Private Square Root Unscented Kalman Filter for Protecting Process Parameters in ICPSs. ISA Trans. **2020**, *104*, 44–52. https://doi.org/ 10.1016/j.isatra.2019.12.010.

[7] Hamiche, K.; Abouaïssa, H.; Goncalves, G.; Hsu, T. A Robust and Easy Approach for Demand Forecasting in Supply Chains. IFAC-PapersOnLine

**2018**, *51* (11), 1732–1737. https://doi.org/ 10.1016/j.ifacol.2018.08.206.

[8] Baradaran Khalkhali, M.; Vahedian, A.; Sadoghi Yazdi, H. Vehicle Tracking with Kalman Filter Using Online Situation Assessment. Rob. Auton. Syst. **2020**, *131*, 103596. https://doi.org/ 10.1016/j.robot.2020.103596.

[9] Farahi, F.; Yazdi, H. S. Probabilistic Kalman Filter for Moving Object Tracking. Signal Process. Image Commun. **2020**, *82* (December 2019), 115751. https://doi.org/10.1016/j. image.2019.115751.

[10] Piovoso, M.; Laplante, P. A. Kalman Filter Recipes for Real-Time Image Processing. Real-Time Imaging **2003**, *9* (6), 433–439. https://doi.org/10.1016/j. rti.2003.09.005.

[11] Hamuda, E.; Mc Ginley, B.; Glavin, M.; Jones, E. Improved Image Processing-Based Crop Detection Using Kalman Filtering and the Hungarian Algorithm. Comput. Electron. Agric. **2018**, *148* (February), 37–44. https:// doi.org/10.1016/j.compag.2018.02.027.

[12] Wang, L.; Loffeld, O.; Ma, K.; Qian, Y. Sparse ISAR Imaging Using a Greedy Kalman Filtering Approach. Signal Processing **2017**, *138*, 1–10. https://doi. org/10.1016/j.sigpro.2017.03.002.

[13] On-Line Temperature Estimation for Noisy Thermal Sensors Using a Smoothing Filter-Based Kalman Predictor. Sensors **2018**, *18* (2), 433. https://doi.org/10.3390/s18020433.

[14] Shrivastava, P.; Soon, T. K.; Idris, M. Y. I. Bin; Mekhilef, S. Overview of Model-Based Online State-of-Charge Estimation Using Kalman Filter Family for Lithium-Ion Batteries. *Renew. Sustain. Energy Rev.* **2019**, *113* (December 2018), 109233. https://doi. org/10.1016/j.rser.2019.06.040.


[15] Linghu, J.; Kang, L.; Liu, M.; Luo, X.; Feng, Y.; Lu, C. Estimation for Stateof-Charge of Lithium-Ion Battery Based on an Adaptive High-Degree Cubature Kalman Filter. *Energy* **2019**, *189* (xxxx), 116204. https://doi.org/10.1016/j. energy.2019.116204.

[16] Zhang, S.; Guo, X.; Zhang, X. An Improved Adaptive Unscented Kalman Filtering for State of Charge Online Estimation of Lithium-Ion Battery. J. Energy Storage **2020**, *32* (September), 101980. https://doi.org/10.1016/j. est.2020.101980.

[17] Sassi, H. Ben; Errahimi, F.; Es-sbai, N. State of Charge Estimation by Multi-Innovation Unscented Kalman Filter for Vehicular Applications. *J. Energy Storage* **2020**, *32* (October), 101978. https://doi. org/10.1016/j.est.2020.101978.

[18] Wang, H.; Lei, T.; Rong, Y.; Shao, W.; Huang, Y. Arc Length Stable Method of GTAW Based on Adaptive Kalman Filter. *J. Manuf. Process.* **2020**, No. December 2019, 0–1. https://doi. org/10.1016/j.jmapro.2020.01.029.

[19] Holtz, J. Sensorless Control of Induction Machines — With or without Signal Injection? IEEE Trans. Ind. Electron. **2006**, *53* (1), 7–30. https://doi.org/10.1109/TIE.2005.862324.

[20] Ameid, T.; Menacer, A.; Talhaoui, H.; Harzelli, I. Rotor Resistance Estimation Using Extended Kalman Filter and Spectral Analysis for Rotor Bar Fault Diagnosis of Sensorless Vector Control Induction Motor. Meas. J. Int. Meas. Confed. **2017**, *111*, 243–259. https://doi.org/10.1016/j. measurement.2017.07.039.

[21] Chen, Z.; Wang, L.; Liu, X. *Sensorless Direct Torque Control of PMSM Using Unsected Kalman Filter*; IFAC, 2011; Vol. 44. https://doi.org/10.3182/20110828-6- IT-1002.02515.

[22] Nilsson, T.; Soja, B.; Karbon, M.; Heinkelmann, R.; Schuh, H. Application of Kalman Filtering in VLBI Data Analysis. Earth, Planets Sp. **2015**. https://doi.org/10.1186/s40623-015- 0307-y.

[23] Karbon, M.; Soja, B.; Nilsson, T.; Deng, Z.; Heinkelmann, R.; Schuh, H. Earth Orientation Parameters from VLBI Determined with a Kalman Filter. Geod. Geodyn. **2017**, *8* (6), 396–407. https://doi.org/10.1016/j. geog.2017.05.006.

[24] Teh, L. A. and J. Kalman Filter for Reducing Total Harmonics Distortion in Stand-Alone PV System. *2020 Glob. Congr. Electr. Eng. (GC-ElecEng)* **2020**, 81–87. https://doi.org/10.23919/GC-ElecEng48342.2020.9286275.

[25] Docimo, D. J.; Ghanaatpishe, M.; Mamun, A. Extended Kalman Filtering to Estimate Temperature and Irradiation for Maximum Power Point Tracking of a Photovoltaic Module. Energy **2017**, *120*, 47–57. https://doi.org/10.1016/j. energy.2016.12.089.

[26] Monteiro, R. V. A.; Guimarães, G. C.; Moura, F. A. M.; Albertini, M. R. M. C.; Albertini, M. K. Estimating Photovoltaic Power Generation: Performance Analysis of Artificial Neural Networks, Support Vector Machine and Kalman Filter. Electr. Power Syst. Res. **2017**, *143*, 643–656. https://doi.org/10.1016/j. epsr.2016.10.050.

[27] Madhukar, P. S. S. M. Overview. **2020**, No. Icosec, 1268–1272.

[28] Belkhatir, Z.; Mechhoud, S.; Laleg-Kirati, T. M. Kalman Filter Based Estimation Algorithm for the Characterization of the Spatiotemporal Hemodynamic Response in the Brain. Control Eng. Pract. **2019**, *89* (May), 180–189. https://doi.org/10.1016/j. conengprac.2019.05.017.

[29] Alguliyev, R.; Imamverdiyev, Y.; Sukhostat, L. Cyber-Physical Systems and Their Security Issues. Comput. Ind. **2018**, *100* (July 2017), 212–223. https:// doi.org/10.1016/j.compind.2018.04.017.

[30] Wang, J.; Luo, J.; Liu, X.; Li, Y.; Liu, S.; Zhu, R. Improved Kalman Filter Based Differentially Private Streaming Data Release in Cognitive Computing. Futur. Gener. Comput. Syst. **2019**, *98*, 541–549. https://doi.org/10.1016/j. future.2019.03.050.

[31] Zhang, Y.; Wang, R.; Li, S.; Qi, S. Temperature Sensor Denoising Algorithm Based on Curve Fitting and Compound Kalman Filtering. Sensors (Switzerland) **2020**, *20* (7), 1–13. https://doi.org/10.3390/ s20071959.

[32] Mouzinho, L. F.; Fonsecaneto, J. V.; Luciano, B. A.; Freire, R. C. S. Indirect Measurement of the Temperature via Kalman Filter. *18th IMEKO World Congr. 2006 Metrol. a* Sustain. Dev. **2006**, *1*, 818–823.

[33] Ma, Y.; Cui, Y.; Mou, H.; Gao, J.; Chen, H. Core Temperature Estimation of Lithium-Ion Battery for EVs Using Kalman Filter. Appl. Therm. Eng. **2020**, *168* (April 2019), 114816. https://doi. org/10.1016/j.applthermaleng.2019. 114816.

[34] Eleffendi, M. A.; Johnson, C. M. Application of Kalman Filter to Estimate Junction Temperature in IGBT Power Modules. IEEE Trans. Power Electron. **2016**, *31* (2), 1576–1587. https://doi.org/ 10.1109/TPEL.2015.2418711.

[35] Wiener, N. *The Extrapolation, Interpolation and Smoothing of Stationary Time Series with Engineering Applications*; John Wiley & Sons: New York, 1949.

[36] Kalman, R. E. A New Approach to Linear Filtering and Prediction Problems. J. Fluids Eng. Trans. ASME **1960**, *82* (1), 35–45. https://doi.org/ 10.1115/1.3662552.

[37] Tewari, A. Modern Control Design With MATLAB and SIMULINK. **2002**, 518.

[38] Lacey, T. Tutorial : The Kalman Filter. 133–140.

[39] Grewal, M. S.; Andrews, A. P. *Kalman Filtering: Theory and Practice Using MATLAB®: Third Edition*; 2008. https://doi.org/10.1002/9780470377819.

[40] Wang, L. *Model Predictive Control System Design and Implementation Using MATLAB*; 2009; Vol. 53.

[41] Lu, L. Z.; Lin, W. W. An Iterative Algorithm for the Solution of the Discrete-Time Algebraic Riccati Equation. Linear Algebra Appl. **1993**, *188*–*189* (C), 465–488. https://doi.org/ 10.1016/0024-3795(93)90476-5.

[42] Laudadio, T.; Mastronardi, N.; Van Dooren, P. The Generalized Schur Algorithm and Some Applications. Axioms **2018**, *7* (4), 1–18. https://doi. org/10.3390/axioms7040081.

[43] DATTA, B. N. Numerical Solutions and Conditioning of Algebraic Riccati Equations. *Numer. Methods Linear Control Syst.* **2004**, 519–599. https://doi. org/10.1016/b978-012203590-6/ 50017-3.

[44] Chauhan, J.; Neelakantan, U. An Experimental Approach for Precise Temperature Measurement Using Platinum RTD PT1000. Int. Conf. Electr. Electron. Optim. Tech. ICEEOT *2016* **2016**, 3213–3215. https://doi.org/ 10.1109/ICEEOT.2016.7755297.

[45] Trung, T. Q.; Ramasundaram, S.; Hwang, B. U.; Lee, N. E. An All-Elastomeric Transparent and Stretchable Temperature Sensor for Body-Attachable Wearable Electronics. Adv. Mater. **2016**, *28* (3), 502–509. https://doi.org/10.1002/ adma.201504441.


[46] Chen, Y.; Lu, B.; Chen, Y.; Feng, X. Breathable and Stretchable Temperature Sensors Inspired by Skin. Sci. Rep. **2015**, *5*, 1–11. https://doi.org/10.1038/ srep11505.

[47] Wang, Z.; Gao, W.; Zhang, Q.; Zheng, K.; Xu, J.; Xu, W.; Shang, E.; Jiang, J.; Zhang, J.; Liu, Y. 3D-Printed Graphene/Polydimethylsiloxane Composites for Stretchable and Strain-Insensitive Temperature Sensors. ACS Appl. Mater. Interfaces **2019**, *11* (1), 1344–1352. https://doi.org/10.1021/ acsami.8b16139.

[48] Kim, J.; Kim, J.; Shin, Y.; Yoon, Y. A Study on the Fabrication of an RTD (Resistance Temperature Detector) by Using Pt Thin Film. Korean J. Chem. Eng. **2001**, *18* (1), 61–66. https://doi. org/10.1007/BF02707199.

[49] Noh, J.; Park, S.; Boo, H.; Kim, H. C.; Chung, T. D. Nanoporous Platinum Solid-State Reference Electrode with Layer-by-Layer Polyelectrolyte Junction for PH Sensing Chip. Lab Chip **2011**, *11* (4), 664–671. https://doi.org/10.1039/ c0lc00293c.

[50] Hassan, A. S.; Juliet, V.; Joshua Amrith Raj, C. MEMS Based Humidity Sensor with Integration of Temperature Sensor. *Mater.* Today Proc. **2018**, *5* (4), 10728–10737. https://doi.org/10.1016/j. matpr.2017.12.356.

[51] Imran, M.; Bhattacharyya, A. Thermal Response of an On-Chip Assembly of RTD Heaters, Sputtered Sample and Microthermocouples. Sensors Actuators, A Phys. **2005**, *121* (2), 306–320. https://doi.org/10.1016/j. sna.2005.02.019.

[52] Kang, L.; Shi, Y.; Zhang, J.; Huang, C.; Zhang, N.; He, Y.; Li, W.; Wang, C.; Wu, X.; Zhou, X. A Flexible Resistive Temperature Detector (RTD) Based on in-Situ Growth of Patterned Ag Film on Polyimide without Lithography. Microelectron. Eng. **2019**, *216* (July),

111052. https://doi.org/10.1016/j. mee.2019.111052.

[53] Cui, J.; Liu, H.; Li, X.; Jiang, S.; Zhang, B.; Song, Y.; Zhang, W. Fabrication and Characterization of Nickel Thin Film as Resistance Temperature Detector. Vacuum **2020**, *176*, 109288. https://doi.org/10.1016/j. vacuum.2020.109288.

[54] Lee, Y.; Cheng, S.; Fang, W. Monolithic Integrated CMOS-MEMS Fluorescence Quenching Gas Sensor and Resistive Temperature Detector (RTD) for Temperature Compensation. *2019 20th Int. Conf. Solid-State Sensors, Actuators Microsystems Eurosensors XXXIII (TRANSDUCERS EUROSENSORS XXXIII)* **2019**, No. June, 1293–1296.

[55] Mathworks. Kalman filtering.

[56] Haidekker, M. A. Solving Differential Equations in the Laplace Domain. Linear Feed. Control. **2013**, 27– 56. https://doi.org/10.1016/b978-0- 12-405875-0.00003-6.

[57] Gadsden, S. A.; Eng, B. M. Smooth Variable Structure Filtering: Theory and Applications. *Thesis* **2011**.

**Chapter 4**

## A Constant Gain Kalman Filter for Wireless Sensor Network and Maneuvering Target Tracking

*Peeyush Awasthi, Ashwin Yadav, Naren Naik and Mudambi Ramaswamy Ananthasayanam*

#### **Abstract**

One of the well-known approaches to target tracking is the Kalman filter. The problem with applying the Kalman filter in practice is that, in the presence of unknown noise statistics, accurate results cannot be obtained; hence the tuning of the noise covariances is of paramount importance in order to employ the filter. The difficulty involved in tuning motivates the concept of the Constant Gain Kalman Filter (CGKF). It has been generally observed that, after an initial transient, the Kalman filter gain and the state error covariance P settle down to steady-state values. This encourages one to work directly with the steady-state or constant Kalman gain, rather than with error covariances, in order to obtain efficient tracking. Since there are no covariances in the CGKF, only the state equations need to be propagated and updated at a measurement, enormously reducing the computational load. The current work first applies the CGKF concept to the heterogeneous-sensor-based wireless sensor network (WSN) target tracking problem. The paper then considers the standard EKF and the CGKF for tracking various maneuvering targets using nonlinear state and measurement models. Based on the numerical studies, it is clearly seen that the CGKF outperforms the standard EKF. To the best of our knowledge, such a comprehensive study of the CGKF has not been carried out in its application to diverse target tracking scenarios and data fusion aspects.

**Keywords:** Constant Gain Kalman Filter, INS, GPS, Wireless Sensor Network, Tracking

#### **1. Introduction**

The Kalman Filter (KF) is one of the most fundamental and widely used estimation schemes in tracking applications. While the KF formalism is very powerful, we need to keep in mind that the solution scheme can be considered formal, and a fundamental prerequisite for accurate results is a priori knowledge of the initial state (*X*0), initial state noise covariance (*P*0), system noise covariance (*Q*), and measurement noise covariance (*R*). Good values of *X*0, *P*0, *Q* and *R* are imperative for the filter to perform optimally. Tuning of the KF is defined as the process of obtaining precise values of *P*0, *Q*, and *R*. A detailed review of filter tuning has been given by Ananthasayanam et al. in [1]. A main theme of filter covariance tuning schemes is the notion of the innovation sequence being white and Gaussian for filter optimality.

One class of schemes obtains the unknown covariances that maximize the likelihood [2–5]. Another important class of filter covariance tuning schemes is the covariance matching methodology [6–8]. The idea of an innovations based cost function being minimized by the optimal covariances is also used in [7, 8] to tune process system noise (*Q*) and state error covariances (*P*) respectively.

An alternate approach to tuning is via the direct setting of the Kalman gain, as carried out in the work of Ananthasayanam et al. [9] and Ashwin et al. [10]. It is often observed that the Kalman gain converges to a steady-state value, which coincides with the convergence of the state error covariance *P*. The premise for working with the steady-state or constant gain is well explained in the thesis work by Bohn [11]. The work of Anil Kumar et al. [9] optimizes the innovations likelihood cost function [5] for the (constant) Kalman gain in a spacecraft reentry problem. This CGKF approach works directly with the Kalman gain and does not utilize any knowledge of the filter covariances.

Our present work concerns the application and sensitivity study of the CGKF target tracking scheme in sensor network scenarios and maneuvering target tracking. We look at target tracking problems in wireless sensor networks using passive infrared (PIR), acoustic and seismic sensors in stand alone (SA) and data fusion (DF) modes, as given by Raol [12], for the discrete white noise acceleration (DWNA) target motion model. We further demonstrate the capability of the CGKF to track maneuvering targets [13, 14] from acquired range and direction data for a class of coordinated turn (CT) maneuver models. The CGKF with a linear measurement model was validated in [10, 15]. The present study applies the CGKF to nonlinear measurement models and further demonstrates its robustness through sensitivity studies. The results obtained with respect to homogeneous and heterogeneous data fusion further demonstrate the range of applicability of the CGKF. These extensive tracking and sensitivity studies for a wide range of state and measurement models are, to the best of our knowledge, unique to this paper and provide the reader with a comprehensive reference. These results also provide a firm base for application of the CGKF concept to other areas. In the sequel, Section 2 describes the CGKF concept. Section 3 introduces the various tracking scenarios based on PIR, acoustic and seismic measurement models in SA and DF modes; in addition, maneuvering targets based on CT models are discussed, since these have the potential to demonstrate the flexibility of the CGKF. Section 4 details the tracking and sensitivity studies on the above mentioned models, and Section 5 gives the conclusions of the present work.

#### **2. Constant gain Kalman filtering**

The KF algorithm [16] is based on the least squares principle with recursive time updates. It is a fact that optimal filter performance needs a priori knowledge of the filter statistics in terms of the state-error, system and measurement noise covariances (*P*, *Q* and *R* respectively). A central theme in the optimality of the KF is the requirement that the innovations be white at convergence [17, 18]. Mehra [17] shows that the settling of the filter gain to a steady state value coincides with the state error covariance similarly settling (**Figure 1**).

The observation that the gain (reflecting *P*, *Q*, *R*) reaches a steady state prompts us to consider working directly with the steady state gain, rather than with the tuning dependent *P*, *Q*, *R* matrices, to determine the gain. We accomplish this via an innovations cost function minimization approach: we use the whiteness of the innovations at KF convergence to construct the likelihood based function of the innovations sequence [5].

*A Constant Gain Kalman Filter for Wireless Sensor Network and Maneuvering Target Tracking DOI: http://dx.doi.org/10.5772/intechopen.98700*

**Figure 1.** *Gain* K *vs. error covariance matrix* P.

$$J(K, \mathcal{R}) = \frac{1}{N} \sum\_{t=1}^{N} \left( v\_t^T \mathcal{R}^{-1} v\_t + \log \left( |\mathcal{R}| \right) \right) \tag{1}$$

where *vt* represents the innovations, R the innovations covariance, |·| the determinant and *N* the number of measurement time steps. We obtain the steady state gain *K*<sup>∗</sup> and innovation covariance R<sup>∗</sup> by solving the following optimization problem:

$$\left(K^\*, \mathcal{R}^\*\right) = \arg\min\_{K, \mathcal{R}} J(K, \mathcal{R}) \tag{2}$$
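As a concrete illustration, the cost of Eq. (1) can be evaluated for a recorded innovations sequence. The sketch below is ours (the function name is illustrative, not from the chapter), and it assumes the quadratic term uses the inverse innovation covariance, as in the Gaussian log-likelihood:

```python
import numpy as np

def innovations_cost(innovations, R):
    """Innovations likelihood cost of Eq. (1):
    J = (1/N) * sum_t ( v_t' R^{-1} v_t + log|R| )."""
    R = np.atleast_2d(R)
    Rinv = np.linalg.inv(R)
    _, logdet = np.linalg.slogdet(R)  # log|R|, numerically stable
    total = 0.0
    for v in innovations:
        v = np.atleast_1d(v)
        total += float(v @ Rinv @ v) + logdet
    return total / len(innovations)
```

An optimizer (gradient based or a GA, as discussed below) would then search over *K* and R, replaying the filter to regenerate the innovations for each candidate gain.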

The following is the estimation scheme based on a predict and update mode.

#### **2.1 The estimation scheme**

The generic KF updates are

$$\hat{x}\_t = \overline{x}\_t + K\_t v\_t \tag{3}$$

where the innovations sequence is *vt* = *yt* − *C x̄t*. *C* is the measurement matrix, *x̄t* is the predicted state and *x̂t* is the filtered state. The standard KF computes the gain matrix *Kt* using *P*, *Q* and *R*, whereas for the CGKF we estimate the constant gain *K*<sup>∗</sup> by solving the optimization problem described by (2) above. The optimization problem can be solved using local gradient based methods (such as Newton type schemes) [19] or global schemes such as the Genetic Algorithm (GA) [20], applied to problems as in [9]. As the filter tracks the target, the gain *K* is seen to stabilize to a value given by the solution of the above problem. Once the optimal filter gain (denoted henceforth by *K*<sup>∗</sup>, representative of the constant gain) has been computed for the CGKF, the KF recursions become:

*Adaptive Filtering - Recent Advances and Practical Implementation*

**Predict**

$$\overline{x}\_{t+1} = A\hat{x}\_t + u\_t \tag{4}$$

**Update**

$$\hat{x}\_{t+1} = \overline{x}\_{t+1} + K^\* \left( y\_{t+1} - C\overline{x}\_{t+1} \right) \tag{5}$$

Thus it is evident that once the optimal gain *K*<sup>∗</sup> is computed using the GA to solve the optimization problem, the filter algorithm reduces to the simple predict and update model given by (4) and (5) above. This is obviously more compact than the standard KF, which is implemented in five steps involving computation and propagation of the state error covariance *P* using *Q* and *R*. The advantage is speed of operation, because we circumvent the tedious calculations of the costly covariance matrices *P*, *Q*, *R* and instead work directly with the optimal gain for the set of measurements.
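The two-step recursion of Eqs. (4) and (5) can be sketched as follows (an illustrative sketch with our own names, ignoring the control input *u*):

```python
import numpy as np

def cgkf_step(A, C, K, x_hat, y):
    """One CGKF recursion: predict (Eq. 4), then update (Eq. 5)
    with the fixed, precomputed gain K; no covariance propagation."""
    x_bar = A @ x_hat               # predict
    innovation = y - C @ x_bar      # measurement residual
    return x_bar + K @ innovation   # update

# Scalar random-walk example with a constant gain of 0.5.
A = np.eye(1)
C = np.eye(1)
K = 0.5 * np.eye(1)
x = np.zeros(1)
for y in [1.0, 1.0, 1.0]:
    x = cgkf_step(A, C, K, x, np.array([y]))
```

Because the gain is fixed, no covariance propagation or matrix inversion occurs inside the loop, which is the source of the speed advantage discussed above.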

We observe that the typically expensive covariance time update step is not needed in the constant gain approach. The CGKF is found to work quite well even with state models moderately different from those for which the gains are computed [25], suggesting a robustness of the calculated gains (refer **Tables 6** and **7**). It is to be noted that the present problem is nonlinear insofar as the measurement model is concerned, so the filter used is the CGKF. This is one unique advantage of the CGKF over the standard KF/EKF, wherein the EKF requires linearization of the measurement model via the Jacobian *H*. The reconstruction in the CGKF case, employing the GA as the optimization tool, does not rely on the Jacobian in the computation of the optimum constant filter gain *K*<sup>∗</sup>.

#### **3. Sensor models and modes**

The focus of our study is the application of the CGKF to a variety of 2D sensor models such as those in unattended ground sensor (UGS) and Intelligence, Surveillance and Reconnaissance (ISR) systems. Sensors such as passive infrared (PIR) [21], acoustic, seismic [22] and radar have been studied. The sensor system might consist of single or multiple data inputs as required in different scenarios, and of a single type of sensor or multiple types of sensor nodes as the situation requires. Homogeneous and heterogeneous DF aspects of certain combinations of sensors will be analyzed. We outline the regular and CGKF schemes and their application to the above mentioned systems.

#### **3.1 State variable models in stand alone mode**

#### **Non Maneuvering or Discrete White Noise Acceleration (DWNA) Model.**

The state model for the 2D target comprises the *x* and *y* direction displacements and their corresponding velocities, with the state vector *Xt* = (*xt*, *yt*, *ẋt*, *ẏt*)<sup>T</sup>, where *xt*, *yt* represent the X, Y coordinates of the target and *ẋt*, *ẏt* the velocities in the X, Y directions.

#### **State Equation.**

The state equation for the DWNA is

$$X\_{t+1} = AX\_t + Bw\_t \tag{6}$$


where

$$A = \begin{pmatrix} 1 & 0 & T\_s & 0 \\ 0 & 1 & 0 & T\_s \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, B = \begin{pmatrix} 0.5T\_s^2 & 0 \\ 0 & 0.5T\_s^2 \\ T\_s & 0 \\ 0 & T\_s \end{pmatrix} \tag{7}$$

with *A* and *B* being the state-transition and acceleration matrices respectively, and *wt* an uncorrelated Gaussian process.
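The DWNA matrices of Eq. (7) can be built directly from the sample time; the helper below is a sketch with an assumed name:

```python
import numpy as np

def dwna_matrices(Ts):
    """State-transition A and acceleration-gain B of Eq. (7)
    for sample time Ts."""
    A = np.array([[1.0, 0.0, Ts, 0.0],
                  [0.0, 1.0, 0.0, Ts],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    B = np.array([[0.5 * Ts ** 2, 0.0],
                  [0.0, 0.5 * Ts ** 2],
                  [Ts, 0.0],
                  [0.0, Ts]])
    return A, B
```

Applying *A* to a state with unit velocities advances each position by *Ts*, as the constant-velocity structure of Eq. (7) requires.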

**Measurement equation:** The measurement at time *t* of the *n*th sensor, *gt*<sup>(*n*)</sup>, is

$$\mathbf{g}\_t^{(n)} = h^{(n)}(X\_t, t) + \boldsymbol{\upsilon}\_t^{(n)} \tag{8}$$

where *h*<sup>(*n*)</sup>(*Xt*, *t*) is typically a nonlinear function of the states and *vt*<sup>(*n*)</sup> is the corresponding measurement noise (assumed to be white Gaussian) of the *n*th sensor. The measurement equations for the respective sensors are given below.

**Sensor Measurement model for PIR sensor** [21]

$$\mathbf{g}\_t^{(n)} = \log \frac{\dot{\mathbf{x}}\_t^2 + \dot{\mathbf{y}}\_t^2}{\left(\mathbf{x}\_t - r\_\mathbf{x}^{(n)}\right)^2 + \left(\mathbf{y}\_t - r\_\mathbf{y}^{(n)}\right)^2} + \boldsymbol{v}\_t^{(n)} \tag{9}$$

**Sensor Measurement model for Acoustic sensor** [22]

$$\mathbf{g}\_t^{(n)} = \tan^{-1}\left( \frac{y\_t - r\_y^{(n)}}{x\_t - r\_x^{(n)}} \right) + v\_t^{(n)} \tag{10}$$

**Sensor Measurement model for Seismic sensor** [22]

$$\mathbf{g}\_t^{(n)} = \begin{pmatrix} \left( \left( x\_t - r\_x^{(n)} \right)^2 + \left( y\_t - r\_y^{(n)} \right)^2 \right)^{0.5} \\ \tan^{-1}\left( \frac{y\_t - r\_y^{(n)}}{x\_t - r\_x^{(n)}} \right) \end{pmatrix} + v\_t^{(n)} \tag{11}$$

where *r*<sup>(*n*)</sup> = [*rx*<sup>(*n*)</sup>, *ry*<sup>(*n*)</sup>] is the position of the *n*th sensor in the network.
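The three measurement models of Eqs. (9)–(11) translate directly into code. The sketch below omits the noise terms, uses `atan2` for a quadrant-correct bearing, and uses names of our own choosing:

```python
import math

def pir_meas(x, y, xdot, ydot, rx, ry):
    """PIR model, Eq. (9): log of squared speed over squared range."""
    return math.log((xdot ** 2 + ydot ** 2) /
                    ((x - rx) ** 2 + (y - ry) ** 2))

def acoustic_meas(x, y, rx, ry):
    """Acoustic bearing model, Eq. (10)."""
    return math.atan2(y - ry, x - rx)

def seismic_meas(x, y, rx, ry):
    """Seismic model, Eq. (11): range and bearing to sensor n."""
    return math.hypot(x - rx, y - ry), math.atan2(y - ry, x - rx)
```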

#### **Estimation Scheme.**

We now outline the estimation scheme for an EKF as well as a CGKF. The EKF has the following steps, for *t* = 0, 1, 2, …

**Prediction**

$$
\overline{X}\_{t+1} = A\hat{X}\_t \tag{12}
$$

$$
\overline{P}\_{t+1} = A\hat{P}\_t A' + Q \tag{13}
$$

where *X̄t+1* is the predicted estimate based on the filtered estimate *X̂t*, with *P̄t+1* and *P̂t* being the state error covariances corresponding to the predicted and filtered estimates respectively, and *Q* = *BE*(*ww′*)*B′*.

**Update/Correction:** The update of the states and covariances as per the EKF scheme are

$$
\hat{X}\_{t+1} = \overline{X}\_{t+1} + K\_{t+1} \left( \mathbf{g}\_{t+1} - h\left(\overline{X}\_{t+1}, t\right) \right) \tag{14}
$$

where

$$K\_{t+1} = \overline{P}\_{t+1} H\_{t+1}' \left( H\_{t+1} \overline{P}\_{t+1} H\_{t+1}' + R \right)^{-1} \tag{15}$$

where *Ht+1* is the Jacobian of *h*(·) at time *t* + 1 and *R* = *E*(*vv′*).

$$
\hat{P}\_{t+1} = (I - K\_{t+1}H\_{t+1})\overline{P}\_{t+1} \tag{16}
$$

**Table 1** gives the measurement Jacobians for all three sensors. In the table, *dt* = (*xt* − *rx*<sup>(*n*)</sup>)<sup>2</sup> + (*yt* − *ry*<sup>(*n*)</sup>)<sup>2</sup> is used for brevity. The CGKF, on the other hand, has the following two steps.
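Where the analytic Jacobians of Table 1 are not at hand, a finite-difference approximation can stand in for the EKF baseline. This is our own illustrative sketch, not the chapter's implementation:

```python
import numpy as np

def numerical_jacobian(h, X, eps=1e-6):
    """Forward-difference Jacobian H = dh/dX of a measurement
    function h; a stand-in for the analytic entries of Table 1
    (needed only by the EKF baseline, not by the CGKF)."""
    X = np.asarray(X, dtype=float)
    h0 = np.atleast_1d(h(X))
    H = np.zeros((h0.size, X.size))
    for j in range(X.size):
        Xp = X.copy()
        Xp[j] += eps
        H[:, j] = (np.atleast_1d(h(Xp)) - h0) / eps
    return H
```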

**Prediction**

$$\overline{X}\_{t+1} = A\hat{X}\_t \tag{17}$$

#### **Update/Correction.**

Once the optimized Kalman gain *K* has been calculated via Eqs. (1) and (2), the following Eq. (18) updates the state parameters.

$$\hat{X}\_{t+1} = \overline{X}\_{t+1} + K \left( \mathbf{g}\_{t+1} - h\left(\overline{X}\_{t+1}, t\right) \right) \tag{18}$$

In our work the optimized value of K is calculated via the application of the genetic algorithm to the innovation cost function Eqs. (1) and (2).

#### **3.2 Homogeneous data fusion**

In homogeneous fusion, data from multiple sensors of a similar type are fused at every time instant. In this section we have mainly used the centralized approach to DF in respect of the KF: the data obtained from the various nodes (similar types of sensors) are combined together and then applied to the EKF and CGKF for tracking the target. This approach has been used as the measurement fusion approach [12] in WSNs of UGSs.

Measurement fusion techniques combine the raw measurements of the target obtained from the Individual Sensor Node (ISN) at the Cluster Head Node (CHN) level. The ISN is a tier 1 node, while the CHN is a tier 2 node capable of running a complex fusion algorithm based on the KF framework; ISNs are therefore considered to have minimal computation capability compared to the CHNs. The two approaches implemented in our work with respect to the CGKF under homogeneous DF are the Maximal Kalman filter (MKF) [12] and Weighted fusion (WF) [12] approaches. The state model is the DWNA model of the previous subsection.

#### **Table 1.** *Jacobians for sensors measurement models.*

#### *3.2.1 Maximal Kalman filter (MKF) method*

This method is based on fusing all measurements of the ISN by incorporating them in a fused measurement vector and the corresponding measurement noise covariance and measurement matrices as described below

$$\mathbf{g}\_t^f = \begin{bmatrix} \mathbf{g}\_t^1, \mathbf{g}\_t^2, \dots \dots \mathbf{g}\_t^m \end{bmatrix} \tag{19}$$

$$H\_t^f = \begin{bmatrix} H\_t^1, H\_t^2, \dots \dots H\_t^m \end{bmatrix} \tag{20}$$

$$R\_t^f = \text{diag}\left[R\_t^1, R\_t^2, \dots \dots R\_t^m\right] \tag{21}$$

where *gt*<sup>*f*</sup> in (19) is the fused measurement vector, combining the measurements of the *m* sensors (ISNs) at time instant *t*. Similarly, *Ht*<sup>*f*</sup> in (20) is the corresponding stacked Jacobian of the respective ISNs, and *Rt*<sup>*f*</sup> in (21) is the measurement error covariance. Note that no modification of the ISN measurements is carried out here; the raw measurements of the target are fused at the CHN to obtain the final state vector and state error covariance.
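Eqs. (19)–(21) amount to stacking measurements and Jacobians and block-diagonalizing the noise covariances; a sketch (names are ours, not from the chapter):

```python
import numpy as np

def mkf_fuse(g_list, H_list, R_list):
    """MKF fusion, Eqs. (19)-(21): stack the measurements and
    Jacobians of the m ISNs; block-diagonalize their covariances."""
    g = np.concatenate([np.atleast_1d(g) for g in g_list])
    H = np.vstack([np.atleast_2d(Hi) for Hi in H_list])
    blocks = [np.atleast_2d(Ri) for Ri in R_list]
    n = sum(b.shape[0] for b in blocks)
    R = np.zeros((n, n))
    k = 0
    for b in blocks:
        R[k:k + b.shape[0], k:k + b.shape[0]] = b
        k += b.shape[0]
    return g, H, R
```

The fused triple (*g*, *H*, *R*) then feeds the standard EKF or CGKF update at the CHN.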

#### *3.2.2 WF method*

This method combines the *m* measurements in a different manner than the MKF. A weighting factor *ϖ* is allotted to each of the corresponding measurements of the ISNs, representing the degree of correctness or confidence that one has regarding the measurement obtained from a specific ISN. The weighting factor is applied to Eqs. (19)–(21) as follows:

$$\mathbf{g}\_t = \frac{\sum\_{m=1}^{N} \left(\varpi\_t^m \mathbf{g}\_t^m\right)}{\sum\_{m=1}^{N} \varpi\_t^m} \tag{22}$$

$$H\_t = \frac{\sum\_{m=1}^{N} \left(\varpi\_t^m H\_t^m\right)}{\sum\_{m=1}^{N} \varpi\_t^m} \tag{23}$$

$$R\_t = \frac{\sum\_{m=1}^{N} \left(\varpi\_t^m R\_t^m\right)}{\sum\_{m=1}^{N} \varpi\_t^m} \tag{24}$$

where *gt*, *Ht* and *Rt* are the composite measurement vector, measurement matrix/Jacobian and measurement noise covariance matrix respectively, obtained by combining the respective components from the *m* sensors sensing the target at that specific time instant. Here *ϖt*<sup>*m*</sup> is the weight allotted to the *m*th sensor at the *t*th time instant. Possible choices for the weights are

$$\varpi\_t^m = \frac{1}{R\_t^m} \tag{25}$$

$$\varpi\_t^m = \frac{1}{\left(d\_t^m\right)^r} \tag{26}$$

where *Rt*<sup>*m*</sup> represents the measurement noise of the *m*th sensor, *dt*<sup>*m*</sup> the distance of the *m*th sensor from the target and *r* the path loss exponent. In our simulations we have used Eq. (25). We utilize the above weighted-fused quantities in the EKF and the CGKF.
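A sketch of the weighted fusion of Eqs. (22)–(24) with the weight choice of Eq. (25), assuming scalar sensor measurements and using our own illustrative names:

```python
import numpy as np

def weighted_fuse(g_list, H_list, R_list):
    """Weighted fusion, Eqs. (22)-(24), with the weights of
    Eq. (25): w_m = 1 / R_m (scalar measurements assumed)."""
    w = np.array([1.0 / R for R in R_list])
    W = w.sum()
    g = sum(wi * gi for wi, gi in zip(w, g_list)) / W
    H = sum(wi * np.asarray(Hi, dtype=float) for wi, Hi in zip(w, H_list)) / W
    R = sum(wi * Ri for wi, Ri in zip(w, R_list)) / W
    return g, H, R
```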

#### **3.3 Heterogeneous DF**

Heterogeneous DF differs from the homogeneous variety in that we fuse data from different types of sensors in combinations such as PIR and acoustic, PIR and seismic, or PIR, acoustic and seismic together [12]. We have tried both centralized (measurement fusion) and decentralized (state fusion) data fusion architectures. There are several methods in practice for DF, but for nonlinear measurement models it has been found that only a few are able to maintain accuracy against catastrophic fusion [23]. The state model applied is the DWNA model (Eq. (6)) of Section 3.1.

#### *3.3.1 Centralized DF*

This architecture mainly follows measurement fusion. The measurements (data) are obtained from all ISNs and then fused at the cluster head nodes (CHNs). In our case the data obtained from all three sensors are nonlinear, with measurement models of different sizes. The only feasible approach to collate the data effectively is the MKF, since weighted fusion applies only to sensors based on a similar measurement model. The method has been applied to both the EKF and the CGKF.

The MKF is an effective way to combine data from dissimilar sensor measurement models. Despite the computational complexity owing to matrix size, it is overall an effective method, considering that WF can combine data only from similar groups of sensors.

#### *3.3.2 Decentralized data fusion*

This method has been explicitly used to bring out the fact that the CGKF performs better than any of these methods of combining state parameters and covariances. The method has been cited as the state fusion concept [12] or hierarchical data fusion. It is based on a two tier system wherein state estimation of the target is carried out at the ISNs, which form tier 1, and these states are then fused at tier 2 in the CHNs. The global state estimate and global state error covariance are calculated at the CHNs and then fed back to the ISNs. The KF algorithm runs in the ISNs to obtain fresh state and error covariance estimates, which are again fed to the CHN, and the cycle continues. There are mainly two approaches to track-to-track fusion, as given by Raol [12] in Eqs. (27) and (28) and Durrant-Whyte [24] in Eqs. (29) and (30). Most of the methods surveyed in this category feature a scheme wherein we combine error covariances and state vectors to produce new covariance and state vectors. The only difference between the two methods below is the way the state estimates and error covariances are used to compute the fused global state estimate *X*<sup>*f*</sup><sub>*t*+1</sub> and state error covariance *P*<sup>*f*</sup><sub>*t*+1</sub> at time instant *t* + 1. In the equations below, *P*<sup>1</sup> and *P*<sup>2</sup> are the covariances with respect to two different sensors, and *X̂*<sup>1</sup><sub>*t*</sub> and *X̂*<sup>2</sup><sub>*t*</sub> are the target state vectors as computed by the EKF at the ISNs. *X*<sup>*f*</sup><sub>*t*+1</sub> and *P*<sup>*f*</sup><sub>*t*+1</sub> are the finally computed target state vector and state error covariance at the CHNs, fed back to every ISN after every iteration. In the present work we use the global fusion method of Raol [12].


**Global Fusion**

$$\overline{\mathbf{X}}\_{t+1}^{f} = \hat{\mathbf{X}}\_{t}^{1} + \hat{P}\_{t}^{1} \left(\hat{P}\_{t}^{1} + \hat{P}\_{t}^{2}\right)^{-1} \left(\hat{\mathbf{X}}\_{t}^{2} - \hat{\mathbf{X}}\_{t}^{1}\right) \tag{27}$$

$$\overline{P}\_{t+1}^{f} = \hat{P}\_t^1 - \hat{P}\_t^1 \left(\hat{P}\_t^1 + \hat{P}\_t^2\right)^{-1} \hat{P}\_t^{1T} \tag{28}$$

#### **Track to Track Fusion**

$$\overline{P}\_{t+1}^{f} = \left[\sum\_{i=1}^{N} P\_{i\_t}^{-1}\right]^{-1} \tag{29}$$

$$\overline{X}\_{t+1}^{f} = \overline{P}\_{t+1}^{f} \sum\_{i=1}^{N} P\_{i\_t}^{-1} \mathbf{X}\_{i\_t} \tag{30}$$
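The track-to-track fusion of Eqs. (29) and (30) is an information-weighted average of the local estimates; a sketch under assumed names:

```python
import numpy as np

def track_to_track_fuse(states, covs):
    """Track-to-track fusion of Eqs. (29)-(30):
    P_f = (sum_i P_i^{-1})^{-1},  X_f = P_f * sum_i P_i^{-1} X_i."""
    infos = [np.linalg.inv(np.atleast_2d(P)) for P in covs]
    P_f = np.linalg.inv(sum(infos))
    X_f = P_f @ sum(I @ np.atleast_1d(X) for I, X in zip(infos, states))
    return X_f, P_f
```

Tracks with smaller covariance (higher information) dominate the fused estimate, and the fused covariance is never larger than the smallest input covariance.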

#### **3.4 Maneuvering target**

The class of maneuvering targets yields particularly challenging tracking problems. The challenges include choosing a system model close to the actual target maneuvers, in addition to often having to give real time solutions. In our work we now aim to demonstrate the efficacy of the CGKF framework on RADAR measurement based coordinated turn (CT) models. We reiterate that the non-necessity of prior knowledge of the system and measurement noise characteristics (often representing the nature of the maneuver) makes the CGKF particularly attractive. The present work builds on [15], where the CGKF algorithm was applied to a variety of maneuvering targets based on a linear measurement model. Currently a nonlinear (RADAR based) measurement model is employed in order to move a step closer to a more realistic scenario. We have applied the CGKF to the highly maneuvering class of CT models with known as well as unknown turn rates [13, 14]. In the simulation studies the turn rate is represented by *ω*.

The discussion is divided into the following parts.

#### *3.4.1 CT state variable model*

A two dimensional model for the target tracking problem (maneuver in horizontal 2D plane) is described as follows.

**State Equation: CT known** *ω*.

$$X\_{t+1} = AX\_t + Bw\_t \tag{31}$$

where the state vector is $X\_t = \left(x(t) \; \dot{x}(t) \; y(t) \; \dot{y}(t)\right)^T$ and the state transition and noise-gain matrices are

$$A = \begin{pmatrix} A\_1 & -A\_2 \\ A\_2 & A\_1 \end{pmatrix}, \quad B = \begin{pmatrix} B\_1 & B\_2 \\ B\_2 & B\_1 \end{pmatrix}$$

with

$$A\_1 = \begin{pmatrix} 1 & \sin(\omega\Delta t)/\omega \\ 0 & \cos(\omega\Delta t) \end{pmatrix}, \quad A\_2 = \begin{pmatrix} 0 & (1 - \cos(\omega\Delta t))/\omega \\ 0 & \sin(\omega\Delta t) \end{pmatrix}, \quad B\_1 = \begin{pmatrix} \Delta t^2/2 \\ \Delta t \end{pmatrix}, \quad B\_2 = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

and *wt* represents system noise, which is white Gaussian.

and *Δt* is a time step. Here we have considered the state vector to include X and Y coordinates of the target as well as the speed in the two coordinates.
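The block-structured CT transition matrix of Eq. (31) can be assembled as follows (an illustrative sketch; for small *ω* the sin(*ω*Δ*t*)/*ω* terms approach Δ*t*, recovering near-straight-line motion):

```python
import numpy as np

def ct_transition(omega, dt):
    """Block CT transition A = [[A1, -A2], [A2, A1]] of Eq. (31)
    for state (x, xdot, y, ydot) and known turn rate omega."""
    s, c = np.sin(omega * dt), np.cos(omega * dt)
    A1 = np.array([[1.0, s / omega],
                   [0.0, c]])
    A2 = np.array([[0.0, (1.0 - c) / omega],
                   [0.0, s]])
    return np.block([[A1, -A2], [A2, A1]])
```

For *ω* = π/2 and Δ*t* = 1 the velocity vector (1, 0) is rotated to (0, 1), i.e., a quarter turn, while the speed is preserved.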

**State Equation: CT with unknown** *ω*

$$X\_{t+1} = A(X\_t)X\_t + Bw\_t \tag{32}$$


where the state vector is $X\_t = \left(x(t) \; \dot{x}(t) \; y(t) \; \dot{y}(t) \; \omega(t)\right)^T$ and the state transition and noise-gain matrices are

$$A(X\_t) = \begin{pmatrix} A\_1 & -A\_2 & B\_2 \\ A\_2 & A\_1 & B\_2 \\ B\_3 & B\_3 & 1 \end{pmatrix}, \quad B = \begin{pmatrix} \Delta t^2/2 & 0 & 0 \\ \Delta t & 0 & 0 \\ 0 & \Delta t^2/2 & 0 \\ 0 & \Delta t & 0 \\ 0 & 0 & \Delta t \end{pmatrix}, \quad B\_3 = \begin{pmatrix} 0 & 0 \end{pmatrix}$$

where *A*1, *A*2, *B*2 and *B*3 are as described above and *wt* represents white Gaussian system noise. Inclusion of the angular speed *ω* in the state vector makes the state equation nonlinear for the case of CT with unknown *ω*.

#### **Measurement Equation:**

$$\mathbf{g}\_t = \begin{pmatrix} \left(x(t)^2 + y(t)^2\right)^{0.5} \\ \tan^{-1}(y(t)/x(t)) \end{pmatrix} + v\_t \tag{33}$$

where *gt* is the measurement vector and *vt* is measurement noise which is assumed to be white Gaussian.

#### **4. Results and sensitivity studies**

#### **4.1 Stand alone mode**

The tabulated results of all sensors for the EKF and CGKF are given below with their respective PFE (Percentage Fit Error) values. The error metric *PFE* = (∣∣*X̂t* − *Xt*∣∣ / ∣∣*Xt*∣∣) × 100 represents the normalized difference between the estimated and actual track achieved by the CGKF and EKF. The PIR sensor gives the least error with the CGKF. All the results in this and subsequent sections are from a minimum of 500 Monte Carlo runs. The plots for the EKF (left) and CGKF (right) have been combined together. The figures appear as top and bottom: the top one is the true trajectory and the bottom one is the true trajectory superimposed with the estimated trajectory. The PFE is also mentioned on the graph itself for every case; this is the same for all the cases given below. Wherever the error is negligible, the estimated track (green) completely overlays the actual track (black). It is to be noted that the plots are based on one of the 500 runs used to compute the PFE metric (refer **Table 2**); this applies to the present and all subsequent sections. One example of each sensor's performance is displayed in **Figures 2**–**4** respectively. The PFE and RMSPE metrics (the latter representative of the error in range calculation based on the *x*, *y* coordinates of the target) in the plots correspond to those of the CGKF for a particular run.
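The PFE metric can be computed as below (a sketch; the function name is ours, and we take the vector 2-norm as the norm implied by the text):

```python
import numpy as np

def pfe(estimated, actual):
    """Percentage fit error: 100 * ||Xhat - X|| / ||X||,
    using the vector 2-norm."""
    estimated = np.asarray(estimated, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return 100.0 * np.linalg.norm(estimated - actual) / np.linalg.norm(actual)
```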


**Table 2.** *Stand alone mode.*

**Figure 2.** *Stand alone mode:-PIR sensor.*

**Figure 3.** *Stand alone mode:-acoustic sensor.*

#### **4.2 Homogeneous fusion mode of sensors**

The results from both MKF and weighted fusion have been tabulated separately, as shown in the first six entries of **Table 3**. Settings of the simulations

**Figure 4.** *Stand alone mode:- seismic sensor.*


#### **Table 3.**

*Measurement fusion:-4 sensor set.*

including the number of Monte Carlo runs, are the same as those for the stand alone method described above. An example of the method is illustrated in **Figures 5** and **6**. The PFE and RMSPE metrics (as defined in Section 4.1) in the plots correspond to those of the CGKF for the corresponding run.

#### **4.3 Heterogeneous fusion mode of sensors**

The results in the last three entries of **Table 3** correspond to the measurement fusion based method of heterogeneous fusion. Settings of the simulations, including the number of Monte Carlo runs, are the same as those for the stand alone and homogeneous fusion methods described above. One example each

**Figure 5.** *Homogeneous fusion (MKF):- seismic sensor.*

**Figure 6.** *Homogeneous fusion (weighted):- acoustic sensor.*

of the two sensor (PIR and seismic) and three sensor (all three combined) cases is illustrated in **Figures 7** and **8** respectively. The PFE and RMSPE metrics (as defined in Section 4.1) in the plots correspond to those of the CGKF for the corresponding run (refer **Table 4**).

**Figure 7.** *Heterogeneous fusion (maximal):-PIR and seismic sensors.*

**Figure 8.** *Heterogeneous fusion (maximal):- PIR, seismic and acoustic sensors.*

#### **4.4 Maneuvering target**

The 2D framework study has been carried out on a set of seventy data points in order to generate a smooth trajectory. The following system and measurement noise covariance matrices are used to generate the simulated track: *Q* = 0.01*I*, *R* = 0.1*I* for



#### **Table 4.**

*Heterogeneous fusion (state fusion).*

all models. The choice of the initial value of *ω* in the CT model has been obtained via standard fighter aircraft (e.g., F-16) data available on the internet. This value of *ω* has been set to 0.5 rad/s, which corresponds to approximately 28.6 degrees/s (roughly the maximum instantaneous turn rate of any current generation fighter aircraft). Here the *PFE* is the sum of the *PFE* values obtained along the X and Y coordinates respectively. The error metric shown in the tables is the average value computed over 500 runs, while the plots correspond to one specific run, wherein results are presented in the form of 2D plots of the simulated target trajectory, simulated measurements and the estimated track against time. **Table 5** shows the typical constant gain average values computed using the GA over 500 runs corresponding to each of the models. Corresponding to a certain gain there is a transient and a steady state behavior. If the gain is large, the transient is short but the steady state fluctuating error is large. When the gain is small, as in the present case, there is a long transient with a small steady state error. If the filter is run backwards from the end, then the whole actual trajectory will be wrapped around by the estimated values. In a nutshell, the filter gain values *K* can be tuned manually to provide optimal tracking results in a constant gain framework. The filter gain values will be of a typical nature as per **Table 5**, corresponding to specific target state models. **Figures 9** and **10** illustrate the performance of the CGKF versus the standard KF model. The PFE and RMSPE metrics (as defined in Section 4.1) in the plots correspond to those of the CGKF for a particular run.

#### **4.5 Sensitivity studies on constant gain in case of maneuvering targets (CT (known** *ω***))**

Under this heading we demonstrate the robustness of the constant gain in so far as the application of gain variations to the maneuvering target tracking scenario for


**Table 5.** *Percentage fit error comparison:-non linear case and typical K matrix values.*

**Figure 9.** *CT(known ω).*

the CT (known *ω*) model is concerned. The tables below list the different PFE/RMSPE metrics achieved for specified variations in the constant gain values. In **Table 6** we show variation via additive increments to the constant gain, while in **Table 7** we show variation via fractional scaling of the constant gain.


#### **Table 6.**

*Constant gain robustness to additive variations (CT(known ω)).*


#### **Table 7.**

*Constant gain robustness to fractional variations (CT(known ω)).*

The tables show that the constant gain is robust to minor additive and fractional perturbations, demonstrating that the achieved constant gain provides good tracking results as far as the achieved PFE values indicate.

#### **5. Conclusion**

We believe that these are the only studies of a CGKF applied to tracking targets in WSN environments and to maneuvering target models based on nonlinear measurement models. As seen, the EKF is unable to track the targets in the WSN and maneuvering target cases as effectively as the CGKF. This is a significant finding and supports the fact that the CGKF effectively circumvents, or in other words trades away, the filter statistics, which are more difficult to obtain, and gives optimal tracking results by working directly with the Kalman gain. The present results show that the CGKF is successful in target tracking applications wherein the constant gain approach overcomes the uncertainty regarding noise statistics that exists in the framework of the problem. The CGKF has been employed for tracking maneuvering targets and targets in a WSN. The present work firmly establishes the CGKF framework, thereby enabling its applicability to a wider variety of problems as deemed fit by the reader.

#### **5.1 Analysis of results and future work**

#### *5.1.1 Stand alone mode*

Following are the deductions based on the simulation studies as summarized in **Table 2**.

1. The results and plots clearly bring out the novelty of the CGKF, the overall performance of which is better than that of the EKF as per the PFE values.


#### *5.1.2 Homogeneous fusion mode*

Following are deductions based on the simulation studies as summarized in **Table 3**.


#### *5.1.3 Heterogeneous fusion mode*

Following are the deductions based on the simulation studies as summarized in **Table 4**.


#### *5.1.4 Maneuvering target*

Following are the deductions of the simulations.



3. The results obtained for the CT models, including those of the sensitivity analysis (refer **Tables 6** and **7**), demonstrate the viability of applying the CGKF to this category of problems.

#### **5.2 Conclusions and suggestions for further studies**

The efficacy of the CGKF has been demonstrated, wherein a single approach yields optimal results for a variety of linear [10] as well as nonlinear models in WSN and maneuvering target scenarios [15]. The extensive numerical studies establish that the CGKF performs better than the conventional EKF.

Actual implementation of a target tracking application in the WSN environment would require optimal routing, deployment, design, communication protocols and the other associated integral characteristics mentioned in the introduction. Though not directly within the scope of this work, these aspects are very important.

It would be very useful to apply the constant gain approach to variants of the Kalman filter such as the particle filter, the ensemble filter and other formulations.

Finally, the CGKF could be tried out on massive-data problems such as numerical weather prediction. The constant gains can be precomputed using earlier data and, since the gains are robust, they can be expected to handle newer data quite efficiently, similar to the space debris application in [1].

#### **Author details**

Peeyush Awasthi<sup>1</sup> , Ashwin Yadav<sup>2</sup> \*, Naren Naik<sup>3</sup> and Mudambi Ramaswamy Ananthasayanam<sup>4</sup>

1 Graduate Research Scholar, Florida International University, USA

2 Geomatics Engineering, Department of Civil Engineering, Indian Institute of Technology, Roorkee, India

3 Department of Electrical Engineering, Indian Institute of Technology Kanpur, India

4 Department of Aerospace Engineering, Indian Institute of Science, Bangalore, India

\*Address all correspondence to: ashwiny77@gmail.com

© 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **References**

[1] Mudambi R. Ananthasayanam (November 23rd 2018). Tuning of the Kalman Filter Using Constant Gains, Introduction and Implementations of the Kalman Filter, Felix Govaers, IntechOpen, DOI: 10.5772/intechopen.81795. Available from: https://www.intechopen.com/books/introduction-and-implementations-of-the-kalman-filter/tuning-of-the-kalman-filter-using-constant-gains

[2] Mehra, R.K. , "Approaches to adaptive filtering," in *IEEE Symposium on Adaptive Processes (9th) Decision and Control* , vol.9, no., pp.141, 7-9 Dec. 1970

[3] Robert H Shumway, David S Stoffer; "Time series Analysis and its applications" (Springer Text in Statistics)

[4] Bavdekar, V. A., Deshpande, A. P. and Patwardhan, S. C. (2011) Identification of process and measurement noise covariance for state and parameter estimation using extended Kalman filter. Journal of Process control, 21 : 585-601.

[5] A. H. Mohamed, K. P. Schwartz, "Adaptive Kalman filtering for INS/GPS," *Journal of Geodesy*, vol. 73, no. 2, pp. 193-203, 1999.

[6] Myers, K.; Tapley, B.; , "Adaptive sequential estimation with unknown noise statistics," , *IEEE Transactions on Automatic Control* , vol.21, no.4, pp. 520- 523, Aug 1976

[7] R.M.O Gemson and M.R. Ananthasayanam, "Importance of Initial State Covariance Matrix for the parameter estimation using an Adaptive Extended Kalman Filter",in American Institute of Aeronautics and Astronautics, vol. 4153, pp. 94-104,1998.

[8] Akita, T.; Takaki, R,; Shima, E, "A new adaptive estimation method of spacecraft thermal mathematical model with an ensemble Kalman filter", Acta Astronautica, vol. 73, April-May 2012, pp 144-155

[9] A.K. Anil Kumar, M.R. Ananthasayanam, P.V. Subba Rao, "A constant gain Kalman filter approach for the prediction of the re-entry of risk objects," in *Acta Astronautica*, vol. 61, no. 10, pp. 831-839, 2007.

[10] Yadav, A.; Naik, N.; Ananthasayanam, M. R.; Gaur, A.; Singh, Y. N., " A constant gain Kalman filter approach to target tracking in wireless sensor networks," Industrial and Information Systems (ICIIS), 2012 7th IEEE International Conference on , vol., no., pp.1,7, 6-9 Aug. 2012

[11] Christian Bohn., "Recursive Parameter Estimation Of Non Linear Continuous-Time Systems through Sensitivity-Model-Based Adaptive Filter",2000

[12] Jitendra.R.Raol, *Multi Sensor Data Fusion with MATLAB*, CRC Press,2010.

[13] X. Rong Li; Vesselin P Jilkov; , "A Survey of Maneuvering Target Tracking: Dynamic Models", SPIE conference on Signal and Data processing of small targets, April 2000 (4048-22)

[14] Yaakov Bar-Shalom; Peter K Willet; Xin Tian,*Tracking and datafusion A handbook of algorithms,* YBS Press,2011.

[15] Yadav, A.; Awasthi, P.; Naik, N.; Ananthasayanam, M.R., " A constant gain Kalman filter approach to track maneuvering targets," Control Applications (CCA), 2013 IEEE International Conference on , vol., no., pp.562,567, 28-30 Aug. 2013


[16] R.E. Kalman, "A new approach to linear filtering and prediction problems," in *Transactions of the ASME-Journal of Basic Engineering*, vol. 82, pp. 35-45, 1960.

[17] Mehra, R.K., "On the identification of variances and adaptive Kalman filtering," *IEEE Transactions on Automatic Control*, vol. AC-15, no. 2, April 1970.

[18] Kailath, T., " An innovations approach to least-squares estimation– Part I: Linear filtering in additive white noise," Automatic Control, IEEE Transactions on , vol.13, no.6, pp.646,655, Dec 1968

[19] Viswanath. Study of constant gain Kalman filter for basic systems with preliminary study of ISAR problem [MTech thesis]. India: IIT Kanpur; 2018

[20] Kalyanmoy Deb, *Multi-Objective Optimization using Evolutionary Algorithms*, Wiley India , 2010.

[21] Nithya V.S.;Sheshadri K; Kumar A; Hari K.V.S; "Model based target tracking in a wireless network of passive infrared sensor nodes" International Conference on Signal Processing and Communications (SPCOM), 2010, 18-21 July 2010

[22] Stefano Coraluppi, Craig Carthel, Mahendra Mallick; "Multi-target tracking with Unattended Ground Sensors (UGS) data"; ALPHATECH, Inc.

[23] H.B.Mitchell; Multi-Sensor Data Fusion, An Introduction 2007

[24] Hugh Durrant-Whyte Lecture notes on "Multi Sensor Data Fusion"

[25] Grimble M J, Jukes K A and D P Goodall, " Nonlinear filters and operators and the constant gain extended Kalman filter," *IMA Journal of Mathematical Control and Inf.*, vol. 1, pp. 359-386, 1984.

Section 3

## Practical Implementations and Applications

#### **Chapter 5**

## Parameter Estimation of Weighted Maxwell-Boltzmann Distribution Using Simulated and Real Life Data Sets

*Javaid Ahmad Reshi, Bilal Ahmad Para and Shahzad Ahmad Bhat*

### **Abstract**

This paper deals with the estimation of the parameters of the Weighted Maxwell-Boltzmann Distribution using the classical and Bayesian paradigms. Under the classical approach, we estimate the rate parameter using the maximum likelihood estimator. In the Bayesian paradigm, we primarily study the Bayes' estimator of the parameter of the Weighted Maxwell-Boltzmann Distribution under the extended Jeffrey's prior and the gamma and exponential prior distributions, assuming different loss functions. The extended Jeffrey's prior covers a wide spectrum of priors for obtaining Bayes' estimates of the parameter; particular cases of it are Jeffrey's prior and Hartigan's prior. A comparative study has been done between the MLE and the estimates under different loss functions (squared error, Al-Bayyati's, Stein's and precautionary loss functions). From the results, we observe that in most cases the Bayesian estimator under Al-Bayyati's loss function has the smallest mean squared error values for both priors, i.e., Jeffrey's prior and the extension of Jeffrey's prior. Moreover, as the sample size increases, the MSE decreases quite significantly. The estimators are compared in terms of mean squared error (MSE), computed using the programming language R. Also, two real life data sets are considered for making a model comparison between special cases of the Weighted Maxwell-Boltzmann Distribution in terms of fitting.

**Keywords:** Weighted Maxwell-Boltzmann Distribution, prior distributions, loss functions, R Software

#### **1. Introduction**

In statistical mechanics, there are many applications of the Maxwell-Boltzmann distribution. The Maxwell-Boltzmann distribution forms the basis of the kinetic theory of gases, which explains many fundamental properties of gases, including pressure and diffusion. This distribution is sometimes called the distribution of velocities, energy and magnitude of momenta of molecules. Tyagi and Bhattacharya [1] considered the Maxwell distribution as a lifetime model and discussed the Bayes' and minimum variance unbiased estimation procedures for its parameter and reliability function. Chaturvedi and Rani [2] obtained the classical and Bayes' estimators for

the Maxwell distribution after generalizing it by adding another parameter. Empirical Bayes' estimation for the Maxwell distribution was also obtained by Bekker and Roux [3]. Kazmi et al. [4] derived the Bayesian estimation for a two-component mixture of Maxwell distributions, assuming censored data. The Maxwell-Boltzmann distribution can be used to find the distribution of a particle's kinetic energy, which is related to the particle's speed by the formula $E = mv^2/2$, provided the distribution of speed is known. The PDF of the Maxwell-Boltzmann distribution is given by Maxwell [5]:

$$f(x) = \sqrt{2/\pi}\, \theta^{3/2} x^2 e^{-\theta x^2/2}, \quad x > 0 \tag{1}$$

And the CDF of the Maxwell distribution is given as:

$$F(x) = 1 - \frac{\Gamma\left(3/2,\, \theta x^2/2\right)}{\Gamma(3/2)} \tag{2}$$

Recently, Aijaz et al. [6] estimated and analyzed the Bayes' estimators of the Maxwell-Boltzmann distribution under various loss functions and prior distributions. Other contributions on the Maxwell distribution are Huang and Chen [7], Krishna and Malik [8], Tomer and Panwar [9], Zhang et al. [10], and Monisa [11].

Various Statisticians and Mathematicians have carried out the Bayesian paradigm of Maxwell-Boltzmann distribution by using loss functions and prior distributions, see Al-Baldawi [12], Dey et al. [13], Podder and Roy [14], Rasheed [15], and Spiring and Yeung [16].

The concept of weighted distributions was introduced by Fisher [17] and later formulated in general terms by Rao [18] in connection with modeling statistical data. Standard distributions are applicable when each and every observation has an equal chance of being recorded; weighted distributions arise when the probability of selecting an observation varies from observation to observation. In this context, the Maxwell distribution is generalized to the Weighted Maxwell-Boltzmann distribution. The PDF of the Weighted Maxwell-Boltzmann distribution was introduced by Aijaz et al. [19]:

$$f_w(x; \theta, a) = \frac{\theta^{(a+3)/2}\, x^{a+2}\, e^{-\theta x^2/2}}{2^{(a+1)/2}\, \Gamma((a+3)/2)}, \quad x > 0 \tag{3}$$

where $\theta$ is the rate parameter ($\theta > 0$) and $a$ is the weight parameter ($a > 0$). Also, the CDF of the Weighted Maxwell distribution is given by:

$$F_w(x) = 1 - \frac{\Gamma\left((a+3)/2,\, \theta x^2/2\right)}{\Gamma((a+3)/2)} \tag{4}$$

*Parameter Estimation of Weighted Maxwell-Boltzmann Distribution Using Simulated and Real… DOI: http://dx.doi.org/10.5772/intechopen.100227*

The reliability function and hazard rate of the Weighted Maxwell distribution are given by:

$$R_w(x) = \frac{\Gamma\left((a+3)/2,\, \theta x^2/2\right)}{\Gamma((a+3)/2)} \tag{5}$$

$$h_w(x) = \frac{\theta^{(a+3)/2}\, x^{a+2}\, e^{-\theta x^2/2}}{2^{(a+1)/2}\, \Gamma\left((a+3)/2,\, \theta x^2/2\right)} \tag{6}$$

The $r$-th moment about zero of the Weighted Maxwell-Boltzmann distribution is given by:

$$\mu_r' = \left(2/\theta\right)^{r/2} \frac{\Gamma((a+r+3)/2)}{\Gamma((a+3)/2)}, \quad r = 1, 2, 3, \ldots \tag{7}$$
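Eqs. (5) and (7) can be checked numerically: the regularized upper incomplete gamma function gives the reliability function directly, and quadrature reproduces the $r$-th moment. A minimal Python sketch (the chapter's own numerics use R; the parameter values $\theta = 2$, $a = 1.5$ and the test point are our own illustrative choices):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, gammaincc

theta, a = 2.0, 1.5   # hypothetical illustrative values, not from the paper

def pdf(x):
    # Weighted Maxwell-Boltzmann PDF, Eq. (3)
    return (theta ** ((a + 3) / 2) * x ** (a + 2) * np.exp(-theta * x ** 2 / 2)
            / (2 ** ((a + 1) / 2) * gamma((a + 3) / 2)))

def reliability(x):
    # Reliability function, Eq. (5): regularized upper incomplete gamma
    return gammaincc((a + 3) / 2, theta * x ** 2 / 2)

x0 = 1.2
tail, _ = quad(pdf, x0, np.inf)          # P(X > x0) by quadrature
assert abs(tail - reliability(x0)) < 1e-6

# r-th raw moment, Eq. (7), against direct numerical integration
r = 2
mu_r = (2 / theta) ** (r / 2) * gamma((a + r + 3) / 2) / gamma((a + 3) / 2)
mu_r_num, _ = quad(lambda x: x ** r * pdf(x), 0, np.inf)
assert abs(mu_r - mu_r_num) < 1e-6
```

For $r = 2$ the formula collapses to $\mu_2' = (a+3)/\theta$, which is a convenient sanity check.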

In comparison to the classical approach, the Bayesian approach is attractive for estimating the parameters of a distribution provided that the prior distribution describes the random behavior of the parameter well. Very often, priors are chosen according to one's subjective knowledge and beliefs, which is why the Bayesian approach is sometimes called the subjective approach. However, Aslam [20] has shown an application of the prior predictive distribution to elicit the prior density. A number of symmetric and asymmetric loss functions have been shown to be useful; see Kasair et al. [21], Norstrom [22], Reshi et al. [23], Zellner [24], Reshi et al. [25], Dey and Maiti [26], Alkutbi [27], Wald [28], etc.

#### **2. Estimation of parameters**

In this section, the parameters of the Weighted Maxwell-Boltzmann distribution are estimated under the classical and Bayesian paradigms.

#### **2.1 Maximum likelihood estimation**

Let $x = (x_1, x_2, x_3, \ldots, x_n)$ be a random sample of size $n$ from the Weighted Maxwell-Boltzmann distribution. The likelihood function is then given by:

$$L(\theta, a \mid x) = \frac{\theta^{n(a+3)/2}\, \prod_{i=1}^{n} x_i^{a+2}}{2^{n(a+1)/2}\, \Gamma\left(\frac{a+3}{2}\right)^n} \exp\left(-\frac{\theta}{2} \sum_{i=1}^{n} x_i^2\right) \tag{8}$$

The log-likelihood function is then given by:

$$\log L(\theta, a \mid x) = \frac{n(a+3)}{2} \log\theta - \frac{n(a+1)}{2} \log 2 - n \log\Gamma\left(\frac{a+3}{2}\right) + (a+2)\sum_{i=1}^{n} \log x_i - \frac{\theta}{2}\sum_{i=1}^{n} x_i^2 \tag{9}$$

Differentiating the log-likelihood with respect to $\theta$ and equating to zero, we obtain:

$$\hat{\theta}_{\text{mle}} = \frac{na + 3n}{\sum_{i=1}^{n} x_i^2} \tag{10}$$
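Under the density (3), $Y = X^2$ is gamma-distributed with shape $(a+3)/2$ and scale $2/\theta$, which makes it easy to exercise the MLE (10) on simulated data. A sketch in Python (the chapter's own computations use R; the parameter values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true, a = 2.0, 1.5       # hypothetical values for illustration
n = 200_000

# Y = X^2 ~ Gamma(shape=(a+3)/2, scale=2/theta), so sample X as sqrt(Y)
y = rng.gamma(shape=(a + 3) / 2, scale=2 / theta_true, size=n)
x = np.sqrt(y)

# Maximum likelihood estimator, Eq. (10)
theta_mle = (n * a + 3 * n) / np.sum(x ** 2)
assert abs(theta_mle - theta_true) < 0.05
```

With a large sample the estimate lands very close to the true rate, consistent with the MLE's consistency.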

#### **2.2 Bayesian Estimation of Weighted Boltzmann Maxwell Distribution using different loss functions**

#### *2.2.1 Estimation using extension of Jeffery's prior*

Under the extension of Jeffrey's prior, $g(\theta) \propto (1/\theta)^{2c_1}$, the joint probability density function of $\theta$ and $x$ is given by:

$$f_1(x, \theta) = \frac{\theta^{\frac{na+3n}{2} - 2c_1}\, \prod_{i=1}^{n} x_i^{a+2}}{2^{n(a+1)/2}\, \Gamma\left(\frac{a+3}{2}\right)^n} \exp\left(-\frac{\theta}{2} \sum_{i=1}^{n} x_i^2\right) \tag{11}$$

And the marginal density function of $x$ is given by:

$$f_1(x) = \frac{\prod_{i=1}^{n} x_i^{a+2}}{2^{n(a+1)/2}\, \Gamma\left(\frac{a+3}{2}\right)^n}\, \frac{\Gamma\left(\frac{na+3n-4c_1+2}{2}\right)}{\left(\frac{1}{2}\sum_{i=1}^{n} x_i^2\right)^{\frac{na+3n-4c_1+2}{2}}} \tag{12}$$


Dividing (11) by (12), we get the posterior probability density function of $\theta$ given $x$:

$$\pi_1(\theta \mid x) = \frac{\theta^{\frac{na+3n-4c_1}{2}} \left(\frac{\sum x_i^2}{2}\right)^{\frac{na+3n-4c_1+2}{2}}}{\Gamma\left(\frac{na+3n-4c_1+2}{2}\right)} \exp\left(-\frac{\theta}{2}\sum_{i=1}^{n} x_i^2\right) \tag{13}$$

#### *2.2.1.1 Bayes' estimator under squared error loss function*

Under the squared error loss function $L(\hat{\theta}, \theta) = c(\hat{\theta} - \theta)^2$, the risk function is given as:

$$R_{(\text{Sq,Ej})}(\hat{\theta}) = c\hat{\theta}^2 + \frac{c\,(na+3n-4c_1+4)(na+3n-4c_1+2)}{\left(\sum x_i^2\right)^2} - \frac{2c\hat{\theta}\,(na+3n-4c_1+2)}{\sum x_i^2} \tag{14}$$

Minimizing the above risk function, we get the Bayes' estimator:

$$\hat{\theta}_{(\text{Sq,Ej})} = \frac{na+3n-4c_1+2}{\sum x_i^2} \tag{15}$$

#### *2.2.1.2 Bayes' estimator under the precautionary loss function*

Under the precautionary loss function $L(\hat{\theta}, \theta) = c(\hat{\theta} - \theta)^2/\hat{\theta}$, the risk function is given as:

$$R_{(\text{Pr,Ej})}(\hat{\theta}) = c\hat{\theta} + \frac{c\,(na+3n-4c_1+4)(na+3n-4c_1+2)}{\hat{\theta}\left(\sum x_i^2\right)^2} - \frac{2c\,(na+3n-4c_1+2)}{\sum x_i^2} \tag{16}$$

Minimizing the above risk function, we get the Bayes' estimator:

$$\hat{\theta}_{(\text{Pr,Ej})} = \frac{\sqrt{(na+3n-4c_1+4)(na+3n-4c_1+2)}}{\sum x_i^2} \tag{17}$$

#### *2.2.1.3 Bayes' estimator under Al-Bayyati's loss function*

Under Al-Bayyati's loss function $L(\hat{\theta}, \theta) = \theta^{c_2}(\hat{\theta} - \theta)^2$, the risk function is given as:

$$R_{(\text{Al,Ej})}(\hat{\theta}) = \frac{1}{\Gamma\left(\frac{na+3n-4c_1+2}{2}\right)}\left[\frac{\hat{\theta}^2\, \Gamma\left(\frac{na+3n-4c_1+2c_2+2}{2}\right)}{\left(\frac{\sum x_i^2}{2}\right)^{c_2}} - \frac{2\hat{\theta}\, \Gamma\left(\frac{na+3n-4c_1+2c_2+4}{2}\right)}{\left(\frac{\sum x_i^2}{2}\right)^{c_2+1}} + \frac{\Gamma\left(\frac{na+3n-4c_1+2c_2+6}{2}\right)}{\left(\frac{\sum x_i^2}{2}\right)^{c_2+2}}\right] \tag{18}$$

Minimizing the above risk function, we get the Bayes' estimator:

$$\hat{\theta}_{(\text{Al,Ej})} = \frac{na+3n-4c_1+2c_2+2}{\sum x_i^2} \tag{19}$$

#### *2.2.1.4 Bayes' estimator under Stein's loss function*

Under Stein's loss function $L(\hat{\theta}, \theta) = \hat{\theta}/\theta - \log(\hat{\theta}/\theta) - 1$, the risk function is given as:

$$R_{(\text{St,Ej})}(\hat{\theta}) = \frac{\hat{\theta}\, \sum x_i^2}{na+3n-4c_1} - \log\hat{\theta} + E[\log\theta] - 1 \tag{20}$$

Minimizing the above risk function, we get the Bayes' estimator:

$$\hat{\theta}_{(\text{St,Ej})} = \frac{na+3n-4c_1}{\sum x_i^2} \tag{21}$$
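The four closed-form estimators above differ only in a shift of the numerator. A minimal Python sketch (the function name and sample values are ours; the chapter's numerics use R):

```python
import numpy as np

def jeffreys_bayes_estimators(x, a, c1, c2):
    """Closed-form Bayes' estimators of theta under the extension of
    Jeffrey's prior: Eqs. (15), (17), (19) and (21)."""
    x = np.asarray(x, dtype=float)
    n, S = len(x), float(np.sum(x ** 2))
    k = n * a + 3 * n - 4 * c1
    return {
        "squared_error": (k + 2) / S,                      # Eq. (15)
        "precautionary": np.sqrt((k + 4) * (k + 2)) / S,   # Eq. (17)
        "al_bayyati":    (k + 2 * c2 + 2) / S,             # Eq. (19)
        "stein":         k / S,                            # Eq. (21)
    }
```

Note that for $c_1 = 1/2$ (Jeffrey's prior proper) the squared error estimator reduces to $(na+3n)/\sum x_i^2$, i.e. the MLE of Eq. (10), and Stein's estimator is the smallest of the four whenever $c_2 \geq 0$.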

#### *2.2.2 Bayesian estimation under gamma (α, β) prior distributions*

The joint probability density function of the Weighted Maxwell-Boltzmann distribution under the gamma prior $g(\theta) = \frac{\beta^\alpha}{\Gamma(\alpha)}\theta^{\alpha-1}e^{-\beta\theta}$ is given as:

$$f_2(x, \theta) = \frac{\theta^{n(a+3)/2}\, \prod_{i=1}^{n} x_i^{a+2}}{2^{n(a+1)/2}\, \Gamma\left(\frac{a+3}{2}\right)^n} \exp\left(-\frac{\theta}{2}\sum_{i=1}^{n} x_i^2\right) \frac{\beta^\alpha}{\Gamma(\alpha)}\, \theta^{\alpha-1} e^{-\beta\theta} \tag{22}$$

Also, the marginal density function of $x$ is given by:

$$f_2(x) = \frac{\prod_{i=1}^{n} x_i^{a+2}}{2^{n(a+1)/2}\, \Gamma\left(\frac{a+3}{2}\right)^n}\, \frac{\beta^\alpha}{\Gamma(\alpha)}\, \frac{\Gamma\left(\frac{na+3n+2\alpha}{2}\right)}{\left(\frac{\sum x_i^2}{2} + \beta\right)^{\frac{na+3n+2\alpha}{2}}} \tag{23}$$

Using (22) and (23), we get the posterior probability density function:

$$\pi_2(\theta \mid x) = \frac{\theta^{\frac{na+3n+2\alpha-2}{2}} \left(\frac{\sum x_i^2}{2} + \beta\right)^{\frac{na+3n+2\alpha}{2}}}{\Gamma\left(\frac{na+3n+2\alpha}{2}\right)} \exp\left(-\theta\left(\frac{\sum x_i^2}{2} + \beta\right)\right) \tag{24}$$

#### *2.2.2.1 Under squared-error loss function*

The risk function under the squared error loss function is given as:

$$R_{(\text{Sq,gp})}(\hat{\theta}) = c\hat{\theta}^2 + \frac{c\left(\frac{na+3n+2\alpha+2}{2}\right)\left(\frac{na+3n+2\alpha}{2}\right)}{\left(\frac{\sum x_i^2}{2} + \beta\right)^2} - \frac{2c\hat{\theta}\left(\frac{na+3n+2\alpha}{2}\right)}{\frac{\sum x_i^2}{2} + \beta} \tag{25}$$

Minimizing the above risk function, we get the Bayes' estimator:

$$\hat{\theta}_{(\text{Sq,gp})} = \frac{na+3n+2\alpha}{\sum x_i^2 + 2\beta} \tag{26}$$

#### *2.2.2.2 Under precautionary loss function*

The risk function under the precautionary loss function is given as:


$$R_{(\text{Pr,gp})}(\hat{\theta}) = c\hat{\theta} + \frac{c\left(\frac{na+3n+2\alpha+2}{2}\right)\left(\frac{na+3n+2\alpha}{2}\right)}{\hat{\theta}\left(\frac{\sum x_i^2}{2} + \beta\right)^2} - \frac{2c\left(\frac{na+3n+2\alpha}{2}\right)}{\frac{\sum x_i^2}{2} + \beta} \tag{27}$$

Minimizing the above risk function, we get the Bayes' estimator:

$$\hat{\theta}_{(\text{Pr,gp})} = \frac{\sqrt{(na+3n+2\alpha+2)(na+3n+2\alpha)}}{\sum x_i^2 + 2\beta} \tag{28}$$

#### *2.2.2.3 Under Al-Bayyati's loss function*

The risk function under Al-Bayyati's loss function is given as:

$$R_{(\text{Al,gp})}(\hat{\theta}) = \frac{1}{\Gamma\left(\frac{na+3n+2\alpha}{2}\right)}\left[\frac{\hat{\theta}^2\, \Gamma\left(\frac{na+3n+2\alpha+2c_2}{2}\right)}{\left(\frac{\sum x_i^2}{2} + \beta\right)^{c_2}} - \frac{2\hat{\theta}\, \Gamma\left(\frac{na+3n+2\alpha+2c_2+2}{2}\right)}{\left(\frac{\sum x_i^2}{2} + \beta\right)^{c_2+1}} + \frac{\Gamma\left(\frac{na+3n+2\alpha+2c_2+4}{2}\right)}{\left(\frac{\sum x_i^2}{2} + \beta\right)^{c_2+2}}\right] \tag{29}$$

Setting $\partial R_{(\text{Al,gp})}(\hat{\theta})/\partial\hat{\theta} = 0$ and solving for $\hat{\theta}$, we get the Bayes' estimator:

$$\hat{\theta}_{(\text{Al,gp})} = \frac{na+3n+2\alpha+2c_2}{\sum x_i^2 + 2\beta} \tag{30}$$

#### *2.2.2.4 Under Stein's loss function*

The risk function under Stein's loss function is given by:

$$R_{(\text{St,gp})}(\hat{\theta}) = \int_0^\infty \left(\frac{\hat{\theta}}{\theta} - \log\frac{\hat{\theta}}{\theta} - 1\right) \frac{\theta^{\frac{na+3n+2\alpha-2}{2}} \left(\frac{\sum x_i^2}{2} + \beta\right)^{\frac{na+3n+2\alpha}{2}}}{\Gamma\left(\frac{na+3n+2\alpha}{2}\right)} \exp\left(-\theta\left(\frac{\sum x_i^2}{2} + \beta\right)\right) d\theta$$

$$R_{(\text{St,gp})}(\hat{\theta}) = \frac{\hat{\theta}\left(\frac{\sum x_i^2}{2} + \beta\right)}{\frac{na+3n+2\alpha-2}{2}} - \log\hat{\theta} + E[\log\theta] - 1 \tag{31}$$

Minimizing the above risk function, we obtain the required Bayes' estimator:

$$\hat{\theta}_{(\text{St,gp})} = \frac{na+3n+2\alpha-2}{\sum x_i^2 + 2\beta} \tag{32}$$
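All four gamma-prior estimators share the posterior shape $(na+3n+2\alpha)/2$ and rate $\sum x_i^2/2 + \beta$; only an additive shift in the numerator changes. A minimal sketch (our own helper name and values, not the authors' code):

```python
import numpy as np

def gamma_prior_bayes_estimators(x, a, alpha, beta, c2):
    """Bayes' estimators of theta under the gamma(alpha, beta) prior:
    Eqs. (26), (28), (30) and (32)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = n * a + 3 * n + 2 * alpha          # twice the posterior shape
    D = float(np.sum(x ** 2)) + 2 * beta   # twice the posterior rate
    return {
        "squared_error": m / D,                      # Eq. (26)
        "precautionary": np.sqrt((m + 2) * m) / D,   # Eq. (28)
        "al_bayyati":    (m + 2 * c2) / D,           # Eq. (30)
        "stein":         (m - 2) / D,                # Eq. (32)
    }
```

Setting `beta = 1` reproduces the estimators of Section 2.2.3 (Eqs. (37), (38), (40) and (42)), since that prior is the gamma prior with $\beta = 1$.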

#### *2.2.3 Bayesian estimation under exponential (α) prior distributions*

The joint probability density function of the Weighted Maxwell-Boltzmann distribution under the prior $g(\theta) = \frac{1}{\Gamma(\alpha)}\theta^{\alpha-1}e^{-\theta}$, i.e., the gamma prior of Section 2.2.2 with $\beta = 1$, is given as:

*Adaptive Filtering - Recent Advances and Practical Implementation*

$$f_2(x, \theta) = \frac{\theta^{n(a+3)/2}\, \prod_{i=1}^{n} x_i^{a+2}}{2^{n(a+1)/2}\, \Gamma\left(\frac{a+3}{2}\right)^n} \exp\left(-\frac{\theta}{2}\sum_{i=1}^{n} x_i^2\right) \frac{1}{\Gamma(\alpha)}\, \theta^{\alpha-1} e^{-\theta} \tag{33}$$

Also, the marginal density function of $x$ is given by:

$$f_2(x) = \frac{\prod_{i=1}^{n} x_i^{a+2}}{2^{n(a+1)/2}\, \Gamma\left(\frac{a+3}{2}\right)^n}\, \frac{1}{\Gamma(\alpha)}\, \frac{\Gamma\left(\frac{na+3n+2\alpha}{2}\right)}{\left(\frac{\sum x_i^2}{2} + 1\right)^{\frac{na+3n+2\alpha}{2}}} \tag{34}$$

Using (33) and (34), we get the posterior probability density function:

$$\pi_2(\theta \mid x) = \frac{\theta^{\frac{na+3n+2\alpha-2}{2}} \left(\frac{\sum x_i^2}{2} + 1\right)^{\frac{na+3n+2\alpha}{2}}}{\Gamma\left(\frac{na+3n+2\alpha}{2}\right)} \exp\left(-\theta\left(\frac{\sum x_i^2}{2} + 1\right)\right) \tag{35}$$

#### *2.2.3.1 Under squared-error loss function*

The risk function under the squared error loss function is given as:

$$R_{(\text{Sq,ep})}(\hat{\theta}) = c\hat{\theta}^2 + \frac{c\left(\frac{na+3n+2\alpha+2}{2}\right)\left(\frac{na+3n+2\alpha}{2}\right)}{\left(\frac{\sum x_i^2}{2} + 1\right)^2} - \frac{2c\hat{\theta}\left(\frac{na+3n+2\alpha}{2}\right)}{\frac{\sum x_i^2}{2} + 1} \tag{36}$$

Minimizing the above risk function, we get the Bayes' estimator:

$$\hat{\theta}_{(\text{Sq,ep})} = \frac{na+3n+2\alpha}{\sum x_i^2 + 2} \tag{37}$$

#### *2.2.3.2 Under precautionary loss function*

The risk function under the precautionary loss function is given as:

$$R_{(\text{Pr,ep})}(\hat{\theta}) = c\hat{\theta} + \frac{c\left(\frac{na+3n+2\alpha+2}{2}\right)\left(\frac{na+3n+2\alpha}{2}\right)}{\hat{\theta}\left(\frac{\sum x_i^2}{2} + 1\right)^2} - \frac{2c\left(\frac{na+3n+2\alpha}{2}\right)}{\frac{\sum x_i^2}{2} + 1}$$

Minimizing the above risk function, we get the Bayes' estimator:

$$\hat{\theta}_{(\text{Pr,ep})} = \frac{\sqrt{(na+3n+2\alpha+2)(na+3n+2\alpha)}}{\sum x_i^2 + 2} \tag{38}$$

#### *2.2.3.3 Under Al-Bayyati's loss function*

The risk function under Al-Bayyati's loss function is given as:


$$R_{(\text{Al,ep})}(\hat{\theta}) = \frac{1}{\Gamma\left(\frac{na+3n+2\alpha}{2}\right)}\left[\frac{\hat{\theta}^2\, \Gamma\left(\frac{na+3n+2\alpha+2c_2}{2}\right)}{\left(\frac{\sum x_i^2}{2} + 1\right)^{c_2}} - \frac{2\hat{\theta}\, \Gamma\left(\frac{na+3n+2\alpha+2c_2+2}{2}\right)}{\left(\frac{\sum x_i^2}{2} + 1\right)^{c_2+1}} + \frac{\Gamma\left(\frac{na+3n+2\alpha+2c_2+4}{2}\right)}{\left(\frac{\sum x_i^2}{2} + 1\right)^{c_2+2}}\right] \tag{39}$$

Minimizing the above risk function, we get the Bayes' estimator:

$$\hat{\theta}_{(\text{Al,ep})} = \frac{na+3n+2\alpha+2c_2}{\sum x_i^2 + 2} \tag{40}$$

#### *2.2.3.4 Under Stein's loss function*

The risk function under Stein's loss function is given by:

$$R_{(\text{St,ep})}(\hat{\theta}) = \int_0^\infty \left(\frac{\hat{\theta}}{\theta} - \log\frac{\hat{\theta}}{\theta} - 1\right) \frac{\theta^{\frac{na+3n+2\alpha-2}{2}} \left(\frac{\sum x_i^2}{2} + 1\right)^{\frac{na+3n+2\alpha}{2}}}{\Gamma\left(\frac{na+3n+2\alpha}{2}\right)} \exp\left(-\theta\left(\frac{\sum x_i^2}{2} + 1\right)\right) d\theta$$

$$R_{(\text{St,ep})}(\hat{\theta}) = \frac{\hat{\theta}\left(\frac{\sum x_i^2}{2} + 1\right)}{\frac{na+3n+2\alpha-2}{2}} - \log\hat{\theta} + E[\log\theta] - 1 \tag{41}$$

Minimizing the above risk function, we obtain the required Bayes' estimator:

$$\hat{\theta}_{(\text{St,ep})} = \frac{na+3n+2\alpha-2}{\sum x_i^2 + 2} \tag{42}$$

#### **3. Simulation study of weighted Maxwell-Boltzmann distribution**

In this section, we conduct simulation studies of the Weighted Maxwell-Boltzmann distribution to examine the performance of the MLEs and of the Bayesian estimators under different priors (the extension of Jeffrey's prior, the gamma prior and the exponential prior) and different loss functions, in terms of expected estimates, biases, variances and mean squared errors, for different parameter combinations. For the simulation study, the sample size is taken as n = 25, 100, 300 to observe the effect of small, moderate and large samples on the estimators. The simulation procedure was carried out for different random parameter combinations, and the process was repeated 2000 times. From the simulation results, it is concluded that the performances of the Bayesian estimators and the MLEs improve as the sample size increases. In terms of MSE, the Bayesian estimators under the gamma prior perform better (see **Table 1**). In particular, from **Table 2**, the extension of Jeffrey's prior under Al-Bayyati's loss function and Stein's loss function gives smaller MSEs than the other loss functions.
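The Monte Carlo procedure just described can be sketched compactly. The Python fragment below (our own illustration; the parameter combination and gamma-prior hyperparameters are assumed, and the chapter's actual computations use R) estimates the empirical MSE of the SELF Bayes' estimator under the gamma prior, Eq. (26), over 2000 replications:

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true, a = 2.0, 1.5        # one hypothetical parameter combination
alpha, beta = 2.0, 1.0          # assumed gamma-prior hyperparameters
reps = 2000                     # replications, as in the chapter

def mse_self(n):
    """Empirical MSE of the SELF Bayes' estimator, Eq. (26)."""
    err = np.empty(reps)
    for i in range(reps):
        # S = sum of x_i^2; each x_i^2 ~ Gamma((a+3)/2, scale 2/theta)
        S = rng.gamma((a + 3) / 2, scale=2 / theta_true, size=n).sum()
        err[i] = (n * a + 3 * n + 2 * alpha) / (S + 2 * beta) - theta_true
    return float(np.mean(err ** 2))

# MSE shrinks markedly from small to large samples, mirroring Table 1
assert mse_self(300) < mse_self(25)
```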

**Table 1.**

*Average estimate, bias, variance and mean squared error for $\hat{\theta}$ under gamma prior.*

ml*, maximum likelihood;* sq*, squared error loss function;* pre*, precautionary loss function;* alb*, Al-Bayyati's loss function;* ste*, Stein's loss function.*

**Table 2.**

*Average estimate, bias, variance and mean squared error for $\hat{\theta}$ under extension of Jeffrey's prior.*

From **Table 1**, we can see that the performances of the Bayesian estimators and the MLEs improve as the sample size increases. For large samples, the gamma prior under the squared error loss function and Al-Bayyati's loss function gives smaller MSEs than the other loss functions and the MLEs.

From **Table 3**, we can see that the performances of the Bayesian estimators and the MLEs improve as the sample size increases. The exponential prior under the squared error loss function and Stein's loss function gives smaller MSEs than the other loss functions. Thus, the exponential prior under the squared error loss function and Stein's loss function can be preferred for parameter estimation.

#### **4. Applications of weighted Maxwell-Boltzmann distribution**

In this section, we present the goodness of fit of the Weighted Maxwell-Boltzmann distribution (WMB). For testing the goodness of fit of the Weighted Maxwell-Boltzmann distribution against the Maxwell-Boltzmann (MB), length biased Maxwell-Boltzmann (LBMB) and area biased Maxwell-Boltzmann (ABMB) distributions, the following two data sets have been considered.

Data set I concerns the tensile strength, measured in GPa, of 69 carbon fibers tested under tension at gauge lengths of 20 mm; see Bader and Priest [29].

From **Table 4**, it is observed that the Weighted Maxwell-Boltzmann distribution has smaller AIC, AICC, $-\log L$ and BIC values than the Maxwell-Boltzmann, length biased Maxwell-Boltzmann and area biased Maxwell-Boltzmann distributions. Hence we can conclude that the Weighted Maxwell-Boltzmann distribution gives a better fit than these three distributions for data set I.

Data set II is strength data representing the strength measured in GPa for single carbon fibers and impregnated 1000-carbon fiber tows. Single fibers were tested under tension at gauge lengths of 10 mm with sample size n = 63; see Bader and Priest [29] and Surles and Padgett [30].

From **Table 5**, it has been observed that the weighted Maxwell-Boltzmann distribution has smaller AIC, AICC, −log *L* and BIC values as compared to the Maxwell-Boltzmann, length biased Maxwell-Boltzmann and area biased Maxwell-Boltzmann distributions. Hence we can conclude that the weighted Maxwell-Boltzmann distribution leads to a better fit than the Maxwell-Boltzmann, length biased Maxwell-Boltzmann and area biased Maxwell-Boltzmann distributions for data set II.





#### **Table 3.**

*Average estimate, bias, variance and mean squared error for θ̂ under exponential prior.*


#### *Adaptive Filtering - Recent Advances and Practical Implementation*


**Table 4.**

*Model comparison using AIC, AICC, BIC and −log L criterion for data set I.*


**Table 5.**

*Model comparison using AIC, AICC, BIC and −log L criterion for data set II.*

### **5. Conclusions**


#### **Acknowledgements**

The authors are highly thankful to the referees and the editor for their valuable suggestions.


### **Author details**

Javaid Ahmad Reshi<sup>1</sup> \*, Bilal Ahmad Para<sup>2</sup> and Shahzad Ahmad Bhat<sup>3</sup>

1 Department of Statistics, Government Degree College, Anantnag, India

2 Department of Mathematical Sciences, Islamic University of Science and Technology, Awantipora, India

3 Department of Commerce, Government Degree College, Anantnag, India

\*Address all correspondence to: reshijavaid19@gmail.com

© 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **References**

[1] Tyagi, R. K., and Bhattacharya, S. K. (1989). Bayes estimation of the Maxwell's velocity distribution function. Statistica 29(4), 563–567.

[2] Chaturvedi, A., and Rani, U. (1998). Classical and Bayesian reliability estimation of the generalized Maxwell failure distribution. Journal of Statistical Research 32, 113–120.

[3] Bekker, A. J. J. J., and Roux, J. J. J. (2005). Reliability characteristics of the Maxwell distribution: A Bayes estimation study. Communications in Statistics-Theory and Methods 34(11), 2169–2178.

[4] Kazmi, S., Aslam, M., and Ali, S. (2012). On the Bayesian estimation for two component mixture of Maxwell distribution, assuming type I censored data. International Journal of Applied Science and Technology 2(1), 197–218.

[5] Maxwell, J. C. (1990). The Scientific Letters and Papers of James Clerk Maxwell. Cambridge: Cambridge University Press. pp. 1846–1862.

[6] Aijaz, A., Ahmed, A., Reshi. J. A. (2017). Bayesian analysis of Maxwell-Boltzmann distribution under different loss functions and prior distributions. Pakistan Journal of Statistics 33(6), 419– 440.

[7] Huang, J., and Chen, S. (2016). Tail behavior of the generalized Maxwell distribution. Communications in Statistics-Theory and Methods 45(14), 4230–4236.

[8] Krishna, H., and Malik, M. (2012). Reliability estimation in Maxwell distribution with progressively type-II censored data. Journal of Statistical Computation and Simulation 82(4), 623–641.

[9] Tomer, S. K., and Panwar, M. S. (2015). Estimation procedures for Maxwell distribution under type-I progressive hybrid censoring scheme. Journal of Statistical Computation and Simulation 85(2), 339–356.

[10] Zhang, W., Guo, X., and Kassab, G. S. (2008). A generalized Maxwell model for creep behavior of artery opening angle. Journal of Biomechanical Engineering 130(5), 1–16.

[11] Monsia, M. D. (2011). A simplified nonlinear generalized Maxwell model for predicting the time dependent behavior of viscoelastic materials. World Journal of Mechanics 1, 158–167.

[12] Al-Baldawi, T. H. (2013). Comparison of maximum likelihood and some Bayes estimators for Maxwell distribution based on non-informative priors. Baghdad Science Journal 10(2), 480–488.

[13] Dey, S., Dey, T., and Maiti, S. S. (2013). Bayesian inference for Maxwell distribution under conjugate prior. Model Assisted Statistics and Applications 8(3), 193–203.

[14] Podder, C. K., and Roy, M. K. (2003). Bayesian estimation of the parameter of Maxwell distribution under MLINEX loss function. Journal of Statistical Studies 23, 11–16.

[15] Rasheed, H. A. (2013). Minimax estimation of the parameter of the Maxwell distribution under quadratic loss function. Journal of Al-Rafidain University College 31, 43–56.

[16] Spiring, F.A., and Yeung, A. S. (1998). A general class of loss functions with industrial applications. Journal of Quality Technology 30(2), 152–162.

[17] Fisher, R. A. (1934). The effects of methods of ascertainment upon the estimation of frequencies. Ann. Eugenics 6, 13–25.


[18] Rao, C. R. (1965). On discrete distributions arising out of method of ascertainment. In: Classical and Contagious Discrete. G. P. Patil, editor. Calcutta: Pergamon Press and Statistical Publishing Society. pp. 320–332.

[19] Dar, A. A., Ahmed, A., and Reshi, J. A. (2018). Characterisation and estimation of weighted Maxwell-Boltzmann distribution. Applied Mathematics and Information Sciences 12(1), 193–202.

[20] Aslam, M. (2003). An application of prior predictive distribution to elicit the prior density. Journal of Statistical Theory and Applications 2(1), 70–83.

[21] Ahmad, K., Ahmad, S. P., Ahmed, A., and Reshi, J. A. (2015). Bayesian analysis of generalized gamma distribution using R software. Journal of Statistics Applications and Probability 4(2), 323–335.

[22] Norstrom, J. G. (1996). The use of precautionary loss functions in risk analysis. IEEE Transactions on Reliability 45(3), 400–403.

[23] Ahmed, A., Ahmad, S. P., and Reshi, J. A. (2013). Bayesian analysis of Rayleigh distribution. International Journal of Scientific and Research Publications 3(10), 217–225.

[24] Zellner, A. (1986). Bayesian estimation and prediction using asymmetric loss functions. Journal of the American Statistical Association 81 (394), 446–451.

[25] Reshi, J. A., Ahmed, A., and Mir, K. A. (2014). Bayesian estimation of size biased classical gamma distribution. Journal of Reliability and Statistical Studies 7(1), 31–42.

[26] Dey, S., and Maiti, S. S. (2010). Bayesian estimation of the parameter of Maxwell distribution under different loss functions. Journal of Statistical Theory and Practice 4(2), 279–287.

[27] Al-Kutubi, H. S. (2005). On comparison estimation procedures for parameter and survival function exponential distribution using simulation [Ph.D. thesis]. Iraq: Baghdad University.

[28] Wald, A. (1950). Statistical Decision Functions. New York: John Wiley and Sons.

[29] Bader, M. G., and Priest, A. M. (1982). Statistical aspects of fiber and bundle strength in hybrid composites. In: Hayashi, T., Kawata, K., Umekawa, S., editors. Progress in Science in Engineering Composites. Tokyo: ICCM-IV. pp. 1129–1136.

[30] Surles, J. G., and Padgett, W. J. (2001). Inference for reliability and stress-strength for a scaled Burr type X distribution. Lifetime Data Anal 7, 187– 200.

#### **Chapter 6**

## Averaging Indoor Localization System

*Eman Shawky Abd El-Fattah Amer*

#### **Abstract**

This chapter aims at improving the accuracy of localization estimation by using the RSS method to estimate positions, taking into account the effects of both LOS and NLOS propagation. The proposed system depends on developing a mathematical model for the noisy VLC positioning system. To improve the results, the KF, which is considered an optimal estimator, is combined with the proposed system. The performance of the proposed technique is determined by evaluating the positioning errors in a typical room. This chapter also improves the accuracy of the positioning system by combining different ideas with averaging techniques. The discussion of the results for the averaging technique is displayed.

**Keywords:** RSS, KF, VLC, localization

#### **1. Introduction**

This chapter aims at improving the accuracy of localization estimation by using the Received Signal Strength (RSS) method to estimate positions, taking into account the effects of both line of sight (LOS) and non-line of sight (NLoS) propagation. The proposed system depends on developing a mathematical model for the noisy visible light communication (VLC) positioning system. To improve the results, the Kalman filter (KF), which is considered an optimal estimator, is combined with the proposed system. The performance of the proposed technique is determined by evaluating the positioning errors in a typical room. This chapter also improves the accuracy of the positioning system by combining different ideas with averaging techniques.

The remainder of this chapter is organized as follows: Section 2 discusses the optical channel in indoor systems. Section 3 is devoted to explaining the methodology of localization using RSS techniques; a mathematical derivation for performance evaluation is developed in the same section as well. In Section 4, the proposed KF algorithm is presented along with its estimation-correction procedure. Section 5 presents an averaging technique, in which both the LoS effect and the first reflection of the NLoS propagation are used. Adopting the KF with averaging is shown in Section 6. The discussion of the results for the averaging technique is displayed in Section 7. Section 8 compares the results with some recent references. Finally, the concluding remarks are given in Section 9.

#### **2. Optical channel model**

The characteristics of the channel modeling have been analyzed along with the effects of channel distortions in [1]. The power associated with the channel is separated into two factors: optical path loss (PL) and multipath dispersion. The PL is calculated from knowledge of the receiver size, the transmitter beam divergence, and the separation distance. However, a NLoS configuration (diffuse system), mainly used in the indoor environment, relies on reflections from the room surfaces and furniture. These reflections can be seen as unwanted signals or multipath distortions, which make the PL prediction more complex. The optical wireless (OW) channel transfer function is defined by

$$H = H\_{\rm LoS} + H\_{\rm NLoS} \tag{1}$$

According to **Figure 1**, *H*LoS is the contribution due to the LoS path, which is independent of the modulation frequency and depends on the distance between transmitter and receiver. In a VLC system, the direct current (DC) gain of the channel is expressed by

$$H\_{\rm LOS}^{i} = \frac{m+1}{2\pi d\_i^2} \cos^m(\phi\_i) A\_R \cos\left(\psi\_i\right) T\_s(\psi\_i) \mathbf{g}\left(\psi\_i\right),\tag{2}$$

The received power therefore becomes

$$P\_r = H\_{\rm LOS} P\_t,\tag{3}$$

where *Pt* is the transmitted power, *m* is the Lambertian order, *di* is the distance between transmitter *i* and the receiver, *ϕ<sup>i</sup>* is the irradiance angle, *ψ<sup>i</sup>* is the incidence angle, *Ts*(·) and *g*(·) are the gains of the optical filter and concentrator at the receiver (assumed here as unity gain), and *AR* is the detector effective area. The channel DC gain of the first reflection, *H*<sub>NLOS</sub>, is given by the following equation, related to **Figure 1**:

**Figure 1.** *The channel model of VLC system.*

*Averaging Indoor Localization System DOI: http://dx.doi.org/10.5772/intechopen.97531*

$$\begin{split}dH\_{\text{NLOS}}^{ip} &= \frac{m+1}{2\pi D\_{ip,1}^2 D\_{p,2}^2} \cos^m \left(\phi\_{ip}\right) \cos \left(\alpha\_{ip}\right) \cdot dA\_p \\ &\times \rho \cos \left(\beta\_p\right) \cos \left(\psi\_p\right) T\_s \left(\psi\_p\right) \mathbf{g} \left(\psi\_p\right) A\_R,\end{split} \tag{4}$$

where *Dip*,1 is the distance between transmitter *i* and reflection point *p*, *Dp*,2 is the distance between reflection point *p* and receiver *RX*, *ϕip* and *ψ<sup>p</sup>* are the NLoS irradiance and incidence angles with respect to point *p*, respectively, *αip* and *β<sup>p</sup>* are the incidence and irradiance angles at reflection point *p* on the wall, respectively, *ρ* is the wall reflectivity, and *dAp* represents the area of the reflection point on the wall.

The total NLoS channel gain for the *i*th transmitter, *H*<sup>*i*</sup><sub>NLOS</sub>, is given by collecting the reflections from the four walls [2]:

$$H\_{\text{NLOS}}^i = \sum\_{j=1}^4 H\_{\text{NLOS},\text{wall}j}^i,\tag{5}$$

where *H*<sup>*i*</sup><sub>NLOS,wall*j*</sub> is the collection of reflections from transmitter *i* to wall *j*, and can be obtained by integrating (4) over (*x*, *z*) or (*y*, *z*), based on the wall location, such that

$$H\_{\text{NLOS},\text{wall}j}^{i} = \iint\limits\_{(x,z)\,\text{or}\,(y,z)} \frac{m+1}{2\pi D\_{ip,1}^2 D\_{p,2}^2} \cos^m \left(\phi\_{ip}\right) \cos \left(\alpha\_{ip}\right) \rho \cos \left(\beta\_p\right) \cos \left(\psi\_p\right) T\_s \left(\psi\_p\right) \mathbf{g} \left(\psi\_p\right) A\_R \, dA\_p. \tag{6}$$

The determination of the parameters for the previous equation is as follows:

$$\begin{aligned} D\_{ip,1} &= \sqrt{(T\_{X,i} - p)(T\_{X,i} - p)^T} \\ D\_{p,2} &= \sqrt{(p - R\_X)(p - R\_X)^T} \end{aligned} \tag{7}$$

where (·)<sup>*T*</sup> is the transpose operator applied to a row vector. By using triangle calculations, the angles *ϕip*, *αip*, *ψp*, and *β<sup>p</sup>* can be found as follows:

$$\begin{aligned} \cos\left(\phi\_{ip}\right) &= \frac{|z\_i - z|}{D\_{ip,1}}, \quad \alpha\_{ip} = \frac{\pi}{2} - \phi\_{ip}, \\ \cos\left(\psi\_p\right) &= \frac{|z - z\_0|}{D\_{p,2}}, \quad \beta\_p = \frac{\pi}{2} - \psi\_p. \end{aligned} \tag{8}$$
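As an illustration, the geometry of Eqs. (7) and (8) can be sketched in a few lines of Python; the transmitter, reflection-point, and receiver coordinates below are illustrative values, not from the chapter:

```python
import numpy as np

# Geometry helpers of Eqs. (7)-(8) for one reflection point p on a wall.
# All coordinates are illustrative assumptions.
T_x = np.array([1.25, 1.25, 3.0])   # transmitter i position
p   = np.array([0.0, 2.0, 1.5])     # reflection point on the x = 0 wall
R_x = np.array([1.0, 1.0, 0.85])    # receiver position

D_ip1 = np.sqrt((T_x - p) @ (T_x - p))   # Eq. (7): transmitter-to-point distance
D_p2  = np.sqrt((p - R_x) @ (p - R_x))   # Eq. (7): point-to-receiver distance

cos_phi_ip = abs(T_x[2] - p[2]) / D_ip1              # Eq. (8)
alpha_ip   = np.pi / 2 - np.arccos(cos_phi_ip)       # incidence angle at the wall
cos_psi_p  = abs(p[2] - R_x[2]) / D_p2
beta_p     = np.pi / 2 - np.arccos(cos_psi_p)        # irradiance angle at the wall
print(D_ip1, D_p2)
```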

The simulation of the power distribution is done for two cases. In the first case, four transmitters are distributed at positions (1.25, 1.25, 3) m, (1.25, 3.75, 3) m, (3.75, 1.25, 3) m, and (3.75, 3.75, 3) m in a room of size (5, 5, 3) m, as shown in **Figure 2**. In the second case, only one transmitter is located at the center of the ceiling; this power distribution is shown in **Figure 3**. The simulation uses a FOV of 70°, a filter gain of 1, a concentrator gain of 1, a 60 × 60 LED array, and a photodiode (PD) active area of 1 cm<sup>2</sup>.
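The LoS power map described above can be sketched as follows; the Lambertian order, detector area, receiver height, and per-LED transmitted power are assumed values, and the gain follows Eqs. (2) and (3) with unity filter and concentrator gains:

```python
import numpy as np

# Sketch of the LoS received-power map of Eqs. (2)-(3) for the 4-LED case.
# m, A_R, receiver height, and P_T are assumptions for illustration.
def los_gain(tx, rx, m=1, A_R=1e-4):
    """DC channel gain H_LOS between one transmitter and the receiver."""
    d = np.linalg.norm(np.asarray(tx) - np.asarray(rx))
    cos_phi = abs(tx[2] - rx[2]) / d   # irradiance angle from the height difference
    cos_psi = cos_phi                  # psi_i = phi_i assumption used in the text
    return (m + 1) / (2 * np.pi * d**2) * cos_phi**m * A_R * cos_psi

leds = [(1.25, 1.25, 3), (1.25, 3.75, 3), (3.75, 1.25, 3), (3.75, 3.75, 3)]
P_T = 1.0                              # transmitted power per LED (assumed)

# Total received power on a coarse grid at an assumed receiver height of 0.85 m
xs = ys = np.linspace(0.0, 5.0, 26)
P_r = np.array([[sum(los_gain(t, (x, y, 0.85)) * P_T for t in leds)
                 for x in xs] for y in ys])
print(P_r.max(), P_r.min())
```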

**Figure 2.** *The power distribution for 4 LEDs.*

**Figure 3.** *The power distribution for one LED centered in the ceiling.*

#### **3. RSS mathematical analysis**

The proposed technique depends on estimating the receiver position using the RSS method, then further improving the acquired estimation by adopting the Kalman filtering algorithm. In the initial estimation, the RSS technique is used taking into account the effect of LoS. Specifically, a mathematical model is developed for the noisy VLC positioning system to estimate both the angular and horizontal-distance errors, because the horizontal-distance error depends on the irradiance angle error. The performance of the proposed technique is determined by evaluating the positioning errors in a typical room, and the results are compared to those of the traditional RSS system. Based on **Figure 1**, the analysis assumes that

*ψi* = *ϕi* for any *i* ∈ {1, 2, 3, 4}. Assume that the light emitted from each LED is distinguishable at the receiver. Accordingly, the index *i* is dropped from the developed equations.

If the transmitter and receiver are aligned, then *ψ* = *ϕ* = 0 and *d* = *V*, where *d* is the direct distance between transmitter and receiver and *V* is the vertical distance between them, using *Ts*(*ψi*) = 1 and *g*(*ψi*) = 1 as shown in **Figure 1**. In this case, the power of the LoS can be approximated as follows:

$$P\_{R0} = \frac{(m+1)A\_R}{2\pi V^2} P\_T. \tag{9}$$

In the general case (*ϕ* ≠ 0), as seen in (2), and by using **Figure 1**, where cos<sup>2</sup>(*ϕ*) = *V*<sup>2</sup>/*d*<sup>2</sup>, and by multiplying the equation by *V*<sup>2</sup>/*V*<sup>2</sup>, the received power can be modeled as:

$$P\_R = \cos^{m+3}(\phi) P\_{R0},\tag{10}$$

The last equation expresses the ideal system case, which means there is no noise affecting the system. From which:

$$\phi = \cos^{-1} \sqrt[k]{\frac{P\_R}{P\_{R0}}},\tag{11}$$

where *k* = *m* + 3. In previous works, *PR*0 is known, while here *PR*0 is not known exactly because of the noise power *Pn*. Assuming constant noise power in the room, the noise adds to both the received power and the irradiance angle. Neglecting the effect of NLoS (as it is very small), the noise effect is included in (10) as follows:

$$P\_R + P\_n = \cos^{m+3}(\phi + \Delta\phi)(P\_{R0} + P\_n),\tag{12}$$

where *Pn* and Δ*ϕ* are the receiver power and irradiance angle noises, respectively. Substituting *ϕ* from (11):

$$
\Delta \phi = \cos^{-1} \sqrt[k]{\frac{P\_R + P\_n}{P\_{R0} + P\_n}} - \cos^{-1} \sqrt[k]{\frac{P\_R}{P\_{R0}}}.\tag{13}
$$

From **Figure 1**, the horizontal distance without any noises is given by

$$d\_L = V \tan\left(\phi\right). \tag{14}$$

In the case of a noisy channel, the horizontal-distance error Δ*dL* is estimated from:

$$d\_L + \Delta d\_L = V \tan\left(\phi + \Delta \phi\right). \tag{15}$$

The value of the horizontal-distance error Δ*dL* is used to determine the localization more accurately after calculating the position of the receiver by the trilateration method, as shown in the next section.
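A minimal numerical sketch of Eqs. (11) and (13)–(15), under assumed values for the Lambertian order, vertical distance, aligned power, and noise power:

```python
import numpy as np

# Recover the irradiance angle from a power ratio (Eq. 11), then propagate an
# assumed noise power P_n into angle and horizontal-distance errors (Eqs. 13-15).
m, V = 1.0, 2.15          # Lambertian order and vertical distance (assumed)
k = m + 3                 # exponent of Eq. (10), k = m + 3

P_R0 = 1e-5               # aligned received power of Eq. (9) (assumed value)
phi_true = np.deg2rad(30)
P_R = np.cos(phi_true)**k * P_R0             # Eq. (10), noiseless received power

phi = np.arccos((P_R / P_R0)**(1 / k))       # Eq. (11): recovered angle

P_n = 1e-7                                   # noise power (assumed)
dphi = np.arccos(((P_R + P_n) / (P_R0 + P_n))**(1 / k)) - phi   # Eq. (13)

d_L = V * np.tan(phi)                        # Eq. (14): horizontal distance
dd_L = V * np.tan(phi + dphi) - d_L          # Eq. (15): horizontal-distance error
print(np.rad2deg(dphi), dd_L)
```

Since *P<sub>R</sub>* < *P<sub>R0</sub>*, adding the same noise power to both raises the ratio, so the recovered angle shrinks slightly.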

According to the RSS method, the positioning error is simply obtained from the distance errors. The positioning algorithm uses the three maximum power levels to determine the location of the user [3]. Here, the RSS algorithm is used to estimate (*x*0, *y*0), the position coordinates of the receiver. Let (*xi*, *yi*), *i* ∈ {1, 2, 3}, be the coordinates of the three transmitters. From Δ*dL*, find the positioning estimate:

#### **Figure 4.**

*A trilateration method for RSS to calculate the position of a receiver by using three transmitters.*

$$\left(x\_0 - x\_i\right)^2 + \left(y\_0 - y\_i\right)^2 = \hat{d}\_{L,i}^2,\tag{16}$$

where

$$
\hat{d}\_{L,i} = d\_{L,i} + \Delta d\_{L,i},\tag{17}
$$

for any *i* ∈ {1, 2, 3}, where *dL*,*i* is the horizontal distance of the receiver from transmitter *i*. This method is illustrated in **Figure 4**, where RSSI is the received signal strength indicator.

#### **4. Proposed KF algorithm**

In this section, a KF algorithm is proposed to further improve the previous estimation (introduced in the last section) of the receiver position. Specifically, the estimation of the irradiance angle developed in the last section is further improved by adopting a KF algorithm.

The proposed system with KF is shown in **Figure 5**, where the PD collects the received power and feeds it into the proposed system. The system analyzes the mathematical equations to calculate the irradiance angle *ϕ* and its error Δ*ϕ*. Both calculated values are inserted into the Kalman filtering process. The optimal value of the estimated irradiance angle is obtained after applying the KF algorithm. The estimated irradiance angle is then inserted into the RSS process to calculate the estimated horizontal distance *d̂*. The trilateration method is applied to calculate the estimated position of the receiver using *d̂*. The KF algorithm recursively

**Figure 5.** *The block diagram of the proposed system with KF.*

estimates the state of variables in the system in two phases; prediction and measurement [4, 5].

#### **4.1 Predict step**

We denote the state vector by *x* = (*xa*, *xb*)<sup>*T*</sup>, where *xa* represents the measured angle, *xb* is the error in the angle, and *T* is the transpose operator. Based on the estimate *xk*−1∣*k*−1 at iteration *k* − 1, the predicted state *xk*∣*k*−1 at step *k* of the system dynamics is evaluated as:

$$
\varkappa\_{k|k-1} = F\_k \varkappa\_{k-1|k-1},\tag{18}
$$

where *Fk* is the state transition matrix. The corresponding state covariance matrix is given by:

$$P\_{k|k-1} = F\_k P\_{k-1|k-1} F\_k^T + Q\_k,\tag{19}$$

where *Qk* is the process noise covariance.

#### **4.2 Measurement step**

The updated state variable *xk*∣*<sup>k</sup>* and updated state covariance matrix *Pk*∣*<sup>k</sup>* are given by

$$\begin{aligned} \varkappa\_{k|k} &= \varkappa\_{k|k-1} + K\_k \mathcal{y}\_k \\ P\_{k|k} &= (I - K\_k H\_k) P\_{k|k-1}, \end{aligned} \tag{20}$$

respectively, where *Kk* is the Kalman gain, *yk* is the error vector, and *Hk* is the observation model:

$$\begin{aligned} K\_k &= P\_{k|k-1} H\_k^T S\_k^{-1} \\ y\_k &= z\_k - H\_k x\_{k|k-1}. \end{aligned} \tag{21}$$

Here *z* denotes the measurement vector, given by:

$$z\_k = H\_k x\_k + R\_k \tag{22}$$

where *Rk* is the measurement noise matrix and *Sk* is the innovation matrix, which relates the covariance of the state variables to the measurement vector:

$$\mathbf{S}\_{k} = H\_{k} \mathbf{P}\_{k|k-1} \mathbf{H}\_{k}^{T} + \mathbf{R}\_{k}. \tag{23}$$

Finally, after obtaining the estimated angle, the positioning error is recalculated using the equations developed in Section 3.
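The predict and measurement steps of Eqs. (18)–(23) can be sketched as follows; the matrices *F*, *H*, *Q*, and *R* and the simulated angle measurements are illustrative choices, not values from the chapter:

```python
import numpy as np

# Minimal predict/measurement cycle of Eqs. (18)-(23) for the two-component
# state x = (x_a, x_b). F, H, Q, R and the noise level are assumptions.
F = np.eye(2)                     # state transition matrix, Eq. (18)
H = np.array([[1.0, 0.0]])        # observation model: only the angle is measured
Q = 1e-5 * np.eye(2)              # process noise covariance
R = np.array([[1e-2]])            # measurement noise covariance

x = np.zeros((2, 1))              # initial state estimate
P = np.eye(2)                     # initial state covariance

def kf_step(x, P, z):
    # Predict: Eqs. (18)-(19)
    x = F @ x
    P = F @ P @ F.T + Q
    # Measurement update: Eqs. (20)-(23)
    y = z - H @ x                      # innovation, Eq. (21)
    S = H @ P @ H.T + R                # innovation covariance, Eq. (23)
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain, Eq. (21)
    x = x + K @ y                      # Eq. (20)
    P = (np.eye(2) - K @ H) @ P        # Eq. (20)
    return x, P

rng = np.random.default_rng(0)
true_angle = 0.5                  # radians (assumed ground truth)
for _ in range(50):
    z = np.array([[true_angle + 0.1 * rng.standard_normal()]])
    x, P = kf_step(x, P, z)
print(float(x[0, 0]))
```

After a few dozen noisy measurements the angle estimate settles near the true value while the state covariance shrinks.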

#### **5. Proposed localization methodology using an averaging RSS technique**

The second technique contains the averaging localization method and Kalman filtering with averaging schemes. For the averaging technique, the position of the receiver is estimated by the RSSI technique multiple times (e.g., **N** samples) and the acquired estimations are averaged over all samples.

#### **Figure 6.**

*Block diagram of proposed averaging positioning scheme.*

To further improve the localization, the Kalman filtering algorithm has been adopted to estimate the received power over **N** samples, followed by applying the RSS technique to the average received power. These techniques have been analyzed mathematically, with respect to the effects of both LoS and first-reflection NLoS propagation.

A typical room is considered for evaluating the positioning performance of the proposed techniques, and their results are compared with those of the traditional RSS system.

To determine the receiver location, the trilateration method is used with the RSS from the three LED transmitters having the maximum received levels [3]. Our techniques depend on averaging the estimated receiver position over a certain number of measurements to decrease the localization error. This decrease in error comes at the cost of increased mathematical complexity. **Figure 6** shows a simple block diagram that demonstrates this approach.
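A minimal sketch of the averaging idea in **Figure 6**, using a stand-in noisy position estimator (the noise model and sample count are assumptions):

```python
import numpy as np

# Averaging N independent position fixes reduces the zero-mean noise roughly
# by a factor of sqrt(N). The estimator below is a stand-in, not the chapter's
# full RSS/trilateration chain.
rng = np.random.default_rng(1)
true_pos = np.array([1.0, 1.0])   # true receiver position (assumed)

def rss_estimate():
    # one noisy RSS/trilateration fix with assumed per-axis noise std 0.3 m
    return true_pos + 0.3 * rng.standard_normal(2)

N = 50
avg = np.mean([rss_estimate() for _ in range(N)], axis=0)
err_single = np.linalg.norm(rss_estimate() - true_pos)   # one-shot error
err_avg = np.linalg.norm(avg - true_pos)                 # averaged error
print(err_avg, err_single)
```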

#### **5.1 RSS technique**

Using (2), the received LoS power from transmitter *i* ∈ {1, 2, 3, 4} can be written as:

$$P\_{R,i} = \left(\frac{m+1}{2\pi d\_i^2} \cos^{m+1}(\phi\_i) A\_R \right) P\_{T,i},\tag{24}$$

where *PT*,*i* is the transmitted power of the *i*th LED. Here, assume that *ψi* = *ϕi*, which is determined from **Figure 1** as:

$$\cos\left(\phi\_i\right) = \frac{V}{d\_i},\tag{25}$$

where *V* is the vertical distance between transmitter and receiver, assumed constant. Accordingly, the distance between transmitter *i* and receiver can be evaluated as:

$$d\_i = \sqrt[m+3]{\frac{(m+1)V^{m+1}A\_R P\_{T,i}}{2\pi P\_{R,i}}}.\tag{26}$$
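A quick round-trip check of Eq. (26) under assumed parameter values: generate a LoS power with Eqs. (24) and (25), then recover the distance:

```python
import numpy as np

# Round trip of Eqs. (24)-(26): m, V, A_R, P_T and d_true are assumed values.
m, V, A_R, P_T = 1.0, 2.15, 1e-4, 1.0
d_true = 2.5                                  # transmitter-receiver distance (m)

cos_phi = V / d_true                          # Eq. (25)
P_R = (m + 1) / (2 * np.pi * d_true**2) * cos_phi**(m + 1) * A_R * P_T  # Eq. (24)

# Eq. (26): invert the power law with the (m+3)-th root
d_est = ((m + 1) * V**(m + 1) * A_R * P_T / (2 * np.pi * P_R))**(1 / (m + 3))
print(d_est)
```

Substituting (25) into (24) gives *P<sub>R</sub>* ∝ 1/*d*<sup>*m*+3</sup>, which is why the root in Eq. (26) is of order *m* + 3.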


If the effect of NLoS is considered as well, the total power collected at the receiver is obtained by modifying (24) to:

$$P\_{\mathcal{R},i} = \left(H^i\_{\text{LOS}} + H^i\_{\text{NLOS}}\right) P\_{T,i} \,. \tag{27}$$

#### **5.2 Linear LS method**

To estimate the receiver location, linear LS estimation is commonly used. Let (*xi*, *yi*), *i* ∈ {1, 2, 3}, be the horizontal coordinates of transmitter *i* and *dL*,*i* be the horizontal distance of the receiver from transmitter *i*. The range equation can be written in the form:

$$\left(\hat{\boldsymbol{x}} - \boldsymbol{\varkappa}\_{i}\right)^{2} + \left(\hat{\boldsymbol{y}} - \boldsymbol{y}\_{i}\right)^{2} = d\_{L,i}^{2}, \qquad i \in \{1, 2, 3\}, \tag{28}$$

where (*x̂*, *ŷ*) is the estimated horizontal location of the receiver. This system of equations can be written in matrix form as:

$$A\hat{X} = B,\tag{29}$$

where

$$\begin{aligned} \hat{X} &= \begin{bmatrix} \hat{\mathbf{x}} & \hat{\mathbf{y}} \end{bmatrix}^T \\ A &= \begin{bmatrix} \mathbf{x}\_2 - \mathbf{x}\_1 & \mathbf{y}\_2 - \mathbf{y}\_1 \\\\ \mathbf{x}\_3 - \mathbf{x}\_1 & \mathbf{y}\_3 - \mathbf{y}\_1 \end{bmatrix} \\ B &= \begin{bmatrix} b\_{21} & b\_{31} \end{bmatrix}^T. \end{aligned} \tag{30}$$

Here, for any *m* ∈ {2, 3},

$$b\_{m1} = (\hat{x} - x\_1)(x\_m - x\_1) + (\hat{y} - y\_1)(y\_m - y\_1). \tag{31}$$

The solution of (29) is:

$$
\hat{X} = \left(\mathbf{A}^T \mathbf{A}\right)^{-1} \mathbf{A}^T \mathbf{B}.\tag{32}
$$
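The linear LS fix of Eqs. (28)–(32) can be sketched as follows. Note that Eq. (31) is rewritten here in terms of the measured horizontal ranges (subtracting the *i* = 1 range equation from the others) so that the right-hand side is computable; the transmitter and receiver coordinates are illustrative:

```python
import numpy as np

# Linear LS trilateration, Eqs. (28)-(32). Subtracting the i = 1 range
# equation from the others cancels the quadratic terms, giving A X = b with
# b expressed through the measured ranges d_{L,i}.
tx = np.array([[1.25, 1.25], [1.25, 3.75], [3.75, 1.25]])  # transmitter (x, y)
rx = np.array([1.0, 1.0])                                  # true position (assumed)
d = np.linalg.norm(tx - rx, axis=1)                        # horizontal ranges

A = np.array([tx[1] - tx[0],
              tx[2] - tx[0]])                              # Eq. (30)
b = 0.5 * np.array([d[0]**2 - d[1]**2 + np.sum(tx[1]**2 - tx[0]**2),
                    d[0]**2 - d[2]**2 + np.sum(tx[2]**2 - tx[0]**2)])

X_hat = np.linalg.inv(A.T @ A) @ A.T @ b                   # Eq. (32)
print(X_hat)
```

With noiseless ranges the LS solution recovers the true position exactly; in practice the ranges of Eq. (17) carry the horizontal-distance error.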

#### **5.3 Complexity analysis**

The complexity of the proposed averaging RSS technique can be analyzed by counting the number of mathematical operations required to solve the LS method once and then multiplying the result by the number of samples. Specifically, the total number of floating-point operations is 39*N* + 1 flops, where *N* is the number of samples. That is, the complexity increases linearly with the number of samples.

#### **6. Kalman filtering with averaging**

The KF estimates the states of a linear system from noisy measurements, producing estimates of the unknown variables that are more accurate than those based on a single measurement.

In this section, a KF algorithm is adapted to enhance the estimation performance of the receiver positioning system. First, the KF estimates several samples of the measured received power. Then, the average of these estimated power values is

**Figure 7.** *Block diagram of proposed Kalman filtering technique.*

evaluated. Using the output of KF which is the estimated average power, the position of the receiver can be calculated by using RSS technique. The block diagram of the proposed Kalman filtering with averaging technique is shown in **Figure 7**.

The KF algorithm, presented in Section 4, recursively estimates the state of variables in the system in two phases: prediction and measurement [4, 5]. We denote the state vector by *x*, which represents the measured received power and the number of samples used in the process. The recursion is based on the estimate *xk*−1∣*k*−1 at iteration *k* − 1.

#### **7. Simulation and discussion for averaging technique**

In this section, the simulation results are presented and compared to those of traditional systems. **Table 1** shows the main parameters used in the simulation.

To demonstrate the relation between the SNR and the average positioning error, **Figure 8** shows the average positioning error in meters for five different positions of the receiver. The figure shows that the proposed system outperforms the traditional RSS method by nearly 1 cm at an SNR of 10 dB, while adopting the KF decreases the error by 11.5 cm, an improvement of 52.27%.

**Figures 9** and **10**, plot the true path with three different methods. The simulation is done at a FoV of 70<sup>∘</sup> and an SNR of 20 dB. These figures are simulated as a plan view to show the estimation position of the receiver in the room. The proposed RSS technique achieves a tiny improvement while the proposed RSS with KF is the closest to the true path.


**Table 1.** *Main parameters in VLC.*

**Figure 8.**

*The relation between average positioning error in meter with SNR for different positions.*

#### **Figure 9.**

*Positioning error for traditional, proposed, and KF correction RSS techniques at a FOV of* 70<sup>∘</sup> *and an SNR of* 20 *dB for a y path.*

The pedestrian is moving in random directions inside the room. **Figure 11** compares three techniques: traditional RSS, proposed RSS, and proposed RSS with Kalman filtering. The comparison is done at a FoV of 70<sup>∘</sup> and an SNR of 20 dB. It is clear from the figure that adopting KF estimation further reduces the positioning error and provides an estimate that is very close to reality.

In the second technique, simulation results for the proposed system are presented and compared with those of traditional systems. The main parameters used in the simulations for the VLC link are listed in **Table 2**.

#### **7.1 Positioning error**

In the simulation, the performance measure is determined by the positioning error:


$$
\varepsilon\_{\text{position}} = \sqrt{\left(\hat{\mathbf{x}} - \mathbf{x}\_0\right)^2 + \left(\hat{\mathbf{y}} - \mathbf{y}\_0\right)^2},\tag{33}
$$

where (*x*0, *y*0) is the receiver horizontal location and (*x̂*, *ŷ*) is its estimated location. **Figure 12** shows the average error in receiver positioning for different numbers of samples. It is clear that the error can be reduced to less than 10% of its maximum value by averaging over 50 samples. This reduction comes at the cost of

#### **Figure 10.**

*Positioning error for traditional, proposed, and KF correction RSS techniques at FOV of* 70<sup>∘</sup> *and an SNR of* 20 *dB for a x path.*

**Figure 11.** *Relation between positioning error and number of pedestrian steps at a FOV of* 70<sup>∘</sup> *and an SNR of* 20 *dB.*



#### **Table 2.**

*Simulation parameters.*

**Figure 12.**

*Comparison between average positioning errors versus number of samples in averaging RSS technique.*

increasing the mathematical complexity of the system as shown in **Figure 13**. The complexity is calculated according to the number of operations, which increases as the number of samples increases.

#### **7.2 Averaging RSS and traditional RSS techniques**

The RSS variations of the positioning error at every sample are shown in **Figure 13** for receiver position (*x*0, *y*0) = (1, 1), considering the effect of LoS only.

The positioning error using the proposed averaging RSS technique (with 100 samples) is plotted in the same figure as well. The improvement using the proposed technique is clear from the figure. The traditional RSS errors are more than 0.6 m (42.4%), whereas the error when employing the proposed averaging RSS is only 0.217 m (15.3%). That is, an improvement of about 27.1% is obtained when adopting the proposed system. Both LoS and NLoS effects are studied for receiver position (*x*0, *y*0) = (1, 1) as well, and the results are plotted in **Figure 14**. Traditional RSS errors are more than 0.7 m (49.5%), whereas the errors when using the proposed

**Figure 13.** *Complexity of the averaging RSS method.*

averaging RSS are only 0.255 m (18%). An improvement of nearly 31.5% is achievable by using the proposed scheme.

#### **8. Kalman filtering, averaging RSS, and traditional RSS techniques**

In this section, different comparisons are made between the performance of three methods: the traditional RSS technique, the proposed averaging RSS technique, and the proposed Kalman filtering with averaging. We use the same values given for the VLC link in **Table 2**.

#### **8.1 LoS propagation**

The effects of LoS only on the two track estimations are presented in **Figures 14** and **15** for the *x* and *y* paths, respectively.

From the figures, both track estimations are closest to the real track when employing the proposed techniques. The results also show that adopting KF estimation reduces the positioning error and improves the estimation. **Table 3** summarizes the errors and improvement percentages for the three techniques.

#### **8.2 Both LoS and NLoS propagations**

The effect of both LoS and NLoS propagation on the Kalman filtering track estimations is presented in **Figures 16** and **17** for the *x* and *y* paths, respectively. It is clear that the track estimation degrades slightly when the effect of NLoS is included.

#### **8.3 Kalman filtering response**

The response of the KF for a random position estimation is shown in **Figure 18**. The filter input is a measured value of received power, and the filter output is the corresponding estimated value at different numbers of samples. The filter response (estimated value) is close to the actual value once the number of samples exceeds 11.
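The convergence behavior described above can be illustrated with a minimal scalar Kalman filter. The process and measurement noise variances (`q`, `r`), the initial covariance, and the power readings below are assumed values for illustration, not the chapter's link parameters; the sketch only shows how the estimate settles near the true value after roughly a dozen samples.

```python
def kalman_1d(measurements, q=1e-4, r=0.04):
    # Minimal scalar Kalman filter with a constant-state model for received power.
    # q: process noise variance, r: measurement noise variance (assumed values).
    x, p = measurements[0], 1.0      # initial state estimate and covariance
    estimates = [x]
    for z in measurements[1:]:
        p = p + q                    # predict: covariance grows by process noise
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update with the measurement innovation
        p = (1 - k) * p              # updated covariance
        estimates.append(x)
    return estimates

# hypothetical noisy power readings around a true value of 1.0 (arbitrary units)
readings = [1.3, 0.8, 1.1, 0.9, 1.2, 1.0, 0.95, 1.05, 1.02, 0.98, 1.01, 0.99]
est = kalman_1d(readings)
print(f"first estimate: {est[0]:.3f}, final estimate: {est[-1]:.3f}")
```

As the covariance `p` shrinks, the gain `k` falls and the filter weighs new measurements less, which is why the output stabilizes near the actual value after a handful of samples.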


**Figure 14.**

*Positioning error for both traditional RSS and averaging RSS techniques at position (1, 1), considering the effects of: (a) LoS only, (b) both LoS and NLoS.*

#### **8.4 Position estimation accuracy comparison**

As mentioned in the introduction, several techniques have been proposed for indoor localization based on VLC technology. In this section, a comparison is

**Figure 15.** *Track estimations using traditional and proposed techniques for: (a) an x path, (b) a y path.*


**Table 3.** *Accuracy for different techniques.*

**Figure 16.** *Comparison between Kalman filtering track estimation for both LOS and NLOS propagations for an x path.*

**Figure 17.** *Comparison between Kalman filtering track estimation for both LOS and NLOS propagations for a y path.*

provided between the position estimation accuracy of the proposed techniques and that of previous works for the same simulation parameters. The results of this comparison are summarized in **Table 4**.

It is clear from **Table 4** that both the proposed averaging RSS and Kalman filtering with averaging techniques achieve better accuracy than the techniques proposed in [6–8]. Since the authors in [9, 10] adopted Kalman filtering, they achieve better accuracy than the proposed averaging method alone. However, employing Kalman filtering with averaging gives a better accuracy still.

#### **Figure 18.**

*Response of Kalman filtering technique.*


#### **Table 4.**

*Position estimation accuracy comparison.*

#### **9. Concluding remarks**

First, the proposed techniques have been analyzed mathematically, taking into account the effects of LoS propagation. The positioning estimation accuracy of the proposed techniques has been evaluated in a typical room. The results reveal that an improvement of about 52% in the average positioning error is achievable using the proposed technique with KF, compared to the traditional RSS.

Secondly, both the averaging and the Kalman filtering with averaging schemes are adopted to improve the positioning system. Specifically, in the averaging technique, the receiver position is determined from the average of multiple samples of RSS estimations. In the second proposed system, Kalman filtering with averaging, the position is determined by RSS estimation from Kalman-filtered averages of multiple received power samples.
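The two-stage pipeline described above (averaging followed by Kalman filtering) can be sketched as follows. The inverse-square channel model, the gain constant `c`, the noise level, and the sample counts are all hypothetical stand-ins for the chapter's VLC link model, used only to show the data flow: average a batch of power samples at each step, filter the running averages, then invert the channel model to recover distance.

```python
import math
import random

def average_samples(samples):
    # Stage 1: average multiple received-power samples (the averaging step)
    return sum(samples) / len(samples)

def kalman_step(x, p, z, q=1e-5, r=0.01):
    # Stage 2: one scalar Kalman update applied to the averaged power value
    p = p + q                 # predict
    k = p / (p + r)           # gain
    return x + k * (z - x), (1 - k) * p

def power_to_distance(power, c=1.0):
    # Stage 3: hypothetical inverse-square channel, P = c / d**2, so d = sqrt(c / P)
    return math.sqrt(c / power)

random.seed(1)
true_d = 2.0
true_power = 1.0 / true_d**2            # 0.25 with c = 1
x, p = None, 1.0
for _ in range(20):                     # 20 pedestrian steps
    batch = [true_power + random.gauss(0, 0.02) for _ in range(100)]  # 100 samples/step
    z = average_samples(batch)
    x, p = (z, p) if x is None else kalman_step(x, p, z)
print(f"estimated distance: {power_to_distance(x):.3f} m (true {true_d} m)")
```

The point of the combination is that averaging suppresses fast per-sample noise before the KF, so the filter tracks an already-smoothed signal, which is why the combined scheme outperforms either stage alone.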

Simulation results reveal that an improvement of about 33.3% in estimation accuracy is achievable when using the averaging scheme compared to the traditional RSS scheme. This improvement increases to 72.2% when adopting the proposed Kalman filtering with averaging scheme.


#### **Author details**

Eman Shawky Abd El-Fattah Amer Alex University, Alexandria, Egypt

\*Address all correspondence to: eman.shawky@alexu.edu.eg

© 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **References**

[1] Z. Ghassemlooy, W. Popoola, and S. Rajbhandari. Optical Wireless Communications: System and Channel Modelling with MATLAB. CRC Press, Boca Raton, 2nd edition, 2013.

[2] C. Huang and X. Zhang. LOS-NLOS identification algorithm for indoor visible light positioning system. pages 575–578, Bali, Indonesia, Dec. 17–20, 2017.

[3] Maxim Shchekotov. Indoor localization method based on Wi-Fi trilateration technique. In Proc. 16th Conference of FRUCT Association, pages 177–179, Oulu, Finland, Oct. 27–31, 2014.

[4] Greg Welch and Gary Bishop. An introduction to the Kalman filter. Technical Report 95-041, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA, July 24, 2006.

[5] Yuta Teruyama and Takashi Watanabe. Effectiveness of variablegain kalman filter based on angle error calculated from acceleration signals in lower limb angle measurement with inertial sensors. Computational and Mathematical Methods in Medicine, 10 (1155):398042(1–12), 2013.

[6] F. Mousa, N. Almaadeed, K. Busawon, A. Bouridane, R. Binns, and I. Elliot. Indoor visible light communication localization system utilizing received signal strength indication technique and trilateration method. Optical Engineering, 57(1): 016107(1–10), Jan. 2018.

[7] G. B. Prince and T. D. C. Little. A two phase hybrid RSS/AoA algorithm for indoor device localization using visible light. In IEEE Global Communications Conference (GLOBECOM 2012), pages 3347–3352, Anaheim, CA, Dec. 3–7, 2012.

[8] A. Şahin, Y. S. Eroğlu, İ Güvenç, N. Pala, and M. Yüksel. Hybrid 3-D localization for visible light communication systems. Journal of Lightwave Technology, 33(22):4589– 4599, Nov. 2015.

[9] Zhitian Li, Lihui Feng, and Aiying Yang. Fusion based on visible light positioning and inertial navigation using extended Kalman filters. Sensors, 17(1093):1093–1103, June 2017.

[10] Yusuf Said Eroglu, Fatih Erden, and Ismail Guvenc. Adaptive Kalman tracking for indoor visible light positioning. arXiv preprint (eess.SP), Sep. 2019.

## *Edited by Wenping Cao and Qian Zhang*

Active filters are key technologies in applications such as telecommunications, advanced control, smart grids, and green transport. This book provides an update of the latest technological progress in signal processing and adaptive filters, with a focus on Kalman filters and applications. It illustrates fundamentals and guides filter design for specific applications, primarily for graduate students, academics, and industrial engineers who are interested in the theoretical, experimental, and design aspects of active filter technologies.

Published in London, UK © 2021 IntechOpen © monsitj / iStock

Adaptive Filtering - Recent Advances and Practical Implementation
