Time Series Methods

## **Chapter 3**

## Methods of Conditionally Optimal Forecasting for Stochastic Synergetic CALS Technologies

*Igor N. Sinitsyn and Anatoly S. Shalamov*

## **Abstract**

Problems of optimal, suboptimal and conditionally optimal filtering and forecasting in product and staff subsystems against background noise in synergetic organizational-technical-economical systems (SOTES) are considered. For highly available systems it is now very important to create basic systems-engineering principles, approaches and information technologies (IT) for SOTES operating in modern spontaneous markets against the background of an inertially developing world economic crisis, weakening global market relations, and intensified competition and counteraction. Big enterprises need such IT because of essential local and systematic economic losses. It is necessary to form general approaches to the estimation of stochastic processes and parameters in SOTES in the presence of background noise. The following notation is introduced: a special observation SOTES (SOTES-O) with its own organization-product resources and internal noise, and a special SOTES acting as a noise source (SOTES-N). A conception of the SOTES structure for systems of technical, staff and financial support is developed. Linear, linear with parametric noises, and nonlinear stochastic (discrete and hybrid) equations describing the organization-production block (OPB) for three types of SOTES, together with their planning-economical estimating divisions, are worked out. SOTES-O is described by two interconnected subsystems: a SOTES state sensor and an OPB supporting the sensor with the necessary resources. After a short survey of modern modeling, the basic sub- and conditionally optimal filtering and forecasting algorithms and IT for typical SOTES are given. The influence of SOTES-N noise on the rules and functional indexes of the subsystems accompanying life-cycle production, together with its filtering and forecasting, is considered. Experimental software tools for modeling and forecasting the cost and technical readiness of aircraft parks are developed.

**Keywords:** sub- and conditionally optimal filtering and forecasting (COF and COFc), continuous acquisition logic support (CALS), organizational-technical-economical systems (OTES), probability modeling, synergetic OTES (SOTES)

## **1. Introduction**

Stochastic continuous acquisition logic support (CALS) is the basis of integrated logistic support (ILS) in the presence of noises and stochastic factors in organizational-technical-economic systems (OTES). The stochastic CALS methodology was first developed in [1–5]. According to contemporary notions, ILS in the broad sense, being the basis of CALS, represents the system of scientific, design-project, organizational-technical, manufacturing and informational-management technologies, means and measures applied during the life cycle (LC) of high-quality manufacturing products (MP) for obtaining the maximal required level of quality and availability at minimal technical and exploitation costs.

Contemporary standards, though they form the vanguard CALS methodology, do not fully answer the necessary purposes. The CALS standards have debatable achievements and essential shortcomings.


So the ILS standards do not provide full realization of the advantages of modern and perspective information technologies (IT), including staff structure, in the field of stochastic modeling and estimation of two interconnected spheres: the technosphere (techniques and technologies) and the social sphere.

These stochastic systems (StS) form a new class of systems: OTES-CALS systems. Such systems are destined for the production and realization of various services, including engineering and other categories of work providing exploitation, after-sale MP support and repair, and staff, medical, economical and financial support of all processes. The newly developed approach is based on new stochastic modeling and estimation approaches. Nowadays such IT are widely used in technical applications of complex systems functioning in stochastic media.

Estimation in such IT is based on: (1) a model of the OTES; (2) a model of the OTES-O (observation system); (3) a model of the OTES-N (noise support); (4) criteria, estimation methods and models. For new generations of synergetic OTES (SOTES) the measuring model and the organization-production block (OPB) in OTES-O are separated.

Synergetics, being an interdisciplinary science, is based on the principle of self-organization of open nonlinear dissipative and nonconservative systems. According to [6, 7], in equilibrium all system parameters are stable, and variations in them arise due to minimal deviations of some control parameters. As a result, the system begins to move away from the equilibrium state with increasing velocity. Further, the instability process leads to total chaos and, as a result, bifurcation appears. After that a new regime gradually establishes itself, and so on.

The existence of a big amount of freely interacting elements and subsystems of various levels is the basic principle of self-organization. One of the inalienable properties of a synergetic system is the existence of "attractors". An attractor is defined as an attraction set (manifold) in phase space that is the aim of all nonlinear trajectories of a moving initial point (IP). These manifolds are time-invariant and are defined from the equilibrium equation. Invariant manifolds are also determined as constraints of a non-conservative synergetic system. In synergetic control theory [8] a transition is performed from natural, unsupervised behavior according to the algorithms of dissipative structures to controlled motion of the IP along artificially introduced

## *Methods of Conditionally Optimal Forecasting for Stochastic Synergetic CALS Technologies DOI: http://dx.doi.org/10.5772/intechopen.103657*

demanded invariant manifolds. Since the control object of a synergetic system is always nonlinear, its dynamics may be described by nonlinear differential equations. In the case of big dimension, order parameters are introduced by revealing the slowest variables and the quicker subordinate variables. In hierarchical synergetic systems this approach is called the subordination principle. So at the lower hierarchy levels processes go with maximal velocity. Invariant manifolds are connected with the slow dynamics.

Section 1 is devoted to probabilistic modeling problems in typical StS. Special attention is paid to hybrid systems, and to such specific StS as linear systems, linear systems with Gaussian parametric noises, and nonlinear systems reducible to quasilinear ones by the normal approximation method. For quick off-line and on-line applications, the theory of conditionally optimal forecasting in typical StS is developed in Section 2. In Section 3 the basic off-line algorithms of probability modeling in SOTES are presented. Basic conditionally optimal filtering and forecasting quick off-line and on-line algorithms for SOTES are given in Section 4. Peculiarities of new SOTES generalizations are described in Section 5. A simple example illustrating the influence of SOTES-N noise on the rules and functional indexes of the subsystems accompanying life-cycle production, with its filtering and forecasting, is presented in Section 6. Experimental software tools for forecasting the cost and technical readiness of aircraft parks are developed.

## **2. Probabilistic modeling in StS**

Let us consider the basic mathematical models of stochastic OTES.


Probabilistic analytical modeling of stochastic systems (StS) is based on the solution of deterministic evolutionary equations (Fokker-Planck-Kolmogorov, Pugachev, Feller-Kolmogorov) for the one- and finite-dimensional distributions. For stochastic equations of high dimension the solution of the evolutionary equations meets principal computational difficulties.

In practice, taking into account the specific properties of the StS, it is possible to design rather simple stochastic models using a priori data about the StS structure, parameters and stochastic factors. It is very important to design, for the different stages of the life cycle (LC), models based on the available information. At the last LC stage we need hybrid stochastic models.

Let us consider the basic general and specific stochastic models and the basic algorithms of probabilistic analytical modeling. Special attention will be paid to algorithms based on the normal approximation, statistical linearization and equivalent linearization methods. For principally nonlinear non-Gaussian StS the corresponding parametrization methods may be recommended [9].

### **2.1 Continuous StS**

Continuous stochastic models of systems involve the action of various random factors. When using models described by differential equations, the inclusion of random factors leads to equations which contain random variables.

Differential equations for a StS (more precisely, for a stochastic model of a system) must be replaced in the general case by the equations [9, 10]

$$
\dot{Z} = F(Z, X, t), \quad Y = G(Z, t), \tag{1}
$$

where $F(z, x, t)$ and $G(z, t)$ are random functions of the $p$-dimensional vector $z$, the $n$-dimensional vector $x$ and time $t$ (as a rule $G$ is independent of $x$). In consequence of the randomness of the right-hand sides of Eq. (1), and also perhaps of the initial value of the state vector $Z_0 = Z(t_0)$, the state vector of the system $Z$ and the output $Y$ are random variables at any fixed time moment $t$. This is the reason to denote them by capital letters, as well as the random functions in the right-hand sides of Eq. (1). The state vector of the system $Z(t)$ and its output $Y(t)$, considered as functions of time $t$, represent random functions of time $t$ (in the general case vector random functions). In every specific trial the random functions $F(z, x, t)$ and $G(z, t)$ are realized in the form of some functions $f(z, x, t)$ and $g(z, t)$, and these realizations determine the corresponding realizations $z(t)$, $y(t)$ of the state vector $Z(t)$ and the output $Y(t)$ satisfying the differential equations (which are the realizations of Eq. (1))

$$
\dot{z} = f(z, x, t), \quad y = g(z, t).
$$

Thus we come to the necessity of studying differential equations with random functions in the right-hand sides.
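The trial-by-trial picture above can be sketched numerically: each Monte Carlo trial fixes a realization of the random right-hand side and integrates the resulting ordinary differential equation. The scalar system, the drift $-az + \sin t$, the uniform distribution of the random coefficient and all numerical parameters below are illustrative assumptions, not taken from the chapter.

```python
import math
import random

def simulate_realization(a, z0=1.0, t_end=5.0, dt=1e-3):
    # One trial: the random coefficient A is fixed to the sampled value a,
    # and the realized ODE z' = -a*z + sin(t) is Euler-integrated.
    z, t = z0, 0.0
    while t < t_end:
        z += (-a * z + math.sin(t)) * dt
        t += dt
    return z

random.seed(0)
trials = [simulate_realization(random.uniform(0.5, 1.5)) for _ in range(200)]
mean_z = sum(trials) / len(trials)      # sample mean of Z(t_end) over trials
spread = max(trials) - min(trials)      # trial-to-trial scatter of Z(t_end)
```

At any fixed time the ensemble of trials represents the random variable $Z(t)$; statistics such as `mean_z` estimate its moments.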

In practice the randomness of the right-hand sides of the differential equations usually arises from the fact that they represent known functions, some of whose arguments are considered as random variables or as random functions of time $t$, and perhaps of the state and the output of the system. In the latter case these functions are usually replaced by random functions of time, which are obtained by assuming that their arguments $Z$ and $Y$ are known functions of time corresponding to the nominal regime of system functioning. In practical problems such an assumption usually provides sufficient accuracy.

So we may restrict ourselves to the case where all uncertain variables in the right-hand sides of the differential equations may be considered as random functions of time. Then Eq. (1) may be written in the form

$$\dot{Z} = f(Z, X, N\_1(t), t), \quad Y = g(Z, N\_2(t), t), \tag{2}$$

where $f$ and $g$ are known functions whose arguments include the random functions of time $N_1(t)$ and $N_2(t)$. The initial state vector of the system $Z_0$ in practical problems is always a random variable independent of the random functions $N_1(t)$ and $N_2(t)$ (i.e., independent of the random disturbances acting on the system).

Every realization $\left[n_1(t)^T\; n_2(t)^T\right]^T$ of the random function $\left[N_1(t)^T\; N_2(t)^T\right]^T$ determines the corresponding realizations $f(z, x, n_1(t), t)$, $g(z, n_2(t), t)$ of the functions $f(z, x, N_1(t), t)$, $g(z, N_2(t), t)$, and in accordance with this Eq. (2) determines the respective realizations $z(t)$ and $y(t)$ of the state vector of the system $Z(t)$ and its output $Y(t)$.

Following [9, 10] let us consider the differential equation

$$dX/dt = a(X,t) + b(X,t)V,\tag{3}$$

where $a(x, t)$ and $b(x, t)$ are functions mapping $R^p \times R$ into $R^p$ and $R^{p \times q}$, respectively. Eq. (3) is called a stochastic differential equation if the (generalized) random function $V(t)$ represents a white noise in the strict sense. Let $X_0$ be a random vector of the same

dimension as the random function $X(t)$. Eq. (3) with the initial condition $X(t_0) = X_0$ determines a stochastic process (StP) $X(t)$.

In order to give an exact sense to Eq. (3) and to the above statement, we formally integrate Eq. (3) from $t_0$ to $t$ with the initial condition $X(t_0) = X_0$. As a result we obtain

$$X(t) = X\_0 + \int\_{t\_0}^t a(X(\tau), \tau)d\tau + \int\_{t\_0}^t b(X(\tau), \tau)V(\tau)d\tau$$

where the first integral represents a mean square (m.s.) integral. Introducing the StP with independent increments $W(t)$ whose derivative is the white noise $V(t)$, we rewrite the previous equation in the form

$$X(t) = X\_0 + \int\_{t\_0}^t a(X(\tau), \tau)d\tau + \int\_{t\_0}^t b(X(\tau), \tau)dW(\tau). \tag{4}$$

This equation has an exact sense. Stochastic differential Eq. (3), or the equivalent equation

$$dX = a(X, t)dt + b(X, t)dW\tag{5}$$

with the initial condition $X(t_0) = X_0$, represents a concise form of Eq. (4).

Eq. (4), in which the second integral represents a stochastic Ito integral, is called a stochastic Ito integral equation, and the corresponding differential Eq. (3) or (5) is called a stochastic Ito differential equation.

A random process $X(t)$ satisfying Eq. (4), in which the integrals represent the m.s. limits of the corresponding integral sums, is called a mean square or, shortly, an m.s. solution of the stochastic integral Eq. (4) and of the corresponding stochastic differential Eq. (3) or (5) with the initial condition $X(t_0) = X_0$.

If the integrals in Eq. (4) exist for every realization of the StP $W(t)$ and $X(t)$, and equality (4) is valid for every realization, then the random process $X(t)$ is called a solution in realizations of Eq. (4) and of the corresponding stochastic differential Eqs. (3) and (5) with the initial condition $X(t_0) = X_0$.

The stochastic Ito differential Eqs. (3) and (5) with the initial condition $X(t_0) = X_0$, where $X_0$ is a random variable independent of the future values of the white noise $V(s)$, $s > t_0$ (of the future increments $W(s) - W(t)$, $s > t \ge t_0$, of the process $W$), determine a Markov random process.
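A minimal numerical sketch of Eq. (5) is the Euler-Maruyama scheme, which replaces $dW$ by independent Gaussian increments with variance $dt$. The Ornstein-Uhlenbeck choice $a(x, t) = -\theta x$, $b(x, t) = \sigma$ and all parameter values are illustrative assumptions; for this choice the stationary variance $\sigma^2 / (2\theta)$ is known in closed form and serves as a check.

```python
import math
import random

def euler_maruyama_path(theta=1.0, sigma=0.5, x0=0.0, t_end=8.0, dt=1e-2, rng=None):
    # One sample path of dX = a(X,t)dt + b(X,t)dW with the illustrative
    # Ornstein-Uhlenbeck choice a(x,t) = -theta*x, b(x,t) = sigma.
    rng = rng or random.Random()
    x = x0
    for _ in range(int(t_end / dt)):
        dw = rng.gauss(0.0, math.sqrt(dt))   # Wiener increment, Var = dt
        x += -theta * x * dt + sigma * dw    # Euler-Maruyama step
    return x

rng = random.Random(42)
samples = [euler_maruyama_path(rng=rng) for _ in range(2000)]
mean_mc = sum(samples) / len(samples)
var_mc = sum(s * s for s in samples) / len(samples)
var_exact = 0.5 ** 2 / (2 * 1.0)             # sigma^2/(2*theta) = 0.125
```

Because $t_{\mathrm{end}}$ is long compared with the relaxation time $1/\theta$, the Monte Carlo variance `var_mc` should approach `var_exact` up to sampling and discretization error.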

When $W$ is a vector StP with independent increments, probabilistic modeling of the one- and $n$-dimensional characteristic functions $g_1 = \mathrm{E}\, e^{i\lambda^T Z(t)}$ and $g_n = \mathrm{E}\exp\left\{i\sum_{k=1}^{n}\lambda_k^T Z(t_k)\right\}$ and of the densities $f_1$ and $f_n$ is based on the following integrodifferential Pugachev equations:

$$\frac{\partial \mathbf{g}\_1(\boldsymbol{\lambda}; t)}{\partial t} = \frac{1}{(2\pi)^p} \int\_{-\infty}^{\infty} \int\_{-\infty}^{\infty} \left[ i\boldsymbol{\lambda}^T \boldsymbol{a}(\boldsymbol{z}, t) + \boldsymbol{\chi} \left( \boldsymbol{b}(\boldsymbol{z}, t)^T \boldsymbol{\lambda}; t \right) \right] \boldsymbol{e}^{i \left( \boldsymbol{\lambda}^T - \boldsymbol{\mu}^T \right) \boldsymbol{z}} \boldsymbol{g}\_1(\boldsymbol{\mu}; t) d\boldsymbol{\mu} d\boldsymbol{z}, \tag{6}$$
 
$$\frac{\partial}{\partial t\_n} \boldsymbol{g}\_n(\boldsymbol{\lambda}\_1, \dots, \boldsymbol{\lambda}\_n; t\_1, \dots, t\_n) = \frac{1}{(2\pi)^{np}} \int\_{-\infty}^{\infty} \int\_{-\infty}^{\infty} \left[ i \boldsymbol{\lambda}\_n^T \boldsymbol{a}(\boldsymbol{z}\_n, t\_n) + \boldsymbol{\chi} \left( \boldsymbol{b}(\boldsymbol{z}\_n, t\_n)^T \boldsymbol{\lambda}\_n; t\_n \right) \right] \tag{7}$$
 
$$\times \exp\left\{ i \sum\_{k=1}^n \left( \boldsymbol{\lambda}\_k^T - \boldsymbol{\mu}\_k^T \right) \boldsymbol{z}\_k \right\} \boldsymbol{g}\_n(\boldsymbol{\mu}\_1, \dots, \boldsymbol{\mu}\_n; t\_1, \dots, t\_n) d\boldsymbol{\mu}\_1, \dots, d\boldsymbol{\mu}\_n; d\boldsymbol{z}\_1, \dots, d\boldsymbol{z}\_n,$$


$$\frac{\partial f\_1(x;t)}{\partial t} = \frac{1}{(2\pi)^p} \int\_{-\infty}^{\infty} \int\_{-\infty}^{\infty} \left[ i\lambda^T a(\zeta, t) + \chi \left( b(\zeta, t)^T \lambda; t \right) \right] e^{i\lambda^T(\zeta - x)} f\_1(\zeta, t) d\zeta d\lambda,\tag{8}$$

$$\frac{\partial}{\partial t\_n} f\_n(z\_1, \dots, z\_n; t\_1, \dots, t\_n) = \frac{1}{(2\pi)^{np}} \int\_{-\infty}^{\infty} \dots \int\_{-\infty}^{\infty} \left[ i\lambda\_n^T a(\zeta\_n, t\_n) + \chi \left( b(\zeta\_n, t\_n)^T \lambda\_n; t\_n \right) \right] \tag{9}$$

$$\times \exp\left\{i\sum\_{k=1}^{n} \lambda\_k^T(\zeta\_k - z\_k)\right\} f\_n(\zeta\_1, \dots, \zeta\_n; t\_1, \dots, t\_n) d\zeta\_1, \dots, d\zeta\_n; d\lambda\_1, \dots, d\lambda\_n,$$

$$f\_1(\mathbf{z}; t\_0) = f\_0(\mathbf{z}), \tag{10}$$

where $i$ is the imaginary unit,

$$\chi(\mu; t) = \frac{1}{h\_1(\mu; t)} \frac{\partial h\_1(\mu; t)}{\partial t},\tag{11}$$
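For orientation, when $W$ is a Wiener process with $W(t_0) = 0$ and intensity matrix $\nu(t)$, its increments are Gaussian, so $h_1$ and the resulting $\chi$ take the standard form (stated here for the reader's convenience):

$$h_1(\mu; t) = \exp\left\{-\frac{1}{2}\,\mu^T \left(\int_{t_0}^{t} \nu(s)\, ds\right) \mu\right\}, \qquad \chi(\mu; t) = \frac{1}{h_1}\frac{\partial h_1}{\partial t} = -\frac{1}{2}\,\mu^T \nu(t)\, \mu.$$

Substituting this $\chi$ into Eq. (8) and integrating over $\lambda$ leads to the Fokker-Planck-Kolmogorov Eq. (13).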

$$f\_n(z\_1, \dots, z\_{n-1}, z\_n; t\_1, \dots, t\_{n-1}, t\_{n-1}) = f\_{n-1}(z\_1, \dots, z\_{n-1}; t\_1, \dots, t\_{n-1}) \delta(z\_n - z\_{n-1}).\tag{12}$$

For the Wiener StP $W$ with intensity matrix $\nu(t)$ we use the Fokker-Planck-Kolmogorov equation

$$\frac{\partial f\_1(z;t)}{\partial t} = -\frac{\partial^T}{\partial z} \left[ a(z,t) f\_1(z;t) \right] + \frac{1}{2} \text{tr} \left\{ \frac{\partial}{\partial z} \frac{\partial^T}{\partial z} \left[ b(z,t) v(t) b(z,t)^T f\_1(z;t) \right] \right\} \tag{13}$$

with the initial condition (10).
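As a one-dimensional illustration of Eq. (13), the stationary zero-current solution of the scalar FPK equation can be written in quadratures, $f(x) = C \exp\{2\int^x a(s)\,ds / (b^2\nu)\}$, and evaluated on a grid. The double-well drift $a(x) = x - x^3$, the constant diffusion, and the grid below are illustrative assumptions, not taken from the chapter.

```python
import math

def stationary_fpk_density(a, b2nu, xs):
    # Zero-current stationary solution of the scalar FPK equation
    # 0 = -(a(x) f)' + (b^2 nu / 2) f'':  f(x) = C * exp(2*U(x)/(b^2 nu)),
    # where U(x) is a cumulative (trapezoidal) integral of the drift a.
    dx = xs[1] - xs[0]
    u, us, prev = 0.0, [0.0], a(xs[0])
    for x in xs[1:]:
        cur = a(x)
        u += 0.5 * (prev + cur) * dx
        us.append(u)
        prev = cur
    f = [math.exp(2.0 * ui / b2nu) for ui in us]
    z = sum(f) * dx                      # grid normalization constant
    return [fi / z for fi in f]

xs = [-4.0 + 0.01 * i for i in range(801)]
f = stationary_fpk_density(lambda x: x - x ** 3, 1.0, xs)
mass = sum(f) * (xs[1] - xs[0])          # should be 1 after normalization
peak_x = xs[f.index(max(f))]             # a mode of the double-well density
```

The density is bimodal with modes near $x = \pm 1$, the stable equilibria of the drift, which is the expected stationary picture for this potential.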

#### **2.2 Discrete StS**

Discrete vector StP are generated by the regression and autoregression StS

$$X\_{k+1} = \omega\_k(X\_k, V\_k) \quad (k = 1, 2, \dots), \tag{14}$$

$$X\_{k+1} = a\_k(X\_k) + b\_k(X\_k)V\_k \quad (k = 1, 2, \dots). \tag{15}$$

The equations for the one- and $n$-dimensional densities and characteristic functions are as follows:

$$f\_k(\mathbf{x}) = \frac{1}{(2\pi)^p} \int\_{-\infty}^{\infty} e^{-i\boldsymbol{\lambda}^T \mathbf{x}} \mathbf{g}\_k(\boldsymbol{\lambda}) d\boldsymbol{\lambda}, \quad \mathbf{g}\_k(\boldsymbol{\lambda}) = \mathbf{E} \exp\left\{i\boldsymbol{\lambda}^T \mathbf{X}\_k\right\},\tag{16}$$

$$f\_{\left(k\_1,\ldots,k\_n\right)}\left(\mathbf{x}\_1,\ldots,\mathbf{x}\_n\right) = \frac{\mathbf{1}}{\left(2\pi\right)^{np}} \int\_{-\infty}^{\infty} \exp\left\{i\sum\_{h=1}^n \lambda\_h^T \mathbf{x}\_h\right\} \mathbf{g}\_{k\_1,\ldots,k\_n}\left(\lambda\_1,\ldots,\lambda\_n\right) d\lambda\_1,\ldots,d\lambda\_n,\tag{17}$$

$$\mathbf{g}\_{k\_1,\ldots,k\_n}(\boldsymbol{\lambda}\_{1},\ldots,\boldsymbol{\lambda}\_{n}) = \mathbf{E} \exp\left\{i\sum\_{l=1}^{n} \boldsymbol{\lambda}\_{l}^{T} \mathbf{X}\_{k\_{l}}\right\},\tag{18}$$

$$\mathbf{g}\_{k+1}(\boldsymbol{\lambda}) = \mathbf{E} \exp\left\{i\boldsymbol{\lambda}^T \boldsymbol{\omega}\_k(\mathbf{X}\_k, \mathbf{V}\_k)\right\} = \int\_{-\infty}^{\infty} \int\_{-\infty}^{\infty} e^{i\boldsymbol{\lambda}^T \boldsymbol{\omega}\_k(\mathbf{x}, \mathbf{v})} f\_k(\mathbf{x}) h\_k(\mathbf{v}) d\mathbf{x} d\mathbf{v},\tag{19}$$

*Methods of Conditionally Optimal Forecasting for Stochastic Synergetic CALS Technologies DOI: http://dx.doi.org/10.5772/intechopen.103657*

$$\begin{split} \mathbf{g}\_{k\_1,\ldots,k\_{n-1},k\_n+1}(\boldsymbol{\lambda}\_1,\ldots,\boldsymbol{\lambda}\_n) &= \mathrm{E} \exp\left\{i\sum\_{l=1}^{n-1} \boldsymbol{\lambda}\_l^T \mathbf{X}\_{k\_l} + i\boldsymbol{\lambda}\_n^T \boldsymbol{\omega}\_{k\_n}(\mathbf{X}\_{k\_n}, \mathbf{V}\_{k\_n})\right\} \\ &= \int\_{-\infty}^{\infty} \cdots \int\_{-\infty}^{\infty} \exp\left\{i\sum\_{l=1}^{n-1} \boldsymbol{\lambda}\_l^T \mathbf{x}\_l + i\boldsymbol{\lambda}\_n^T \boldsymbol{\omega}\_{k\_n}(\mathbf{x}\_n, \mathbf{v}\_n)\right\} f\_{k\_1,\ldots,k\_n}(\mathbf{x}\_1,\ldots,\mathbf{x}\_n)\, h\_{k\_n}(\mathbf{v}\_n)\, d\mathbf{x}\_1 \ldots d\mathbf{x}\_n\, d\mathbf{v}\_n. \end{split} \tag{20}$$

Here E denotes mathematical expectation, and $h_k$ is the characteristic function of $V_k$. The characteristic functions satisfy the consistency conditions

$$\begin{aligned} \mathbf{g}\_{k\_1,\ldots,k\_{n-1},k\_{n-1}}(\boldsymbol{\lambda}\_1,\ldots,\boldsymbol{\lambda}\_n) &= \mathbf{g}\_{k\_1,\ldots,k\_{n-1}}(\boldsymbol{\lambda}\_1,\ldots,\boldsymbol{\lambda}\_{n-1}+\boldsymbol{\lambda}\_n),\\ \mathbf{g}\_{k\_1,\ldots,k\_n}(\boldsymbol{\lambda}\_1,\ldots,\boldsymbol{\lambda}\_n) &= \mathbf{g}\_{k\_{s\_1},\ldots,k\_{s\_n}}(\boldsymbol{\lambda}\_{s\_1},\ldots,\boldsymbol{\lambda}\_{s\_n}),\end{aligned} \tag{21}$$

where $(s_1, \ldots, s_n)$ is a permutation of $(1, \ldots, n)$ such that $k_{s_1} < k_{s_2} < \cdots < k_{s_n}$.

In the case of the autoregression StS (15) the basic characteristic functions are given by the equations:

$$\begin{split} \mathbf{g}\_{k+1}(\boldsymbol{\lambda}) &= \mathrm{E} \exp\left\{i\boldsymbol{\lambda}^{T}\boldsymbol{a}\_{k}(\mathbf{X}\_k) + i\boldsymbol{\lambda}^{T}\boldsymbol{b}\_{k}(\mathbf{X}\_{k})\mathbf{V}\_{k}\right\} \\ &= \int\_{-\infty}^{\infty} \int\_{-\infty}^{\infty} e^{i\boldsymbol{\lambda}^{T}\boldsymbol{a}\_{k}(\mathbf{x}) + i\boldsymbol{\lambda}^{T}\boldsymbol{b}\_{k}(\mathbf{x})\mathbf{v}} f\_{k}(\mathbf{x})\, \boldsymbol{h}\_{k}(\mathbf{v})\, d\mathbf{x}\, d\mathbf{v} = \mathrm{E}\left[\exp\left\{i\boldsymbol{\lambda}^{T}\boldsymbol{a}\_{k}(\mathbf{X}\_{k})\right\} \boldsymbol{h}\_{k}\left(\boldsymbol{b}\_{k}(\mathbf{X}\_{k})^{T}\boldsymbol{\lambda}\right)\right], \end{split} \tag{22}$$

$$\begin{split} \mathbf{g}\_{k\_{1},\ldots,k\_{n-1},k\_n+1}(\boldsymbol{\lambda}\_{1},\ldots,\boldsymbol{\lambda}\_{n}) &= \mathrm{E}\exp\left\{i\sum\_{l=1}^{n-1}\boldsymbol{\lambda}\_{l}^{T}\mathbf{X}\_{k\_{l}} + i\boldsymbol{\lambda}\_{n}^{T}\boldsymbol{a}\_{k\_{n}}(\mathbf{X}\_{k\_{n}}) + i\boldsymbol{\lambda}\_{n}^{T}\boldsymbol{b}\_{k\_{n}}(\mathbf{X}\_{k\_{n}})\mathbf{V}\_{k\_{n}}\right\} \\ &= \int\_{-\infty}^{\infty} \cdots \int\_{-\infty}^{\infty} \exp\left\{i\sum\_{l=1}^{n-1}\boldsymbol{\lambda}\_{l}^{T}\mathbf{x}\_{l} + i\boldsymbol{\lambda}\_{n}^{T}\boldsymbol{a}\_{k\_{n}}(\mathbf{x}\_{n}) + i\boldsymbol{\lambda}\_{n}^{T}\boldsymbol{b}\_{k\_{n}}(\mathbf{x}\_{n})\mathbf{v}\_{n}\right\} f\_{k\_1,\ldots,k\_n}(\mathbf{x}\_1,\ldots,\mathbf{x}\_n)\, h\_{k\_n}(\mathbf{v}\_n)\, d\mathbf{x}\_1 \cdots d\mathbf{x}\_n\, d\mathbf{v}\_n \\ &= \mathrm{E}\left[\exp\left\{i\sum\_{l=1}^{n-1}\boldsymbol{\lambda}\_{l}^{T}\mathbf{X}\_{k\_{l}} + i\boldsymbol{\lambda}\_{n}^{T}\boldsymbol{a}\_{k\_{n}}(\mathbf{X}\_{k\_{n}})\right\} h\_{k\_n}\left(\boldsymbol{b}\_{k\_n}(\mathbf{X}\_{k\_n})^{T}\boldsymbol{\lambda}\_{n}\right)\right]. \end{split} \tag{23}$$
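The autoregression (15) is easy to probe by Monte Carlo. For the scalar linear case $X_{k+1} = \alpha X_k + \beta V_k$ with Gaussian $V_k$, the stationary variance $\beta^2/(1-\alpha^2)$ and the Gaussian characteristic function of Eq. (16) are known exactly, which gives two independent checks; the parameter values below are illustrative assumptions.

```python
import cmath
import math
import random

def ar1_samples(alpha=0.6, beta=1.0, n_steps=60, n_trials=4000, seed=7):
    # Monte Carlo over the linear autoregression X_{k+1} = alpha*X_k + beta*V_k,
    # a scalar instance of Eq. (15) with V_k ~ N(0, 1) and X_0 = 0.
    rng = random.Random(seed)
    out = []
    for _ in range(n_trials):
        x = 0.0
        for _ in range(n_steps):
            x = alpha * x + beta * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

xs = ar1_samples()
var_mc = sum(x * x for x in xs) / len(xs)
var_exact = 1.0 / (1.0 - 0.6 ** 2)               # beta^2/(1 - alpha^2) = 1.5625
lam = 0.5                                        # test argument for Eq. (16)
g_mc = sum(cmath.exp(1j * lam * x) for x in xs) / len(xs)
g_exact = math.exp(-0.5 * lam ** 2 * var_exact)  # zero-mean Gaussian ch.f.
```

The empirical characteristic function `g_mc` is exactly the sample version of $g_k(\lambda) = \mathrm{E}\exp\{i\lambda^T X_k\}$ from Eq. (16).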

#### **2.3 Hybrid continuous and discrete StS**

When the system described by Eq. (2) is automatically controlled, the function which determines the goal of control is measured with random errors, and the control system components forming the required input $x^*$ are always subject to noises, i.e. to random disturbances. The equations of forming the required input and the real input, after including the additional variables necessary to transform them into first-order equations, may be written in the form

$$
\dot{X} = \varphi(X, U, t), \quad \dot{U} = \varphi(X, Z, U, N\_3(t), t) \tag{24}
$$

where $U$ is the vector composed of the required input and all the auxiliary variables, and $N_3(t)$ is some random function of time $t$ (in the general case a vector random function). Writing down these equations we have taken into account that, owing to the action of the noises described by the random function $N_3(t)$, the vector $U$ and the input $X$ represent random functions of time, and in accordance with this we denote them by capital letters. These equations together with the first of Eqs. (2) form the set of equations

$$
\dot{Z} = f(Z, X, N\_1(t), t), \quad \dot{X} = \varphi(X, U, t), \quad \dot{U} = \varphi(X, Z, U, N\_3(t), t).
$$

These equations may be written in the form of one equation determining the extended state vector of the system $Z_1 = \left[Z^T\; X^T\; U^T\right]^T$:

$$\dot{Z}\_1 = f\_1(Z\_1, N\_4(t), t)$$

where $N_4(t) = \left[N_1(t)^T\; N_3(t)^T\right]^T$, and

$$f\_1(Z\_1, N\_4(t), t) = \left[ f(Z, X, N\_1, t)^T \;\; \varphi(X, U, t)^T \;\; \varphi(X, Z, U, N\_3, t)^T \right]^T.$$

As a result, omitting the indices of $Z_1$ and $f_1$, we replace the set of Eqs. (2) and (24) by the equations

$$\dot{Z} = f\_1(Z\_1, N\_4(t), t), \quad Y = \mathbf{g}(Z, N\_2(t), t).$$

In practical problems the random functions $N_1(t)$ and $N_2(t)$ are practically always independent. But the random function $N_3(t)$ depends on $N_1(t)$ and $N_2(t)$ owing to the fact that the function $h(Y, t) = h(g(Z, N_2(t), t), t)$ and its total derivative with respect to time $t$ enter into Eq. (24). Therefore the random functions $N_2(t)$ and $N_4(t)$ are dependent. Introducing the composite vector random function $N(t) = \left[N_1(t)^T\; N_2(t)^T\; N_3(t)^T\right]^T$, we rewrite the equations obtained in the form

$$\dot{Z} = f\_1(Z\_1, N(t), t), \quad Y = \mathbf{g}(Z, N(t), t). \tag{25}$$

Thus, in the case of an automatically controlled system described by Eq. (2), after coupling Eq. (2) with the equations of forming the required and the real inputs, we come to equations of the form (25) containing the random function $N(t)$.

If a control StS is based on digital computers, we decompose the extended state vector $Z$ into two subvectors $Z'$, $Z''$, $Z = \left[Z'^T\; Z''^T\right]^T$, one of which, $Z'$, represents a continuously varying random function, while the other, $Z''$, is a step random function varying by jumps at the prescribed time moments $t^{(k)}$ $(k = 0, 1, 2, \ldots)$. Then, introducing the random function

$$Z''(t) = \sum\_{k=0}^{\infty} Z\_k'' \mathbf{1}\_{A\_k}(t).$$

and putting $Z_k' = Z'\left(t^{(k)}\right)$ $(k = 0, 1, 2, \ldots)$, we get the set of equations describing the evolution of the extended state vector of the controlled system

$$\dot{Z} = f(Z, N(t), t), \quad Z\_{k+1}^{\prime\prime} = \omega\_k(Z\_k, N\_k) \tag{26}$$

where $N_k$ $(k = 0, 1, 2, \ldots)$ are some random variables and $N(t)$ is some random function.
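A system of the type (26) can be sketched as a continuous integration loop interrupted by jump updates at the prescribed moments $t^{(k)}$. The concrete right-hand sides, the jump map and all parameter values below are illustrative assumptions, not the chapter's model.

```python
import random

def simulate_hybrid(t_end=4.5, dt=1e-3, jump_period=1.0, seed=1):
    # Sketch of Eq. (26): z' evolves continuously between the prescribed
    # moments t^(k) = k*jump_period, while z'' is a step process updated by
    # z''_{k+1} = omega_k(z_k, N_k) at those moments.  The drift and the
    # jump map omega_k used here are illustrative choices.
    rng = random.Random(seed)
    zc, zd = 1.0, 0.0               # continuous part z', step part z''
    t, next_jump, jumps = 0.0, jump_period, 0
    while t < t_end - 1e-12:
        zc += (-zc + zd) * dt       # continuous dynamics between jumps
        t += dt
        if t >= next_jump - 1e-12:  # prescribed moment t^(k) reached
            zd = 0.5 * zd + zc + 0.1 * rng.gauss(0.0, 1.0)
            next_jump += jump_period
            jumps += 1
    return zc, zd, jumps

zc, zd, jumps = simulate_hybrid()
```

Between jumps only $z'$ changes; $z''$ is piecewise constant, exactly the indicator-sum structure written above.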


For hybrid StS (HStS) let us now consider the case of a discrete-continuous system whose state vector $Z = \left[Z'^T\; Z''^T\right]^T$ (extended in the general case) is determined by the set of equations

$$\dot{Z}' = a(\mathbf{Z}, t) + b(\mathbf{Z}, t)\mathbf{V}, \quad Z'' = \sum\_{k=0}^{\infty} Z\_k'' \mathbf{1}\_{A\_k}(t), \quad Z\_{k+1}'' = \omega\_k(\mathbf{Z}\_k, \mathbf{V}\_k) \tag{27}$$

where $Z_k$ is the value of $Z(t)$ at $t = t^{(k)}$, $Z_k = \left[Z_k'^T\; Z_k''^T\right]^T = Z\left(t^{(k)}\right)$ $(k = 0, 1, 2, \ldots)$; $a$, $b$, $\omega_k$ are functions of the arguments indicated; $\mathbf{1}_{A_k}(t)$ is the indicator of the interval $A_k = \left[t^{(k)}, t^{(k+1)}\right)$ $(k = 0, 1, 2, \ldots)$; $V$ is a white noise in the strict sense; and $\{V_k\}$ is a sequence of independent random variables independent of the white noise $V$. The one-dimensional characteristic function $h_1(\mu; t)$ of the process with independent increments $W(t)$, whose weak m.s. derivative is the white noise $V$, and the distributions of the random variables $V_k$ will be assumed known.

Introducing the random processes

$$Z'''(t) = \sum\_{k=0}^{\infty} Z\_k' \mathbf{1}\_{A\_k}(t), \quad \overline{Z}(t) = \left[ Z'(t)^T\; Z''(t)^T\; Z'''(t)^T \right]^T,$$

we derive in the same way as before the equation for the one-dimensional characteristic function

$$\begin{split} \mathbf{g}\_{1}(\boldsymbol{\lambda};t) &= \mathrm{E}\, e^{i\boldsymbol{\lambda}^{T}\overline{\mathbf{Z}}(t)} = \mathrm{E}\exp\left\{i\boldsymbol{\lambda}'^{T}\mathbf{Z}'(t) + i\boldsymbol{\lambda}''^{T}\mathbf{Z}''(t) + i\boldsymbol{\lambda}'''^{T}\mathbf{Z}'''(t)\right\} \\ &= \mathrm{E}\exp\left\{i\boldsymbol{\lambda}'^{T}\mathbf{Z}'(t) + i\boldsymbol{\lambda}''^{T}\mathbf{Z}\_{k}'' + i\boldsymbol{\lambda}'''^{T}\mathbf{Z}\_{k}'\right\}, \quad t \in A\_k, \end{split}$$

of the StP $\overline{Z}(t)$:

$$\frac{\partial \mathbf{g}\_1(\boldsymbol{\lambda}; t)}{\partial t} = \mathrm{E} \left\{\left[ i \boldsymbol{\lambda}'^T \boldsymbol{a}(\mathbf{Z}, t) + \boldsymbol{\chi}\left(\boldsymbol{b}(\mathbf{Z}, t)^T \boldsymbol{\lambda}'; t\right)\right] \boldsymbol{e}^{i \boldsymbol{\lambda}^T \overline{\mathbf{Z}}(t)} \right\}. \tag{28}$$

Taking the initial moment $t_0 = t^{(0)}$, the initial condition for Eq. (28) is

$$\mathbf{g}\_1(\boldsymbol{\lambda}; t\_0) = \mathrm{E}\exp\left\{ i\left(\boldsymbol{\lambda}' + \boldsymbol{\lambda}'''\right)^T \mathbf{Z}\_0' + i\boldsymbol{\lambda}''^T \mathbf{Z}\_0'' \right\} = \mathbf{g}\_0\left(\left[\left(\boldsymbol{\lambda}' + \boldsymbol{\lambda}'''\right)^T \quad \boldsymbol{\lambda}''^T\right]^T\right) \tag{29}$$

where $g_0(\rho)$ is the characteristic function of the initial value $Z_0 = Z(t_0)$ of the process $Z(t)$.

At the moment $t^{(k)}$ the value of $g_1(\lambda; t)$ is evidently equal to

$$\mathrm{E} \exp\left\{i\left(\boldsymbol{\lambda}' + \boldsymbol{\lambda}'''\right)^{T}\mathbf{Z}\_k' + i\boldsymbol{\lambda}''^{T}\mathbf{Z}\_k''\right\},$$

i.e. to the value $g_k\left(\left[\left(\lambda' + \lambda'''\right)^T \quad \lambda''^T\right]^T\right)$ of the characteristic function $g_k(\rho)$ of the random variable $Z_k = \left[Z_k'^T\; Z_k''^T\right]^T$. If the function $\chi(\mu; t)$ is a continuous function of $t$ at any $\mu$, then $g_1(\lambda; t)$ tends to

$$\mathrm{E}\exp\left\{i\boldsymbol{\lambda}'^{T}\mathbf{Z}\_{k+1}' + i\boldsymbol{\lambda}''^{T}\mathbf{Z}\_{k}'' + i\boldsymbol{\lambda}'''^{T}\mathbf{Z}\_{k}'\right\},$$

as $t \to t^{(k+1)}$, i.e. to the joint characteristic function $g_k'\left(\lambda', \lambda'', \lambda'''\right)$ of the random variables $Z_{k+1}'$, $Z_k''$, $Z_k'$:

$$\mathbf{g}\_1\left(\boldsymbol{\lambda}; t^{(k+1)} - 0\right) = \lim\_{t \to t^{(k+1)}} \mathbf{g}\_1(\boldsymbol{\lambda}; t) = \mathbf{g}\_k'\left(\boldsymbol{\lambda}', \boldsymbol{\lambda}'', \boldsymbol{\lambda}'''\right).$$

At the moment $t^{(k+1)}$, $g_1(\lambda; t)$ changes its value by a jump and becomes equal to

$$\mathrm{E}\exp\left\{i\left(\boldsymbol{\lambda}' + \boldsymbol{\lambda}'''\right)^{T}\mathbf{Z}\_{k+1}' + i\boldsymbol{\lambda}''^{T}\mathbf{Z}\_{k+1}''\right\} = \mathbf{g}\_{k+1}\left(\left[\left(\boldsymbol{\lambda}' + \boldsymbol{\lambda}'''\right)^T \quad \boldsymbol{\lambda}''^T\right]^T\right).$$

To evaluate this, we substitute here the expression of $Z''_{k+1}$ from the last Eq. (27). Then we get

$$g_1\left(\lambda;t^{(k+1)}\right)=\mathrm{E}\exp\left\{i\left(\lambda'^{T}+\lambda'''^{T}\right)Z'_{k+1}+i\lambda''^{T}\alpha_k(Z_k,V_k)\right\}.\tag{30}$$

Owing to the independence of the sequence of random variables $\{V_k\}$ of the white noise $V$ and the independence of $V_k$ of $V_0,V_1,\dots,V_{k-1}$, the random variables $Z_k$ and $Z'_{k+1}$ are independent of $V_k$. Hence the expectation in the right-hand side of Eq. (30) is completely determined by the known distribution of the random variable $V_k$ and by the joint characteristic function $g_k'\left(\lambda',\lambda'',\lambda'''\right)$ of the random variables $Z'_{k+1}$, $Z''_k$, $Z'_k$, i.e. by $g_1\left(\lambda;t^{(k+1)}-0\right)$. So Eq. (26) with the initial condition (27) and formula (28) determine the evolution of the one-dimensional characteristic function $g_1(\lambda;t)$ of the process $Z(t)=\left[Z'(t)^{T}\ Z''(t)^{T}\ Z'''(t)^{T}\right]^{T}$ and its jump-wise increments at the moments $t^{(k)}$ $(k=1,2,\dots)$.
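The jump-wise change of the one-dimensional characteristic function at a discrete moment can be checked numerically with an empirical characteristic function. A minimal Monte Carlo sketch, assuming a hypothetical scalar update $Z''_{k+1}=\alpha_k(Z_k,V_k)=0.8\,Z_k+V_k$ with Gaussian $Z_k$ and $V_k$ (all numerical values are illustrative, not taken from the text):

```python
import numpy as np

# Sketch: jump update of the characteristic function at a discrete moment,
# where the discrete component is refreshed as Z''_{k+1} = alpha_k(Z_k, V_k).
rng = np.random.default_rng(0)
n = 100_000

z = rng.normal(0.0, 1.0, n)            # samples of Z_k before the jump
v = rng.normal(0.0, 0.5, n)            # discrete noise V_k, independent of Z_k
alpha = lambda z, v: 0.8 * z + v       # assumed update function alpha_k

def ecf(samples, lam):
    """Empirical characteristic function E exp(i*lam*Z)."""
    return np.mean(np.exp(1j * lam * samples))

lam = 0.7
g_before = ecf(z, lam)                 # g_1(lam; t^(k+1) - 0)
g_after = ecf(alpha(z, v), lam)        # g_1(lam; t^(k+1)): jump-wise change

print(abs(g_before), abs(g_after))
```

The jump is visible as the change of $|g_1|$, since the variance of the refreshed component differs from that of $Z_k$.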

In the case of the discrete-continuous HStS whose state vector is determined by Eqs

$$
\dot{Z} = a(Z, t) + b(Z, t)V \tag{31}
$$

we get in the same way the equation for the $n$-dimensional characteristic function $g_n(\lambda_1,\dots,\lambda_n;t_1,\dots,t_n)$ of the random process $Z(t)=\left[Z'(t)^{T}\ Z''(t)^{T}\ Z'''(t)^{T}\right]^{T}$,

$$\begin{aligned}&\partial g_n(\lambda_1,\dots,\lambda_n;t_1,\dots,t_n)/\partial t_n\\&=\mathrm{E}\left[i\lambda_n^{T}a(Z(t_n),t_n)+\chi\left(b(Z(t_n),t_n)^{T}\lambda_n;t_n\right)\right]\exp\left\{i\lambda_1^{T}Z(t_1)+\dots+i\lambda_n^{T}Z(t_n)\right\},\end{aligned}\tag{32}$$

and the formula for the value of $g_n(\lambda_1,\dots,\lambda_n;t_1,\dots,t_n)$ at $t_n=t^{(k+1)}\ge t_{n-1}$,

$$\begin{aligned}g_n\left(\lambda_1,\dots,\lambda_n;t_1,\dots,t_{n-1},t^{(k+1)}\right)=\mathrm{E}\exp\Bigl\{&i\lambda_1^{T}Z(t_1)+\dots+i\lambda_{n-1}^{T}Z(t_{n-1})\\&+i\left(\lambda_n'^{T}+\lambda_n'''^{T}\right)Z'_{k+1}+i\lambda_n''^{T}\alpha_k(Z_k,V_k)\Bigr\}.\end{aligned}\tag{33}$$

*Methods of Conditionally Optimal Forecasting for Stochastic Synergetic CALS Technologies DOI: http://dx.doi.org/10.5772/intechopen.103657*

At the point $t_n=t^{(k+1)}$, $g_n(\lambda_1,\dots,\lambda_n;t_1,\dots,t_n)$ changes its value by a jump from

$$\begin{aligned}g_n\left(\lambda_1,\dots,\lambda_n;t_1,\dots,t_{n-1},t^{(k+1)}-0\right)=\mathrm{E}\exp\Bigl\{&i\lambda_1^{T}Z(t_1)+\dots+i\lambda_{n-1}^{T}Z(t_{n-1})+i\lambda_n'^{T}Z'_{k+1}\\&+i\lambda_n''^{T}Z''_k+i\lambda_n'''^{T}Z'_k\Bigr\}\end{aligned}$$

to $g_n\left(\lambda_1,\dots,\lambda_n;t_1,\dots,t_{n-1},t^{(k+1)}\right)$ given by (33).

The right-hand side of (33) is completely determined by the known distribution of the random variable $V_k$ and by the joint characteristic function $g_n\left(\lambda_1,\dots,\lambda_n;t_1,\dots,t_{n-1},t^{(k+1)}-0\right)$ of the random variables $Z(t_1),\dots,Z(t_{n-1})$, $Z'_{k+1}$, $Z''_k$, $Z'_k$. Hence, Eq. (32) with the corresponding initial condition and formula (33) determine the evolution and the jump-wise increments of $g_n(\lambda_1,\dots,\lambda_n;t_1,\dots,t_n)$ at the points $t^{(k+1)}$ when $t_n$ increases starting from the value $t_{n-1}$.

### **2.4 Linear StS**

For a differential linear StS with $W$ being a StP with independent increments, $V=\dot{W}$,

$$
\dot{Z} = aZ + a\_0 + bV \tag{34}
$$

the corresponding Eq for the $n$-dimensional characteristic function is as follows:

$$\frac{\partial g_n}{\partial t_n}=\lambda_n^{T}a(t_n)\frac{\partial g_n}{\partial\lambda_n}+\left[i\lambda_n^{T}a_0(t_n)+\chi\left(b(t_n)^{T}\lambda_n;t_n\right)\right]g_n.\tag{35}$$

The explicit formula for the $n$-dimensional characteristic function is

$$\begin{split}&g_n(\lambda_1,\dots,\lambda_n;t_1,\dots,t_n)=g_0\left(\sum_{k=1}^{n}u(t_k,t_0)^{T}\lambda_k\right)\exp\left\{i\sum_{k=1}^{n}\lambda_k^{T}\int_{t_0}^{t_k}u(t_k,\tau)a_0(\tau)d\tau\right.\\&\left.+\sum_{k=1}^{n}\int_{t_{k-1}}^{t_k}\chi\left(b(\tau)^{T}\sum_{l=k}^{n}u(t_l,\tau)^{T}\lambda_l;\tau\right)d\tau\right\}\quad(n=1,2,\dots).\end{split}\tag{36}$$

Here $u=u(t,\tau)$ is the fundamental solution of the Eq $\dot{u}=au$ at the condition $u(t,t)=I$ (the unit $n\times n$ matrix).
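For a constant matrix $a$ the fundamental solution reduces to the matrix exponential, $u(t,\tau)=e^{a(t-\tau)}$. A sketch checking the defining properties $u(t,t)=I$ and the semigroup identity $u(t_2,t_0)=u(t_2,t_1)u(t_1,t_0)$; the $2\times 2$ matrix is an illustrative assumption:

```python
import numpy as np

# Fundamental solution of du/dt = a u, u(tau, tau) = I, for constant a:
# u(t, tau) = expm(a (t - tau)).
a = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

def expm_taylor(m, terms=40):
    """Matrix exponential via truncated Taylor series (fine for small norms)."""
    out = np.eye(m.shape[0])
    term = np.eye(m.shape[0])
    for k in range(1, terms):
        term = term @ m / k
        out = out + term
    return out

def u(t, tau):
    return expm_taylor(a * (t - tau))

# Semigroup property u(t2, t0) = u(t2, t1) u(t1, t0), and u(t, t) = I.
print(np.allclose(u(0.7, 0.0), u(0.7, 0.3) @ u(0.3, 0.0)))
```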

In the case of the Gaussian white noise $V$ with intensity matrix $\nu$ the characteristic function $g_n$ is Gaussian:

$$\begin{split}&g_n(\lambda_1,\dots,\lambda_n;t_1,\dots,t_n)=g_0\left(\sum_{k=1}^{n}u(t_k,t_0)^{T}\lambda_k\right)\exp\left\{i\sum_{k=1}^{n}\lambda_k^{T}\int_{t_0}^{t_k}u(t_k,\tau)a_0(\tau)d\tau\right.\\&\left.-\frac{1}{2}\sum_{l,h=1}^{n}\lambda_l^{T}\int_{t_0}^{\min(t_l,t_h)}u(t_l,\tau)b(\tau)\nu(\tau)b(\tau)^{T}u(t_h,\tau)^{T}d\tau\,\lambda_h\right\}\quad(n=1,2,\dots).\end{split}\tag{37}$$
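The covariance term in (37) can be cross-checked numerically: for constant coefficients the integral $\int_{t_0}^{t}u(t,\tau)b\nu b^{T}u(t,\tau)^{T}d\tau$ must coincide with the solution of the Lyapunov ODE $\dot K=aK+Ka^{T}+b\nu b^{T}$, $K(t_0)=0$. A sketch with assumed $2\times 2$ data:

```python
import numpy as np

# Check: quadrature of the covariance integral in (37) vs. Euler integration
# of the Lyapunov ODE dK/dt = aK + Ka^T + b nu b^T.  Matrices are assumptions.
a = np.array([[0.0, 1.0], [-2.0, -1.0]])
b = np.array([[0.0], [1.0]])
nu = np.array([[1.0]])
q = b @ nu @ b.T

def expm_taylor(m, terms=60):
    out, term = np.eye(len(m)), np.eye(len(m))
    for k in range(1, terms):
        term = term @ m / k
        out = out + term
    return out

t, h = 2.0, 1e-3

# Midpoint quadrature of the integral from (37), t0 = 0.
K_int = np.zeros((2, 2))
for s in np.arange(h / 2, t, h):
    u_ts = expm_taylor(a * (t - s))
    K_int = K_int + u_ts @ q @ u_ts.T * h

# Euler integration of the Lyapunov ODE from K(0) = 0.
K_ode = np.zeros((2, 2))
for _ in range(int(t / h)):
    K_ode = K_ode + (a @ K_ode + K_ode @ a.T + q) * h

print(np.max(np.abs(K_int - K_ode)))
```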

#### **2.5 Linear StS with the parametric Gaussian noises**

In the case of StS with the Gaussian additive and parametric noises described by the Eq

$$\dot{Z} = aZ + a\_0 + \left(b\_0 + \sum\_{h=1}^{p} b\_h Z\_h\right) V. \tag{38}$$

we have the infinite set of equations which in this case is decomposed into independent sets of equations for the initial moments *α<sup>k</sup>* of each given order

$$\begin{split}\dot{\alpha}_{k}&=\sum_{r=1}^{p}k_{r}\left(a_{r,0}\alpha_{k-\epsilon_{r}}+\sum_{q=1}^{p}a_{r,\epsilon_{q}}\alpha_{k+\epsilon_{q}-\epsilon_{r}}\right)\\&+\frac{1}{2}\sum_{r=1}^{p}k_{r}(k_{r}-1)\left(\sigma_{rr,0}\alpha_{k-2\epsilon_{r}}+\sum_{q=1}^{p}\sigma_{rr,\epsilon_{q}}\alpha_{k+\epsilon_{q}-2\epsilon_{r}}+\sum_{q,u=1}^{p}\sigma_{rr,\epsilon_{q}+\epsilon_{u}}\alpha_{k+\epsilon_{q}+\epsilon_{u}-2\epsilon_{r}}\right)\\&+\sum_{r=2}^{p}\sum_{s=1}^{r-1}k_{r}k_{s}\left(\sigma_{rs,0}\alpha_{k-\epsilon_{r}-\epsilon_{s}}+\sum_{q=1}^{p}\sigma_{rs,\epsilon_{q}}\alpha_{k+\epsilon_{q}-\epsilon_{r}-\epsilon_{s}}+\sum_{q,u=1}^{p}\sigma_{rs,\epsilon_{q}+\epsilon_{u}}\alpha_{k+\epsilon_{q}+\epsilon_{u}-\epsilon_{r}-\epsilon_{s}}\right),\\&a_{r,0}=a_{0r},\quad a_{r,\epsilon_{q}}=a_{rq}\quad(k_{1},\dots,k_{p}=0,1,2,\dots).\end{split}\tag{39}$$

Corresponding Eqs of correlational theory are as follows:

$$
\dot{m} = am + a\_0;\tag{40}
$$

$$\dot{K}=aK+Ka^{T}+b_0\nu b_0^{T}+\sum_{h=1}^{p}\left(b_h\nu b_0^{T}+b_0\nu b_h^{T}\right)m_h+\sum_{h,l=1}^{p}b_h\nu b_l^{T}(m_h m_l+k_{hl}),\tag{41}$$

where $k_{hl}$ is the covariance of the components $Z_h$ and $Z_l$ of the vector $Z$ $(h,l=1,\dots,p)$. Eq. (41) with the initial condition $K(t_0)=K_0$ $\left[k_{pq}(t_0)=k^0_{pq}\right]$ completely determines the covariance matrix $K(t)$ of the vector $Z(t)$ at any time moment $t$ after finding its expectation $m$.
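A minimal sketch of Eqs. (40)-(41) for a scalar StS with one parametric noise channel, $dZ=(aZ+a_0)dt+(b_0+b_1Z)dW$; all parameter values are assumptions chosen for illustration:

```python
import numpy as np

# Euler integration of the correlational Eqs (40)-(41), scalar case with one
# parametric noise channel; nu is the constant noise intensity.
a, a0, b0, b1, nu = -1.0, 0.5, 0.3, 0.2, 1.0
m, K = 0.0, 0.04                      # initial expectation and variance
dt, steps = 1e-3, 5000

for _ in range(steps):
    # Eq. (40): dm/dt = a m + a0
    dm = a * m + a0
    # Eq. (41), scalar case: dK/dt = 2 a K + b0^2 nu + 2 b0 b1 nu m
    #                                 + b1^2 nu (m^2 + K)
    dK = 2 * a * K + b0**2 * nu + 2 * b0 * b1 * nu * m + b1**2 * nu * (m**2 + K)
    m, K = m + dm * dt, K + dK * dt

print(m, K)   # near the stationary point of (40)-(41)
```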

For discrete StS with the Gaussian parametric noises the correlational Eqs may be presented in the following form:

$$X_{k+1}=a_kX_k+a_{0k}+\left(b_{0k}+\sum_{j=1}^{p}b_{jk}X_{jk}\right)V_k,\tag{42}$$

$$m_{k+1}=a_km_k+a_{0k},\quad m_k=\mathrm{E}X_k,\tag{43}$$

$$\begin{aligned}K_{k+1}&=a_kK_ka_k^{T}+b_{0k}\nu_kb_{0k}^{T}+\sum_{j=1}^{p}\left(b_{0k}\nu_kb_{jk}^{T}+b_{jk}\nu_kb_{0k}^{T}\right)m_{jk}\\&+\sum_{j=1}^{p}\sum_{h=1}^{p}b_{jk}\nu_kb_{hk}^{T}\left(m_{jk}m_{hk}+k_{jhk}\right),\\K_1&=\mathrm{E}(X_1-m_1)(X_1-m_1)^{T},\end{aligned}\tag{44}$$


$$K(j,h+1)=K(j,h)a_h^{T},\quad K(j,j)=K_j.\tag{45}$$
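The discrete recursions (43)-(44) can be iterated directly; a scalar sketch with assumed parameters, driving $m_k$ and $K_k$ to their stationary values:

```python
import numpy as np

# Recursions (43)-(44) for a scalar system
# X_{k+1} = a X_k + a0 + (b0 + b1 X_k) V_k, E V_k = 0, E V_k^2 = nu.
a, a0, b0, b1, nu = 0.9, 1.0, 0.5, 0.1, 1.0
m, K = 0.0, 0.0

for _ in range(200):
    # Eq. (44), scalar case (uses the current m_k):
    # K_{k+1} = a^2 K + b0^2 nu + 2 b0 b1 nu m + b1^2 nu (m^2 + K)
    K = a**2 * K + b0**2 * nu + 2 * b0 * b1 * nu * m + b1**2 * nu * (m**2 + K)
    # Eq. (43): m_{k+1} = a m_k + a0
    m = a * m + a0

print(m, K)
```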

#### **2.6 Normal approximation method**

For StS of high dimension the methods of normal approximation (MNA) are the only ones used in engineering practice. In the case of additive noises, $b(x,t)=b_0(t)$, MNA is known as the method of statistical linearization (MSL).

Basic Eqs of MNA are as follows [9]:

$$\begin{split}g_1(\lambda;t)&\approx\exp\left\{i\lambda^{T}m_t-\frac{1}{2}\lambda^{T}K_t\lambda\right\},\\f_1(x;t)&\approx\left[(2\pi)^{p}|K_t|\right]^{-1/2}\exp\left\{-\frac{1}{2}(x-m_t)^{T}K_t^{-1}(x-m_t)\right\},\end{split}\tag{46}$$

$$
\dot{m}_t = \rho_1(m_t, K_t, t), \quad m(t_0) = m_0,\\
\rho_1(m_t, K_t, t) = \mathrm{E}_N a(X_t, t), \tag{47}
$$

$$
\dot{K}\_t = \rho\_2(m\_t, K\_t, t) \quad K(t\_0) = K\_0,\tag{48}
$$

$$\begin{aligned}\rho_2(m_t,K_t,t)&=\rho_{21}(m_t,K_t,t)+\rho_{21}(m_t,K_t,t)^{T}+\rho_{22}(m_t,K_t,t),\\\rho_{21}(m_t,K_t,t)&=\mathrm{E}_Na(X_t,t)\left(X_t^{T}-m_t^{T}\right),\quad\rho_{22}(m_t,K_t,t)=\mathrm{E}_Nb(X_t,t)\nu(t)b(X_t,t)^{T},\\\frac{\partial K(t_1,t_2)}{\partial t_2}&=K(t_1,t_2)K(t_2)^{-1}\rho_{21}\left(m(t_2),K(t_2),t_2\right)^{T},\end{aligned}\tag{49}$$

$$\begin{aligned}g_n(\lambda_1,\dots,\lambda_n;t_1,\dots,t_n)&=\exp\left\{i\overline{\lambda}^{T}\overline{m}_n-\frac{1}{2}\overline{\lambda}^{T}\overline{K}_n\overline{\lambda}\right\}\quad(n=1,2,\dots),\\f_n(x_1,\dots,x_n;t_1,\dots,t_n)&=\left[(2\pi)^{np}|\overline{K}_n|\right]^{-1/2}\exp\left\{-\frac{1}{2}(\overline{x}_n-\overline{m}_n)^{T}\overline{K}_n^{-1}(\overline{x}_n-\overline{m}_n)\right\}\quad(n=1,2,\dots),\end{aligned}\tag{50}$$

$$\overline{\lambda}=\begin{bmatrix}\lambda_1^{T}&\lambda_2^{T}&\dots&\lambda_n^{T}\end{bmatrix}^{T},\quad\overline{m}_n=\begin{bmatrix}m_x(t_1)^{T}&m_x(t_2)^{T}&\dots&m_x(t_n)^{T}\end{bmatrix}^{T},$$

$$\overline{K}_n=\begin{bmatrix}K(t_1,t_1)&K(t_1,t_2)&\dots&K(t_1,t_n)\\K(t_2,t_1)&K(t_2,t_2)&\dots&K(t_2,t_n)\\\vdots&\vdots&\ddots&\vdots\\K(t_n,t_1)&K(t_n,t_2)&\dots&K(t_n,t_n)\end{bmatrix}.\tag{51}$$

Eq. (49) may be rewritten in the form

$$\frac{\partial K(t\_1, t\_2)}{\partial t\_2} = \rho\_3(K(t\_1, t\_2), t\_1, t\_2) \tag{52}$$

where

$$\begin{split}\rho_3(K(t_1,t_2),t_1,t_2)&=\left[(2\pi)^{2p}\left|\overline{K}_2\right|\right]^{-1/2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}(x_1-m_{t_1})a(x_2,t_2)^{T}\\&\times\exp\left\{-\frac{1}{2}\left(\begin{bmatrix}x_1\\x_2\end{bmatrix}-\overline{m}_2\right)^{T}\overline{K}_2^{-1}\left(\begin{bmatrix}x_1\\x_2\end{bmatrix}-\overline{m}_2\right)\right\}dx_1dx_2;\\\overline{m}_2&=\begin{bmatrix}m_{t_1}^{T}&m_{t_2}^{T}\end{bmatrix}^{T};\quad\overline{K}_2=\begin{bmatrix}K(t_1,t_1)&K(t_1,t_2)\\K(t_2,t_1)&K(t_2,t_2)\end{bmatrix}.\end{split}\tag{53}$$

For discrete StS equations of MNA may be presented in the following form:

$$m_{l+1}=\mathrm{E}_N\alpha_l(X_l,V_l),\quad m_1=\mathrm{E}X_1\quad(l=1,2,\dots),\tag{54}$$

$$K_{l+1}=\mathrm{E}_N\alpha_l(X_l,V_l)\alpha_l(X_l,V_l)^{T}-\mathrm{E}_N\alpha_l(X_l,V_l)\,\mathrm{E}_N\alpha_l(X_l,V_l)^{T},\tag{55}$$

with the conditions

$$\begin{aligned}K_1&=\mathrm{E}_N(X_1-m_1)(X_1-m_1)^{T},\\K_{lh}&=\mathrm{E}_NX_l\alpha_h(X_h,V_h)^{T}-m_l\mathrm{E}_N\alpha_h(X_h,V_h)^{T}\quad\text{at}\quad l<h\quad(h=1,2,\dots),\\K_{ll}&=K_l,\quad K_{lh}=K_{hl}^{T}\quad\text{at}\quad l>h.\end{aligned}\tag{56}$$

Corresponding MNA equations for Eq. (15) are the special case of Eqs. (54)–(56).
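One way to implement the discrete MNA step (54)-(55) is sketched below: for a hypothetical scalar system $X_{k+1}=\sin X_k+V_k$ the Gaussian expectations $\mathrm{E}_N$ are computed by Gauss-Hermite quadrature. The system and all constants are assumptions, not taken from the text:

```python
import numpy as np

# Discrete MNA: the distribution of X_{l+1} is replaced by a Gaussian with
# the same first two moments; E_N is taken by Gauss-Hermite quadrature.
nodes, weights = np.polynomial.hermite.hermgauss(40)

def gauss_expect(f, m, K):
    """E_N f(X) for X ~ N(m, K) via Gauss-Hermite quadrature."""
    x = m + np.sqrt(2 * K) * nodes
    return np.sum(weights * f(x)) / np.sqrt(np.pi)

nu = 0.2                     # intensity of the additive noise V_l
m, K = 1.0, 0.5
for _ in range(50):          # iterate Eqs. (54)-(55)
    m_new = gauss_expect(np.sin, m, K)
    second = gauss_expect(lambda x: np.sin(x) ** 2, m, K)
    K = second - m_new**2 + nu
    m = m_new

print(m, K)
```

For this odd nonlinearity the MNA iteration settles at $m\approx 0$ with a stationary variance solving $K=\tfrac{1}{2}(1-e^{-2K})+\nu$.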

## **3. Conditionally optimal forecasting in StS**

Optimal forecasting is well developed for linear StS and off-line regimes [9]. For nonlinear StS, linear StS with the parametric Gaussian noises and on-line regimes different versions of approximate (suboptimal) methods are proposed. In [9] general results for complex statistical criteria and Bayes criteria are developed. Let us consider m.s. conditionally optimal forecasters for StS being models of stochastic OTES.

#### **3.1 Continuous StS**

Conditionally optimal forecasting (COFc) for the mean square error (mse) criterion was suggested by Pugachev [10]. Following [9] we define the COFc as a forecaster from the class of admissible forecasters which, at any joint distribution of the variables $X_t$ (state variable), $\hat{X}_t$ (estimate of $X_t$) and $Y_t$ (observation variable), at forecasting time $\Delta>0$ and time moments $t\ge t_0$ in the continuous (differential) StS

$$dX\_t = a(\mathbf{X}\_t, \mathbf{Y}\_t, t)dt + b(\mathbf{X}\_t, \mathbf{Y}\_t, t)dW\_1, \quad dY\_t = a\_1(\mathbf{X}\_t, \mathbf{Y}\_t, t)dt + b\_1(\mathbf{X}\_t, \mathbf{Y}\_t, t)dW\_2 \tag{57}$$

($W_1$, $W_2$ being independent white noises with the independent increments; $a$, $a_1$, $b$, $b_1$ being known nonlinear functions) gives the best estimate of $X_{s+\Delta}$ at the infinitesimally close time moment $s>t$, $s\to t$, realizing the minimum of $\mathrm{E}\left(\hat{X}_s-X_{s+\Delta}\right)^2$. Then the COFc at any time moment $t\ge t_0$ is reduced to finding the optimal coefficients $\alpha_t$, $\beta_t$, $\gamma_t$ in the following Eq:

$$d\hat{X}_t=\alpha_t\xi(\hat{X}_t,Y_t,t)dt+\beta_t\eta(\hat{X}_t,Y_t,t)dY_t+\gamma_tdt.\tag{58}$$

Here $\xi=\xi(\hat{X}_t,Y_t,t)$, $\eta=\eta(\hat{X}_t,Y_t,t)$ are given functions of the current observations $Y_t$, the estimate $\hat{X}_t$ and time $t$.

Using the theory of conditionally optimal estimation [13, 17, 18] for the Eq

$$dX\_{t+\Delta} = a(X\_{t+\Delta}, t+\Delta)dt + b(X\_{t+\Delta}, t+\Delta)dW\_1(t+\Delta). \tag{59}$$

we get the following Eqs for the coefficients $\alpha_t$, $\beta_t$, $\gamma_t$


$$\begin{aligned}\alpha_tm_1+\beta_tm_2+\gamma_t&=m_0,\quad m_0=\mathrm{E}a(X_t,Y_t,t),\quad m_1=\mathrm{E}\xi\left(Y_t,\hat{X}_{t+\Delta},t\right),\\m_2&=\mathrm{E}\eta\left(Y_t,\hat{X}_{t+\Delta},t\right)a_1(X_t,Y_t,t),\end{aligned}\tag{60}$$

$$\begin{split} \boldsymbol{\beta}\_{t} &= \kappa\_{02} \kappa\_{22}^{-1}, \quad \kappa\_{02} = \mathbf{E} \left( \mathbf{X}\_{t} - \hat{\mathbf{X}}\_{t+\Delta} \right) \boldsymbol{a}\_{1}(\mathbf{X}\_{t}, \mathbf{Y}\_{t}, \mathbf{t})^{T} \boldsymbol{\eta} \left( \mathbf{Y}\_{t}, \hat{\mathbf{X}}\_{t+\Delta}, \mathbf{t} \right)^{T} \\ &+ \mathbf{E} \boldsymbol{b}(\mathbf{X}\_{t}, \mathbf{Y}\_{t}, \mathbf{t}) \boldsymbol{v}(t) \boldsymbol{b}\_{1}(\mathbf{X}\_{t}, \mathbf{Y}\_{t}, \mathbf{t})^{T} \boldsymbol{\eta} \left( \mathbf{Y}\_{t}, \hat{\mathbf{X}}\_{t+\Delta}, \mathbf{t} \right)^{T}, \end{split} \tag{61}$$

$$\begin{split}\kappa_{22}&=\mathrm{E}\eta\left(Y_t,\hat{X}_{t+\Delta},t\right)b_1(X_t,Y_t,t)\nu(t)b_1(X_t,Y_t,t)^{T}\eta\left(Y_t,\hat{X}_{t+\Delta},t\right)^{T}\\&+\mathrm{E}b(X_t,Y_t,t)\nu(t)b_1(X_t,Y_t,t)^{T}\eta\left(Y_t,\hat{X}_{t+\Delta},t\right)^{T}\end{split}\tag{62}$$

at the condition $\det\kappa_{22}\neq 0$.

The theory of conditionally optimal forecasting gives the opportunity for simultaneous filtering of the state and identification of StS parameters for different forecasting times $\Delta$. All complex calculations for the COFc design do not need current observations and may be performed on a priori data during design procedures. Practical application of such a COFc is reduced to integration of Eq. (58). The time derivative of the error covariance matrix $R_t$ is defined by the formula

$$\begin{split}\dot{R}_t&=\mathrm{E}\left[\left(X_{t+\Delta}-\hat{X}_t\right)a(X_{t+\Delta},t+\Delta)^{T}+a(X_{t+\Delta},t+\Delta)\left(X_{t+\Delta}^{T}-\hat{X}_t^{T}\right)\right]\\&-\beta_t\mathrm{E}\eta\left(Y_t,\hat{X}_{t+\Delta},t\right)b_1(X_t,Y_t,t)\nu_2(t)b_1(X_t,Y_t,t)^{T}\eta\left(Y_t,\hat{X}_{t+\Delta},t\right)^{T}\beta_t^{T}\\&+\mathrm{E}b(X_{t+\Delta},t+\Delta)\nu_1(t+\Delta)b(X_{t+\Delta},t+\Delta)^{T}.\end{split}\tag{63}$$

The mathematical expectations in Eqs. (60)–(63) are computed on the basis of the joint distribution of the random variables $\left[X_t^{T}\ X_{t+\Delta}^{T}\ Y_t^{T}\ \hat{X}_t^{T}\ \hat{X}_{t+\Delta}^{T}\right]^{T}$ by solution of the following Pugachev Eq for the characteristic function $g_2(\lambda_1,\lambda_2,\lambda_3,\mu_1,\mu_2,\mu_3;t,s)$ of the StP $\left[X_t^{T}\ Y_t^{T}\ \hat{X}_t^{T}\right]^{T}$ at $s>t$:

$$\begin{aligned}&\partial g_2(\lambda_1,\lambda_2,\lambda_3,\mu_1,\mu_2,\mu_3;t,s)/\partial s=\mathrm{E}\Bigl[\Bigl\{i\mu_1^{T}a_1(Y_s,X_s,s)+i\mu_2^{T}a(X_s,s)\\&+i\mu_3^{T}\bigl[\alpha_s\xi(Y_s,\hat{X}_s,s)+\beta_s\eta(Y_s,\hat{X}_s,s)a_1(Y_s,X_s,s)+\gamma_s\bigr]\\&+\chi\bigl(b_1(Y_s,X_s,s)^{T}\mu_1+b(X_s,s)^{T}\mu_2+b_1(Y_s,X_s,s)^{T}\eta(Y_s,\hat{X}_s,s)^{T}\beta_s^{T}\mu_3;s\bigr)\Bigr\}\\&\times\exp\left\{i\lambda_1^{T}Y_t+i\lambda_2^{T}X_t+i\lambda_3^{T}\hat{X}_t+i\mu_1^{T}Y_s+i\mu_2^{T}X_s+i\mu_3^{T}\hat{X}_s\right\}\Bigr]\end{aligned}\tag{64}$$

at the condition

$$g_2(\lambda_1,\lambda_2,\lambda_3,\mu_1,\mu_2,\mu_3;t,t)=g_1(\lambda_1+\mu_1,\lambda_2+\mu_2,\lambda_3+\mu_3;t).\tag{65}$$

Basic algorithms are defined by the following Proposals 3.1.1–3.1.3.

**Proposal 3.1.1.** *At the conditions of the existence of the probability moments (60)–(62) the nonlinear COFc is defined by Eqs. (58) and (63).*

**Proposal 3.1.2.** *For linear differential StS*

$$dX\_t = (a\_1 X\_t + a\_0)dt + b dW\_1, \quad dY\_t = (bY\_t + b\_1 X\_t + b\_0)dt + b\_1 dW\_2. \tag{66}$$

*Eqs of exact COFc are as follows:*

$$d\hat{X}_t=\left[a_1(t+\Delta)\left(e_t\hat{X}_{1t}+h_t\right)+a_0(t+\Delta)\right]dt+e_t\beta_{1t}\left[dY_t-\left(b\hat{X}_{1t}+b_0\right)dt\right],\tag{67}$$

$$
\dot{e}_t = a_1(t+\Delta)e_t - e_t a_1, \tag{68}
$$

$$
\dot{h}_t = a_0(t + \Delta) - e_t a_0 + a_1(t + \Delta)h_t, \tag{69}
$$

$$\dot{R}_t=a_1(t+\Delta)R_t+R_ta_1(t+\Delta)^{T}-\beta_t\left(b_1\nu_2b_1^{T}\right)\beta_t^{T}+b(t+\Delta)\nu_1(t+\Delta)b(t+\Delta)^{T}.\tag{70}$$
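In the scalar time-invariant case the linear COFc of Proposal 3.1.2 can be read as a steady-state Kalman-Bucy filter whose output is propagated over the forecasting time $\Delta$; the simulation below is a sketch under that reading, with all parameter values assumed:

```python
import numpy as np

# Sketch: scalar linear StS dX = aX dt + b dW1, dY = cX dt + b1 dW2.
# Forecast of X_{t+Delta}: steady-state Kalman-Bucy estimate, then the
# Delta-shift exp(a*Delta).  Parameters are illustrative assumptions.
rng = np.random.default_rng(1)
a, b, c, b1, Delta = -1.0, 1.0, 1.0, 0.5, 0.5
dt, T = 1e-3, 200.0
n = int(T / dt)
shift = int(Delta / dt)

# Steady-state error variance P from 2 a P + b^2 - P^2 c^2 / b1^2 = 0.
P = (a + np.sqrt(a**2 + b**2 * c**2 / b1**2)) * b1**2 / c**2
gain = P * c / b1**2

x = np.empty(n); xf = np.empty(n)
x[0], xf[0] = 0.0, 0.0
for k in range(n - 1):
    dW1, dW2 = rng.normal(0.0, np.sqrt(dt), 2)
    dy = c * x[k] * dt + b1 * dW2                      # observation increment
    x[k + 1] = x[k] + a * x[k] * dt + b * dW1          # true state
    xf[k + 1] = xf[k] + a * xf[k] * dt + gain * (dy - c * xf[k] * dt)

forecast = np.exp(a * Delta) * xf[:-shift]             # estimate of X_{t+Delta}
mse = np.mean((x[shift:] - forecast) ** 2)
print(mse)
```

The sample mse should be near the theoretical value $e^{2a\Delta}P+b^2\left(1-e^{2a\Delta}\right)/(-2a)$.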

In the case of the linear StS with the parametric Gaussian noises:

$$\begin{split}dX_t&=(a_1X_t+a_0)dt+\left(c_{10}+\sum_{r=1}^{n_x}c_{1,n_y+r}X_r\right)dW_1,\\dY_t&=(bY_t+b_1X_t+b_0)dt+\left(c_{20}+\sum_{r=1}^{n_y}c_{2r}Y_r+\sum_{r=1}^{n_x}c_{2,n_y+r}X_r\right)dW_2,\end{split}\tag{71}$$

the COFc is defined by the exact Eqs (Proposal 3.1.3):

$$d\hat{X}_t=\left[a_1(t+\Delta)\left(e_t\hat{X}_{1t}+h_t\right)+a_0(t+\Delta)\right]dt+e_t\beta_{1t}\left[dY_t-\left(b\hat{X}_{1t}+b_0\right)dt\right],\tag{72}$$

$$
\dot{e}_t = a_1(t + \Delta)e_t - e_t a_1, \quad \dot{h}_t = a_0(t + \Delta) - e_t a_0 + a_1(t + \Delta)h_t,\tag{73}
$$

$$\begin{aligned}\dot{R}_t&=a_1(t+\Delta)R_t+R_ta_1(t+\Delta)^{T}\\&-\beta_t\Bigl[\Bigl(c_{20}+\sum_{r=1}^{n_y+n_x}c_{2r}m_r\Bigr)\nu_1\Bigl(c_{20}^{T}+\sum_{r=1}^{n_y+n_x}c_{2r}^{T}m_r\Bigr)+\sum_{r=1}^{n_y+n_x}c_{2r}\nu_1c_{2r}^{T}k_{rr}\Bigr]\beta_t^{T}\\&+\Bigl[c_{10}(t+\Delta)+\sum_{r=n_y+1}^{n_y+n_x}c_{1r}(t+\Delta)m_r(t+\Delta)\Bigr]\nu_2(t+\Delta)\Bigl[c_{10}(t+\Delta)^{T}+\sum_{r=n_y+1}^{n_y+n_x}c_{1r}(t+\Delta)^{T}m_r(t+\Delta)\Bigr]\\&+\sum_{r=n_y+1}^{n_y+n_x}c_{1r}(t+\Delta)\nu_2(t+\Delta)c_{1r}(t+\Delta)^{T}k_{rr}(t+\Delta).\end{aligned}\tag{74}$$

For nonlinear StS in the case of the normal StP $X_t$, $Y_t$, $\hat{X}_t$ the Eqs of the normal COFc (NCOFc) are defined by Proposal 3.1.1 for the joint normal distribution.


#### **3.2 Discrete and hybrid StS**

Let us consider the following non-Gaussian nonlinear regression StS:

$$X_{k+1}=\omega_k(X_k,V_k),\quad Y_k=\omega_{1k}(X_k,Y_k,V_k)\quad(k=1,2,\dots).\tag{75}$$

In this case the Eqs of the discrete COFc are as follows:

$$\hat{X}_{k+r+1}=\delta_k\zeta_k\left(X_k,\hat{X}_k\right)+\gamma_k,\tag{76}$$

$$
\delta\_k = D\_k B\_k^{-1}, \quad \gamma\_k = m\_{k+r+1} - \delta\_k \rho\_k,\tag{77}
$$

$$m_{k+r+1}=\mathrm{E}\omega_{k+r}(X_{k+r},V_{k+r}),\tag{78}$$

$$\rho_k=\mathrm{E}\zeta_k\left(X_k,\hat{X}_k\right),\quad B_k=\mathrm{E}\left[\zeta_k\left(X_k,\hat{X}_k\right)-\rho_k\right]\zeta_k\left(X_k,\hat{X}_k\right)^{T},$$

$$D_k=\mathrm{E}\left[\omega_{k+r}(X_{k+r},V_{k+r})-m_{k+r+1}\right]\zeta_k\left(X_k,\hat{X}_k\right)^{T},\tag{79}$$

$$g_{2,k,k+r}(\lambda_1,\lambda_2,\mu)=\mathrm{E}\exp\left\{i\lambda_1^{T}X_k+i\lambda_2^{T}X_{k+r}+i\mu^{T}\hat{X}_k\right\},\tag{80}$$

$$g_{2,k,k+r+1}(\lambda_1,\lambda_2,\mu)=\mathrm{E}\exp\left\{i\lambda_1^{T}X_k+i\lambda_2^{T}\omega_{k+r}(X_{k+r},V_{k+r})+i\mu^{T}\hat{X}_k\right\}\tag{81}$$

at the initial condition

$$\mathbf{g}\_{2,k,k}(\lambda\_1, \lambda\_2, \mu) = \mathbf{g}\_{1,k}(\lambda\_1 + \lambda\_2, \mu). \tag{82}$$

So for the nonlinear regression StS (14) we get **Proposal 3.2.1** defined by Eqs. (75)–(82).
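The coefficients (77)-(79) of Proposal 3.2.1 are plain joint moments, so they can be estimated offline by Monte Carlo. A sketch for a hypothetical scalar regression StS with forecaster feature $\zeta_k=X_k$ (system, feature and constants are assumptions for illustration):

```python
import numpy as np

# Monte Carlo estimates of the discrete COFc coefficients (77)-(79):
# delta_k = D_k B_k^{-1}, gamma_k = m_{k+r+1} - delta_k rho_k, for the
# hypothetical scalar system X_{k+1} = sin(X_k) + V_k, r = 0.
rng = np.random.default_rng(2)
n = 200_000

xk = rng.normal(0.0, 1.0, n)                 # samples of X_k
vk = rng.normal(0.0, 0.3, n)                 # samples of V_k
x_next = np.sin(xk) + vk                     # X_{k+1} = omega_k(X_k, V_k)
zeta = xk                                    # forecaster feature zeta_k

m_next = x_next.mean()                       # Eq. (78)
rho = zeta.mean()
B = np.mean((zeta - rho) * zeta)             # second-moment factor, Eq. (79)
D = np.mean((x_next - m_next) * zeta)
delta = D / B                                # Eq. (77)
gamma = m_next - delta * rho

forecast = delta * zeta + gamma              # forecaster of the form (76)
print(delta, np.mean((x_next - forecast) ** 2))
```

For $X_k\sim N(0,1)$ the exact coefficient is $\delta=\mathrm{E}[X\sin X]=e^{-1/2}$, which the sample estimate should reproduce.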

In case of the nonlinear autoregression discrete StS (15) we have the following Eqs of **Proposal 3.2.2:**

$$X\_{k+1} = a\_k(X\_k) + b\_k(X\_k)V\_k, \quad Y\_k = a\_{1k}(X\_k, Y\_k) + b\_{1k}(Y\_k)V\_k,\tag{83}$$

$$\hat{X}_{k+r+1}=\alpha_k\xi_k(\hat{X}_k)+\beta_k\eta_k(\hat{X}_k)Y_k+\gamma_k,\tag{84}$$

$$\alpha_k\kappa_{11}^{(k)}+\beta_k\kappa_{21}^{(k)}=\kappa_{01}^{(k)},\quad\alpha_k\kappa_{12}^{(k)}+\beta_k\kappa_{22}^{(k)}=\kappa_{02}^{(k)},\tag{85}$$

$$\gamma_k=\rho_0^{(k+r+1)}-\alpha_k\rho_1^{(k)}-\beta_k\rho_2^{(k)},\tag{86}$$

$$
\rho\_0^{(k+r+1)} = \mathbf{E} \mathbf{a}\_{k+r}(\mathbf{X}\_{k+r}), \tag{87}
$$

$$\rho\_k = \begin{bmatrix} \rho\_1^{(k)T} \rho\_2^{(k)T} \end{bmatrix}^T, \quad \rho\_1^{(k)} = \mathbf{E} \xi\_k(\hat{\mathbf{X}}\_k), \quad \rho\_2^{(k)} = \mathbf{E} \eta\_k(\hat{\mathbf{X}}\_k) \mathbf{a}\_{1k}(\mathbf{X}\_k), \tag{88}$$

$$B_k=\begin{bmatrix}\kappa_{11}^{(k)}&\kappa_{12}^{(k)}\\\kappa_{21}^{(k)}&\kappa_{22}^{(k)}\end{bmatrix},\quad\det B_k\neq 0,\tag{89}$$

$$\begin{aligned}\kappa_{11}^{(k)}&=\mathrm{E}\left[\xi_k\left(\hat{X}_k\right)-\rho_1^{(k)}\right]\xi_k\left(\hat{X}_k\right)^{T},\\\kappa_{12}^{(k)}&=\kappa_{21}^{(k)T}=\mathrm{E}\left[\xi_k\left(\hat{X}_k\right)-\rho_1^{(k)}\right]a_{1k}(X_k)^{T}\eta_k\left(\hat{X}_k\right)^{T},\\\kappa_{22}^{(k)}&=\mathrm{E}\left[\eta_k\left(\hat{X}_k\right)a_{1k}(X_k)-\rho_2^{(k)}\right]a_{1k}(X_k)^{T}\eta_k\left(\hat{X}_k\right)^{T}+\mathrm{E}\eta_k\left(\hat{X}_k\right)b_{1k}(X_k)\nu_kb_{1k}(X_k)^{T}\eta_k\left(\hat{X}_k\right)^{T},\end{aligned}\tag{90}$$

$$D\_k = \begin{bmatrix} \kappa\_{01}^{(k)} & \kappa\_{02}^{(k)} \end{bmatrix},\tag{91}$$

$$\begin{aligned} \kappa\_{01}^{(k)} &= \mathbf{E}[a\_k(\mathbf{X}\_k) - m\_{k+1}] \xi\_k(\hat{\mathbf{X}}\_k)^T, \\ \kappa\_{02}^{(k)} &= \mathbf{E}[a\_k(\mathbf{X}\_k) - m\_{k+1}] a\_{1k}(\mathbf{X}\_k)^T \eta\_k \left(\hat{\mathbf{X}}\_k\right)^T + \mathbf{E} b\_{1k}(\mathbf{X}\_k) \nu\_k b\_{1k}(\mathbf{X}\_k)^T \eta\_k \left(\hat{\mathbf{X}}\_k\right)^T, \end{aligned} \tag{92}$$

$$m_{k+1}=\rho_0^{(k+1)},\quad\rho_0^{(k+1)}=\mathrm{E}a_k(X_k),\quad\mathrm{E}V_k=0,\quad\mathrm{E}V_kV_k^{T}=\nu_k.\tag{93}$$

Analogously we get from Proposal 3.2.2 the COFc for discrete linear StS and for linear StS with the Gaussian parametric noises. For hybrid StS we recommend a mixed algorithm based on the joint normal distribution and Proposal 3.1.1.

#### **3.3 Generalizations**

Mean square results (Subsections 2.1 and 2.2) may be extended to StS described by linear Eqs, linear Eqs with the Gaussian parametric noises and nonlinear Eqs, or reducible to them, by approximate suboptimal and conditionally optimal methods.

Differential StS with autocorrelated noises in observations may also be reduced to differential StS.

Special COFc algorithms based on complex statistical criteria and Bayesian criteria are developed in [11].

## **4. Probability modeling in SOTES**

Following [3, 4] let us consider a general approach to the SOTES modeling as macroscopic (multi-level) systems including a set of subsystems which are also macroscopic. In our case these sets of subsystems will be clusters covering that part of the MP connected with after-sales production service. More precisely, they are the sets of subsystems of the lower level where input information about concrete products, personnel categories etc. is formed.

For a typical continuous-discrete StP in the SOTES production cluster we have the following vector stochastic equation:

$$dX_t=[\varphi(X_t,t)+S(\nu)\rho(X_t,t)]dt+S(\nu)dP^0(t).\tag{94}$$

Here $P^0(t)$ is the centered Poisson StP; $\rho(X_t,t)$ is the $(n_p\times 1)$ intensity vector of the StP $P(t)$, $\rho(X_t,t)=\left[\rho_{12}(X_t,t)\ \rho_{13}(X_t,t)\ \dots\ \rho_{uk}(X_t,t)\right]^{T}$, the $\rho_{uk}(X_t,t)$ being the intensities of the streams of changes of states; $\varphi(X_t,t)$ is a continuous $(n_p\times 1)$ vector function of quality indicators in the OPB; $S(\nu)$ is the $(n_p\times n_\rho)$ structural matrix of the Poisson stream of resources (production) with volumes $\nu$ according to the SOTES state graph. Analogously we get the corresponding equations for the SOTES-O and the SOTES-N:

$$dY_t=[q(X_t,t)+\varphi_1(Y_t,t)+D(r)\gamma(Y_t,t)]dt+D(r)dP_1^0(t),\tag{95}$$

$$d\zeta_t=[\varphi_2(\zeta_t,t)+C(\theta)\mu(\zeta_t,t)]dt+C(\theta)dP_2^0(t),\tag{96}$$

where $\varphi_1$ and $\varphi_2$ are the vector functions of quality indicators in the OPB for the SOTES-O and the SOTES-N; $D(r)$ is the structural matrix of resources streams with jump volumes $r$; $\gamma(Y_t,t)$ is the intensity function of the $P_1^0(t)$ jumps in the SOTES-O.
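A balance equation of the type (94) can be simulated by an Euler scheme with centered Poisson increments; the scalar drift, the constant stream intensity and the jump volume below are illustrative assumptions, not taken from the text:

```python
import numpy as np

# Euler sketch of a scalar Poisson-driven balance equation:
# dX = [phi(X) + s*rho] dt + s dP0(t), P0 the centered Poisson StP.
rng = np.random.default_rng(3)
n_paths, dt, steps = 2000, 5e-3, 6000
s = 0.5                                   # jump volume (S(v) -> scalar)
phi = lambda x: -0.2 * x                  # quality-indicator drift in the OPB
intensity = 2.0                           # constant stream intensity rho

x = np.zeros(n_paths)
for _ in range(steps):
    jumps = rng.poisson(intensity * dt, n_paths)   # increments of P(t)
    dP0 = jumps - intensity * dt                   # centered increments dP0(t)
    x = x + (phi(x) + s * intensity) * dt + s * dP0

print(x.mean())
```

The drift balance $-0.2\,m+s\cdot\text{intensity}=0$ puts the stationary mean near $s\cdot\text{intensity}/0.2$.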

In the linear case, when $\rho_{uk}(X_t,t)=A_\rho X_t$, Eqs. (94)–(96) for the SOTES, the SOTES-O and the SOTES-N may be presented as

$$dX\_t = \overline{a}(t, \nu) X\_t dt + \mathcal{S}(\nu) dP^0(t),\tag{97}$$

$$dY_t=\left[\overline{b}_1(r,t)Y_t+q(t)X_t\right]dt+D(r)dP_1^0(t)+\varphi_1(t)d\zeta_t,\tag{98}$$

$$d\zeta\_t = \overline{c}\_2(\theta, t)\zeta\_t dt + \mathcal{C}(\theta)dP\_2^0(t). \tag{99}$$

Here the following notations are used:

$$
\overline{b}\_1(r,t) = b\_1(t) + A\_\gamma(r,t), \quad \overline{c}\_2(\theta,t) = c\_2 + A\_\mu(\theta,t). \tag{100}
$$

$A_\gamma(r, t)$ and $A_\mu(\vartheta, t)$ are determined from the identities:

$$D(r)\chi(Y\_t, t) \equiv A\_\gamma(r, t)Y\_t, \quad C(\theta)\mu(\zeta\_t, t) \equiv A\_\mu(\theta, t)\zeta\_t. \tag{101}$$

In practice, a priori information about the SOTES-N is poorer than that about the SOTES and the SOTES-O. So, introducing the Wiener StP $W(t)$, $W_1(t)$, $W_2(t)$, we get the following Eqs:

$$dX\_t = (\overline{a}X\_t + a\_1Y\_t + a\_0)dt + \mathcal{S}(\nu)dP^0(t) + \psi'(t)dW(t),\tag{102}$$

$$dY\_t = \left(qX\_t + b\_1Y\_t + b\_2\zeta\_t + b\_0\right)dt + D(r)dP\_1^0(t) + \varphi\_1(t)d\zeta\_t + \varphi\_1'(t)dW\_1(t), \quad \text{(103)}$$

$$d\zeta\_t = (\overline{c}\_2\zeta\_t + c\_0)dt + \mathcal{C}(\vartheta)dP\_2^0(t) + \psi\_2'(t)dW\_2(t). \tag{104}$$
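Eqs. (102)–(104) form a chain: the SOTES-N noise $\zeta_t$ enters the SOTES-O observation $Y_t$, which in turn observes the SOTES state $X_t$. The Euler-scheme sketch below simulates scalar analogues of the three equations; all coefficients (drifts, jump volumes `s`, `d`, `c`, the Poisson intensity `lam_p`, the Wiener intensities) are illustrative assumptions, not values from the text, and the $\varphi_1(t)d\zeta_t$ coupling is folded into the $b_2\zeta_t$ term for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(T=10.0, dt=1e-3):
    """Euler scheme for scalar analogues of Eqs. (102)-(104):
    SOTES state X, SOTES-O observation resource Y, SOTES-N noise Z (zeta),
    each driven by a centered Poisson stream plus a Wiener noise."""
    n = int(T / dt)
    X, Y, Z = np.zeros(n), np.zeros(n), np.zeros(n)
    a, a1, a0 = -0.5, 0.1, 1.0           # illustrative SOTES drift coefficients
    q, b1, b2, b0 = 1.0, -0.8, 0.3, 0.0  # illustrative SOTES-O coefficients
    c2, c0 = -1.0, 0.2                   # illustrative SOTES-N coefficients
    lam_p = 5.0                          # Poisson stream intensity
    s, d, c = 0.2, 0.1, 0.1              # jump volumes: S(nu), D(r), C(theta)
    psi, psi1, psi2 = 0.05, 0.05, 0.05   # Wiener noise intensities
    for k in range(n - 1):
        # centered Poisson increments play the role of dP0, dP1_0, dP2_0
        dP = rng.poisson(lam_p * dt) - lam_p * dt
        dP1 = rng.poisson(lam_p * dt) - lam_p * dt
        dP2 = rng.poisson(lam_p * dt) - lam_p * dt
        dW = rng.normal(0.0, np.sqrt(dt), 3)
        Z[k + 1] = Z[k] + (c2 * Z[k] + c0) * dt + c * dP2 + psi2 * dW[2]
        X[k + 1] = X[k] + (a * X[k] + a1 * Y[k] + a0) * dt + s * dP + psi * dW[0]
        Y[k + 1] = Y[k] + (q * X[k] + b1 * Y[k] + b2 * Z[k] + b0) * dt \
                   + d * dP1 + psi1 * dW[1]
    return X, Y, Z

X, Y, Z = simulate()
print(round(float(X.mean()), 2))
```

With the chosen (stable) drift coefficients the three trajectories settle near the fixed point of the deterministic part, which gives a quick plausibility check of the scheme.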

R e m a r k 4.1. Such noises from the OTES-N may act at the lower OTES-O levels included in the internal SOTES, where the available information is minimal and the noise influence is maximal. For the highest OTES levels intermediate aggregation functions may be performed. So observation and estimation systems must be end-to-end (multi-level and cascade) and provide external noise protection for all OTES levels.

R e m a r k 4.2. As a rule, processes of information aggregation and decision making are performed at the administrative SOTES levels.

Finally, under the additional conditions:

1. the information streams about the OPB state in the OTES-O are given by the formula

$$Y\_t = G\_t(T\_{st}) + T\_{st} \tag{105}$$

and every StP $G_t(T_{st})$ is supported by a corresponding resource (e.g. financial);

2. for the SOTES measurement only the external noise from the SOTES-N and the own noise due to errors of personnel and equipment are essential.

We get the following basic ordinary differential Eqs:

$$
\dot{X}_t = \overline{a}X_t + a_1 G_t + a_0 + \chi_x V_\Omega \equiv L_X,\tag{106}
$$

$$\dot{\mathbf{G}}\_t = q(T\_{st})\mathbf{X}\_t + b\_2\zeta\_t + \chi\_\mathbf{g}V\_\Omega \equiv L\_G,\tag{107}$$

$$
\dot{\zeta}\_t = c\_2 \zeta\_t + c\_0 + \chi\_\zeta V\_\Omega \equiv L\_\zeta,\tag{108}
$$

$$\dot{T}_{st} = bX_t + \overline{b}_1 T_{st} + b_0 + \chi_{st} V_\Omega \equiv L_T. \tag{109}$$

Here $V_\Omega(t) = \left[V_x^T(t)\; V_g^T(t)\; V_\zeta^T(t)\; V_{st}^T(t)\right]^T$ is a vector white noise, $\dim V_\Omega(t) = (n_x + n_g + n_\zeta + n_{ts}) \times 1$, $MV_\Omega(t) = 0$, with the diagonal block intensity matrix $v_\Omega = \mathrm{diag}\left[v_x\; v_g\; v_\zeta\; v_{ts}\right]$, $\dim v_x(t) = n_x \times n_x$, $\dim v_g(t) = n_g \times n_g$, $\dim v_\zeta(t) = n_\zeta \times n_\zeta$, $\dim v_{ts}(t) = n_{ts} \times n_{ts}$; $\chi_x$, $\chi_g$, $\chi_\zeta$, $\chi_{st}$ are known matrices:

$$V_x = \mathcal{S}(\nu)V_P(t) + \psi'(t)V_W, \quad V_g = \varphi_1(t)V_\zeta + \psi_1'(t)V_{W1}, \quad V_\zeta = \mathcal{C}(\vartheta)V_{P2} + \psi_2'(t)V_{W2}, \quad V_{st} = D(r)V_{P1}; \tag{110}$$

$$V_P = \dot{P}^0(t), \quad V_{P1} = \dot{P}_1^0(t), \quad V_{P2} = \dot{P}_2^0(t), \quad V_W = \dot{W}(t), \quad V_{W1} = \dot{W}_1(t), \quad V_{W2} = \dot{W}_2(t). \tag{111}$$

R e m a r k 4.3. The noises $V_P$, $V_{P1}$, $V_{P2}$ (random time moments of resources or production) are non-Gaussian noises induced by the Poisson noises in the OTES, OTES-O and OTES-N, whereas the noises $V_W$, $V_{W1}$, $V_{W2}$ (personnel errors, internal noises) are Gaussian StP.

From Eqs. (110) and (111) we have the following equivalent expressions for the intensities of the vector $V_\Omega(t)$:

$$\begin{aligned} v_x &= \mathcal{S}(\nu)\overline{\rho}\,\mathcal{S}^T(\nu) + \psi' v_W \psi'^T, \quad v_g = \varphi_1 v_\zeta \varphi_1^T + \psi_1' v_{W1} \psi_1'^T, \\ v_\zeta &= \mathcal{C}(\vartheta)\overline{\mu}\,\mathcal{C}^T(\vartheta) + \psi_2' v_{W2} \psi_2'^T, \quad v_{st} = D(r)\overline{\gamma}\,D^T(r). \end{aligned} \tag{112}$$

Here the following notations are used: $\mathcal{S}(\nu)\overline{\rho}\mathcal{S}^T(\nu)$, $\mathcal{C}(\vartheta)\overline{\mu}\mathcal{C}^T(\vartheta)$, $D(r)\overline{\gamma}D^T(r)$ are the intensities of the non-Gaussian white noises $\mathcal{S}(\nu)V_P(t)$, $\mathcal{C}(\vartheta)V_{P2}(t)$, $D(r)V_{P1}(t)$; $\overline{\rho} = \mathrm{E}\,\mathrm{diag}\,\rho(X_t, t)$, $\overline{\gamma} = \mathrm{E}\,\mathrm{diag}\,\gamma(Y_t, t)$, $\overline{\mu} = \mathrm{E}\,\mathrm{diag}\,\mu(\zeta_t, t)$ are the mathematical expectations of the diagonal intensity matrices of the Poisson streams in the SOTES, SOTES-O and SOTES-N; $v_W$, $v_{W1}$, $v_{W2}$ are the intensities of the Gaussian white noises $V_W$, $V_{W1}$, $V_{W2}$. Note the difference between the intensity of a Poisson stream and the intensity of a white noise.

In the case of Eqs. (106)–(109) with Gaussian parametric noises we use the following Eqs:

$$
\dot{X}\_t = L\_X + (\tilde{a}X\_t + \tilde{a}\_1 G\_t) V\_\Omega,\tag{113}
$$

$$
\dot{\mathbf{G}}\_{t} = L\_{\rm G} + \left(\tilde{q}\mathbf{X}\_{t} + \tilde{b}\_{2}\zeta\_{t}\right)\mathbf{V}\_{\Omega},\tag{114}
$$

$$
\dot{\zeta}\_t = L\_\zeta + \ddot{c}\_2 \zeta\_t V\_\Omega,\tag{115}
$$

$$
\dot{T}\_{st} = L\_T + \left(\tilde{b}X\_t + \tilde{b}\_1 T\_{st}\right) V\_{\Omega},\tag{116}
$$

where the tilde denotes the parametric-noise coefficients.

For additive noises $V_\Omega$, presenting Eqs. (113)–(116) for $Z_t = [X_t\; G_t\; \zeta_t\; T_{st}]^T$ in the form of the method of statistical linearization (MSL):

$$\dot{Z}_t = B_0(m_t^z, K_t^z, t) + B_1(m_t^z, K_t^z, t)Z_t + B'(m_t^z, K_t^z, t)V_t^\Omega,\tag{117}$$

we get the following set of interconnected Eqs for $m_t^z$, $K_t^z$:

$$
\dot{m}_t^z = B_0(m_t^z, K_t^z, t) + B_1(m_t^z, K_t^z, t)m_t^z, \quad m^z(t_0) = m_0^z,\tag{118}
$$


$$\dot{K}_t^z = B_1(m_t^z, K_t^z, t) K_t^z + K_t^z B_1(m_t^z, K_t^z, t)^T + B'(m_t^z, K_t^z, t) v_t^\Omega B'(m_t^z, K_t^z, t)^T, \quad K^z(t_0) = K_0^z. \tag{119}$$

The Eq for $K^z(t_1, t_2)$ is given by (49).
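For constant coefficients $B_0$, $B_1$, $B'$ the moment Eqs. (118) and (119) reduce to a linear ODE for the mean and a Lyapunov-type ODE for the covariance, and can be integrated by any ODE scheme. A minimal sketch, with all matrices chosen purely for illustration:

```python
import numpy as np

# Illustrative constant coefficients for Eq. (117): dZ/dt = B0 + B1 Z + B' V
B0 = np.array([0.5, 0.0])
B1 = np.array([[-1.0, 0.2],
               [0.0, -0.5]])
Bp = 0.3 * np.eye(2)          # B'
v_omega = np.eye(2)           # intensity matrix of the white noise V

def moments(m0, K0, T=5.0, dt=1e-3):
    """Euler integration of the mean/covariance Eqs. (118)-(119)."""
    m, K = m0.copy(), K0.copy()
    for _ in range(int(T / dt)):
        m = m + (B0 + B1 @ m) * dt
        K = K + (B1 @ K + K @ B1.T + Bp @ v_omega @ Bp.T) * dt
    return m, K

m, K = moments(np.zeros(2), np.zeros((2, 2)))
print(np.round(m, 3), np.round(np.diag(K), 3))
```

At the steady state $m$ solves $B_0 + B_1 m = 0$ and $K$ solves the algebraic Lyapunov equation $B_1 K + K B_1^T + B' v_\Omega B'^T = 0$, which gives a quick consistency check of the integration.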

## **5. Basic SOTES conditionally optimal filtering and forecasting algorithms**

**Proposal 5.1.** *Let the SOTES, SOTES-O and SOTES-N be linear, satisfy* Eqs. (102)–(104) *and admit a linear filter of the form*:

$$\dot{\hat{X}}\_{t} = \left(\overline{a}\hat{X}\_{t} + a\_{1}\mathbf{G}\_{t} + a\_{0}\right) + \beta\_{t} \left[\dot{\mathbf{G}}\_{t} - \left(q\_{t}\hat{X}\_{t} + b\_{2}\zeta\_{t}\right)\right],\tag{120}$$

where the coefficient $q_t$ in (120) does not depend upon $T_{st}$. Then the Eqs of the optimal and conditionally optimal filters coincide with the Kalman-Bucy filter and may be presented in the following form:

$$\dot{\hat{X}}\_{t} = \overline{a}\hat{X}\_{t} + a\_{1}\mathbf{G}\_{t} + a\_{0} + \mathbf{R}\_{t}\mathbf{q}\_{t}^{T}\mathbf{v}\_{\mathcal{g}}^{-1}\left[\mathbf{Z} - \left(\mathbf{q}\_{t}\hat{\mathbf{X}}\_{t} + \mathbf{b}\_{2}\boldsymbol{\zeta}\_{t}\right)\right] \quad \left(\mathbf{Z}\_{t} = \dot{\mathbf{G}}\_{t}\right),\tag{121}$$

$$
\dot{R}_t = \overline{a}R_t + R_t\overline{a}^T + v_x - R_t q_t^T v_g^{-1} q_t R_t. \tag{122}
$$
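The filter (121) and the Riccati Eq. (122) can be checked in simulation. The sketch below is a scalar Euler discretization; the coefficients, the noise intensities and the simplification $a_1 = b_2 = 0$ are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

a_bar, a0 = -0.6, 0.4      # state drift (a1 = 0, b2 = 0 assumed)
q = 1.0                    # measurement coefficient
v_x, v_g = 0.04, 0.01      # state / measurement white-noise intensities

def kalman_bucy(T=20.0, dt=1e-3):
    """Euler discretization of the Kalman-Bucy filter (121)-(122)."""
    X, Xh, R = 1.0, 0.0, 1.0
    errs = []
    for _ in range(int(T / dt)):
        X += (a_bar * X + a0) * dt + np.sqrt(v_x * dt) * rng.normal()
        Z = q * X + np.sqrt(v_g / dt) * rng.normal()   # Z = G-dot with white noise
        K = R * q / v_g                                # gain R q^T v_g^{-1}
        Xh += (a_bar * Xh + a0) * dt + K * (Z - q * Xh) * dt
        R += (2 * a_bar * R + v_x - q ** 2 * R ** 2 / v_g) * dt
        errs.append(X - Xh)
    return np.array(errs), R

errs, R = kalman_bucy()
print(round(R, 4))
```

The printed $R$ should approach the positive root of the algebraic Riccati equation $2aR + v_x - q^2 R^2/v_g = 0$, and the sample variance of the tail of `errs` should be close to it.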

**Proposal 5.2.** *When the measuring coefficient $q_t$ depends upon $\lambda_t = \lambda(T_{st}, t)$ and admits the statistical linearization*

$$
\lambda(T_{st}, t) \approx \lambda_0(m_{st}, K_{st}, t) + \lambda_1(m_{st}, K_{st}, t)T_{st}^0,
$$

$$
\lambda_0(m_{st}, K_{st}, t) = M[\lambda(T_{st}, t)], \quad \lambda_1(m_{st}, K_{st}, t) \approx 0, \tag{123}
$$

the sub- and conditionally optimal filter Eqs are as follows:

$$\dot{\hat{X}}\_{t} = \overline{a}\hat{X}\_{t} + a\_{1}\mathbf{G}\_{t} + a\_{0} + R\_{t}q\_{0}(m\_{\text{st}}, \mathbf{K}\_{\text{st}})^{T}v\_{\text{g}}^{-1}\{Z\_{t} - \left[q\_{0}(m\_{\text{st}}, \mathbf{K}\_{\text{st}})\hat{\mathbf{X}}\_{t} + b\_{2}\zeta\_{t}\right]\},\tag{124}$$

$$\dot{R}_t = \overline{a}R_t + R_t\overline{a}^T + v_x - R_t q_0(m_{st}, K_{st})^T v_g^{-1} q_0(m_{st}, K_{st}) R_t. \tag{125}$$

R e m a r k 5.1. The filtering Eqs defined by Proposals 5.1 and 5.2 give the mean square optimal unbiased estimation algorithms for $X_t$ in the OTES under the conditions of internal noises of the measuring devices and external noise from the OTES-N acting on the measuring part of the SOTES-O.

R e m a r k 5.2. The accuracy of the estimate $\hat{X}_t$ depends not only upon the noise $\zeta_t$ influencing the measuring signal but also on the rules and technical-economical quality criteria of the SOTES and on the state of the resources $T_{st}$ of the OPB for the SOTES-O.

Using [9–11], let us consider a more general SOTES than Eqs. (113)–(116), with the system vector $\overline{X}_t = [X_t\; G_t\; \zeta_t\; T_{st}]^T$ and the observation vector $\overline{Y}_t = [Y_1\; Y_2\; Y_3\; Y_4]^T$ defined by the Eqs:

$$\dot{\overline{X}}\_{t} = \left(a\overline{Y}\_{t} + a\_{1}\overline{X}\_{t} + a\_{0}\right) + \left(c\_{10} + \sum\_{r=1}^{n\_{\mathcal{I}}} c\_{1r}\overline{Y}\_{r} + \sum\_{r=1}^{n\_{x}} c\_{1,n\_{\mathcal{I}}+r}\overline{X}\_{r}\right)V,\tag{126}$$

$$Z\_t = \dot{\overline{Y}}\_t = \left(b\overline{Y}\_t + b\_1\overline{X}\_t + b\_0\right) + \left(c\_{20} + \sum\_{r=1}^{n\_\gamma} c\_{2r}\overline{Y}\_r + \sum\_{r=1}^{n\_x} c\_{2,n\_\gamma + r}\overline{X}\_r\right)V\_1. \tag{127}$$

Here $a$, $a_0$, $a_1$, $b$, $b_0$, $b_1$ and $c_{ij}$ ($i = 1, 2$; $j = \overline{0, n_y + n_x}$) are vector-matrix functions of $t$ that do not depend on $\overline{X}_t = [X_1 \dots X_{n_x}]^T$ and $\overline{Y}_t = [Y_1 \dots Y_{n_y}]^T$. Then the corresponding algorithm of the conditionally optimal filter (COF) is defined by [9–11]:

$$\dot{\hat{X}}\_t = \left(a\overline{Y}\_t + a\_1\hat{\overline{X}}\_t + a\_0\right) + \beta\_t \left[\dot{Z}\_t - \left(b\overline{Y}\_t + b\_1\hat{\overline{X}}\_t + b\_0\right)\right].\tag{128}$$

To obtain the Eq for $\beta_t$ it is necessary to have the Eqs for the mathematical expectation $m_t$ and the covariance matrix $K_t$ of the random vector $Q_t = [X_1 \dots X_{n_x}\; Y_1 \dots Y_{n_y}]^T$ and for the error covariance matrix $R_t$ of $\tilde{X}_t = \hat{\overline{X}}_t - \overline{X}_t$. Using the Eqs

$$
\dot{m}\_t = \mathfrak{a}m\_t + \mathfrak{a}\_0,\tag{129}
$$

$$\dot{K}\_t = aK\_t + K\_t a^T + c\_0 \nu c\_0^T + \sum\_{r=1}^{n\_\mathcal{I} + n\_x} (c\_0 \nu c\_r^T + c\_r \nu c\_0^T) m\_r + \sum\_{r,s=1}^{n\_\mathcal{I} + n\_x} c\_r \nu c\_s^T (m\_r m\_s + k\_{rs}) \tag{130}$$

$$a = \begin{bmatrix} b & b_1 \\ a & a_1 \end{bmatrix}, \quad a_0 = \begin{bmatrix} b_0 \\ a_0 \end{bmatrix}, \quad c_r = \begin{bmatrix} c_{2r} \\ c_{1r} \end{bmatrix} \quad (r = \overline{0, n_y + n_x}), \tag{131}$$

we have the following Eq for the error covariance matrix

$$\begin{aligned} \dot{R}_t &= a_1 R_t + R_t a_1^T - \left[ R_t b_1^T + \left( c_{10} + \sum_{r=1}^{n_y+n_x} c_{1r} m_r \right) v \left( c_{20}^T + \sum_{r=1}^{n_y+n_x} c_{2r}^T m_r \right) + \sum_{r,s=1}^{n_y+n_x} c_{1r} v c_{2s}^T k_{rs} \right] \kappa_{11}^{-1} \\ &\quad \times \left[ R_t b_1^T + \left( c_{10} + \sum_{r=1}^{n_y+n_x} c_{1r} m_r \right) v \left( c_{20}^T + \sum_{r=1}^{n_y+n_x} c_{2r}^T m_r \right) + \sum_{r,s=1}^{n_y+n_x} c_{1r} v c_{2s}^T k_{rs} \right]^T \\ &\quad + \left( c_{10} + \sum_{r=1}^{n_y+n_x} c_{1r} m_r \right) v \left( c_{10}^T + \sum_{r=1}^{n_y+n_x} c_{1r}^T m_r \right) + \sum_{r,s=1}^{n_y+n_x} c_{1r} v c_{1s}^T k_{rs}. \end{aligned} \tag{132}$$

Here

$$\kappa_{11} = \left(c_{20} + \sum_{r=1}^{n_y + n_x} c_{2r} m_r \right) v \left(c_{20}^T + \sum_{r=1}^{n_y + n_x} c_{2r}^T m_r \right) + \sum_{r,s=1}^{n_y + n_x} c_{2r} v c_{2s}^T k_{rs}, \tag{133}$$

where $m_t = [m_r]$ $(r = \overline{1, n_y + n_x})$, $K_t = [k_{rs}]$ $(r, s = \overline{1, n_y + n_x})$; $V$ is the white non-Gaussian noise of intensity $v$. The coefficient $\beta_t$ in Eq. (128) is defined by the formula

$$\beta_t = \left\{ R_t b_1^T + \left( c_{10} + \sum_{r=1}^{n_y + n_x} c_{1r} m_r \right) v \left( c_{20}^T + \sum_{r=1}^{n_y + n_x} c_{2r}^T m_r \right) + \sum_{r,s=1}^{n_y + n_x} c_{1r} v c_{2s}^T k_{rs} \right\} \kappa_{11}^{-1}.\tag{134}$$

R e m a r k 5.3. In case when observations do not influence the state vector we have the following notations:


$$\begin{aligned} a &= 0, \quad b = 0, \quad c_{1r} = 0, \quad c_{2r} = 0 \quad (r = \overline{1,4}), \quad n_x = 4, \quad n_y = 4; \\ a_1 &= \begin{bmatrix} \overline{a} & a_1 & 0 & 0 \\ q & 0 & b_2 & 0 \\ 0 & 0 & c_2 & 0 \\ 0 & 0 & 0 & \overline{b}_1 \end{bmatrix}, \quad a_0 = \begin{bmatrix} a_0 \\ 0 \\ c_0 \\ b_0 \end{bmatrix}, \quad c_{10} = \begin{bmatrix} \chi_x \\ \chi_g \\ \chi_\zeta \\ \chi_{st} \end{bmatrix}, \\ c_{1,5} &= \begin{bmatrix} \tilde{a} & \tilde{a}_1 & 0 & 0 \end{bmatrix}, \quad c_{1,6} = \begin{bmatrix} \tilde{q} & 0 & \tilde{b}_2 & 0 \end{bmatrix}, \quad c_{1,7} = \begin{bmatrix} 0 & 0 & \tilde{c}_2 & 0 \end{bmatrix}, \quad c_{1,8} = \begin{bmatrix} \tilde{b} & 0 & 0 & \tilde{b}_1 \end{bmatrix}. \end{aligned} \tag{135}$$

**Proposal 5.3.** *Let the SOTES be described by* Eqs. (126) and (127)*. Then the COF algorithm is defined by* Eqs. (128)–(134)*.*

The theory of conditionally optimal forecasting [9–11], applied to the Eqs:

$$\dot{\overline{X}}_t = \left(a_1\overline{X}_t + a_0\right) + \left(c_{10} + \sum_{r=1}^{n_x} c_{1,n_y+r}\overline{X}_r\right)V_1,\tag{136}$$

$$Z_t = \dot{\overline{Y}}_t = \left(b\overline{Y}_t + b_1\overline{X}_t + b_0\right) + \left(c_{20} + \sum_{r=1}^{n_y} c_{2r}\overline{Y}_r + \sum_{r=1}^{n_x} c_{2,n_y+r}\overline{X}_r\right)V_2,\tag{137}$$

where $\Delta$ is the forecasting time and $V_1$ and $V_2$ are independent non-Gaussian white noises with matrix intensities $v_1$ and $v_2$, gives the following Eqs for the conditionally optimal forecaster (COFc):

$$\dot{\hat{\mathbf{X}}}\_{t} = \left[a\_{1}(t+\Delta)\hat{\mathbf{X}}\_{t} + a\_{0}(t+\Delta)\right] + \beta\_{t}\left[\mathbf{Z}\_{t} - \left(b\overline{\mathbf{Y}}\_{t} + b\_{1}\varepsilon\_{t}^{-1}\hat{\overline{\mathbf{X}}}\_{t} + b\_{0} - b\_{1}\varepsilon\_{t}^{-1}h\_{t}\right)\right].\tag{138}$$

where the following notations are used: $u(s, t)$ is the fundamental solution of the Eq $du/ds = a_1(s)u$ with the initial condition $u(t, t) = I$, $\varepsilon_t = u(t+\Delta, t)$,

$$
\beta_t = \varepsilon_t (K_x - K_{\hat{x}x}) b_1^T \kappa_{11}^{-1},\tag{139}
$$

$$h_t = h(t) = \int_t^{t+\Delta} u(t+\Delta, \tau) a_0(\tau) d\tau, \quad u(t+\Delta, t) = \varepsilon_t,\tag{140}$$

$$
m_x(t+\Delta) = \varepsilon_t m_x(t) + h_t. \tag{141}
$$

R e m a r k 5.4. In practice the COFc may be realized as a series connection of the COF, an amplifier with gain $\varepsilon_t = u(t+\Delta, t)$ and a summator adding $h_t = h(t)$:

$$
\hat{\overline{X}}_{t+\Delta} = \varepsilon_t \hat{\overline{X}}_t + h_t,\tag{142}
$$

where $\hat{\overline{X}}_t$ is the COF output, i.e. the COF estimate of the current state $\overline{X}_t$. Eq. (138) may also be presented in the other form:

$$\dot{\hat{\overline{X}}}_t = a_1(t+\Delta)\left(\varepsilon_t\hat{\overline{X}}_t + h_t\right) + a_0(t+\Delta) + \varepsilon_t\beta_t\left[Z_t - \left(b\overline{Y}_t + b_1\hat{\overline{X}}_t + b_0\right)\right].\tag{143}$$
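For a constant matrix $a_1$ the fundamental solution is $u(s, t) = \exp[a_1(s - t)]$, so the gain $\varepsilon_t$ and the shift $h_t$ of Eqs. (140)–(142) can be evaluated directly. A sketch with illustrative constant $a_1$, $a_0$ (the truncated-series `mat_exp` is an assumption made only to keep the example self-contained):

```python
import numpy as np

def mat_exp(A, terms=40):
    """Matrix exponential via truncated Taylor series (adequate for small ||A||)."""
    E, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        E = E + term
    return E

a1 = np.array([[-0.5, 0.1],
               [0.0, -1.0]])       # illustrative constant drift matrix
a0 = np.array([0.2, 0.0])
Delta = 2.0                        # forecasting time

eps_t = mat_exp(a1 * Delta)        # gain eps_t = u(t + Delta, t), Eq. (140)

# h_t = int_t^{t+Delta} u(t + Delta, tau) a0 dtau, left-rectangle rule
dtau = 1e-3
h_t = sum(mat_exp(a1 * (Delta - s)) @ a0 * dtau
          for s in np.arange(0.0, Delta, dtau))

x_hat = np.array([1.0, 0.5])       # COF output at time t (assumed value)
x_forecast = eps_t @ x_hat + h_t   # Eq. (142)
print(np.round(x_forecast, 3))
```

This realizes Remark 5.4 literally: the forecaster is the filter output passed through the amplifier $\varepsilon_t$ plus the shift $h_t$.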

Accuracy of COFc is defined by the following Eq:

$$\begin{aligned} \dot{R}_t &= a_1(t+\Delta)R_t + R_t a_1(t+\Delta)^T - \beta_t \left[ \left(c_{20} + \sum_{r=1}^{n_y+n_x} c_{2r} m_r\right) v_2 \left(c_{20}^T + \sum_{r=1}^{n_y+n_x} c_{2r}^T m_r\right) + \sum_{r,s=1}^{n_y+n_x} c_{2r} v_2 c_{2s}^T k_{rs} \right] \beta_t^T \\ &\quad + \left[c_{10}(t+\Delta) + \sum_{r=n_y+1}^{n_y+n_x} c_{1r}(t+\Delta) m_r(t+\Delta)\right] v_1 \left[c_{10}(t+\Delta)^T + \sum_{r=n_y+1}^{n_y+n_x} c_{1r}(t+\Delta)^T m_r(t+\Delta)\right] \\ &\quad + \sum_{r,s=n_y+1}^{n_y+n_x} c_{1r}(t+\Delta) v_1 c_{1s}(t+\Delta)^T k_{rs}. \end{aligned} \tag{144}$$

**Proposal 5.5.** *Under the conditions of Proposal 5.3 the COFc is described by* Eqs. (138)–(141) and (144)*, or by* Eqs. (142)–(144)*.*

Let us consider Eqs. (94)–(96) under the conditions of a possible subdivision of the measuring system and the OPB in the SOTES-O such that $q(X_t, t) = q(t)X_t$ and the noise $\zeta_t$ is additive. In this case, for the SOTES, SOTES-O and SOTES-N, Eqs. (94)–(96) may be presented in the following form:

$$\dot{X}_t = \varphi(X_t, t) + \mathcal{S}(\nu)\rho(X_t, t) + \chi_x V_\Omega,\tag{145}$$

$$
\dot{G}_t = q(T_{st})X_t + b_2 \zeta_t + \chi_g V_\Omega,\tag{146}
$$

$$\dot{\zeta}_t = \varphi_2(\zeta_t, t) + \mathcal{C}(\vartheta)\mu(\zeta_t, t) + \chi_\zeta V_\Omega,\tag{147}$$

$$
\dot{T}_{st} = \varphi_1(T_{st}, t) + D(r)\gamma(T_{st}, t) + \chi_{st} V_\Omega.\tag{148}
$$

Under the condition of statistical linearization (with $\varphi(X_t, t) \approx \rho_{X0}(m_x, K_x, t) + \rho_{X1}(m_x, K_x, t)X_t^0$) we make the following replacements:

$$\mathcal{S}(\boldsymbol{\nu})\rho(\boldsymbol{X}\_t, t) \approx \mathcal{S}(\boldsymbol{\nu})\rho\_0(\boldsymbol{m}\_\mathbf{x}, \boldsymbol{K}\_\mathbf{x}, t) + A\_{\rho1}(\boldsymbol{m}\_\mathbf{x}, \boldsymbol{K}\_\mathbf{x}, t)\boldsymbol{X}\_t^0,\tag{149}$$

$$\mathcal{C}(\vartheta)\mu(\zeta_t, t) \approx \mathcal{C}(\vartheta)\mu_0(m_\zeta, K_\zeta, t) + A_{\mu 1}(m_\zeta, K_\zeta, t)\zeta_t^0,\tag{150}$$

$$
\varphi_2(\zeta_t, t) \approx \rho_{\zeta 0} + \rho_{\zeta 1} \zeta_t^0. \tag{151}
$$

So we get the following statistically linearized expressions:

$$\varphi(X_t, t) + \mathcal{S}(\nu)\rho(X_t, t) \approx \overline{\rho}_{X0} + \overline{\rho}_{X1}X_t, \quad \varphi_2(\zeta_t, t) + \mathcal{C}(\vartheta)\mu(\zeta_t, t) \approx \overline{\rho}_{\zeta 0} + \overline{\rho}_{\zeta 1}\zeta_t,\tag{152}$$

where

$$\begin{aligned} \overline{\rho}_{X0} &= \rho_{X0}(m_x, K_x, t) - \left[\rho_{X1}(m_x, K_x, t) + A_{\rho 1}(m_x, K_x, t)\right] m_x + \mathcal{S}(\nu)\rho_0(m_x, K_x, t), \\ \overline{\rho}_{X1} &= \rho_{X1}(m_x, K_x, t) + A_{\rho 1}(m_x, K_x, t), \\ \overline{\rho}_{\zeta 0} &= \rho_{\zeta 0}(m_\zeta, K_\zeta, t) - \left[\rho_{\zeta 1}(m_\zeta, K_\zeta, t) + A_{\mu 1}(m_\zeta, K_\zeta, t)\right] m_\zeta + \mathcal{C}(\vartheta)\mu_0(m_\zeta, K_\zeta, t), \\ \overline{\rho}_{\zeta 1} &= \rho_{\zeta 1}(m_\zeta, K_\zeta, t) + A_{\mu 1}(m_\zeta, K_\zeta, t). \end{aligned} \tag{153}$$

**Proposal 5.6.** *For* Eqs. (145)–(148)*, under the statistical linearization conditions* Eqs. (152) and (153)*, when $q_t$ does not depend upon $T_{st}$, the suboptimal filtering algorithm is defined by the Eqs:*


$$\dot{\hat{X}}\_t = \overline{\rho}\_{\text{X1}} \hat{X}\_t + \overline{\rho}\_{\text{X0}} + R\_t q\_t^T v\_g^{-1} \left[ Z\_t - \left( q\_t \hat{X}\_t + b\_2 \zeta\_t \right) \right], \tag{154}$$

$$\dot{R}\_t = \overline{\rho}\_{X1} R\_t + R\_t \overline{\rho}\_{X1}^T + \upsilon\_x - R\_t q\_t^T \upsilon\_g^{-1} q\_t R\_t. \tag{155}$$

**Proposal 5.7.** *Under the conditions $\lambda_t = \lambda(T_{st}, t)$, the Eqs for the SOTES, SOTES-O and SOTES-N may be presented in the form:*

$$
\dot{X}\_t = \overline{\rho}\_{X1} X\_t + \overline{\rho}\_{X0} + \chi\_x V\_{\Omega}, \tag{156}
$$

$$
\dot{\mathbf{G}}\_t = q\mathbf{X}\_t + b\_2\zeta\_t + \chi\_\mathbf{g}V\_\Omega,\tag{157}
$$

$$
\dot{\zeta}_t = \overline{\rho}_{\zeta 1} \zeta_t + \overline{\rho}_{\zeta 0} + \chi_\zeta V_\Omega, \tag{158}
$$

$$
\dot{T}_{st} = \overline{\rho}_{st1} T_{st} + \overline{\rho}_{st0} + \chi_{st} V_\Omega.\tag{159}
$$

*The suboptimal algorithm under the condition*

$$
\lambda(T\_{\rm st}, t) \approx \lambda\_0(m\_{\rm st}, K\_{\rm st}) \tag{160}
$$

is as follows:

$$\dot{\hat{X}}_t = \overline{\rho}_{X1} \hat{X}_t + \overline{\rho}_{X0} + R_t \lambda_0^T v_g^{-1} \left\{ Z_t - \left[ \lambda_0(m_{st}, K_{st}) \hat{X}_t + b_2 \zeta_t \right] \right\}, \tag{161}$$

$$\dot{R}_t = \overline{\rho}_{X1} R_t + R_t \overline{\rho}_{X1}^T + v_x - R_t \lambda_0^T v_g^{-1} \lambda_0 R_t. \tag{162}$$

## **6. Peculiarities of new SOTES generations**

As mentioned in the Introduction, information about the nomenclature and character of the final production and its components arises at the lower levels of the hierarchical SOTES subsystems.

Analogously, in personnel LC subsystems the final production entities are categories of personnel with typical works and separate specialists with common works. In [1, 2] a methodology of personnel structuration according to categories and typical process graphs providing the necessary professional level and health is presented. An analogous approach to structuration may be applied to elements of macroscopic subsystems at various SOTES levels. This makes it possible to design unified modeling and filtering methods for the SOTES, SOTES-O and SOTES-N and then to implement optimal processes within a unified budget. So we get unified methodological possibilities for horizontally and vertically integrated SOTES.

In the case of Eqs. (106)–(109), the LC subsystems for an aggregate of given personnel categories are defined by the Eqs

$$
\dot{X}\_P = \overline{a}X\_P + \mathfrak{a}\_{1P}G\_P + \mathfrak{a}\_{0P} + \chi\_P V\_{\Omega P},\tag{163}
$$

$$
\dot{\mathbf{G}}\_P = q\_P(T\_{sP})\mathbf{X}\_P + b\_{2P}\zeta\_P + \chi\_{\mathbf{g}P}V\_{\Omega P},\tag{164}
$$

$$
\dot{\zeta}_P = c_{2P}\zeta_P + c_{0P} + \chi_{\zeta P}V_{\Omega P}, \tag{165}
$$

$$
\dot{T}\_{sP} = b\_P X\_P + b\_{1P} T\_{sP} + b\_{0P} + \chi\_{s1P} V\_{\Omega P},\tag{166}
$$

where the index $P$ denotes variables and parameters of the personnel LC subsystem. According to Proposal 5.1 we get the following filtering Eqs:

$$\dot{\hat{X}}_P = \overline{a}_P \hat{X}_P + a_{1P} G_P + a_{0P} + R_P q_P^T v_{gP}^{-1} \left[ Z_P - \left( q_P \hat{X}_P + b_{2P} \zeta_P \right) \right],\tag{167}$$

$$
\dot{R}\_P = \overline{a}\_P R\_P + R\_P \overline{a}\_P^T + \nu\_P - R\_P q\_P^T v\_{gP}^{-1} q\_P R\_P. \tag{168}
$$

Let us consider linear synergetical connection between *X* and *XP*:

$$X\_P = \kappa\_1 X + \kappa\_0. \tag{169}$$

Here $\kappa_1$ and $\kappa_0$ are known $(n_P \times n_x)$ and $(n_P \times 1)$ synergetical matrices. Putting (169) into Eqs. (163) and (164) we get the Eqs for the personnel subsystem and its observation expressed in terms of $X$:

$$
\dot{X}\_P = \overline{a}\_P(\kappa\_1 X + \kappa\_0) + a\_{1P} G\_P + a\_{0P} + \chi\_P V\_{\Omega P},\tag{170}
$$

$$
\dot{\mathbf{G}}\_P = q\_P(T\_{sP})(\kappa\_1 \mathbf{X} + \kappa\_0) + b\_{2P}\zeta\_P + \chi\_{\mathbf{g}P}V\_{\Omega P}.\tag{171}
$$

The corresponding Eqs with combined right-hand sides for the SOTES vector $X$ are:

$$\dot{X}_K = \begin{cases} \overline{a}X + a_1 G_t + a_0 + \chi_x V_\Omega & \left(K = \overline{1, n_x}\right), \\ \overline{a}_P(\kappa_1 X + \kappa_0) + a_{1P} G_P(X) + a_{0P} + \chi_P V_{\Omega P} & \left(K = \overline{n_x+1, n_x + n_P}\right). \end{cases} \tag{172}$$

Analogously using Proposal 5.1 we get the Kalman-Bucy filter Eqs:

$$\dot{\hat{X}}_K = \begin{cases} \overline{a}\hat{X} + a_1 G_t + a_0 + R\lambda^T v_g^{-1}\left[Z - \left(\lambda\hat{X} + b_2\zeta\right)\right] & \left(K = \overline{1, n_x}\right), \\ \overline{a}_P\left(\kappa_1\hat{X} + \kappa_0\right) + a_{1P}G_P(X) + a_{0P} + R_P\lambda_P^T v_{gP}^{-1}\left\{Z_P - \left[\lambda_P\left(\kappa_1\hat{X} + \kappa_0\right) + b_{2P}\zeta_P\right]\right\} & \left(K = \overline{n_x+1, n_x + n_P}\right), \end{cases} \tag{173}$$

$$
\dot{R} = \overline{a}R + R\overline{a}^T + v_x - R\lambda^T v_g^{-1} \lambda R, \tag{174}
$$

$$
\dot{R}\_P = \overline{a}\_P R\_P + R\_P \overline{a}\_P^T + \nu\_P - R\_P \lambda\_P^T \nu\_{\text{gP}}^{-1} \lambda\_P R\_P. \tag{175}
$$

Eqs. (172)–(175) define the SOTES filter including the subsystems accompanying the production LC and the personnel taking part in the production LC.

Remark 6.1. Analogously we get the Eqs for the SOTES filter including the financial support subsystem and other subsystems.

Remark 6.2. Eqs. (174) and (175) are not interconnected and may be solved a priori.

## **7. Example**

Let us consider a simple example illustrating the modeling of the influence of the SOTES-N noise on the rules and functional indexes of the subsystems accompanying the production LC, together with its filtering and forecasting. The system includes stocks of spare parts (SP), an exploitation organization with a park of MP, and a repair organization (**Figure 1**).

At the initial time moment the necessary supplement provides the required level of effective exploitation over the time period $[0, T]$. Let us consider processes in the ASS connected with one type of composite parts (CP) in number $N_T$. During park exploitation CP failures appear. Failed CP are either repaired and returned into exploitation or written off. If the level of park readiness in exploitation falls below the critical one, the repaired MP are taken from the stocks.

In the graph (**Figure 1**) the following states are used: (1) being in stock, in number $X_1$; (2) exploitation, in number $X_2$; (3) repair, in number $X_3$; (4) writing off, in number $X_4$. Using **Figure 2** we have $n_x = 4$; the transitions are: $1 \to 2$ (the Poisson stream $\rho_{12}X_1$); $2 \to 3$ ($\rho_{23}X_2$); $2 \to 4$ ($\rho_{24}X_2$); $3 \to 2$ ($\rho_{32}X_3$); the number of transitions is $n_p = 4$. As the index of efficiency we use the following coefficient of technical readiness [1, 2]:

$$K\_{tr}(T) = \frac{1}{T} \int\_{0}^{T} \frac{X\_{2}(\tau)d\tau}{N\_{T}} = \frac{1}{T N\_{T}} \int\_{0}^{T} X\_{2}(\tau)d\tau,\tag{176}$$

where

$$N\_T = X\_2 + X\_3 + X\_4 \tag{177}$$

being the constant number of CP of the park in exploitation.

Note that the influence of the SOTES-N on the SOTES and SOTES-O is expressed in the following way: the system noise $\zeta_t$, as a factor of report documentation distortion, leads to a fictitiously underestimated $K_{TR}(t)$. In the case when the relation

$$
\overline{K}\_{\rm TR}(t) \ge \overline{K}\_{\rm TR}^\*(t) \tag{178}
$$

is violated ($K^*_{TR}(t)$ being the critical level of the floating park), the stocks supply the necessary CP amount. So it becomes possible to exclude a defined CP amount from turnover.

Finally, let us consider the filtering and forecasting algorithms of the ASS processes for the exposure of the noise $\zeta_t$.

**Solution.**

1. To obtain the Eq for $K_{TR}(t)$ we set $X_{TR}(t) = K_{TR}(t)$ and take into account Eqs. (106)–(109). So we have the following scalar Eqs:

$$\begin{aligned} \dot{X}\_{\text{TR}}(t) &= \frac{1}{T N\_T} X\_2(t), \\ \dot{X}\_1 &= -\rho\_{12} X\_1, \quad \dot{X}\_2 = \rho\_{12} X\_1 - (\rho\_{23} + \rho\_{24}) X\_2, \quad \dot{X}\_3 = \rho\_{23} X\_2 - \rho\_{32} X\_3, \quad \dot{X}\_4 = \rho\_{24} X\_2. \end{aligned} \tag{179}$$

In our case $a_1 = a_0 = 0$, $V_X = 0$, and Eqs. (179) may be presented in the vector form

$$
\dot{X} = aX \tag{180}
$$

where

$$a = \begin{bmatrix} \mathbf{0} & \mathbf{0} & (T\mathbf{N}\_T)^{-1} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\rho\_{12} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \rho\_{12} & -(\rho\_{23} + \rho\_{24}) & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \rho\_{23} & -\rho\_{32} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \rho\_{24} & \mathbf{0} & \mathbf{0} \end{bmatrix}. \tag{181}$$
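The deterministic part of the example, Eq. (180) with the matrix (181) and the readiness coefficient (176), can be integrated directly. The transition intensities, horizon and park size below are illustrative assumptions, not values from the text:

```python
import numpy as np

# Illustrative intensities for the CP state graph of Figure 1
rho12, rho23, rho24, rho32 = 0.05, 0.02, 0.005, 0.1
T, NT = 100.0, 100.0          # exploitation period and park size
dt = 0.01

# state vector [X_TR, X1, X2, X3, X4]; matrix a of Eq. (181)
a = np.array([
    [0.0, 0.0, 1.0 / (T * NT), 0.0, 0.0],
    [0.0, -rho12, 0.0, 0.0, 0.0],
    [0.0, rho12, -(rho23 + rho24), 0.0, 0.0],
    [0.0, 0.0, rho23, -rho32, 0.0],
    [0.0, 0.0, rho24, 0.0, 0.0],
])

X = np.array([0.0, 20.0, 100.0, 0.0, 0.0])   # 20 CP in stock, 100 in exploitation
for _ in range(int(T / dt)):
    X = X + a @ X * dt                        # Euler step of X-dot = aX, Eq. (180)

K_tr = X[0]                                   # X_TR(T) = K_tr(T) by Eq. (176)
print(round(float(K_tr), 3))
```

The first state accumulates $\int X_2\,d\tau /(T N_T)$, so at $t = T$ it equals the technical readiness coefficient $K_{tr}(T)$ of Eq. (176).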

In practice the report documentation is the complete set of documents containing SP demands from the stock and SP acknowledgement documents. So the noise $\zeta_t$ acts only if both the delivery and acquisition sides take part in its realization. This is the reason to call this noise a system noise carried out by a group of persons.

2. To set up the Eqs for the electronic control system we use an Eq of the type (107):

$$
\dot{G} = \lambda X + \zeta + \chi_g V_\Omega, \tag{182}
$$

where $V_\Omega = [V_{TR}\; V_1\; V_2\; V_3\; V_4]^T$ is the vector of noises with intensity $v_g$; $\lambda = \mathrm{diag}[\lambda_{TR}\; \lambda_1\; \lambda_2\; \lambda_3\; \lambda_4]$ are the coefficients of the measuring block. In scalar form Eq. (182) may be presented as

$$\begin{aligned} \dot{\mathbf{G}}\_{\text{TR}} &= \lambda\_{\text{TR}} \mathbf{X}\_{\text{TR}} + \boldsymbol{\zeta} + \mathbf{V}\_{\text{TR}}, & \dot{\mathbf{G}}\_{1} &= \lambda\_{1} \mathbf{X}\_{1} + \mathbf{V}\_{1}, & \dot{\mathbf{G}}\_{2} &= \lambda\_{2} \mathbf{X}\_{2} + \mathbf{V}\_{2}, \\ \dot{\mathbf{G}}\_{3} &= \lambda\_{3} \mathbf{X}\_{3} + \mathbf{V}\_{3}, & \dot{\mathbf{G}}\_{4} &= \lambda\_{4} \mathbf{X}\_{4} + \mathbf{V}\_{4}. \end{aligned} \tag{183}$$

3. The algorithm for the description of the noise $\zeta_t$ depends on the participants. In the simple case we use the error

$$
\delta X\_{TR}(t) = X\_{TR}(t) - \overline{K}\_{TR}^\*(t). \tag{184}
$$

In this case we get a lag in the $G_{TR}$ measurement due to the variable $\zeta_t$:

$$\zeta_t = b_2(X_{TR})\left|X_{TR}(t) - \overline{K}_{TR}^*(t)\right|. \tag{185}$$

By the choice of the coefficient $b_2$ the necessary time tempo of documentation manipulation may be realized.

4. Using the Eqs of Proposals 5.1 and 5.2 we get the following matrix filtering Eqs for the system noise $\zeta_t$ on the background of the measuring noise $V_{TR}$:

$$\dot{\hat{X}} = a\hat{X} + R\lambda^T v\_g^{-1}\left[Z - \left(\lambda\hat{X} + \zeta\right)\right],\tag{186}$$

$$\dot{R} = aR + Ra^T - R\lambda^T v\_g^{-1}\lambda R,\tag{187}$$

at *<sup>Z</sup>* <sup>¼</sup> *<sup>G</sup>*\_ , *<sup>ζ</sup>* <sup>¼</sup> ½ � *<sup>ζ</sup><sup>t</sup>* <sup>000</sup> *<sup>T</sup>*.

Remark 7.1. Realization of the described filtering solutions for internal noises needs a priori information about the basic OTES characteristics, so special methods and algorithms are required.

5. Finally, the linear COFc is defined by Eqs. (137)–(104) for various forecasting times Δ.

Remark 7.2. In the case of a SOTES with two subsystems, using Eqs. (172)–(174) we have the following Kalman-Bucy filter:

$$\dot{\hat{X}} = \begin{cases} a\hat{X} + R\lambda^T v\_g^{-1}\left[Z - \left(\lambda\hat{X} + \zeta\right)\right], & X\_k = \overline{1, n\_x}; \\ a\_P\left(\kappa\_1\hat{X} + \kappa\_0\right) + R\_P\lambda\_P^T v\_g^{-1}\left\{Z\_P - \left[\lambda\_P\left(\kappa\_1\hat{X} + \kappa\_0\right) + \zeta\_P\right]\right\}, & X\_k = \overline{n\_x + 1, n\_x + n\_P}, \end{cases}$$

where *ζ<sub>P</sub>* is the noise acting on the functional index of the personnel attendant subsystem.

These results are included in the experimental software tools for modeling and forecasting of cost and readiness for parks of aircraft [1, 2].

## **8. Conclusion**

For new generations of synergetical OTES (SOTES), methodological support for the approximate solution of probabilistic modeling and of mean square filtering and forecasting problems has been generalized. The generalization is based on sub-optimal and conditionally optimal filtering. Special attention is paid to linear systems and to linear systems with parametric white Gaussian noises.

Problems of optimal, sub-optimal and conditionally optimal filtering and forecasting in product and staff subsystems under background noise in SOTES have been considered. For modern highly available systems it is very important to create basic systems engineering principles, approaches and information technologies (IT) for SOTES operating in spontaneous markets against the background of an inertially developing world economic crisis and weakening global market relations under intensified competition and counteraction. Big enterprises need such IT because of essential local and systematic economic losses. It is necessary to form general approaches to the estimation of stochastic processes (StP) and parameters (filtering, identification, calibration, etc.) in SOTES under background noises. A special observation SOTES (SOTES-O) with its own organization-product resources has been introduced, the internal noise being information from a special SOTES acting as noise (SOTES-N). A conception of the SOTES structure for systems of technical, staff and financial support has been developed. Linear, linear with parametric noises and nonlinear stochastic (discrete and hybrid) equations describing the organization-production block (OPB) for three types of SOTES with their planning-economical estimating divisions have been worked out. SOTES-O is described by two interconnected subsystems: a state SOTES sensor and an OPB supporting sensor with the necessary resources. After a short survey of modern modeling, basic sub-optimal and conditionally optimal filtering and forecasting algorithms and IT for typical SOTES have been given.

The influence of the OTES-N noise on the rules and functional indexes of the subsystems accompanying life cycle production, together with its filtering and forecasting, has been considered.

Experimental software tools for modeling and forecasting of cost and technical readiness for parks of aircraft have been developed.

We are now extending the presented results on the basis of cognitive approaches [12].

## **Acknowledgements**

The authors would like to thank the Russian Academy of Sciences for supporting the work presented in this chapter.

The authors are much obliged to Mrs. Irina Sinitsyna and Mrs. Helen Fedotova for the translation and manuscript preparation.

## **Author details**

Igor N. Sinitsyn\* and Anatoly S. Shalamov Federal Research Center "Computer Sciences and Control of Russian Academy of Sciences", Moscow, Russia

\*Address all correspondence to: sinitsin@dol.ru

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

*Methods of Conditionally Optimal Forecasting for Stochastic Synergetic CALS Technologies DOI: http://dx.doi.org/10.5772/intechopen.103657*

## **References**

[1] Sinitsyn IN, Shalamov AS. Lectures on Theory of Integrated Logistic Support Systems. 2nd ed. Moscow: Torus Press; 2019. p. 1072 (in Russian)

[2] Sinitsyn IN, Shalamov AS. Probabilistic modeling, estimation and control for CALS organization-technical-economic systems. In: Kostogryzov A, Korolev V, editors. Probability, Combinatorics and Control. London, UK: IntechOpen; 2020. pp. 117-141. DOI: 10.5772/intechopen.79802

[3] Sinitsyn IN, Shalamov AS. Optimal estimation and control in stochastic synergetic organization-technical-economic systems. Filtering in product and staff subsystems at background noise (I). Highly Available Systems. 2019;**15**(4):27-48. DOI: 10.18127/j20729472-201904-04 (in Russian)

[4] Sinitsyn IN, Shalamov AS. Optimal estimation and control in stochastic synergetic organization-technical-economic systems. Filtering in product and staff subsystems at background noise (II). Highly Available Systems. 2021;**17**(1):51-72. DOI: 10.18127/j20729472-202101-05 (in Russian)

[5] Sinitsyn IN, Shalamov AS. Problems of estimation and control in synergetic organization-technical-economic systems. In: VII International Conference "Actual Problems of System and Software". Moscow; 2021 (in press)

[6] Haken H. Synergetics: An Introduction. Springer Series in Synergetics. Vol. 3. Berlin, Heidelberg: Springer; 1983

[7] Haken H. Advanced Synergetics. Springer Series in Synergetics. Vol. 20. Berlin, Heidelberg: Springer; 1987

[8] Kolesnikov AA. Synergetical Control Theory. Taganrog: TRTU; Moscow: Energoatomizdat; 1994. p. 538 (in Russian)

[9] Pugachev VS, Sinitsyn IN. Stochastic Systems. Theory and Application. Singapore: World Scientific; 2001. p. 908

[10] Sinitsyn IN, editor. Academician Pugachev Vladimir Semenovich: To the 100th Anniversary. Moscow: Torus Press; 2011. p. 376 (in Russian)

[11] Sinitsyn IN. Kalman and Pugachev Filters. 2nd ed. Moscow: Logos; 2007. p. 772 (in Russian)

[12] Kostogryzov A, Korolev V. Probabilistic methods for cognitive solving of some problems in artificial intelligence systems. In: Kostogryzov A, Korolev V, editors. Probability, Combinatorics and Control. London, UK: IntechOpen; 2020. pp. 3-34. DOI: 10.5772/intechopen.89168

