**2. Description of the task**

Let the Markovian vector process $\xi_t$, described generally by the nonlinear stochastic differential equation in the symmetrized form

$$\dot{\xi}_t = f\left(\xi, t\right) + f_0\left(\xi, t\right)n_t, \qquad \qquad \xi\left(t_0\right) = \tilde{\xi}_0, \tag{1}$$

where $f$, $f_0$ are known $N$-dimensional vector and $N \times M$-dimensional matrix nonlinear functions;

$n_t$ is the normalized white Gaussian $M$-dimensional vector noise; let this process be observed by means of the vector nonlinear observer of the form:

$$Z = H\left(\xi, t\right) + W_t,$$

© 2012 Sokolov, licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

where $Z$ is the $L$-dimensional vector of the output signals of the meter;

$H(\xi,t)$ is a known nonlinear $L$-dimensional vector function of observation;

$W_t$ is a white Gaussian $L$-dimensional vector noise of measurement with zero average and matrix of intensity $D_W$.
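For intuition, model (1) together with a sampled observer can be simulated with a simple Euler–Maruyama scheme. The sketch below is illustrative only: the scalar drift $f(\xi)=-\xi^3$, the observation $H(\xi)=0.95\,\xi+0.3\,\xi^2$ and all numerical values are assumptions (chosen to echo the example in Section 5), not part of the derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps = 1e-3, 5000

def f(xi):            # assumed scalar drift (echoes the Section 5 example)
    return -xi**3

def f0(xi):           # assumed diffusion factor (unit intensity)
    return 1.0

def H(xi):            # assumed scalar observation function
    return 0.95 * xi + 0.3 * xi**2

D_W = 0.5             # assumed intensity of the measurement noise

xi = 0.5              # initial state xi(t0)
xs, zs = [], []
for _ in range(n_steps):
    # Euler-Maruyama step for d(xi) = f dt + f0 dW, with dW ~ N(0, dt)
    xi = xi + f(xi) * dt + f0(xi) * np.sqrt(dt) * rng.standard_normal()
    # sampled measurement: Z = H(xi) + discretized white noise of intensity D_W
    z = H(xi) + np.sqrt(D_W / dt) * rng.standard_normal()
    xs.append(xi)
    zs.append(z)

print(len(xs), len(zs))   # 5000 5000
```

The cubic damping keeps the explicit scheme stable at this step size; the factor $\sqrt{D_W/\Delta t}$ is the usual discretization of continuous-time white measurement noise.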

Stochastic Observation Optimization on the Basis of the Generalized Probabilistic Criteria 173


The function of the a posteriori probability density (APD) of process $\xi_t$, $\rho\left(\xi,t\right) = \rho\left(\xi,t \mid Z_\tau,\ \tau\in\left[t_0,t\right]\right)$, is described by the known integro-differential equation in partial derivatives (the Stratonovich equation), the right-hand part of which explicitly depends on the observation function $H$:

$$\frac{\partial \rho(\xi, t)}{\partial t} = L\left(\rho\left(\xi, t\right)\right) + \left[Q - Q\_0\right]\rho\left(\xi, t\right),$$

where

$$L\big(\rho\left(\xi,t\right)\big) = -\,\operatorname{div}\!\left[\left(f + \frac{1}{2}\left(\frac{\partial f_0}{\partial \xi}\, f_0\right)^{(V)}\right)\rho\right] + \frac{1}{2}\,\operatorname{div}\!\left\{\operatorname{div}\!\left[f_0\, f_0^T\, \rho\right]\right\}$$

is the Fokker–Planck operator,

$A^{(V)}$ is the operation for transforming the $m \times n$ matrix $A$ into the vector $A^{(V)}$ formed from its elements as follows:

$$A^{(V)} = \left| a_{11}\, a_{21} \dots a_{m1}\;\; a_{12}\, a_{22} \dots a_{m2}\;\; \dots\;\; a_{1n}\, a_{2n} \dots a_{mn} \right|^{T},$$

*div* is the symbol for the operation of divergence of the matrix row,

$$Q = Q\left(\xi, t\right) = -\frac{1}{2}\left[Z - H\left(\xi, t\right)\right]^T D_W^{-1}\left[Z - H\left(\xi, t\right)\right],$$

$$Q_0 = \int_{\xi} Q\left(\xi, t\right)\rho\left(\xi, t\right)d\xi.$$
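For a scalar observation ($L=1$), $Q$ and $Q_0$ reduce to pointwise formulas that are easy to evaluate on a grid. In the sketch below the density $\rho$, the observation $H(\xi)=\xi$, the measurement $Z$ and the intensity $D_W$ are all illustrative assumptions:

```python
import numpy as np

# grid over the state space and an illustrative a posteriori density rho
xi = np.linspace(-5.0, 5.0, 2001)
rho = np.exp(-0.5 * xi**2) / np.sqrt(2 * np.pi)   # assumed standard Gaussian APD

D_W = 0.5       # assumed intensity of the measurement noise
Z = 1.2         # assumed current measurement
H = xi          # assumed observation function H(xi) = xi

# Q(xi, t) = -(1/2) [Z - H(xi)]^T D_W^{-1} [Z - H(xi)]
Q = -0.5 * (Z - H) ** 2 / D_W

# Q_0 = integral of Q(xi, t) rho(xi, t) d(xi), by trapezoidal quadrature
Q0 = np.trapz(Q * rho, xi)

# for H(xi) = xi and a standard Gaussian rho: E[(Z - xi)^2] = Z^2 + 1,
# hence Q0 = -0.5 * (Z**2 + 1) / D_W
print(round(float(Q0), 2))   # -2.44
```

$Q_0$ is the posterior average of $Q$, so the quadrature against $\rho$ reproduces the closed-form second moment in this Gaussian sanity check.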

As the main problem of the a posteriori analysis of the observable process $\xi_t$ is obtaining the maximum reliable information about it, the synthesis problem of the optimum observer is natural to formulate as the definition of the form of the functional dependence $H(\xi,t)$ providing the maximum of the a posteriori probability (MAP) of signal $\xi_t$ on the given interval of occurrence of its values $\Xi_* = \left[\xi_{\min},\ \xi_{\max}\right]$ during the required interval of time $T = [t_0, t_k]$, i.e., in view of the positive definiteness of $\rho(\xi,t)$:

$$\max_{H}\left\{ J = \int_T \int_{\Xi_*} \rho\left(\xi, t\right)d\xi\, dt\right\},$$

or

$$\min_{H}\left\{ J = -\int_T \int_{\Xi_*} \rho\left(\xi, t\right)d\xi\, dt\right\}.$$

Generally, instead of the MAP criterion one can use, for example, the criterion of the minimum of the a posteriori entropy on interval $\Xi_* = \left[\xi_{\min},\ \xi_{\max}\right]$, or the criterion of the minimum of the integrated deviation of the a posteriori density from a density of the given form, etc., which results in the need for representing the criterion of optimality $J$ in the more generalized form:

$$J = \int_T \int_{\Xi_*} \Phi\big[\rho\left(\xi, t\right)\big]\,d\xi\, dt,$$

where $\Phi$ is the known nonlinear function which takes into account generally the feasible analytical restrictions on the vector $\xi_t$;

*T* = [*t*0, *tk*] is a time interval of optimization;

172 Stochastic Modeling and Control

 


$\Xi_*$ is some bounded set of the state parameters $\xi_t$.

In the final forming of the structure of the criterion of optimality $J$ it is necessary to take into account the limited opportunities of the practical realization of the function of observation $H(\xi,t)$ as well, which results, in its turn, in an additional restriction on the choice of the functional dependence $H(\xi,t)$. The formalization of the given restriction, for example, in the form of the requirement of the minimization of the integrated deviation of function $H$ from the given form $H_0$ on interval $\Xi_*$ during time interval $T$ allows writing down analytically the form of the minimized criterion $J$ as follows:

$$J = \int_{T} \int_{\Xi_*} \Phi\big[\rho\left(\xi, t\right)\big]\,d\xi\, dt + \int_{T} \int_{\Xi_*} \left[H\left(\xi, t\right) - H_0\left(\xi, t\right)\right]^T \left[H\left(\xi, t\right) - H_0\left(\xi, t\right)\right] d\xi\, dt = \int_{T} \mathcal{W}_*\left(t\right) dt. \tag{2}$$

Thus, the final statement of the synthesis problem of the optimum observer, in view of the above-mentioned reasoning, consists in defining the function $H(\xi,t)$ giving the minimum to functional (2).

### **3. Synthesis of observations optimal control**

The APD function entering it is described explicitly by the integro-differential Stratonovich equation with the right-hand part dependent on $H(\xi,t)$. The analysis of the experience of the instrument realization of the meters shows that their synthesis consists, in essence, in defining the parameters of some functional series approximating the output characteristic of the projected device with the given degree of accuracy. As such a series one uses, as a rule, the finite expansion of the nonlinear components of vector $H(\xi,t)$ in some given system of the multidimensional functions: power, orthogonal, etc.

Having designated the vector of the multidimensional functions as $\eta = \left|\eta_1 \dots \eta_S\right|^T$, we present the approximation of vector $H(\xi,t)$ as

$$H\left(\xi,t\right) = \left(E\otimes\eta^{T}\right)h = \eta_E h, \tag{3}$$

$$h = \left|h\_{11}\ldots h\_{1S}h\_{21}\ldots h\_{2S}\ldots h\_{N1}\ldots h\_{NS}\right|^{T}$$


where $h_i\left(\xi,t\right) = \sum_{j=1}^{S} h_{ij}\left(t\right)\eta_j\left(\xi\right)$ is the $i$-th component of vector $H\left(\xi,t\right)$, the factors $h_{ij}$ of which define the concrete technical characteristics of the device, and $\otimes$ is the symbol of the Kronecker product.
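The Kronecker structure of (3) is easy to verify numerically. In this sketch the power basis $\eta(\xi)=(1,\ \xi,\ \xi^2)^T$ and the coefficient values are arbitrary assumptions; only the identity $H(\xi,t)=(E\otimes\eta^T)h$ itself comes from the text:

```python
import numpy as np

N, S = 2, 3                        # N observation channels, S basis functions

def eta(xi):
    # assumed power basis eta(xi) = (1, xi, xi^2)^T
    return np.array([1.0, xi, xi**2])

# stacked coefficient vector h = |h11 ... h1S  h21 ... h2S|^T (arbitrary values)
h = np.array([1.0, 0.5, -0.2,      # channel 1
              0.0, 2.0,  1.0])     # channel 2

def H(xi):
    # eta_E = E Kronecker eta^T is an N x (N*S) block-diagonal matrix,
    # so H(xi) = eta_E h applies each channel's coefficients to the basis
    eta_E = np.kron(np.eye(N), eta(xi).reshape(1, -1))
    return eta_E @ h

# each component is h_i(xi) = sum_j h_ij eta_j(xi):
print(H(2.0))   # approximately [1.2, 8.0]
```

At $\xi=2$ the first channel gives $1 + 0.5\cdot 2 - 0.2\cdot 4 = 1.2$ and the second $2\cdot 2 + 1\cdot 4 = 8$, confirming the block structure.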

For the subsequent analytical synthesis of the optimum vector function $H(\xi,t)$ in form (3), we rewrite the equation of the APD $\rho(\xi,t)$ in the appropriate form

$$\frac{\partial \rho}{\partial t} = L\left[\rho\right] + h^T H_1\left[\rho\right] - h^T H_2\left[\rho\right]h, \tag{4}$$

where

$$\begin{aligned} H_1\big[\rho\big] &= \left[\eta_E^T - \int_{\xi}\eta_E^T\left(\xi\right)\rho\left(\xi,t\right)d\xi\right]D_W^{-1} Z\,\rho\left(\xi,t\right), \\ H_2\big[\rho\big] &= \frac{\rho\left(\xi,t\right)}{2}\left[\eta_E^T D_W^{-1}\eta_E - \int_{\xi}\eta_E^T\left(\xi\right)D_W^{-1}\eta_E\left(\xi\right)\rho\left(\xi,t\right)d\xi\right]. \end{aligned}$$

With the constructions carried out, the problem of the search for the optimum vector $H(\xi,t)$ is reduced to the synthesis of the optimum in-the-sense-of-(2) control $h$ of the process with the distributed parameters described by the Stratonovich equation (in view of representing vector $H_0(\xi,t)$ in the form similar to (3))

$$H_0\left(\xi,t\right) = \eta_E h_0\left(t\right).$$

The optimum control of process $\rho(\xi,t)$ will be searched for in the class of the limited piecewise-continuous functions with the values from the open area $H_*$. For its construction we use the method of dynamic programming, according to which the problem is reduced to the minimization of the known functional [1]

$$\min\_{h \in H\_\*} \left\{ \frac{dV}{dt} + \mathcal{W}\_\* \right\} = 0 \tag{5}$$

under the final condition $V(t_k) = 0$ with respect to the optimum functional $V = V(\rho,t)$, parametrically dependent on $t \in [t_0, t_k]$ and determined on the set of functions satisfying (4).

For the processes described by the linear equations in partial derivatives, and criteria of the form of the above-stated ones, functional $V$ is found in the form of the integrated quadratic form [1]; therefore in this case we have:

$$V = \int_{\Xi_*} v\left(\xi,t\right)\rho^2\left(\xi,t\right)d\xi.$$

Calculating the derivative $dV/dt$,


$$\frac{dV}{dt} = \int_{\xi}\left(\frac{dv}{dt}\rho^2 + 2v\rho\frac{d\rho}{dt}\right)d\xi = \int_{\xi}\left(\frac{dv}{dt}\rho^2 + 2v\rho L\left[\rho\right] + 2v\rho h^T H_1\left[\rho\right] - 2v\rho h^T H_2\left[\rho\right]h\right)d\xi,$$

the functional equation for *v* is obtained in the following form:

$$\begin{aligned}\min_{h\in H_*}\ &\int_{\xi}\left(\frac{dv}{dt}\rho^2 + 2v\rho L\left[\rho\right] + 2v\rho\left(h^T H_1\left[\rho\right] - h^T H_2\left[\rho\right]h\right) + \Phi\left[\rho\right]\right)d\xi \\ &+ \left(h - h_0\right)^T\int_{\Xi_*}\eta_E^T\left(\xi\right)\eta_E\left(\xi\right)d\xi\,\left(h - h_0\right) = 0,\end{aligned}$$

whence we have the optimum vector $h_{opt}$:

$$\begin{split} h_{opt} &= \left\{\int_{\Xi_*}\left[\eta_1\left(\xi\right) - v\rho\left(H_2 + H_2^T\right)\right]d\xi\right\}^{-1}\int_{\Xi_*}\left(\eta_1\left(\xi\right)h_0 - v\rho H_1\right)d\xi = \\ &= B\{v,\rho\}\int_{\Xi_*}\left(\eta_1\left(\xi\right)h_0 - v\rho H_1\right)d\xi, \end{split}$$

where $\eta_1\left(\xi\right) = \eta_E^T\left(\xi\right)\eta_E\left(\xi\right)$.

Using the condition $\left.\left(\dfrac{dV}{dt} + \mathcal{W}_*\right)\right|_{h = h_{opt}} = 0$, for $v(\xi,t)$ we have the following equation:

$$\begin{aligned}\frac{dv}{dt} ={}& -\frac{2v}{\rho}\,L\left[\rho\right] - \frac{2v}{\rho}\left(h_{opt}^T H_1\left[\rho\right] - h_{opt}^T H_2\left[\rho\right]h_{opt}\right) \\ &- \rho^{-2}\left(h_{opt} - h_0\right)^T\eta_1\left(h_{opt} - h_0\right) - \rho^{-2}\,\Phi\left[\rho\right], \end{aligned}\tag{6}$$

in which $h_{opt} = B\{v,\rho\}\left(\eta_{1\xi}h_0 - \int_{\Xi_*} v\rho H_1\, d\xi\right)$ is substituted,


where

$$\eta_{1\xi} = \int_{\Xi_*}\eta_1\left(\xi\right)d\xi,$$

which is connected with the equation of the APD, having after substitution into it the expression $h_{opt}$ the following form:

$$\begin{aligned}\frac{d\rho}{dt} ={}& L\left[\rho\right] + \left(h_0^T\eta_{1\xi}B^T - \int_{\xi}v\rho H_1^T d\xi\, B^T\right)H_1 \\ &- \left(h_0^T\eta_{1\xi}B^T - \int_{\xi}v\rho H_1^T d\xi\, B^T\right)H_2\left(B\eta_{1\xi}h_0 - B\int_{\xi}v\rho H_1 d\xi\right). \end{aligned}\tag{7}$$

### **4. Observations suboptimal control**

The solution of the obtained equations (6), (7) exhausts completely the problem stated, allowing one to generate the required optimum vector function $h$ of form (3). On the other hand, the solution of system (6), (7) is a point-to-point boundary-value problem for integrating a system of integro-differential equations in partial derivatives, general methods of the exact analytical solution of which, as is known, do not exist now. Not considering the numerous approximated methods of the solution of the given problem, oriented on the trade-off of accuracy against the volume of the computing expenses, as one of the solution methods for this problem we use the method based on the expansion of functions $v$ and $\rho$ in series in some system of the orthonormal functions of the vector argument $\xi$:

$$\begin{aligned} v\left(\xi,t\right) &= \sum_{\mu}\alpha_{\mu}\left(t\right)\phi_{\mu}\left(\xi\right) = \phi^T\alpha, \\ \rho\left(\xi,t\right) &= \sum_{\mu}\beta_{\mu}\left(t\right)\phi_{\mu}\left(\xi\right) = \phi^T\beta, \end{aligned}$$

where $\mu$ is the index running over a set of values from $(0,\dots,0)$ to $(M,\dots,M)$ [2]; $\phi$ is the vector of the orthonormal functions of argument $\xi$; $\alpha$, $\beta$ are the vectors of factors of the appropriate expansions.
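As an illustration of such expansions, the sketch below projects an assumed density onto normalized Legendre polynomials on the interval $\Xi_* = [-2.5,\ 2.5]$ used in the example of Section 5; the basis choice, the truncation order $M$ and the density are all assumptions:

```python
import numpy as np
from numpy.polynomial import legendre

a, b = -2.5, 2.5                        # assumed interval Xi_*
xi = np.linspace(a, b, 4001)
u = (2 * xi - (a + b)) / (b - a)        # map Xi_* onto [-1, 1]

# orthonormal Legendre basis phi_mu on [a, b]
M = 8
phi = np.stack([
    np.sqrt((2 * mu + 1) / (b - a)) * legendre.legval(u, np.eye(M)[mu])
    for mu in range(M)
])

rho = np.exp(-0.5 * xi**2) / np.sqrt(2 * np.pi)   # assumed density on Xi_*

# expansion factors beta_mu = <rho, phi_mu>, by trapezoidal quadrature
beta = np.trapz(phi * rho, xi, axis=1)

rho_hat = phi.T @ beta                  # truncated reconstruction phi^T beta
print(float(np.max(np.abs(rho_hat - rho))))   # small truncation error
```

With eight basis functions the reconstruction error of the smooth Gaussian is already well below one percent of its peak value, which is why such truncated expansions are practical here.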

In this case the solution is reduced to the solution of the point-to-point boundary-value problem for integrating the system of the following equations, already ordinary ones:

$$\dot{\alpha} = F_{\alpha}\left(\alpha,\beta\right), \qquad \dot{\beta} = F_{\beta}\left(\alpha,\beta\right), \tag{8}$$

where the right-hand parts $F_{\alpha}$, $F_{\beta}$ are formed by substituting the expansions $v = \phi^T\alpha$, $\rho = \phi^T\beta$ into (6), (7) and projecting the results onto the orthonormal basis $\phi$,


under the boundary value conditions $\alpha\left(t_k\right) = 0$, $\beta\left(t_0\right) = \beta_0$, where the values of the components of $\beta_0$ are defined from the expansion of the function $\rho_0\left(\xi,t_0\right)$.

From the point of view of the practical realization, the integration of system (8) under the boundary-value conditions appears to be simpler than the integration of (6), (7), but from the point of view of organizing the estimation process in the real time it is still hindered: first, the volume of the necessary temporal and computing expenses is great; secondly, the feasibility of adjusting the vector of factors $h$ in the real time of arrival of the signal of measurement $Z$ is excluded, so the prior simulation of realizations of $Z$ appears to be necessary (in this case, in the course of the instrument realization, as a rule, one fails to maintain the precisely given values $h$ all the same). Thus, the use of the approximated methods of the solution of problem (8) is quite justified in this case, as one of which we consider the method of the invariant imbedding [3], used above and providing the required approximated solution in the real time.

As the application of the given method assumes the specifying of all the components of the required approximately estimated vector in the differential form, then for the realization of the feasibility of the synthesis of vector $h$ through the given method in the real time we introduce a dummy variable $\nu$, allowing us to take into account from here on the expression $h_{opt}$ as the differential equation


$$\dot{\nu} = h_{\text{opt}}\left(\phi^T\alpha,\ \phi^T\beta\right),$$

forming with equations (8) a unified system. The application of the method of the invariant imbedding results in this case in a closed system of ordinary differential equations for the current approximations of the expansion factors $\alpha_0$, $\beta_0$ and the vector $h$, integrated in the forward time together with the weight matrix $D$.

Since matrix $D$ in the method of the invariant imbedding plays the role of the weight matrix for the deviation of the vector of the approximated solution from the optimum one, in this case the appropriate components of $D$ characterize the degree of deviation of the variables from the factors of the expansion of the true APD (the components of $D_0$ are the deviations of the parameters at the initial moment). The essential advantage of the approach considered, despite the formation of only an approximated solution, is the feasibility of the synthesis of the optimum observation function in the real time, i.e., in the course of arrival of the measuring information.

### **5. Example**

For the illustration of the feasibility of the practical use of the suggested method, the numerical simulation of the process of forming vector $h = \left|h_1\ h_2\right|^T$ of factors of the observer $Z = h_1\xi + h_2\xi^2 + W_t$ for target $\dot{\xi}_t = -\xi_t^3 + n_t$ was carried out, with $n_t$, $W_t$ the normalized Gaussian white noises of the target and meter. As the criterion of optimization the criterion of the maximum of the a posteriori probability of the existence of the observable process on interval $\Xi_* = [-2.5,\ 2.5]$ was chosen, provided with the additional restriction in the form of the requirement of the minimal deviation of vector $h$ from the given vector $h_0 = \left|0.95,\ 0.3\right|^T$, which allows writing down the minimized functional as

$$J = -\int_{T}\int_{\Xi_*}\rho\left(\xi,t\right)d\xi\, dt + \int_{T}\left(h - h_0\right)^T D_H\left(h - h_0\right)dt,$$

where


$$D\_H = \int\_{\mathcal{L}} \left| \begin{matrix} \xi \\ \xi^2 \end{matrix} \right| \xi \quad \xi^2 \Big| d\xi = \begin{vmatrix} 10.4 & \cdots & 0 \\ \vdots & & \vdots \\ 0 & \cdots & \Re 9.1 \end{vmatrix} , \quad T = \begin{vmatrix} 0 \end{vmatrix} \text{; } \mathsf{600} \Big| .$$
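The diagonal entries of $D_H$ can be verified by direct integration; a minimal check in plain Python (the variable names are introduced here for illustration):

```python
# Check of the diagonal entries of D_H over xi* = [-2.5, 2.5]: the entries
# are the integrals of xi^2 and xi^4, and the off-diagonal integrals of
# xi^3 vanish by symmetry of the interval.
a = 2.5
d11 = 2 * a ** 3 / 3   # integral of xi^2 over [-a, a] = 10.416...
d22 = 2 * a ** 5 / 5   # integral of xi^4 over [-a, a] = 39.0625
print(round(d11, 1), round(d22, 1))  # 10.4 39.1
```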

In this case the equation of the APD has the form

$$\frac{\partial \rho}{\partial t} = \frac{\partial}{\partial \xi}\left(\xi^3 \rho\right) + \frac{1}{2}\frac{\partial^2 \rho}{\partial \xi^2} + h^T H_1 - h^T H_2 h,$$

where

$$H_1 = \left( \begin{vmatrix} \xi \\ \xi^2 \end{vmatrix} - \int_{\xi^*} \begin{vmatrix} \xi' \\ \xi'^2 \end{vmatrix} \rho\left(\xi', t\right) d\xi' \right) Z \rho,$$

$$H_2 = \frac{\rho}{2} \left( \begin{vmatrix} \xi^2 & \xi^3 \\ \xi^3 & \xi^4 \end{vmatrix} - \int_{\xi^*} \begin{vmatrix} \xi'^2 & \xi'^3 \\ \xi'^3 & \xi'^4 \end{vmatrix} \rho\left(\xi', t\right) d\xi' \right).$$

The optimum vector *h* is defined from the expression for $h_{\text{opt}}$ as

$$h_{\text{opt}} = \left[ D_H - \int_{\xi^*} V \rho^2 \left( \begin{vmatrix} \xi^2 & \xi^3 \\ \xi^3 & \xi^4 \end{vmatrix} - \int_{\xi^*} \begin{vmatrix} \xi'^2 & \xi'^3 \\ \xi'^3 & \xi'^4 \end{vmatrix} \rho\left(\xi', t\right) d\xi' \right) d\xi \right]^{-1} \times$$

$$\times \left( D_H h_0 - Z \int_{\xi^*} V \rho^2 \left( \begin{vmatrix} \xi \\ \xi^2 \end{vmatrix} - \int_{\xi^*} \begin{vmatrix} \xi' \\ \xi'^2 \end{vmatrix} \rho\left(\xi', t\right) d\xi' \right) d\xi \right).$$

We use the Fourier expansion up to the 3-rd order for the approximated representation of the functions *V*, $\rho$:

$$V\left(\xi, t\right) = \frac{1}{2}\alpha_0 + \sum_{k=1}^{2} \alpha_{1k} \cos k a_0 \xi + \alpha_{2k} \sin k a_0 \xi,$$

$$\rho\left(\xi, t\right) = \sum_{k=1}^{2} \beta_{1k} \cos k a_0 \xi + \beta_{2k} \sin k a_0 \xi,$$

$$a_0 = \frac{2\pi}{5}.$$
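Truncated series of this form are cheap to evaluate pointwise; a minimal sketch (`series` is a helper name introduced here, and the coefficient values are arbitrary placeholders, not values from the chapter):

```python
import math

# Evaluating a truncated trigonometric series of the form used for V and rho
# on xi* = [-2.5, 2.5].
a0 = 2 * math.pi / 5  # base frequency

def series(xi, c0, c1, c2):
    # c0: constant term; c1[k-1], c2[k-1]: cos/sin factors for harmonic k
    return 0.5 * c0 + sum(c1[k - 1] * math.cos(k * a0 * xi)
                          + c2[k - 1] * math.sin(k * a0 * xi)
                          for k in (1, 2))

print(round(series(0.0, 1.0, [0.2, 0.1], [0.0, 0.0]), 6))  # 0.8
```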


For the function $V\rho^2$ the following representation then holds true:

$$V\rho^{2}\left(\xi, t\right) = \gamma_0 + \sum_{k=1}^{6} \gamma_{1k}\cos k a_0 \xi + \gamma_{2k}\sin k a_0 \xi,$$

where $\gamma_0$, $\gamma_{1k}$, $\gamma_{2k}$ are functions linearly dependent on the factors $\alpha_{ik}$ and quadratically on the factors $\beta_{ik}$. Introducing the designations

$$K_C\left(k, m\right) = \int_{\xi^*} \xi^{m}\cos k a_0 \xi\, d\xi = 2\sum_{i=1,3}^{m} i!\, C_m^i\, \frac{\left(2.5\right)^{m-i}}{\left(k a_0\right)^{i+1}} \sin\left(k\pi + \frac{i\pi}{2}\right), \quad m = 2;\, 4;$$

$$K_S\left(k, m\right) = \int_{\xi^*} \xi^{m}\sin k a_0 \xi\, d\xi = -2\sum_{i=0,2}^{m} i!\, C_m^i\, \frac{\left(2.5\right)^{m-i}}{\left(k a_0\right)^{i+1}} \cos\left(k\pi + \frac{i\pi}{2}\right), \quad m = 1;\, 3;$$

we write down the vector $h_{\text{opt}}$ of the factors of the observer as follows:

$$h_{\text{opt}} = \left[ D_H - \begin{vmatrix} 10.4\gamma_0 + \sum\limits_{k=1}^{6}\gamma_{1k}K_C(k,2) - 5\gamma_0\sum\limits_{k=1}^{2}\beta_{1k}K_C(k,2) & \sum\limits_{k=1}^{6}\gamma_{2k}K_S(k,3) - 5\gamma_0\sum\limits_{k=1}^{2}\beta_{2k}K_S(k,3) \\ \sum\limits_{k=1}^{6}\gamma_{2k}K_S(k,3) - 5\gamma_0\sum\limits_{k=1}^{2}\beta_{2k}K_S(k,3) & 39.1\gamma_0 + \sum\limits_{k=1}^{6}\gamma_{1k}K_C(k,4) - 5\gamma_0\sum\limits_{k=1}^{2}\beta_{1k}K_C(k,4) \end{vmatrix} \right]^{-1} \times$$

$$\times \left( D_H h_0 - Z \begin{vmatrix} \sum\limits_{k=1}^{6}\gamma_{2k}K_S(k,1) - 5\gamma_0\sum\limits_{k=1}^{2}\beta_{2k}K_S(k,1) \\ 10.4\gamma_0 + \sum\limits_{k=1}^{6}\gamma_{1k}K_C(k,2) - 5\gamma_0\sum\limits_{k=1}^{2}\beta_{1k}K_C(k,2) \end{vmatrix} \right).$$
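The closed-form designations $K_C$, $K_S$ can be cross-checked numerically; the sketch below (plain Python; `kc` and `midpoint` are helper names introduced here, not the chapter's notation) verifies $K_C(1, 2)$ against a midpoint-rule quadrature:

```python
import math

# Cross-check of the closed-form designation K_C(k, m) against direct
# quadrature on xi* = [-2.5, 2.5]; a0 is the base frequency from the text.
a0 = 2 * math.pi / 5

def kc(k, m):
    # K_C(k, m) = integral of xi^m * cos(k*a0*xi) over [-2.5, 2.5],
    # summed over odd i as in the closed form given above
    s = 0.0
    for i in range(1, m + 1, 2):
        s += (math.factorial(i) * math.comb(m, i)
              * 2.5 ** (m - i) / (k * a0) ** (i + 1)
              * math.sin(k * math.pi + i * math.pi / 2))
    return 2 * s

def midpoint(f, n=200_000):
    # simple midpoint-rule quadrature on [-2.5, 2.5]
    h = 5.0 / n
    return sum(f(-2.5 + (j + 0.5) * h) for j in range(n)) * h

print(abs(kc(1, 2) - midpoint(lambda x: x * x * math.cos(a0 * x))) < 1e-6)  # True
```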

Then the system of equations for the factors of expansion is obtained.


The expressions of the factors $K$ entering this system (determined by the numerical integration in the course of solving) are not given here, being too complicated. In the reduced form the system obtained can be given as

$$\dot{\beta} = \Phi_{\beta}\left[\beta, h\left(\alpha, \beta\right)\right],$$

$$\dot{\alpha} = G_{1}\left(\alpha, h\right) + G_{2}\left(\beta\right),$$

$$G_{2}\left(\beta\right) = \left| \mu\left(\beta\right) \;\; \mu_{C}\left(1,\beta\right) \;\; \mu_{C}\left(2,\beta\right) \;\; \mu_{S}\left(1,\beta\right) \;\; \mu_{S}\left(2,\beta\right) \right|^{T}.$$

The approximate solving of the given boundary-value problem by the method of the invariant imbedding results in the required system of equations, allowing to carry out simultaneously the definition of the vector $h_{\text{opt}}$ and the formation of the vector $\beta_0$ of the expansion factors in real time:

$$\begin{vmatrix} \dot{v}_0 \\ \dot{\beta}_0 \end{vmatrix} = \begin{vmatrix} h_0 \\ \Phi_{\beta}\left(\beta_0, h_0\right) \end{vmatrix} + D\, G_{2}\left(\beta_0\right),$$

$$\dot{D} = 2\frac{\partial \Phi_{\beta}}{\partial \beta}\left(\beta_0, h_0\right)D - \frac{\partial \Phi_{\beta}}{\partial \alpha}\left(\beta_0, \alpha\right)\bigg|_{\alpha=0} + 2D\frac{\partial G_{2}}{\partial \beta}\left(\beta_0\right)D - D\frac{\partial G_{1}}{\partial \alpha}\left(\alpha, \beta_0, h_0\right)\bigg|_{\alpha=0}.$$

The integration of the given system was carried out by the Runge-Kutta method on the interval [0; 600] s with the step equal to 0.05 s.
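A fixed-step integrator of the kind used here can be sketched as follows (the right-hand side in the check is a toy scalar equation, not the chapter's $(v_0, \beta_0, D)$ system, and `rk4` is a name introduced for illustration):

```python
import math

# Fixed-step classical Runge-Kutta (RK4) integrator with the step 0.05 s
# used in the text.
def rk4(f, y0, t0, t1, h=0.05):
    t, y = t0, y0
    while t < t1 - 1e-12:
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# check on dy/dt = -y, y(0) = 1: the result should be close to e^{-1}
print(abs(rk4(lambda t, y: -y, 1.0, 0.0, 1.0) - math.exp(-1)) < 1e-6)  # True
```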

For the comparison of the efficiency of the suggested approach with that of the existing ones, the estimate $\hat{\xi}$, optimum by the criterion of the MAP, was formed in two ways: on the basis of the MAP filter with the linear observer [4], and by defining the maximum of the APD function approximated by the series with the factors $\beta_0$ (where $\beta_0$ is the solution of the last system of the estimation equations) by means of the method of random search. The search of the maximum of the APD was carried out on the simulation interval [500; 600] s for the estimates of the vector $\beta_0$, taken with the interval 1 s. The generated test sample of dimension 100 was a normalized Gaussian sequence.
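The maximum search by random draw can be sketched as below (`random_search_max` and `rho_hat` are names introduced here; `rho_hat` is a toy unimodal stand-in for the series-approximated APD, not the chapter's actual expansion):

```python
import random

# Sketch of the maximum search by random draw over the interval
# xi* = [-2.5, 2.5].
def random_search_max(f, lo=-2.5, hi=2.5, n=5000, seed=0):
    rng = random.Random(seed)  # fixed seed keeps the sketch reproducible
    best_x, best_v = lo, f(lo)
    for _ in range(n):
        x = rng.uniform(lo, hi)
        v = f(x)
        if v > best_v:
            best_x, best_v = x, v
    return best_x

rho_hat = lambda x: 1.0 - (x - 0.5) ** 2  # maximum at x = 0.5
print(abs(random_search_max(rho_hat) - 0.5) < 0.05)  # True
```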

The calculation of the estimation errors was made by comparing the current values of the estimates with the target coordinate and subsequently defining the average values of the errors on the interval [500; 600] s. Upon terminating the simulation interval, the value of the average error obtained in this way for the estimation equations of [4], using the linear observer, exceeded the average estimation error of the technique suggested, which uses the information of the optimum observer, by a factor of about 1.52.
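The error-averaging step amounts to a mean absolute deviation over the comparison window; a minimal sketch (the sample values are toy data, not simulation output):

```python
# Compare current estimates with the target coordinate and average the
# absolute errors over the window.
est = [1.0, 1.2, 0.9, 1.1]      # estimates taken 1 s apart
tgt = [1.0, 1.0, 1.0, 1.0]      # true target coordinate at the same times
avg_err = sum(abs(e - t) for e, t in zip(est, tgt)) / len(est)
print(round(avg_err, 3))  # 0.1
```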

## **Author details**


Sergey V. Sokolov

*Rostov State University of Means of Communication, Russia*

## **6. References**

[1] Sirazetdinov T.K. Optimization of Systems with Distributed Parameters. Moscow: Science, 1977.

[2] Pugachev V.S., Sinitsyn I.N. Stochastic Differential Systems. Moscow: Science, 1985.

[3] Pervachev S.V., Perov A.I. Adaptive Filtration of Messages. Moscow: Radio and Communication, 1991.

**Chapter 10**

**Stochastic Control and Improvement of Statistical Decisions in Revenue Optimization Systems**

Nicholas A. Nechval and Maris Purgailis

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/46153

© 2012 Nechval and Purgailis, licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

**1. Introduction**

A large number of problems in production planning and scheduling, location, transportation, finance, and engineering design require that decisions be made in the presence of uncertainty. From the very beginning of the application of optimization to these problems, it was recognized that analysts of natural and technological systems are almost always confronted with uncertainty. Uncertainty, for instance, governs the prices of fuels, the availability of electricity, and the demand for chemicals. A key difficulty in optimization under uncertainty is in dealing with an uncertainty space that is huge and frequently leads to very large-scale optimization models. Decision-making under uncertainty is often further complicated by the presence of integer decision variables to model logical and other discrete decisions in a multi-period or multi-stage setting.

Approaches to optimization under uncertainty have followed a variety of modeling philosophies, including expectation minimization, minimization of deviations from goals, minimization of maximum costs, and optimization over soft constraints. The main approaches to optimization under uncertainty are stochastic programming (recourse models, robust stochastic programming, and probabilistic models), fuzzy programming (flexible and possibilistic programming), and stochastic dynamic programming.

This paper is devoted to improvement of statistical decisions in revenue management systems. Revenue optimization, or revenue management as it is also called, is a relatively new field currently receiving much attention of researchers and practitioners. It focuses on how a firm should set and update pricing and product availability decisions across its various selling channels in order to maximize its profitability. The most familiar example probably comes from the airline industry, where tickets for the same flight may be sold at