*2.5.3 Application to the* n*th approximation of the stationary mesoscopic process*

The stochastic variable $Y_N = p^0(i_N, N \mid i_{N-1}, N-1; \ldots; i_0, 0)$ is a martingale. In fact, because of the stationarity of $p^0$ we have, renumbering the states

$$p^0\left(i, N\middle|i\_{-1}, N-1; \ldots; i\_{-N}, 0\right) = p^0(i, 0 \middle| i\_{-1}, -1; \ldots; i\_{-N}, -N) \equiv p^0(i, 0 \middle| \mathcal{F}\_N). \tag{31}$$

where $\mathcal{F}_N$ is the σ-algebra generated by $i_{-1}, \ldots, i_{-N}$. Let us write

$$\pi\_N = p^0(i, 0 \mid i\_{-1}, -1; \ldots; i\_{-N}, -N) = p^0(i, 0 \mid \mathcal{F}\_N). \tag{32}$$


*Stochastic Theory of Coarse-Grained Deterministic Systems: Martingales and Markov…*


*DOI: http://dx.doi.org/10.5772/intechopen.95903*


We have, because $\mathcal{F}_{N-1} \subset \mathcal{F}_N$,

$$
\langle \pi\_N \mid \mathcal{F}\_{N-1} \rangle = \langle p^0(i, 0 \mid \mathcal{F}\_N) \mid \mathcal{F}\_{N-1} \rangle = p^0(i, 0 \mid \mathcal{F}\_{N-1}) = \pi\_{N-1}.\tag{33}
$$

So, $\pi_N$ is a martingale on the σ-algebra $\mathcal{F}_N$, and by the martingale convergence theorem, it converges almost surely to a limit $\pi$ when $N \to \infty$.

Now if $N > n$, let us write $m = N - n > 0$. Because of the stationarity of $p^0$,

$$p^0\left(i, n+m \mid i\_{-1}, n+m-1; \ldots; i\_{-n}, m\right) = p^0(i, 0 \mid i\_{-1}, -1; \ldots; i\_{-n}, -n) = \pi\_n(i). \tag{34}$$

Thus, for any fixed, positive *m*

$$
\pi\_{n+m} - \pi\_n \stackrel{a.s.}{\rightarrow} 0.\tag{35}
$$

The absolute value distance between $\pi_{n+m}$ and $\pi_n$ is obtained by summing $|\pi_{n+m}(i) - \pi_n(i)|$ over the $M$ possible states $i$, so

$$d\_{0,\ldots,n+m-1}\left(q\_{n+m}, q\_{n+m}^{(n)}\right) = d(\pi\_{m+n}, \pi\_n) \stackrel{a.s.}{\rightarrow} 0 \quad \text{if} \quad n \rightarrow \infty, \tag{36}$$

which is (29), one of our main, formal results.
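This martingale convergence can be watched numerically on a toy coarse-grained deterministic system. The sketch below is our own illustration, not part of the chapter: it assumes the circle rotation $x \mapsto x + \alpha \pmod 1$ with the golden-ratio angle, a two-cell partition, and empirical estimates of $\pi_N = p^0(i, 0 \mid \mathcal{F}_N)$ from one long orbit. As the conditioned past grows, the estimates settle, as the convergence theorem predicts (up to finite-sample noise).

```python
import math
import random

random.seed(0)

# Assumed illustration: circle rotation coarse-grained into two cells.
ALPHA = (math.sqrt(5) - 1) / 2   # irrational rotation angle
STEPS = 200_000

x = random.random()              # the uniform measure is stationary for this map
s = []                           # mesoscopic (symbolic) trajectory
for _ in range(STEPS):
    s.append(0 if x < 0.5 else 1)
    x = (x + ALPHA) % 1.0

def conditional(symbols, past):
    """Empirical estimate of P(i = 1 | given past), i.e. pi_N for N = len(past)."""
    N = len(past)
    hits = total = 0
    for t in range(N, len(symbols)):
        if tuple(symbols[t - N:t]) == past:
            total += 1
            hits += symbols[t]
    return hits / total if total else None

# Condition on ever-longer pasts actually observed around time t0 = 100:
# the sequence of estimates stabilizes as the memory length N grows.
for N in (1, 2, 4, 8):
    print(N, conditional(s, tuple(s[100 - N:100])))
```

The specific map, partition, and orbit length are assumptions chosen so that every conditioning pattern recurs often; the point is only the qualitative stabilization of the conditional probabilities.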

#### **2.6** *n***-times Markov approximation of the mesoscopic stationary process**

Returning to inequalities (19), when the value *ε* is fixed for obtaining a required precision, the value *n* = *n*(*ε*) is determined and a satisfying approximation of the exact mesoscopic process is obtained by neglecting the memory effects at time differences larger than *n* [18]. Thus, one replaces $p(i_N, N \mid i_{N-1}, N-1; \ldots; i_0, 0)$ by

$$p^{(n)}(i\_N, N \mid i\_{N-1}, N-1; \ldots; i\_0, 0) = p(i\_N, N \mid i\_{N-1}, N-1; \ldots; i\_{N-n}, N-n) \quad \text{if} \; N > n. \tag{37}$$

With the convention

$$p^{(n)}(i\_0, 0; \ldots; i\_N, N) = p(i\_0, 0; \ldots; i\_N, N) \quad \text{if} \; N \le n, \tag{38}$$

all the probabilities related to the approximate process *p*(*n*) are defined from the probabilities of *p*: this defines *p*(*n*), the approximate process of order *n* of *p*. So, *p*(*n*) has a finite memory of size *n*, whereas *p* has in general an infinite memory.
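As a concrete toy instance of the truncation (37), not taken from the chapter: below, the "exact" process $p$ is a binary chain whose transition probability depends on the two previous states (an assumed table `Q`), and its order-1 approximation $p^{(1)}$ is computed from the joint probabilities. The exact conditional distinguishes the pasts $(0,0)$ and $(1,0)$; the truncated one cannot.

```python
from itertools import product

# Toy "exact" process p on two mesostates {0, 1} with memory 2 (assumed example):
# probability that the next state is 1, given the two previous states.
Q = {(0, 0): 0.9, (0, 1): 0.6, (1, 0): 0.4, (1, 1): 0.1}

def p_joint(path):
    """Exact joint p(i_0, 0; ...; i_N, N); the first two symbols are uniform."""
    pr = 0.25  # uniform over the 4 initial pairs (i_0, i_1)
    for t in range(2, len(path)):
        q1 = Q[(path[t - 2], path[t - 1])]
        pr *= q1 if path[t] == 1 else 1.0 - q1
    return pr

def p_cond(path, memory):
    """p(i_N | last `memory` symbols), marginalizing the earlier history."""
    N = len(path) - 1
    ctx = path[N - memory:N]
    prefixes = list(product((0, 1), repeat=N - memory))
    num = sum(p_joint(pre + ctx + (path[N],)) for pre in prefixes)
    den = sum(p_joint(pre + ctx + (i,)) for pre in prefixes for i in (0, 1))
    return num / den

# Order-1 approximation p^(1) (Eq. 37) keeps one step of memory, so it cannot
# distinguish the pasts (0, 0) and (1, 0); the exact (memory-2) process can.
path_a = (0, 0, 1)   # past ends in 0, preceded by 0
path_b = (1, 0, 1)   # past ends in 0, preceded by 1
print(p_cond(path_a, 2), p_cond(path_b, 2))  # differ: full memory 2
print(p_cond(path_a, 1), p_cond(path_b, 1))  # equal: memory truncated to 1
```

The table `Q` and the uniform initial pair are assumptions for illustration; the construction of $p^{(n)}$ from the joints of $p$ is the point.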

The process *p*(*n*) is a Markov process on the partial trajectories *IK* consisting of groups of *n* successive mesoscopic states


$$I\_K = \left( i\_{Kn}, i\_{Kn+1}, \ldots, i\_{(K+1)n-1} \right) \in \mathcal{M}^n. \tag{39}$$

Its probability distributions can be written in abbreviated notations

$$P\_K^{(n)}(I\_0, T\_0; I\_1, T\_1; \ldots; I\_{K-1}, T\_{K-1}) \equiv p^{(n)}(I\_0, 0, 1, \ldots, n-1; I\_1, n, n+1, \ldots, 2n-1; \ldots; I\_{K-1}, (K-1)n, \ldots, Kn-1), \tag{40}$$

$T_K$ being the group of $n$ successive times: $T_K = (Kn, Kn+1, \ldots, (K+1)n-1)$. From the approximation (18) it follows (see Appendix A) that

$$P^0\left(I\_K, T\_K \mid I\_{K-1}, T\_{K-1}; \ldots; I\_0, T\_0\right) \approx P^{0(n)}\left(I\_K, T\_K \mid I\_{K-1}, T\_{K-1}\right), \tag{41}$$

where we now use the upper index 0 in $P^0$ and $P^{0(n)}$ to recall that, in the present section, $p$ is the stationary distribution. Note that, because of this stationarity,

$$P^{0(n)}\left(I\_K, T\_K \mid I\_{K-1}, T\_{K-1}\right) = P^{0(n)}\left(I\_K, T\_1 \mid I\_{K-1}, T\_0\right) = P^0\left(I\_K, T\_1 \mid I\_{K-1}, T\_0\right) \equiv W(I\_K \mid I\_{K-1}). \tag{42}$$

So, the transition matrix *W* is well defined from the known stationary distribution $p^0$.

From the approximate relation (41) it follows that the exact stationary process $P^0$ on the partial history $I_K$ during the time interval $T_K$ approximately obeys the *n*-times Markov Equation (see Section 2.7)

$$P^0(I\_K, T\_K) \approx \sum\_{I\_{K-1}} W(I\_K | I\_{K-1}) \ P^0(I\_{K-1}, T\_{K-1}).\tag{43}$$

while the *n*th approximation $P^{0(n)}$ satisfies (43) exactly.
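The block construction (39)–(43) can be sketched with a toy memory-2 chain standing in for the coarse-grained process (an assumed example, not the chapter's system): for $n = 2$ the blocks $I_K$ of (39) form a Markov chain, the matrix $W$ of (42) is row-stochastic, and iterating the Master Equation (43) drives any initial block distribution toward the stationary one.

```python
from itertools import product

# Assumed toy memory-2 chain on {0, 1}: probability of next state 1
# given the two previous states.
Q = {(0, 0): 0.9, (0, 1): 0.6, (1, 0): 0.4, (1, 1): 0.1}

def step(prev2, prev1, nxt):
    """One-step conditional probability p(nxt | prev2, prev1)."""
    q1 = Q[(prev2, prev1)]
    return q1 if nxt == 1 else 1.0 - q1

BLOCKS = list(product((0, 1), repeat=2))  # I_K = (i_{2K}, i_{2K+1}) in M^2

def W(block, prev_block):
    """W(I_K | I_{K-1}) as in Eq. (42), for block length n = 2."""
    a, b = prev_block
    c, d = block
    return step(a, b, c) * step(b, c, d)

# Each row of W is a probability distribution over the 4 blocks.
for prev in BLOCKS:
    assert abs(sum(W(blk, prev) for blk in BLOCKS) - 1.0) < 1e-12

# Iterate the n-times Markov Equation (43): P_K(I) = sum_J W(I | J) P_{K-1}(J).
P = {blk: 0.25 for blk in BLOCKS}  # arbitrary initial block distribution
for _ in range(100):
    P = {blk: sum(W(blk, prev) * P[prev] for prev in BLOCKS) for blk in BLOCKS}
print(P)  # approaches the stationary block distribution
```

Because the toy chain has memory exactly $n = 2$, the block process here is exactly Markov; for a generic coarse-grained process, $W$ built as in (42) only gives the approximation (43).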

#### **2.7 Markov approximations of the nonstationary mesoscopic process**

We return to the nonstationary process *p* generated by the deterministic microscopic process from an arbitrary initial distribution of the mesoscopic states, given by (11). As in paragraph 2.6, it is now necessary to distinguish the stationary process $p^0$ by the upper index 0.

One can write the trivial equality

$$p(i\_N, N; \ldots; i\_{N+n-1}, N+n-1) = \sum\_{i\_{N-1}, \ldots, i\_0} p(i\_{N+n-1}, N+n-1; \ldots; i\_N, N \mid i\_{N-1}, N-1; \ldots; i\_0, 0)\, p(i\_0, 0; \ldots; i\_{N-1}, N-1). \tag{44}$$

We now use remark (14): the conditional probabilities, conditioned by the whole past up to time 0, are identical in the stationary and nonstationary situations. The stationary distribution $p^0$ can be approximated by its *n*th approximation $p^{0(n)}$ introduced in Section 2.6. Thus we can write

$$\begin{split} p^0&(i\_{N+n-1}, N+n-1; \ldots; i\_N, N \mid i\_{N-1}, N-1; \ldots; i\_0, 0) \\ &= p^0(i\_{N+n-1}, N+n-1 \mid i\_{N+n-2}, N+n-2; \ldots; i\_0, 0)\, p^0(i\_{N+n-2}, N+n-2 \mid i\_{N+n-3}, N+n-3; \ldots; i\_0, 0) \cdots p^0(i\_N, N \mid i\_{N-1}, N-1; \ldots; i\_0, 0) \\ &\approx p^{0(n)}(i\_{N+n-1}, N+n-1; \ldots; i\_N, N \mid i\_{N-1}, N-1; \ldots; i\_{N-n}, N-n). \end{split} \tag{45}$$


*Advances in Dynamical Systems Theory, Models, Algorithms and Applications*


With (45), Eq. (44) yields the approximate *n*-times Markov Equation

$$p(i\_N, N; \ldots; i\_{N+n-1}, N+n-1) \approx \sum\_{i\_{N-1}, \ldots, i\_{N-n}} p^0(i\_{N+n-1}, N+n-1; \ldots; i\_N, N \mid i\_{N-1}, N-1; \ldots; i\_{N-n}, N-n)\, p(i\_{N-n}, N-n; \ldots; i\_{N-1}, N-1). \tag{46}$$

Taking *N* = *Kn* for an integer *K* ≥ 0, using the condensed notations of § 2.6 and definition (42), Eq. (46) yields an approximate Master Equation for the probability $P(I_K, T_K)$ of the partial history $I_K$ during the time interval $T_K$

$$P(I\_K, T\_K) \approx \sum\_{I\_{K-1}} W(I\_K \mid I\_{K-1}) \ P(I\_{K-1}, T\_{K-1}), \tag{47}$$


which is Eq. (43) obtained for the stationary probability $P^0(I_K, T_K)$. Let $P^{(n)}(I_K, T_K)$ be the exact solution of Eq. (47) that coincides with the exact $P$ at the $n$ first elementary times $0, 1, \ldots, n-1$ of the system history: $P^{(n)}(I_0, T_0) = P(I_0, T_0)$. Then, $P^{(n)}(I_K, T_K)$ defines the $n$th approximation of $P(I_K, T_K)$: in principle, it can be computed from Eq. (47) since the transition probabilities $W$ are known by (42).

The stationary probabilities approximation $P^{0(n)}$ deduced from $p^0$ provides the stationary solution of (47)

$$\begin{array}{c} P^{0(n)}\left(I\_K, T\_K\right) = p^0\left(i\_{Kn}, Kn; \ldots; i\_{(K+1)n-1}, (K+1)n - 1\right) \\ = p^0\left(i\_{Kn}, 0; \ldots; i\_{(K+1)n-1}, n - 1\right). \end{array} \tag{48}$$

So, when $K \to \infty$,

$$P^{(n)}\left(I\_K, T\_K\right) \to P^{0(n)}\left(I\_K, T\_K\right), \tag{49}$$

and consequently, for any integer $k \in [0, n-1]$, the $n$th approximation of the mesoscopic distribution $p$ satisfies

$$p^{(n)}(i, Kn + k) \to \mu(i, k) = \mu(i) \quad \text{if} \; K \to \infty, \tag{50}$$

for any initial mesoscopic distribution, which is the basic assumption of statistical thermodynamics. Supplementary assumptions allow one to conclude that, in realistic situations, the mesoscopic distribution $p$ itself satisfies this property (see Appendix B).

#### **2.8 Time averages and simple Markov approximation**

Up to now, we took as time unit some time step $\tau$ which gives the time scale of microscopic phenomena. By considering some finite partition $(i)$ of the phase space $X$ and replacing the microscopic states $x \in X$ by the mesoscopic states $i \in (i_k)$, we have performed a space coarse graining, as necessary for taking practical observations into account. For the same purpose, one should also introduce [18] a time coarse graining, since the time scale $\theta = n\tau$ of current observations is much larger than $\tau$: $n \gg 1$.

All mesoscopic functions remaining practically constant on the time scale $\theta$, their averages can be computed from the time averages $\overline{p}_K$ of the probabilities $p_k$ over $\theta$

$$\overline{p}\_K = \frac{1}{n} \sum\_{k \in T\_K} p\_k, \tag{51}$$

where $K$ is an integer $\ge 1$ and $T_K$ is the time interval ($\tau$ being the time unit) $T_K = ((K-1)n, (K-1)n+1, \ldots, Kn-1)$.

Suppose (*a*) that the mesoscopic probabilities $p$ are slowly varying functions of the mesoscopic states (*i.e.* for any positive $\alpha$, $|p(i) - p(j)| < \alpha$ if the distance between the mesostates $i$ and $j$ is small enough, with an appropriate metric in the space of mesostates), and (*b*) that discontinuous trajectories have low probabilities and can be neglected. Of course, these assumptions are not verified for some important, well known processes such as Brownian processes, but they seem to be reasonable for modeling physical processes where the inertial effects are strong enough. Then, a simple approximation is to consider that

$$\begin{split} &p^0\left(i\_{Kn-1}, Kn-1; \ldots; i\_{(K-1)n}, (K-1)n\left|i\_{(K-1)n-1}, (K-1)n-1; \ldots; i\_{(K-2)n}, (K-2)n\right.\right) \\ &\approx p^0\left(i\_{\overline{K}}, Kn-1; \ldots; i\_{\overline{K}}, (K-1)n\left|i\_{\overline{K}-1}, (K-1)n-1; \ldots; i\_{\overline{K}-1}, (K-2)n\right.\right) \\ &\equiv \; \overline{W}(i\_{\overline{K}}|i\_{\overline{K}-1}), \end{split} \tag{52}$$

where

$$\overline{K} = \frac{1}{n} \sum\_{k \in T\_K} k = \frac{1}{n} \sum\_{k=(K-1)n}^{Kn-1} k. \tag{53}$$

Consider the time-averaged probability

$$\overline{P}(i\_{\overline{K}}, K) \equiv \frac{1}{n} \sum\_{k \in T\_K} p(i\_k, k) \approx \frac{1}{n} \sum\_{k \in T\_K} p\left(i\_{\overline{K}}, k\right). \tag{54}$$

Using the Markov Eq. (47) and the complementary approximations (52), we obtain the new Master Equation

$$
\overline{P}(i, K) \approx \sum\_{j} \overline{W}(i \mid j) \ \overline{P}(j, K-1). \tag{55}
$$

This equation is much simpler than Eq. (47), since it applies in the space $\mathcal{M}$ of the $M$ mesostates $(i)$, whereas (47) is valid in the space $\mathcal{M}^n$ of $n$ successive mesostates. However, Eq. (55) relies on several approximations that are difficult to control. In spite of these difficulties, which can only be precisely discussed for specific examples, Master Equations like (55), resulting from deterministic microscopic systems by coarse-graining both their states and time, are a practical way to study their evolution at a mesoscopic scale, used in innumerable works.

#### **3. Discussion of the Markov representation derived from Hamiltonian dynamics, and estimation of the uniformization time**

The previous results show that the coarse grained mesoscopic dynamics can eventually be represented by a Master Equation, because the memory of this dynamics is gradually lost over time. However, they do not provide the time scale of this fading. In order to estimate its order of magnitude simply, we make an intuitive remark: the conditional probability to jump from some mesostate $i$ to another one can be evaluated without knowing the past history of the system if one knows the initial microscopic distribution over $i$. The only unbiased initial distribution is the uniform one. Thus, one can consider that the system has a memory limited to one time step if uniformity is approximately realized in each mesoscopic cell: this is the basis of the elementary Markov models of mesoscopic evolution. Let $T$ be the average time
