*Advances in Dynamical Systems Theory, Models, Algorithms and Applications*

*2.1.3 Initial microscopic distribution: The stationary situation*

Suppose that S is an isolated physical system and no observation was made on S at time 0 nor before 0. Then, in the absence of any knowledge on S, we admit that at the initial time S is distributed according to the only unbiased probability law, which is the uniform law. This is clearly justified in the finite case, according to the physical meaning traditionally given to probability: in fact, attributing different probabilities to two distinct microstates of *X* would imply that some measurement would allow one to distinguish them objectively, which is not the case at time 0. In the absolutely continuous case, initial uniformity is less obvious: it amounts to assuming that the system should be found with equal probability in two regions of the state space with equal volumes if no information allows one to give preference to any of these regions. This is of course a subjective assertion, but for Hamiltonian systems it agrees with the semi-quantum principle which asserts that, in canonical coordinates, equal volumes of the phase space correspond to equal numbers of quantum states.

Another way of choosing the initial probability distribution is to use Jaynes' principle [25], which is to maximize the Shannon entropy of the distribution under the known constraints on this distribution: in the present case of an isolated system which has not been previously observed, this principle also leads to the uniform law. It is not really better founded than the previous, elementary reasoning, but it may be more satisfying, and it can be safely used in more complex situations. We refer to most textbooks on statistical mechanics for a discussion of these well-known, basic questions.

The uniform distribution in a finite space, either discrete or absolutely continuous, is clearly stationary. In addition to the previous hypotheses, we will assume that the space *X* is indecomposable [26]: the only subsets of *X* which are preserved by the evolution function *φ<sup>t</sup>* are the empty set ∅ and *X* itself. Then, the stationary probability distribution is unique [18].

For simplicity, we will henceforth assume that the phase space *X* is finite. **Initial, nonstationary situation.** In certain situations, the system can be prepared by submitting it to specific constraints before the initial time 0. Then it may not be distributed uniformly in *X* at *t* = 0. We will consider this case in the next paragraph.

**2.2 Mesoscopic distributions**

*2.2.1 Mesoscopic states*

Because of the imprecision of the physical observations, it is impossible to determine exactly the microstate of the system, but it is currently admitted that the available measuring instruments allow one to define a finite partition M of *X* into subsets (*i<sub>k</sub>*), *k* = 1, 2, … *M*, such that it is impossible to distinguish two microstates belonging to the same subset *i*. So, in practice the best possible description of the system consists in specifying the subset *i* where its microstate *x* lies: *i* can be called the *mesostate* of the system. The probability for the system to be in the mesostate *i* at time *t* will be denoted *p*(*i*, *t*). It is not sure, however, that two microstates belonging to two different mesostates can always be distinguished: this point will be considered in Section 3.2.2.

*Remark:* for convenience, we use the same letter *p* to denote the probability in a countable state space, as well as the probability density in the continuous case. This creates no confusion when the variable type is explicitly mentioned. This is the case now since, as mentioned previously, we assume that the space *X* is discrete.

*2.2.2 The stationary situation*

If time 0 is the beginning of all observations and actions, we assume that the initial microscopic distribution *μ* is uniform and stationary, as discussed previously, and the probability to find system S in the mesostate *i*<sub>0</sub> at time 0 is *p*(*i*<sub>0</sub>, 0) = *μ*(*i*<sub>0</sub>). The probability to be in *i* at time *t* is *p*<sup>0</sup>(*i*, *t*) = *μ<sub>t</sub>*(*i*) = *μ*(*φ*<sub>−*t*</sub>(*i*)). The stationary joint probability to find S in *i*<sub>0</sub> at time 0 and in *i* at time *t* is

$$p^0(i_0, 0; i, t) = \mu(\varphi_{-t} i \cap i_0) = \mu(i \cap \varphi_t i_0) \tag{5}$$

and the conditional probability of finding S in *i* at time *t*, knowing that it was in *i*<sub>0</sub> at time 0, is

$$p^0(i, t \mid i_0, 0) = \frac{p^0(i, t; i_0, 0)}{p^0(i_0, 0)} = \frac{\mu(\varphi_{-t}\, i \cap i_0)}{\mu(i_0)} = \frac{\mu(i \cap \varphi_t\, i_0)}{\mu(i_0)}. \tag{6}$$
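As a concrete illustration, the stationary probabilities (5) and (6) can be computed directly for a small deterministic system. The six-point phase space, the cyclic evolution map and the three-mesostate partition below are hypothetical choices made for illustration, not taken from the text; a minimal sketch in Python:

```python
from fractions import Fraction

# Hypothetical toy model (illustration only): a six-point phase space X,
# a measure-preserving cyclic evolution map phi, and a partition of X
# into three mesostates a, b, c.
X = set(range(6))
phi = {x: (x + 1) % 6 for x in X}                     # time-one evolution map
mesostates = {"a": {0, 1}, "b": {2, 3}, "c": {4, 5}}  # finite partition of X

def phi_t(A, t):
    """Image phi_t(A) of a subset A after t time steps."""
    for _ in range(t):
        A = {phi[x] for x in A}
    return A

def mu(A):
    """Uniform (stationary) microscopic measure of a subset A of X."""
    return Fraction(len(A), len(X))

def p0_joint(i0, i, t):
    """Stationary joint probability (5): mu(i ∩ phi_t i0)."""
    return mu(mesostates[i] & phi_t(mesostates[i0], t))

def p0_cond(i, t, i0):
    """Conditional probability (6): mu(i ∩ phi_t i0) / mu(i0)."""
    return p0_joint(i0, i, t) / mu(mesostates[i0])

print(p0_cond("b", 1, "a"))  # -> 1/2: after one step, half of mesostate a lies in b
```

Since *φ* preserves *μ*, the two expressions in (5) agree; exact rational arithmetic (`fractions`) avoids floating-point noise in such checks.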

Similarly, the stationary *n*-times joint probability and related conditional probabilities are readily obtained from

$$p^0(i_0, 0; i_1, t_1; \dots; i_{n-1}, t_{n-1}) = \mu\left(\varphi_{-t_{n-1}} i_{n-1} \cap \dots \cap \varphi_{-t_1} i_1 \cap i_0\right), \tag{7}$$

with, for any *t*: *p*<sup>0</sup>(*i*<sub>0</sub>, *t*; *i*<sub>1</sub>, *t*<sub>1</sub> + *t*; … *i*<sub>*n*−1</sub>, *t*<sub>*n*−1</sub> + *t*) = *p*<sup>0</sup>(*i*<sub>0</sub>, 0; *i*<sub>1</sub>, *t*<sub>1</sub>; … *i*<sub>*n*−1</sub>, *t*<sub>*n*−1</sub>). For the sake of simplicity, we will discretize the times 0 < *t*<sub>1</sub> < *t*<sub>2</sub> < …, and write *t<sub>i</sub>* = *k<sub>i</sub>τ*, *k<sub>i</sub>* being a nonnegative integer and *τ* a constant time step, which will be taken as the time unit.
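This time-translation invariance can be checked numerically on a toy model of the same kind (again a hypothetical six-point phase space with a cyclic, measure-preserving map; none of these choices come from the text):

```python
from fractions import Fraction

# Hypothetical toy model (illustration only): six microstates, cyclic
# measure-preserving dynamics, three mesostates.
X = set(range(6))
phi = {x: (x + 1) % 6 for x in X}
inv = {v: k for k, v in phi.items()}                  # inverse map, phi_{-1}
mesostates = {"a": {0, 1}, "b": {2, 3}, "c": {4, 5}}

def pre(A, t):
    """Preimage phi_{-t}(A): microstates whose trajectory is in A at time t."""
    for _ in range(t):
        A = {inv[x] for x in A}
    return A

def mu(A):
    """Uniform (stationary) measure of a subset A of X."""
    return Fraction(len(A), len(X))

def p0_two_time(i0, s, i1, u):
    """Stationary joint probability of mesostate i0 at time s and i1 at time u."""
    return mu(pre(mesostates[i0], s) & pre(mesostates[i1], u))

# Stationarity: shifting both observation times by t leaves the joint law unchanged.
for t in range(1, 7):
    assert p0_two_time("a", t, "b", 2 + t) == p0_two_time("a", 0, "b", 2)
```

The assertion holds precisely because *μ* is preserved by *φ*, which is what makes the process stationary.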

#### *2.2.3 Nonstationary situation*

If S is a physical system, interactions may exist before or at time 0, so that S can be constrained to lie in a certain subset *A* of *X* at time 0. However, since it is not possible to distinguish two microstates corresponding to the same mesostate, *A* should be a union of mesostates, or at least one mesostate. If it is known that at time 0 the microstate *x* of the system belongs to the mesostate *i*, we should assume that the initial microscopic distribution is uniform over *i*, since no available observation can give further information on *x*: so, in the discrete case, if *n*(*i*) is the number of microstates included in *i* and *χ<sub>i</sub>*(*x*) is the characteristic function of *i*,

$$p(x, 0 \mid x \in i) = \frac{1}{n(i)}\, \chi_i(x). \tag{8}$$

In the absolutely continuous case, the corresponding conditional density is obtained in the same way, replacing the number of microscopic states contained in the mesostate *i* by its volume *v*(*i*). For simplicity, we continue with the discrete case, with obvious adaptations to the continuous case.
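To make the construction concrete, the following sketch builds the conditional distribution (8) for a hypothetical toy model (the phase space, map and partition are illustrative assumptions, not from the text) and propagates it one step, which already yields a nonuniform mesoscopic distribution:

```python
from fractions import Fraction

# Hypothetical toy model (illustration only).
X = set(range(6))
phi = {x: (x + 1) % 6 for x in X}
mesostates = {"a": {0, 1}, "b": {2, 3}, "c": {4, 5}}

def p_init(i):
    """Eq. (8): initial distribution uniform over the n(i) microstates of i."""
    n_i = len(mesostates[i])
    return {x: Fraction(int(x in mesostates[i]), n_i) for x in X}

def evolve(p):
    """Transport a microscopic distribution one step along phi."""
    q = {x: Fraction(0) for x in X}
    for x, w in p.items():
        q[phi[x]] += w
    return q

def meso_prob(p, i):
    """Mesoscopic probability p(i, t) induced by the microscopic distribution p."""
    return sum(p[x] for x in mesostates[i])

p = evolve(p_init("a"))                      # distribution at time 1, started in a
print(meso_prob(p, "a"), meso_prob(p, "b"))  # -> 1/2 1/2
```

Starting from the mesostate "a", one time step spreads the probability over two mesostates, illustrating how the constrained preparation relaxes under the deterministic dynamics.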

If one only knows, for each mesostate *i* of M, the mesoscopic initial distribution *p*(*i*, 0), the probability that at time 0 the system belongs to *i*, then the initial microscopic distribution becomes

$$p(x, 0) := \sum_i \frac{1}{n(i)}\, p(i, 0)\, \chi_i(x) = \sum_i \frac{p(i, 0)}{\mu(i)} \frac{\chi_i(x)}{N}, \tag{9}$$

*N* being the total number of microstates in *X*. **The** *n***-times nonstationary mesoscopic probabilities** are obtained from (9)

$$\begin{split} p_n(i_0, 0; i_1, 1; \dots; i_{n-1}, n-1) &= p\left(i_0 \cap \varphi_{-1} i_1 \cap \dots \cap \varphi_{-n+1} i_{n-1}, 0\right) \\ &= \frac{p(i_0, 0)}{\mu(i_0)} \frac{n\left(i_0 \cap \varphi_{-1} i_1 \cap \dots \cap \varphi_{-n+1} i_{n-1}\right)}{N}, \end{split} \tag{10}$$

where *n*(*A*) is the number of microstates belonging to some subset *A* of *X*. So

$$p_n(i_0, 0; i_1, 1; \dots; i_{n-1}, n-1) = \mu(i_0 \cap \varphi_{-1} i_1 \cap \dots \cap \varphi_{-n+1} i_{n-1}) \frac{p(i_0, 0)}{\mu(i_0)}, \tag{11}$$

and all multiple probabilities follow, for instance

$$p_n(i_1, 1; \dots; i_n, n) = \sum_{i_0} \mu\left(i_0 \cap \varphi_{-1} i_1 \cap \dots \cap \varphi_{-n} i_n\right) \frac{p(i_0, 0)}{\mu(i_0)}. \tag{12}$$

The corresponding process is generally not Markovian. For instance, if *i*<sub>0</sub> ∩ *φ*<sub>−1</sub>*i*<sub>1</sub> ≠ ∅, *i*<sub>1</sub> ∩ *φ*<sub>−1</sub>*i*<sub>2</sub> ≠ ∅ and *i*<sub>0</sub> ∩ *φ*<sub>−2</sub>*i*<sub>2</sub> = ∅, it is easily seen that *p*(*i*<sub>2</sub>, 2 | *i*<sub>1</sub>, 1; *i*<sub>0</sub>, 0) = 0 but *p*(*i*<sub>2</sub>, 2 | *i*<sub>1</sub>, 1) ≠ 0.

From the definition of the relative probabilities, one can formally write

$$p(i_2, t_2) = \sum_{i_1} p(i_2, t_2 \mid i_1, t_1)\, p(i_1, t_1), \tag{13}$$

but in general this equation is useless, since the conditional probability *p*(*i*<sub>2</sub>, *t*<sub>2</sub> | *i*<sub>1</sub>, *t*<sub>1</sub>) cannot be computed independently of *p*(*i*<sub>1</sub>, *t*<sub>1</sub>).

It results from (11) that the nonstationary conditional probabilities, *conditioned by the whole past up to time* 0, are identical to the corresponding stationary probabilities: as an example

$$\begin{split} p(i_n, n \mid i_{n-1}, n-1; \dots; i_0, 0) &= p^0(i_n, n \mid i_{n-1}, n-1; \dots; i_0, 0) \\ &= \frac{\mu(i_0 \cap \varphi_{-1} i_1 \cap \dots \cap \varphi_{-n} i_n)}{\mu(i_0 \cap \varphi_{-1} i_1 \cap \dots \cap \varphi_{-n+1} i_{n-1})}. \end{split} \tag{14}$$

We will make use of this simple but important property later.

*Stochastic Theory of Coarse-Grained Deterministic Systems: Martingales and Markov…*
*DOI: http://dx.doi.org/10.5772/intechopen.95903*

On the other hand, the new information obtained by observing the system in the mesoscopic state *i<sub>n</sub>* at time *t<sub>n</sub>*, knowing that it was in the respective states *i*<sub>0</sub>, … *i*<sub>*n*−1</sub> at the prior times 0, … *n*−1, will be called the instantaneous entropy

$$\begin{split} s_n(p) \equiv S_{n+1}(p) - S_n(p) &= -\sum_{i_0, \dots, i_n} p(i_0, 0; \dots; i_n, n) \ln p(i_n, n \mid i_{n-1}, n-1; \dots; i_0, 0) \\ &= \sum_{i_0, \dots, i_{n-1}} p(i_0, 0; \dots; i_{n-1}, n-1)\, S\big(p(\cdot, n \mid i_{n-1}, n-1; \dots; i_0, 0)\big) \geq 0, \end{split} \tag{16}$$

where *p* denotes the infinite process. The properties of *S*(*p<sub>n</sub>*) and *s<sub>n</sub>*(*p*) have been extensively studied by Kolmogorov and other authors in the case of the stationary process (6) [19]: they are summarily mentioned in 2.5. They are not necessarily valid for the nonstationary process.

*2.3.2 Maximizing the n-times entropy of the mesoscopic system: The "Markov scheme"*

If one knows the first two distributions *p*<sub>1</sub> and *p*<sub>2</sub>, one can mimic the exact mesoscopic distributions *p<sub>n</sub>* by using Jaynes' principle, maximizing the entropy *S*(*q<sub>n</sub>*) of a distribution *q<sub>n</sub>* under the constraints *q*<sub>1</sub> = *p*<sub>1</sub> and *q*<sub>2</sub> = *p*<sub>2</sub>. Then it is found that the optimal distribution is the Markov distribution *q<sub>n</sub>* satisfying these constraints [18].

It is shown in Ref. [18] that for *n* > 2, both the *n*-times entropy *S<sub>n</sub>*(*q*) and the instantaneous entropy *s<sub>n</sub>*(*q*) are larger than the corresponding entropies *S<sub>n</sub>*(*p*) and *s<sub>n</sub>*(*p*) of the exact process *p*, except if *p* is Markov: *p* = *q*.

The Markov process *q<sub>n</sub>* is not really an approximation of the mesoscopic process *p*, because *q<sub>n</sub>* does not tend to *p<sub>n</sub>* when *n* → ∞. Approximating the exact mesoscopic process by a Markov process will be the main purpose of the next section.

**2.4 Entropy and memory in the stationary situation**

*2.4.1 Kolmogorov entropy of the stationary process*

Here we consider the stationary process arising from the initial uniform microscopic distribution *μ*(*x*), when the *n*-times stationary probability is *p*<sup>0</sup><sub>*n*</sub> given by (7). For the sake of simplicity we omit the index 0 in the present section, unless otherwise specified. It can be shown [19] that the entropy *S<sub>n</sub>*(*p*) is an increasing, concave function of *n*:

$$s_n \equiv S_{n+1}(p) - S_n(p) \geq 0, \tag{17}$$

$$s_n - s_{n-1} = S_{n+1}(p) - 2 S_n(p) + S_{n-1}(p) \leq 0. \tag{18}$$

It results from (17) and (18), and also from 2.5.2, that the limits

$$\lim_{n \to \infty} \frac{1}{n} S_n(p) = \lim_{n \to \infty} s_n(p) = s(p) \tag{19}$$

exist: *s*(*p*) is the Kolmogorov entropy of the evolution function *φ* with respect to the partition (*i*) of the mesoscopic states [19]. More simply, we can call it the entropy of the mesoscopic process.

*2.4.2 Memory decrease in the stationary mesoscopic process*

It has been proved recently [18] that, although it is infinite, the memory of the mesoscopic process fades out with time: for *n* large enough, if *N* > *n* the probability
