**Theories of Fluctuations and Dissipation**

### **5.1 Introduction**

In the previous chapters, we saw that hydrodynamic dispersion is in fact a result of solute particles moving along a decreasing pressure gradient and encountering the solid surfaces of a porous medium. The pressure gradient provides the driving force, which translates into kinetic energy, and the porous medium acts as the dissipater of that kinetic energy; any such energy dissipation associated with small molecules generates fluctuations among molecules. Looking at a molecular-level picture, the dissolved solute particles in water travelling through the porous medium slow down near a surface and then increase in velocity once the molecules are scattered after the impact with the solid surface. Refining this picture a bit more, we see that the velocity boundary layers along the solid surfaces assist this process. Not all the molecules hit solid surfaces either; some of them are subjected to micro-level local pressure gradients and move away from the surfaces. A physical ensemble of these solute molecules would exhibit behaviours that are measurable using appropriate extensive variables. (Extensive variables depend on the extent of the system of molecules, e.g., the number of molecules, concentrations, kinetic energy, etc., whereas intensive variables do not change with the size of the system, e.g., pressure, temperature, density, etc.) These measurable quantities at the macroscopic level have their origins at the microscopic level. Therefore, we can anticipate that a molecular-level description would justify the operational models that we develop at an ensemble level. Naturally, one could expect that the statistical moments of the variables of an ensemble would lead to meaningful models of the process we would like to observe.

In the development of the SSTM, we express the velocity of the solute as the sum of the mean velocity and a fluctuating component around the mean. The mean velocity may then be evaluated by using Darcy's law. We then express the fluctuating component in terms of a spectral expansion dependent on a covariance kernel. However, we need to understand that this type of picture should, in a more fundamental way, be based on established theories. Towards that end, in this chapter, we review some of the fundamental theoretical frameworks associated with molecular fluctuation. We show the connectivity of the thermodynamical, molecular and stochastic descriptions of fluctuations and dissipation, and then we make use of Ito diffusions to obtain the models of statistical moments of relevant variables. While we do not cite references within this chapter (as the works we refer to are well-accepted knowledge in disciplines such as thermodynamics, statistical mechanics and stochastic processes), all the relevant works are given in the reference list at the end of the book. We refer primarily to Keizer's work (1987) in this chapter.
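The decomposition above can be illustrated with a minimal numerical sketch (not the SSTM itself): a solute velocity is represented as an assumed constant Darcy mean plus a fluctuating component with an exponential covariance kernel, generated by an Ornstein-Uhlenbeck recursion. All parameter values (`v_mean`, `sigma`, `ell`) are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the SSTM): velocity = Darcy mean + fluctuation,
# where the fluctuation has an exponential covariance kernel
# C(s) = sigma^2 * exp(-s / ell), realized by an Ornstein-Uhlenbeck recursion.
rng = np.random.default_rng(0)
v_mean = 1.0e-5   # assumed Darcy (mean) velocity, m/s
sigma = 2.0e-6    # assumed fluctuation standard deviation, m/s
ell = 0.5         # assumed correlation time of the kernel, s
dt, n = 0.01, 200_000

rho = np.exp(-dt / ell)                 # one-step correlation of the kernel
w = rng.standard_normal(n)
fluct = np.empty(n)
fluct[0] = sigma * w[0]
for k in range(1, n):
    # exact stationary update reproducing the exponential covariance
    fluct[k] = rho * fluct[k - 1] + sigma * np.sqrt(1 - rho**2) * w[k]

v = v_mean + fluct
print(v.mean(), fluct.std())
```

The sample mean recovers the Darcy velocity and the sample standard deviation recovers the assumed fluctuation amplitude, which is the sense in which the statistical moments of the ensemble carry the model.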

Any description of a process, once expressed in mathematical abstraction, becomes a "contracted" description, or a contracted model. What is important to understand is that different levels of contracted descriptions can be useful for different purposes and, at the same time, the insights gained from one level of description should be obtainable from another level of description and vice versa. However, this is a very difficult task for many molecular-level processes. One of the main reasons for this difficulty is that most molecular processes are thermodynamically irreversible. In addition, the physics of the processes at different levels of description is based on different conceptual frameworks, albeit very meaningful at a given level. In our discussion here, we consider the thermodynamic level of description, the Boltzmann level of description, and the physical ensemble description, which is inherently stochastic and hence described using stochastic processes.

### **5.2 Thermodynamic Description**

To facilitate the discussion here, we make use of Brownian motion as an example, with the aim of developing a general framework for discussion. The total differential of entropy of an idealized system of Brownian particles can be written as,

$$dS = \frac{dU}{T} + \left(\frac{P}{T}\right)dV - \left(\frac{\mu}{T}\right)dN, \tag{5.2.1}$$

where $U$ is the internal energy; $V$ is the volume of the system; $N$ is the number of particles (molecules); $P$ is the pressure; $\mu$ is the chemical potential; and $T$ is the absolute temperature. Equation (5.2.1) is a statement for a system of molecules, and the system has well-defined physical boundaries through which mass and heat transfer can occur. The momentum of the particles is included in the internal energy term, and by including the momentum ($\underline{M}$), the total energy is $E = U + \underline{M}^2/2m$, where $m$ is the mass of a particle. We can write equation (5.2.1) in the following form after including the momentum as a thermodynamic variable:

$$dS = \frac{dE}{T} - \left(\frac{\underline{v}}{T}\right) d\underline{M} + \left(\frac{P}{T}\right) dV - \left(\frac{\mu}{T}\right) dN. \tag{5.2.2}$$

In equation (5.2.2), $\underline{v}$ is the row vector of particle velocities and $\underline{M}$ is the column vector of particle momenta. The total differential of entropy $dS$ can be expressed in terms of partial derivatives:

$$dS = \left(\frac{\partial S}{\partial E}\right) dE + \left(\frac{\partial S}{\partial \underline{M}}\right) d\underline{M} + \frac{\partial S}{\partial V}\, dV + \frac{\partial S}{\partial N}\, dN, \tag{5.2.3}$$

where $\partial S/\partial \underline{M}$ indicates the row vector of $\partial S/\partial M_i$; $\partial S/\partial E$, $\partial S/\partial \underline{M}$, $\partial S/\partial V$ and $\partial S/\partial N$ are thermodynamically conjugate to the respective variables in equation (5.2.3), namely $E$, $\underline{M}$, $V$ and $N$.


The Onsager principle for the linear laws for irreversible processes states that the rate of change of an extensive variable is linearly related to the difference of the corresponding thermodynamically conjugate variable from its value at the thermodynamic equilibrium. According to this Onsager linear law we can express the expected value of the momentum conditional on the initial value in component form as follows:

$$\frac{dE}{dt}\Big[M_i(t)\,\Big|\,M_i(0)\Big] = \sum_{j} L_{ij}\left(E\left[\frac{\partial S}{\partial M_j}(t)\,\Big|\,\frac{\partial S}{\partial M_j}(0)\right] - E\left[\frac{\partial S}{\partial M_j}(t_e)\,\Big|\,\frac{\partial S}{\partial M_j}(0)\right]\right), \tag{5.2.4}$$

with $E$ denoting the expectation operator; $L$ denoting the coupling matrix, which is symmetric and non-negative definite; and subscript "$e$" referring to the values at the thermodynamic equilibrium. To simplify the notation, we denote the conditional expectation as $E[\,\cdot\,]^0$ when the variable within the square brackets is conditional upon a well-defined value at $t = 0$. Using this notation, equation (5.2.4) can be written as,

$$\frac{dE}{dt}\Big[M_i(t)\Big]^0 = \sum_{j} L_{ij}\left(E\left[\frac{\partial S}{\partial M_j}(t)\right]^0 - E\left[\frac{\partial S}{\partial M_j}(t_e)\right]^0\right). \tag{5.2.5}$$

According to equation (5.2.5), the rate of change of the conditional average of the momentum of the particle is linearly related to the deviation of the conditional average of the thermodynamic conjugate of the momentum from its value at the equilibrium. The conjugate variables are intensive variables, and the conjugate for the momentum is $-v_i/T$ according to equation (5.2.2); i.e.,

$$\frac{\partial S}{\partial M_i} = -\frac{v_i}{T}. \tag{5.2.6}$$

The coefficient matrix *L* needs to be found to complete the linear law. In this case, we can make use of the Langevin description of the Brownian motion. (See section 5.3 for a discussion of the Langevin equation.) By disregarding the random force term, the expected value of the particle momentum can be expressed as,

$$\frac{dE}{dt}\Big[\underline{M}(t)\Big]^0 = -\left(\frac{\nu}{m}\right) E\Big[\underline{M}(t)\Big]^0, \tag{5.2.7}$$

with $\nu$ as the friction constant and $m$ the mass of the particle, according to Newton's law of motion.

Because $\underline{M}(t) = m\underline{v}(t)$, we can rewrite equation (5.2.7) as,

$$\frac{dE}{dt}\Big[\underline{M}(t)\Big]^{0} = -\nu E\Big[\underline{v}(t)\Big]^{0}.\tag{5.2.8}$$
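With assumed numerical values for $\nu$, $m$ and $M(0)$, the exponential decay implied by equation (5.2.7), $E[\underline{M}(t)]^0 = \underline{M}(0)\exp(-\nu t/m)$, can be checked by integrating the linear law directly:

```python
import numpy as np

# Check of equation (5.2.7) without the random force: dE[M]/dt = -(nu/m) E[M],
# whose solution is E[M(t)] = M(0) * exp(-nu * t / m). All values are assumed.
nu = 0.8      # friction constant (assumed)
m = 2.0       # particle mass (assumed)
M0 = 1.5      # initial mean momentum (assumed)
dt, steps = 1e-4, 50_000

M = M0
for _ in range(steps):
    M += -(nu / m) * M * dt    # Euler step of the linear law

t = dt * steps
closed_form = M0 * np.exp(-nu * t / m)
print(M, closed_form)
```

The Euler trajectory and the closed-form exponential agree to the discretization error, which is the deterministic relaxation that the linear law encodes.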


Combining with equation (5.2.6), equation (5.2.8) becomes,

$$\frac{dE}{dt}\left[\underline{M}(t)\right]^0 = \nu T\, E\left[\frac{\partial S}{\partial \underline{M}}(t)\right]^0. \tag{5.2.9}$$

As the equilibrium value of $\underline{v}(t_e)$ is 0, we can express equation (5.2.9) in the form of the Onsager linear law,

$$\frac{dE}{dt}\Big[M_i(t)\Big]^0 = \sum_{j} L_{ij}\left(E\left[\frac{\partial S}{\partial M_j}(t)\right]^0 - E\left[\frac{\partial S}{\partial M_j}(t_e)\right]^0\right), \tag{5.2.9a}$$

with $L_{ij} = \nu T \delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta.

Another example of a linear law is Newton's law of cooling. Consider the heat transfer between two solids, one at temperature $T_1$ and the other at $T_2$, and let the equilibrium temperature the two solids reach be $T_e$. The thermodynamic conjugates of the internal energies, $U_1$ and $U_2$, are $1/T_1$ and $1/T_2$ (equation (5.2.1)). The Onsager principle states that,

$$\frac{dE\big[U_1(t)\big]^0}{dt} = -\frac{dE\big[U_2(t)\big]^0}{dt} = \frac{L}{T_e^2}\left(E\big[T_2(t)\big]^0 - E\big[T_1(t)\big]^0\right). \tag{5.2.10}$$

Equation (5.2.10) can be derived by applying the Onsager principle to the two solids separately and taking into account the fact that the energy lost by one solid is the energy gained by the other solid. $L$ in equation (5.2.10) is non-negative and needs to be determined experimentally. Further, equation (5.2.10) is only valid in the vicinity of the equilibrium.
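A small sketch of equation (5.2.10), with an assumed common heat capacity `C` and an assumed Onsager coefficient `L`, shows the two temperatures relaxing to $T_e$ while the total internal energy stays constant:

```python
import numpy as np

# Sketch of the linear law (5.2.10) for two solids exchanging heat.
# With equal heat capacities C (assumed), dU1/dt = (L/Te^2)(T2 - T1), and the
# energy lost by one solid is gained by the other, so U1 + U2 is constant.
C = 10.0          # heat capacity of each solid (assumed)
L = 5.0e4         # Onsager coefficient (assumed, non-negative)
T1, T2 = 350.0, 290.0
Te = 0.5 * (T1 + T2)      # equilibrium temperature for equal capacities
dt, steps = 1e-3, 200_000

for _ in range(steps):
    q = (L / Te**2) * (T2 - T1) * dt   # heat flowing into solid 1
    T1 += q / C
    T2 -= q / C                        # energy conservation

print(T1, T2, Te)
```

Both temperatures converge to $T_e$, and $T_1 + T_2$ is preserved at every step, mirroring the derivation of (5.2.10) from the two single-solid linear laws.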

The extensive variables, the momentum and the internal energy, are expressed through thermodynamic linear laws in equations (5.2.9) and (5.2.10); however, the momentum and the internal energy have quite distinct functional characteristics. For example, if we reverse the velocities of the molecules, the magnetic field and the time associated with a physical ensemble, the momentum changes direction but the internal energy remains the same. As time progresses in the reverse direction, the ensemble will move along its past trajectory. When an extensive variable changes its sign under the reversal of the time or the magnetic field or the velocities, we call that variable an odd variable; the variables that are invariant under the reversal are called even variables. In the Brownian molecule and heat transfer examples discussed previously, the internal energy and the momentum are decoupled, i.e., the coupling effects are ignored. Such coupling exists only among variables having the same symmetry under time reversal. The Onsager principle can be extended to the situation where coupling between variables with different time-reversal symmetries exists. The matrix $L_{ij}$ in the linear laws then changes to

$$L_{ij}\left(\underline{B}\right) = \varepsilon_i \varepsilon_j L_{ji}\left(-\underline{B}\right), \tag{5.2.11}$$


where $\varepsilon_i = +1$ for an even variable and $\varepsilon_i = -1$ for an odd variable, and $L$ is dependent on the external magnetic field, $\underline{B}$. When either the effects of the external magnetic field are ignored or the magnetic field is absent, variables of the same parity (even with even, or odd with odd) are coupled by a symmetric matrix, whereas odd and even variables are coupled by an antisymmetric matrix. Equations (5.2.11) are called the Onsager-Casimir reciprocal relations.
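A numerical sketch of the relations (5.2.11) at zero magnetic field, with an assumed parity assignment for three variables, confirms that same-parity pairs couple symmetrically and opposite-parity pairs antisymmetrically:

```python
import numpy as np

# Sketch of the Onsager-Casimir relations (5.2.11) at zero magnetic field:
# L_ij(0) = eps_i * eps_j * L_ji(0). Parities eps are an assumed example
# (+1 for even variables, -1 for odd variables).
eps = np.array([+1, +1, -1])        # two even variables, one odd (assumed)

# Build a matrix satisfying the relations: symmetrize on same-parity pairs,
# antisymmetrize on opposite-parity pairs.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
same = np.outer(eps, eps) > 0       # True where eps_i * eps_j = +1
Lmat = np.where(same, 0.5 * (A + A.T), 0.5 * (A - A.T))

# Verify L_ij = eps_i * eps_j * L_ji element-wise.
ok = np.allclose(Lmat, np.outer(eps, eps) * Lmat.T)
print(ok)
```

The even-even block of `Lmat` is symmetric and the even-odd entries are antisymmetric, which is exactly the content of the reciprocal relations when the field dependence drops out.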

To simplify the notation in the linear laws such as equation (5.2.5), we introduce the following variables:

$$\mathbf{1}. \quad Y_j = E\left[\frac{\partial S}{\partial x_j}(t)\right]^0 - E\left[\frac{\partial S}{\partial x_j}(t_e)\right]^0 \quad \text{to denote the conditional average difference of the}$$

thermodynamic conjugate of the extensive variable $x_j$ from the corresponding equilibrium value; and

2. $a_i(t) = E\left[x_i(t)\right]^0 - E\left[x_i(t_e)\right]^0$ to denote the conditional average of the difference between an extensive variable of our choice and its value at the thermodynamic equilibrium. Then the Onsager linear laws can be written as,

$$\frac{da_i}{dt} = \sum_{j} L_{ij} Y_j, \tag{5.2.12}$$

and equation (5.2.12) can be interpreted in terms of fluxes and thermodynamic forces: $Y_j$ is the "thermodynamic force" which drives $a_i$ towards zero, i.e., $x_i$ approaches its equilibrium value on the average. The rate of change of $a_i$ can be considered as the average thermodynamic flux, $J_i$, giving,

$$\frac{da_i}{dt} = J_i = \sum_{j} L_{ij} Y_j. \tag{5.2.13}$$

In the linear laws, the thermodynamic forces are descriptions of the entropy of the system. At thermodynamic equilibrium, the entropy of a given system is a maximum, as the Second Law of thermodynamics states that entropy increases on the average in any spontaneous process. Let us consider the entropy of an isolated system in the vicinity of its thermodynamic equilibrium. The extensive variable $\underline{a}$ as defined before has finite values, and the entropy associated with the system, $S(\underline{a})$, can be expressed as a Taylor series:

$$S(\underline{a}) = S(0) + \sum_{j} \left(\frac{\partial S}{\partial x_j}\right)^{\!e} a_j + \frac{1}{2} \sum_{i}\sum_{j} \left(\frac{\partial^2 S}{\partial x_i \partial x_j}\right)^{\!e} a_i a_j. \tag{5.2.14}$$

At the maximum of $S(\underline{a})$,

$$\left(\frac{\partial S}{\partial x_j}\right)^{\!e} = 0,$$

and


$$S_{ij} = \left(\frac{\partial^2 S}{\partial x_i \partial x_j}\right)^{\!e} < 0;$$

as these conditions are true for any $\underline{a}$, the matrix $S_{ij}$ must be negative semi-definite. By incorporating these conditions for the thermodynamic equilibrium in equation (5.2.14), we obtain,

$$S(\underline{a}) = S(0) + \frac{1}{2} \sum\_{i} \sum\_{j} S\_{ij} a\_i a\_j \,. \tag{5.2.15}$$

But as an approximation, we can write,

$$Y_i = \sum_{j} S_{ij}\, a_j, \tag{5.2.16}$$

because our definition of $Y_i$ is, on the average, the first derivative of $S$ with respect to $x_i$. Therefore, we can write equation (5.2.15) as,

$$S(\underline{a}) = S(0) + \frac{1}{2} \sum_{i} Y_i\, a_i.$$

On the average, the rate of change of the entropy is then

$$\frac{dS(\underline{a})}{dt} = \sum_{i} Y_i \frac{da_i}{dt}. \tag{5.2.17}$$

Using equation (5.2.12),

$$\frac{dS(\underline{a})}{dt} = \sum\_{i} \sum\_{j} L\_{ij} Y\_i Y\_j \tag{5.2.18}$$

As the derivative of the entropy with respect to time is positive, $L$ is a positive semi-definite matrix. What equations (5.2.17) and (5.2.18) convey is that the mean fluctuations of an extensive variable give rise to an increase in the entropy, and therefore $dS(\underline{a})/dt$ alludes to the dissipation of energy due to the fluctuations of an extensive variable. We define the Rayleigh-Onsager dissipation function as

$$\Phi(\underline{a}) = \frac{dS(\underline{a})}{dt} = \sum\_{i} \sum\_{j} L\_{ij} Y\_i Y\_j \ge 0. \tag{5.2.19}$$

The dissipation function is also called the entropy production and is dependent on our choice of the system.
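The non-negativity in equation (5.2.19) can be checked numerically: for any positive semi-definite $L$, $\Phi = \underline{Y}^T L \underline{Y}$ remains non-negative for arbitrary forces $\underline{Y}$. The matrix below is randomly generated purely for illustration:

```python
import numpy as np

# Numerical check of the Rayleigh-Onsager dissipation function (5.2.19):
# Phi = sum_ij L_ij Y_i Y_j = Y^T L Y >= 0 whenever L is positive semi-definite.
rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
Lmat = B @ B.T                     # B B^T is always positive semi-definite

phis = []
for _ in range(1000):
    Y = rng.standard_normal(4)     # random thermodynamic forces
    phis.append(Y @ Lmat @ Y)

print(min(phis))
```

Every sampled $\Phi$ is non-negative (up to rounding), since $\underline{Y}^T B B^T \underline{Y} = \lVert B^T \underline{Y}\rVert^2$, which is the entropy-production statement in matrix form.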

Using equation (5.2.16) and (5.2.12), one can write,

$$\frac{da_i}{dt} = \sum_{k} L_{ik} \sum_{j} S_{kj}\, a_j = \sum_{j} H_{ij}\, a_j, \tag{5.2.20}$$

where $H_{ij} = \sum_k L_{ik} S_{kj}$; the relaxation matrix $\underline{H}$ governs the return of the mean values of the extensive variables to equilibrium, and equation (5.2.20) can be written in the matrix form,

$$\frac{d\underline{a}}{dt} = \underline{H}\,\underline{a}, \tag{5.2.21}$$

which has the solution,


$$\underline{a}(t) = \exp(\underline{H}t)\,\underline{a}^0, \tag{5.2.22}$$

where $\underline{a}^0$ is the initial value of the selected process. The matrix $\underline{H}$ must be negative semi-definite, as the entropy increases on the average during the relaxation process.
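Equation (5.2.22) can be verified numerically with an assumed $2\times 2$ relaxation matrix: a finite-difference derivative of $\exp(\underline{H}t)\,\underline{a}^0$ reproduces $\underline{H}\,\underline{a}(t)$. A short Taylor-series matrix exponential suffices here because $\lVert \underline{H}t \rVert$ is small:

```python
import numpy as np

# Check that a(t) = exp(H t) a0 (equation (5.2.22)) solves da/dt = H a.
# H is an assumed 2x2 negative-definite relaxation matrix; the matrix
# exponential is evaluated by its Taylor series (adequate for small ||H t||).
def expm(M, terms=30):
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

H = np.array([[-1.0, 0.3],
              [0.3, -0.5]])        # symmetric, negative definite (assumed)
a0 = np.array([1.0, -2.0])

t, h = 0.7, 1e-6
a = expm(H * t) @ a0
dadt = (expm(H * (t + h)) @ a0 - expm(H * (t - h)) @ a0) / (2 * h)
print(np.allclose(dadt, H @ a, atol=1e-5))
```

The central-difference derivative matches $\underline{H}\,\underline{a}(t)$ to within the discretization tolerance, confirming that (5.2.22) solves (5.2.21).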

Using equation (5.2.22), we can deduce the covariance function, $\underline{C}(t_1, t_2)$:

$$\underline{C}(t_1, t_2) = E\left[\underline{a}(t_1)\underline{a}^T(t_2)\right] = E\left[\left(\exp(\underline{H}t_1)\underline{a}^0\right)\left(\exp(\underline{H}t_2)\underline{a}^0\right)^T\right] = E\left[\underline{a}^0\underline{a}^{0T}\right]\exp\left(\underline{H}t_1 + \underline{H}^T t_2\right). \tag{5.2.23}$$

If $t_2 - t_1 = \tau$, and closer to the equilibrium the process is stationary,

$$\underline{C}(\tau) = E\left[\underline{a}^0\underline{a}^T(\tau)\right] = E\left[\underline{a}^0\underline{a}^{0T}\right]\exp\left(\underline{H}^T \tau\right). \tag{5.2.24}$$

As we can see, from a thermodynamic point of view, equations (5.2.23) and (5.2.24) state that the covariances have an exponential character to them, and they are decaying functions with respect to time.
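The exponential decay of the covariance in equation (5.2.24) can be illustrated for a scalar process with an assumed relaxation coefficient: an ensemble of initial values is relaxed forward in time and the sample covariance is compared with the closed form:

```python
import numpy as np

# Ensemble check of the stationary covariance (5.2.24) for a scalar process:
# each ensemble member decays as da/dt = H a from a random initial value a0,
# so C(tau) = E[a0 a(tau)] should follow E[a0 a0] exp(H tau). Values assumed.
rng = np.random.default_rng(3)
H = -2.0                            # scalar relaxation coefficient (assumed)
a = rng.standard_normal(50_000)     # ensemble of initial values a0
a0 = a.copy()

dt, n_steps = 1e-3, 500             # integrate out to tau = 0.5
for _ in range(n_steps):
    a = a + H * a * dt              # Euler step of the relaxation law

tau = dt * n_steps
C_sim = np.mean(a0 * a)             # sample covariance E[a0 a(tau)]
C_theory = np.mean(a0**2) * np.exp(H * tau)
print(C_sim, C_theory)
```

The sample covariance reproduces the exponentially decaying closed form to within the Euler discretization error, which is the thermodynamic statement of (5.2.24) in miniature.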

Equation (5.2.21) shows the behaviour of the conditional average value of $a_i(t) = E\left[x_i(t)\right]^0 - E\left[x_i(t_e)\right]^0$ in relation to the matrix $H_{ij}$. By substituting for $\underline{a}$, the component form of equation (5.2.21) can be written as,

$$\frac{d}{dt}\left\{E\left[x_i(t)\right]^0 - E\left[x_i(t_e)\right]^0\right\} = \sum_{j} H_{ij} E\left[x_j(t)\right]^0 - \sum_{j} H_{ij} E\left[x_j(t_e)\right]^0.$$

At equilibrium, $E\left[x_i(t_e)\right]^0$ remains unchanged; therefore,

$$\frac{d}{dt}\left\{E\left[x_i(t)\right]^0\right\} = \sum_{j} H_{ij} E\left[x_j(t)\right]^0 - \sum_{j} H_{ij} E\left[x_j(t_e)\right]^0.$$

As discussed before, any dissipative system would have fluctuations in its extensive variables. Let us define the fluctuations with reference to the expected value conditioned upon the initial value as,

$$\delta x_i(t) = x_i(t) - E\left[x_i(t)\right]^0. \tag{5.2.25}$$

Then,

$$\frac{d}{dt}\left\{x_i(t) - \delta x_i(t)\right\} = H_{ij}\left(x_j(t) - \delta x_j(t)\right) - H_{ij}\left(x_j(t_e) - \delta x_j(t_e)\right).$$


$$\frac{dx_i(t)}{dt} - \frac{d\,\delta x_i(t)}{dt} = H_{ij}x_j(t) - H_{ij}\delta x_j(t) - H_{ij}x_j(t_e) + H_{ij}\delta x_j(t_e).$$

Rearranging,

$$\frac{d\,\delta x_i(t)}{dt} = H_{ij}\delta x_j(t) + \frac{dx_i(t)}{dt} - H_{ij}x_j(t) + H_{ij}x_j(t_e) - H_{ij}\delta x_j(t_e).$$

Near equilibrium, $\delta x_i(t_e) \approx 0$, and $\frac{dx_i(t)}{dt} - H_{ij}\left(x_j(t) - x_j(t_e)\right)$ is small compared to $\frac{d\,\delta x_i(t)}{dt}$; following equation (5.2.21), we can simplify the above equation to

$$\frac{d\,\delta x_i(t)}{dt} = H_{ij}\delta x_j(t) + f_i,$$

where $f_i$ is a random term. Expressing this in matrix form,

$$\frac{d\delta\underline{X}}{dt} = \underline{H}\delta\underline{X} + \underline{f}.\tag{5.2.26}$$

This equation is Onsager's regression hypothesis for fluctuations. The hypothesis is based on thermodynamic arguments, not on the behaviour of particles in a physical ensemble. However, as we will see in the next sections, equation (5.2.26) has a similar character to the equations derived from particle dynamics.

To complete the Onsager picture of random fluctuations in equation (5.2.26), we need to consider equation (5.2.26) as a linear stochastic differential equation. The $\underline{f}$ term can then be defined in terms of the Wiener process, and $\underline{H}$ as a function of $\underline{X}$, to develop the simplest form of a stochastic differential equation. We will address this in section 5.4.
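As a preview of the stochastic treatment in section 5.4, equation (5.2.26) can be integrated numerically once $\underline{f}$ is modelled as white noise. The sketch below uses the Euler–Maruyama scheme with a hypothetical negative-definite relaxation matrix $\underline{H}$ and noise amplitude; it only illustrates that the average fluctuation regresses to zero:

```python
import numpy as np

# Euler-Maruyama sketch of the Onsager regression equation (5.2.26),
# d(dX)/dt = H dX + f, with f modelled as white noise. The matrix H and
# the noise amplitude below are hypothetical values.
rng = np.random.default_rng(1)
H = np.array([[-1.0, 0.2],
              [0.1, -0.8]])                 # negative-definite relaxation
sigma = 0.05
dt, n_steps, n_paths = 1e-3, 4000, 500

dX = np.tile([1.0, -1.0], (n_paths, 1))     # common initial fluctuation
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=dX.shape)
    dX = dX + dX @ H.T * dt + sigma * dW

# On average, the fluctuation regresses to zero, as the hypothesis states.
assert np.all(np.abs(dX.mean(axis=0)) < 0.15)
```

Individual paths keep fluctuating because of the noise; it is the ensemble mean that relaxes, which is exactly the "regression on the average" content of the hypothesis.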

### **5.3 The Boltzmann Picture**

As mentioned in the previous section, the Onsager regression hypothesis is based on the entropy and the coefficients which form the coupling matrix, $\underline{L}$. The Boltzmann equation, on the other hand, depends entirely on the molecular dynamics of collisions and the resulting fluctuations. We do not intend to derive the Boltzmann equation here; instead, we describe the equation and its variables. For the technical details of the derivation, there are many excellent texts on statistical mechanics, and some of the original works are given in the references.

Boltzmann's work was on the dynamics of dilute gases, and the average behaviour of gas molecules was the main focus of his work. Boltzmann's equation describes the nonlinear dynamics of molecular collisions, while the Onsager theory is on linear dynamics without fluctuations. It can be shown that the linearized Boltzmann equation is a special case of the Onsager theory.

In the derivation of the Boltzmann equation, we have a six-dimensional space in which the position, $\underline{r}$, and the velocity, $\underline{v}$, of the centre of mass of a single molecule are defined.


We call this six-dimensional space the μ-space, or molecular phase space. We can divide the six-dimensional space into small cellular volumes, and each volume element is assigned an index $i = 1, 2, 3, \ldots$ as a unique number for identification purposes. The number of molecules, $N_i(t)$, would be the macroscopic Boltzmann variable associated with the volume element $i$, and we choose the volume element $i$ to be sufficiently large that $N_i(t)$ is a large number.

It is assumed that binary collisions occur in the μ-space only between the molecules of two volume elements, located at $(\underline{r}, \underline{v})$ and $(\underline{r}, \underline{v}_1)$. Each of these volume elements loses one molecule, and the volume elements located at $(\underline{r}, \underline{v}')$ and $(\underline{r}, \underline{v}_1')$ gain one molecule each at the end of each collision. (The primes denote the centre-of-mass velocities after the collisions.) We can define the extensive property of the number density in μ-space, $\rho(\underline{r}, \underline{v}, t)$, so that $\rho(\underline{r}, \underline{v}, t)\,d\underline{r}\,d\underline{v}$ is the number of molecules with centre-of-mass position and velocity in the ranges $(\underline{r}, \underline{r} + d\underline{r})$ and $(\underline{v}, \underline{v} + d\underline{v})$.

Then the Boltzmann equation gives,

$$\frac{\partial \rho}{\partial t} = -\underline{v} \bullet \nabla_r \rho - \underline{F} \bullet \nabla_v \rho + \int \hat{\sigma}_T\, g \left[\rho' \rho_1' - \rho \rho_1\right] d\underline{v}_1 \tag{5.3.1}$$

where $\rho' \equiv \rho(\underline{r}, \underline{v}', t)$ and $\rho_1' \equiv \rho(\underline{r}, \underline{v}_1', t)$; $\underline{F}$ is an external force field acting in the μ-space, and $\nabla_r$ and $\nabla_v$ are the derivatives with respect to $\underline{r}$ and $\underline{v}$, respectively. The third term on the right-hand side of equation (5.3.1) is the dissipative effect of collisions; $\hat{\sigma}_T$ is a linear operator and $g$ is the constant relative velocity magnitude. In the absence of an external force, equation (5.3.1) can be written as,

$$\frac{\partial \rho}{\partial t} = -\underline{v} \bullet \nabla_r \rho + d_e, \tag{5.3.2}$$

with $d_e$ lumping the dissipation due to collisions.

Unlike Onsager's linear laws, which hold only near the thermodynamic equilibrium, the Boltzmann equation is true not only in the vicinity of the equilibrium but also away from it. Near the equilibrium the two pictures are similar; however, while the Boltzmann equation is, strictly speaking, valid only for dilute gases, the Onsager linear laws are valid for any ensemble.

Boltzmann's theory includes a function called the $H$-function, which behaves in an entropy-like manner; for a closed system the $H$-function is a non-increasing function, i.e., $\frac{dH}{dt} \le 0$. The $H$-function is defined as

$$H = \iint \rho \ln \rho d\underline{r} \, d\underline{v}.\tag{5.3.3}$$

This attribute of the $H$-function is the $H$-theorem, and the $H$-function has a similar character to the Gibbs free energy in thermodynamics.
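The monotone decay of the $H$-function can be illustrated with a toy relaxation model. The sketch below relaxes a bimodal velocity density toward a Maxwellian-like equilibrium with the same mass and second moment; this is a BGK-style caricature of the collision term, not the full Boltzmann collision integral, and the grid and rates are hypothetical choices:

```python
import numpy as np

# A toy illustration of the H-theorem: a BGK-style relaxation
# drho/dt = (rho_eq - rho)/tau on a 1-D velocity grid. The grid, tau, and
# the initial bimodal density are hypothetical choices, not from the text.
v = np.linspace(-8.0, 8.0, 801)
dv = v[1] - v[0]

rho = np.exp(-0.5 * (v - 1.5) ** 2) + np.exp(-0.5 * (v + 1.5) ** 2)
rho /= np.sum(rho) * dv                      # normalise the initial density

m2 = np.sum(rho * v**2) * dv                 # match the second moment so the
rho_eq = np.exp(-0.5 * v**2 / m2)            # relaxation conserves "energy"
rho_eq /= np.sum(rho_eq) * dv

def H_func(r):
    """Discrete Boltzmann H-function (5.3.3); densities here are positive."""
    return np.sum(r * np.log(r)) * dv

tau, dt = 1.0, 0.01
H_vals = [H_func(rho)]
for _ in range(1000):
    rho = rho + dt * (rho_eq - rho) / tau
    H_vals.append(H_func(rho))

# dH/dt <= 0: the H-function never increases during the relaxation.
assert all(h2 <= h1 + 1e-9 for h1, h2 in zip(H_vals, H_vals[1:]))
assert H_vals[-1] < H_vals[0]
```

Matching the mass and second moment of the equilibrium density is what makes $H$ decrease all the way to equilibrium in this toy model, mirroring the role of the collision invariants in the full theorem.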

It can be shown that when $\frac{dH}{dt} = 0$, $\rho = \rho^e$, and then

$$\rho^{e\prime} \rho_1^{e\prime} = \rho^e \rho_1^e, \tag{5.3.4}$$


where ' *e* ' indicates the equilibrium state.

In the vicinity of equilibrium, we can write,

$$\rho(\underline{r}, \underline{v}, t) = \rho^e(\underline{v}) + \Delta\rho(\underline{r}, \underline{v}, t) \tag{5.3.5}$$

where $\Delta\rho(\underline{r}, \underline{v}, t)$ is a small change in the μ-space density. By substituting equation (5.3.5) in the Boltzmann equation and ignoring the higher-order terms of $\Delta\rho$, we obtain,

$$\frac{\partial\, \Delta\rho}{\partial t} = -\underline{v} \bullet \nabla_r \Delta\rho + C\left[\Delta\rho\right], \tag{5.3.6}$$

with $C$ replacing the dissipation integral as a linear functional.

It can be shown (Fox and Uhlenbeck, 1970 a and b) that, by adopting the Onsager hypothesis,

$$\frac{\partial\, \Delta\rho}{\partial t} = L\left[X\right] + \tilde{f}(\underline{r}, \underline{v}, t), \tag{5.3.7}$$

where $X \equiv -k_B \ln\left(\rho / \rho^e\right)$ is the local thermodynamic force in the μ-space;

$$L\left[X\right] \equiv \left(\frac{\underline{v}\,\rho^e}{k_B}\right) \bullet \nabla_r X + \int L^S(\underline{v}, \underline{v}_1)\, X_1\, d\underline{v}_1,$$

with $L^S$ a linear operator (see Fox and Uhlenbeck, 1970 a and b), and $\tilde{f}$ a random term which needs to be characterised.

The random term now can be defined by,

$$E\left[\tilde{f}(\underline{r}, \underline{v}, t)\right] = 0 \quad \text{and}$$

$$E\left[\tilde{f}(\underline{r}, \underline{v}, t)\,\tilde{f}(\underline{r}', \underline{v}_1, t')\right] = 2k_B\, L^S(\underline{v}, \underline{v}_1)\,\delta(\underline{r} - \underline{r}')\,\delta(t - t'). \tag{5.3.8}$$

In equation (5.3.7), the rate of change of the μ-space density increments is expressed in terms of the thermodynamic forces $X$.

By deriving the random term $\tilde{f}$ as in (5.3.8), we see that the random term is a zero-mean stochastic process in the μ-space, δ-correlated in $\underline{r}$ and $t$ but influenced by the velocity of the centre of mass through a linear operator derived from the dissipation term, $d_e$, in the Boltzmann equation. Equations (5.3.7) and (5.3.8) show that the Boltzmann and Onsager pictures are united near equilibrium. Equally importantly, equation (5.3.8) justifies the use of δ-correlated stochastic processes to model the fluctuations. Moving away from the μ-space, we describe the fluctuations and dissipation using the theory of stochastic processes in an effort to develop operational models of molecular fluctuations.
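The δ-correlated character asserted in equation (5.3.8) can be illustrated on a discrete time grid, where the Dirac delta appears as a variance of order $\sigma^2/\Delta t$ at zero lag and vanishing covariance at nonzero lags. A minimal sketch with hypothetical values:

```python
import numpy as np

# A sketch of a delta-correlated ("white") random term like f in (5.3.8):
# on a grid with spacing dt, the Dirac delta is represented by a variance
# sigma^2/dt at zero lag and near-zero covariance at all other lags.
rng = np.random.default_rng(2)
sigma, dt, n = 0.3, 1e-2, 200_000
f = sigma * rng.normal(0.0, 1.0, size=n) / np.sqrt(dt)

def autocov(x, lag):
    """Empirical autocovariance of a zero-mean series at a given lag."""
    if lag == 0:
        return np.mean(x * x)
    return np.mean(x[:-lag] * x[lag:])

assert abs(np.mean(f)) < 0.05                          # zero mean
assert abs(autocov(f, 0) * dt - sigma**2) < 0.01       # sigma^2 * delta(0)
assert abs(autocov(f, 1)) * dt < 0.01                  # uncorrelated lags
```

The $1/\sqrt{\Delta t}$ scaling is what makes the discrete noise converge to a δ-correlated process as the grid is refined, and it is the same scaling that relates white noise to Wiener increments in the next section.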

### **5.4 Onsager Regression Hypothesis, Langevin Equation and Itō processes**

The Onsager regression hypothesis, equation (5.2.26), states that the fluctuations of extensive variables around their expected values conditional on the initial values can be expressed in terms of a system of differential equations through a relaxation matrix which is defined in equation (5.2.20). Equation (5.2.26) is similar in form to equations (5.3.6) and (5.3.7), which are derived from Boltzmann's equation (5.3.1). Both of these theories support the hypothesis that the time derivatives of fluctuations on the average follow differential equations with additive random terms. According to Boltzmann, the average fluctuations are driven by thermodynamically coupled driving forces because of energy dissipation. We have seen in the previous section that both of these descriptions are phenomenologically equivalent. However, neither description is amenable to operational models of fluctuation and dissipation.

The starting point of the development of such models is the Langevin equation, which describes the motion of Brownian particles. Even though Langevin used the Newtonian laws to describe the particle motion, he developed a differential equation with an additive random term, which is quite similar to the Onsager regression hypothesis. Langevin started by considering a particle of mass $m$ at a distance $\underline{r}$ from an initial point. If $\bar{p}$ is the momentum vector of the particle and $\underline{V}$ is the velocity, we can write from the Newton laws,

$$\frac{dr}{dt} = \frac{\bar{p}}{m},\tag{5.4.1}$$

$$\underline{F} = \frac{d\bar{p}}{dt}, \tag{5.4.2}$$

$$\text{and} \quad \bar{p} = m\underline{V} \,. \tag{5.4.3}$$

We have slightly changed the notation to indicate that the variables are associated with a particle rather than with an ensemble.

In equation (5.4.2), $\underline{F}$ is the force vector on the particle (the particle is bombarded by the surrounding water molecules). We can express $\underline{F}$ as $\underline{F} = \underline{F}_d + \underline{F}_e + \underline{f}$, where $\underline{F}_d$ is the drag component due to friction, $\underline{F}_e$ is the external force, and $\underline{f}$, the force due to molecular collisions on the particle, is assumed to be random. $\underline{F}_d$ can be expressed through the friction constant $\eta$,

$$\underline{F}_d = -\eta \underline{V}. \tag{5.4.4}$$

Now we can write $\frac{d\underline{r}}{dt} = \frac{\bar{p}}{m}$ as in equation (5.4.1),

$$\text{and} \quad \frac{d\bar{p}}{dt} = -\eta\left(\frac{\bar{p}}{m}\right) + \underline{F}_e + \underline{f}. \tag{5.4.5}$$

In equations (5.4.5) and (5.4.1), the position of the particle, *r* and the momentum, *p* are coupled, and *f* is a random additive noise. In a dissipative system, the random forcing term *f* can be assumed to have an expected value of zero:

$$E\left[\underline{f}\right] = \underline{\mathbf{0}}\,. \tag{5.4.6}$$


The random force term $\underline{f}$ at time $t_1$ is uncorrelated with that at time $t_2$, and it is a result of molecular impacts on the particle. Therefore, we can assume $\underline{f}$ to be a δ-correlated function:

$$\text{Cov}\left(\underline{f}(t_1), \underline{f}(t_2)\right) = \sigma^2 \delta(t_1 - t_2), \tag{5.4.7}$$

where $\sigma^2$ is the variance.

It is now clear that the Wiener process described in Chapter 2 is a good model for *f* , and therefore we can write equation (5.4.5) as,

$$d\bar{p} = -\left(\frac{\eta}{m}\right)\bar{p}dt + F\_{\epsilon}dt + \sigma d\underline{w}(t) \tag{5.4.6}$$

where, *w t*( ) is the standard Wiener process.

In the absence of an external force,

$$d\bar{p} = -\left(\frac{\eta}{m}\right)\bar{p}dt + \sigma d\underline{w}(t)\,. \tag{5.4.7}$$

Therefore, the solution of the stochastic differential equation (5.4.7), can be written as,

$$\bar{p}(t) = \bar{p}(0) - \int_0^t \left(\frac{\eta}{m}\right)\bar{p}\,dt + \int_0^t \sigma\, d\underline{w}(t), \tag{5.4.8}$$

and equation (5.4.8) is a stochastic integral equation. The last integral can be interpreted in two ways: as an Itō integral or as a Stratonovich integral. Because of the martingale property of Itō integrals, we choose to interpret the second integral on the right-hand side of equation (5.4.8) as an Itō integral. The implications of this choice are important to understand: it makes stochastic processes such as equation (5.4.8) Markov processes, with the transitional conditional probabilities obeying Fokker-Planck type equations. The stochastic differential equations of the type given by equation (5.4.7) describe the time evolution of stochastic variables. We generalize the stochastic differential of a vector-valued stochastic process by,

$$d\underline{n} = \underline{h}(\underline{n}, t)\,dt + \underline{\underline{\sigma}}\,\underline{g}(\underline{n}, t)\,d\underline{w}, \tag{5.4.9}$$

where $\underline{n}$ is an extensive variable, $\underline{h}$ is a vector function of $\underline{n}$ and $t$, $\underline{\underline{\sigma}}$ is a diagonal matrix with $\sigma_{ii}$ as the diagonal elements, $\underline{g}(\underline{n}, t)$ is a matrix function of $\underline{n}$ and $t$, and $\underline{w}$ is the standard Wiener process vector. By taking $\underline{\underline{\sigma}}$ to be a diagonal matrix, we assume that


the covariance, $\sigma_{ij}^2$, is not cross-correlated, i.e., $\sigma_{ij} = \sigma_{ii}$ when $i = j$, and $\sigma_{ij} = 0$ when $i \ne j$. Then, by defining,

$$\underline{G}(\underline{n}, t) = \underline{\underline{\sigma}}\, \underline{g}(\underline{n}, t),$$

We can write a general Ito integral,

$$\int d\underline{n} = \int \underline{h}(\underline{n}, t)\,dt + \int \underline{G}(\underline{n}, t)\,d\underline{w}, \quad \text{and}$$

$$\underline{n}(t) = \underline{n}(t_0) + \int_{t_0}^{t} \underline{h}(\underline{n}, t)\,dt + \int_{t_0}^{t} \underline{G}(\underline{n}, t)\,d\underline{w}. \tag{5.4.10}$$

Equation (5.4.10) depicts a Markov process and is a martingale.
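The martingale property invoked here comes from the left-endpoint (non-anticipating) evaluation of the integrand in the Itō sum. A minimal numerical sketch with a hypothetical integrand $G(w) = \sin w$:

```python
import numpy as np

# Numerical sketch of the martingale property of the Ito integral in
# (5.4.10): the Ito sum uses the integrand evaluated at the LEFT endpoint
# of each step, so E[sum G(w_k)(w_{k+1}-w_k)] = 0. G here is hypothetical.
rng = np.random.default_rng(5)
dt, n_steps, n_paths = 1e-3, 1000, 50_000

w = np.zeros(n_paths)
ito_sum = np.zeros(n_paths)
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)
    ito_sum += np.sin(w) * dw      # left-endpoint (non-anticipating) rule
    w += dw

# Zero expectation -- the defining martingale property of the Ito choice.
assert abs(ito_sum.mean()) < 0.01
```

Evaluating the integrand at the midpoint instead (the Stratonovich choice) would in general introduce a nonzero drift, which is why the two interpretations of equation (5.4.8) differ.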

The probability density, or the transitional probability function, of $\underline{n}(t)$, $p(\underline{n}, t \mid \underline{n}_1, t_1)$, obeys the Fokker-Planck equation (given here with repeated summation indices):

$$\frac{\partial p(\underline{n}, t \mid \underline{n}_1, t_1)}{\partial t} = -\frac{\partial}{\partial n_i}\left[h_i(\underline{n}, t)\, p(\underline{n}, t \mid \underline{n}_1, t_1)\right] + \frac{1}{2}\frac{\partial^2}{\partial n_i \partial n_j}\left[G_{ik}(\underline{n}, t)\, G_{jk}(\underline{n}, t)\, p(\underline{n}, t \mid \underline{n}_1, t_1)\right], \tag{5.4.11}$$

and $p(\underline{n}, 0 \mid \underline{n}_1, 0) = \delta(\underline{n} - \underline{n}_1)$.

Once the Fokker-Planck equation is solved for the conditional density, the Markov process $\underline{n}(t)$ can be described completely. For most of the Markov processes of practical interest, $\underline{h}(\underline{n}, t)$ is linear in $\underline{n}$ and $\underline{G}(\underline{n}, t)$ is independent of $\underline{n}$, and therefore the Fokker-Planck equation (5.4.11) can also be solved using integration by parts without resorting to Itō calculus. However, in general, stochastic integrals are solved using the Itō definition.
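For the Ornstein–Uhlenbeck case of equation (5.4.7), the Fokker-Planck equation has Gaussian solutions with known moments, which gives a direct numerical check of the Euler–Maruyama discretisation. A sketch with hypothetical parameter values:

```python
import numpy as np

# Euler-Maruyama sketch of the momentum equation (5.4.7),
# dp = -(eta/m) p dt + sigma dw, checked against the analytic moments of
# the corresponding Fokker-Planck (Ornstein-Uhlenbeck) solution.
# All parameter values below are hypothetical.
rng = np.random.default_rng(3)
eta, m, sigma = 2.0, 1.0, 0.5
gamma = eta / m
dt, n_steps, n_paths = 1e-3, 1000, 20_000
t = dt * n_steps

p = np.full(n_paths, 1.0)                   # p(0) = 1 for every path
for _ in range(n_steps):
    p = p - gamma * p * dt + sigma * rng.normal(0.0, np.sqrt(dt), n_paths)

mean_exact = np.exp(-gamma * t)                          # E[p(t)]
var_exact = sigma**2 / (2 * gamma) * (1 - np.exp(-2 * gamma * t))

assert abs(p.mean() - mean_exact) < 0.01
assert abs(p.var() - var_exact) < 0.01
```

The mean relaxes exponentially while the variance saturates at $\sigma^2 m / (2\eta)$; this is the fluctuation–dissipation balance between the friction and the noise amplitude.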

### **5.5 Velocity as a Stochastic Variable**

Equation (5.4.6) expresses the dynamics of a single Brownian particle from first principles. We can write the infinitesimal change in momentum in a slightly modified form:

$$d\bar{p} = \left[-\left(\frac{\eta}{m}\right)\bar{p} + \underline{F}_e\right]dt + \sigma_p\, dw_p, \tag{5.5.1}$$

where the subscript "$p$" indicates that the terms are associated with the particle momentum. $\underline{F}_e$ denotes the external force acting on the particle. If the particle is in a porous medium saturated with water, the porous matrix exerts a force opposite to the direction of flow, whereas the pressure gradient acting in the flow direction would be largely responsible for $\bar{p}$. The first term on the right-hand side of equation (5.5.1) can be thought of as the change of momentum on the average if we lump the fluctuating component of $\underline{F}_e$ into $\sigma_p\, dw_p$.


Therefore, we can write equation (5.5.1) as,

$$d\bar{p} = -\left[\left(\frac{\eta}{m}\right)\bar{p} + F_e\right]dt + \sigma_p' dw_p'\,,\tag{5.5.2}$$

where $\sigma_p' dw_p'$ now contains the fluctuating component of the momentum change due to the porous medium. $F_e$ is now the mean force acting on the particle, and in a saturated medium, it may dominate the first term on the right hand side of equation (5.5.2). Therefore, we could approximate equation (5.5.2) for the *i*th particle in an ensemble of particles with,

$$dp_i = F_i dt + \sigma_{p,i} dw_{p,i}\,,\tag{5.5.3}$$

where $F_i$ now depicts the mean force acting on particle *i*; all the variables are vectors and $\sigma_{p,i}$ is a matrix. Now we can write,

$$p_i = m_i v_i\,,$$

where $m_i$ is the mass of particle *i* and $v_i$ is the particle velocity, which is a random variable. We can express equation (5.5.3) as,

$$d(m_i v_i) = m_i \frac{dv_i}{dt} dt + \sigma_{p,i} dw_{p,i}\,,$$

and

the instantaneous change in the velocity, $dv_i$, can be approximated by $\bar{v}_i dt$, where $\bar{v}_i$ is the mean velocity of the *i*th particle at the locality of the particle at time *t*. Now we can write,

$$dv_i = \bar{v}_i dt + \sigma_{v,i} dw_{v,i}\,,\tag{5.5.4}$$

where $w_{v,i}$ is the standard Wiener process related to velocity fluctuations and $\sigma_{v,i} = \sigma_{p,i}/m_i$ is the associated amplitude. As discussed in Chapter 2, we can express the fluctuating component as,

$$\sigma_{v,i} dw_{v,i} = \xi_i dt\,,\tag{5.5.5}$$

where $\xi_i$ is the noise associated with velocity. We can rewrite equation (5.5.4) as,

$$dv_i = \bar{v}_i dt + \xi_i dt = \left(\bar{v}_i + \xi_i\right)dt = d\left(\bar{v}_i + \xi_i\right),\tag{5.5.6}$$

for very small increments of *dt*.
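Before continuing, the amplitude rescaling used in equation (5.5.4) can be checked with a quick numerical sketch: dividing the noise part of the momentum increment by the mass $m_i$ rescales the amplitude from $\sigma_{p,i}$ to $\sigma_{p,i}/m_i$. All numbers below are made up for illustration.

```python
import numpy as np

# Made-up values for the sketch.
m_i = 2.5          # particle mass
sigma_p = 0.4      # momentum-noise amplitude
dt = 1e-3
rng = np.random.default_rng(1)

# Pure-noise part of dp_i, sampled as scaled Wiener increments.
dp = sigma_p * rng.normal(0.0, np.sqrt(dt), size=200_000)
dv = dp / m_i      # corresponding noise part of dv_i

# The empirical standard deviation of dv matches (sigma_p / m_i) * sqrt(dt).
print(dv.std(), (sigma_p / m_i) * np.sqrt(dt))
```

This is only a consistency check on the scaling of the noise amplitude, not a simulation of the full velocity equation.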

Therefore, we can write,

$$v_i = \bar{v}_i + \xi_i\,,\tag{5.5.7}$$

where particle velocity is decomposed into the mean velocity and a fluctuating component. For an ensemble of *n* particles,


$$\sum_{i=1}^{n} v_i = \sum_{i=1}^{n} \bar{v}_i + \sum_{i=1}^{n} \xi_i\,,$$

and dividing this equation by *n*,

$$V = \overline{V} + \xi\,,$$

where *V* is the Gaussian velocity of the ensemble, $\overline{V}$ is the mean velocity, and $\xi$ is the "average" noise representing the fluctuations.
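The ensemble decomposition above can be sketched numerically: averaging *n* particle velocities, each the sum of a common mean and an independent fluctuation, gives a velocity close to the mean, with the "average" noise shrinking like $1/\sqrt{n}$. All numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
v_bar = 1.2                            # common mean velocity (illustrative)
xi_i = rng.normal(0.0, 0.3, size=n)    # per-particle fluctuations

# Ensemble velocity: mean of v_i = v_bar + xi_i over the n particles.
V = np.mean(v_bar + xi_i)
print(V)   # close to v_bar; the small residual is the "average" noise
```

Repeating this with larger *n* shows the residual noise decreasing as $1/\sqrt{n}$, which is why the ensemble velocity is well approximated as Gaussian around the mean.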

We have shown that the velocity can be expressed as consisting of a mean component and an additive fluctuating component, based on the Langevin description of Brownian particles. From an application point of view, the additive form of the velocity can be used to explain the local heterogeneity of the porous medium, i.e., we can always calculate the average velocity in a region and then the changes in the porous structure may be assumed to cause the fluctuations around the mean. This is the working assumption on which the stochastic solute transport model (SSTM) in Chapter 3 is based.

### **5.5.1 Thermodynamic Character of SSTM**

As we have seen in section 5.3, equation (5.3.7) unites the Onsager and Boltzmann pictures close to equilibrium (Keizer, 1987). The SSTM given by equation (4.2.1) has a similar form to that of equation (5.3.7) and equation (5.3.2), where the fluctuating component is separated out as an additive component, but the fluctuating part is now more complicated, reflecting the influence of the porous medium. According to equation (5.3.8), the "noisy" random functions have zero means and the two-time covariances are δ-correlated in time and space; these Dirac delta functions are related through a linear operator. In the development of the SSTM, we assume only the δ-correlation in time because the spatial aspect is separated into a continuous function of space. This assumption can be justified as the porous medium influencing the fluctuations can be considered as a continuum.
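The δ-correlation in time assumed here can be illustrated numerically: Wiener increments over disjoint time intervals are uncorrelated, while each increment has variance *dt*. This is a minimal sketch with illustrative parameters only.

```python
import numpy as np

rng = np.random.default_rng(3)
dt = 1e-2
n_paths, n_steps = 50_000, 2

# Two consecutive Wiener increments per sample path.
dw = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))

same_time = np.mean(dw[:, 0] * dw[:, 0])   # equal-time covariance, about dt
two_time = np.mean(dw[:, 0] * dw[:, 1])    # distinct-time covariance, about 0
print(same_time, two_time)
```

The equal-time product averages to *dt* and the two-time product averages to zero, which is the discrete-time counterpart of a covariance that is δ-correlated in time.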

**Multiscale, Generalised Stochastic Solute Transport Model in One Dimension**

### **6.1 Introduction**

In Chapters 3 and 4, we have developed a stochastic solute transport model in 1-D without resorting to simplifying Fickian assumptions, but by using the idea that the fluctuations in velocity are influenced by the nature of the porous medium. We model these fluctuations through the velocity covariance kernel. We have also estimated the dispersivity by taking the realisations of the solution of the SSTM and using them as the observations in the stochastic inverse method (SIM), which is based on the maximum likelihood estimation procedure for the stochastic partial differential equation obtained by adding a noise term to the advection-dispersion equation. We have confined the estimation of dispersivities to a flow length of 1 m (i.e., $x \in [0,1]$), except in Chapter 3, section 3.10, where we have estimated the dispersivities up to 10 km using the SIM by simplifying the SSTM. That approach proved to be computationally expensive, and the approximation of the SSTM we developed was based on the spatial average of the variance of the fluctuation term over the flow length. Further, the solution is based on a specific kernel. The development in Chapter 3 is therefore inadequate to examine the scale dependence of the dispersivity. In this chapter, we set out to develop a dimensionless model for any given arbitrary flow length, $L$, and for any given velocity kernel, provided that we have the eigenfunctions in the form given by equation (4.2.3). We then examine the dispersivities in relation to the flow lengths to understand the multi-scale behaviour of the SSTM.

The starting point of the development of the multi-scale SSTM is the Langevin equation for the SSTM, which is interpreted locally. From equation (4.9.1), the Langevin equation can be written as,

$$dC_x(t) = \mu_x\left(C_x(t), V(x,t), x\right)dt + \sigma_x\left(C_x(t), x\right)dw(t)\,,\tag{6.1.1}$$

where the coefficients $\mu_x$ and $\sigma_x$ are dependent on $C_x(t)$, $V(x,t)$ and $x$, and $dw(t)$ are the standard Wiener increments with zero mean and $dt$ variance. As discussed in Chapter 4, equation (6.1.1) has to be interpreted carefully to understand it better. Equation (6.1.1) is an SDE and also an Ito diffusion with the coefficients depending on the functions of space variables. It gives us the time evolution of the concentration of solute at a given point $x$, which is denoted by the subscript $x$. Obviously, the computation of $C_x$ also depends on how the spatial
