**1. Introduction**

204 Advanced Topics in Measurements


> This paper introduces into measurement science a unified, entropy-production-based method for characterizing material parameters, equations of motion, and the related fluctuation-dissipation expressions for electrical and thermal transport properties such as conductivity, noise, and mobility. The approach is general enough to be used to study processes beyond equilibrium, and yet it yields the normal transport coefficients near equilibrium. We will emphasize electromagnetic applications and heat transfer.

> Transport coefficients are related to fluctuation-dissipation relations (FDRs). These include the Einstein relation, permittivity, resistance, noise, mobility, conductivity, power, and viscosity. In micrometer- to nanoscale measurements, FDRs become crucial for modeling and for understanding the property being measured.

> The method we use in this paper is a projection-operator statistical-mechanical approach. The background of this approach has been published and is summarized in Baker-Jarvis and Kabos (2001). However, the present paper presents a unified approach that can be applied to a plethora of problems, near or far from equilibrium. The projection-operator approach was pioneered by Mori (1965) and Zwanzig (1960). The theoretical approach used here has its roots in the work of Robertson, which was based on a generalization and extension of the work of Zwanzig (1960); see Rau and Müller (1996); Robertson (1966; 1999). Robertson's theory uses expected values of relevant variables and a nonequilibrium entropy for a dynamically driven system. The results reduce to the relevant thermodynamic potentials, forces, and entropy in the equilibrium limit. The advantage of this approach in studying the time evolution of relevant variables is that the equations incorporate both relevant and irrelevant information, are exact, are Hamiltonian-based, have a direct relation to thermodynamics, and are based on reversible microscopic equations.

> The system is described by a set of relevant variables, but in order to maintain an exact solution to Liouville's equation, irrelevant information is incorporated by the use of a projection-like operator. This correction for the irrelevant variables manifests and defines relaxation and dissipation Weiss (1999). A common argument about the projection-operator theories is that we do not yet know how to model them in numerical simulators; however, Nettleton (1999) has made significant progress in this regard, and Eq.(18) of this paper eliminates the explicit projection-like operator in the equations of motion.

The approach in Robertson (1966; 1993) develops the full density operator *ρ*(*t*) in terms of the relevant canonical density operator *σ*(*t*) that is developed from constraints on relevant variables only, plus a relaxation correction term that accounts for irrelevant information. The statistical-density operator *ρ*(*t*) satisfies the Liouville equation, whereas the relevant canonical-density operator *σ*(*t*) does not. By including the irrelevant information by the projection-like operator, this approach is exact and time-symmetric. This theory yields an expression that exhibits all the required properties of a nonequilibrium entropy, and yet is based entirely on time-symmetric equations. In the past, other researchers have developed nonequilibrium statistical mechanical theories by adding a source term to Liouville's equation as in Zubarev et al. (1996), but our approach used here requires no source term. This approach has been used previously to study the microscopic time evolution of electromagnetic properties Baker-Jarvis (2008); Baker-Jarvis and Kabos (2001); Baker-Jarvis et al. (2004); Baker-Jarvis and Surek (2009); Robertson (1967b). The theory can be formulated either quantum-mechanically or classically.

#### **2. Theoretical background of the statistical-mechanical method**

In the Robertson projection operator statistical mechanical formulation there are two density operators. The first is the full statistical-density operator *ρ*(*t*) that encompasses all information of the system in relation to the Hamiltonian and that satisfies the Liouville equation:

$$d\rho/dt = -\mathrm{i}\mathcal{L}(t)\rho(t) = \frac{1}{\mathrm{i}\hbar}[\mathcal{H}(t), \rho(t)],\tag{1}$$

where L(*t*) is the time-dependent Liouville operator, H(*t*) is the time-dependent Hamiltonian, and [ , ] denotes the commutator.
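For a finite-dimensional sketch of Eq.(1), the Liouville equation integrates to unitary evolution, *ρ*(*t*) = *U*(*t*)*ρ*(0)*U*†(*t*) with *U*(*t*) = exp(−*i*H*t*/*h*¯). The following toy example (an arbitrary 2×2 Hamiltonian with *h*¯ = 1; the matrices are illustrative assumptions, not taken from the text) verifies that the trace and spectrum of *ρ*(*t*) are invariant under this evolution:

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[1.0, 0.5], [0.5, -1.0]])    # illustrative Hermitian Hamiltonian
rho0 = np.array([[0.8, 0.2], [0.2, 0.2]])  # illustrative density matrix, Tr = 1

def evolve(rho, t):
    """Integrate d(rho)/dt = [H, rho]/(i*hbar) via the unitary propagator."""
    U = expm(-1j * H * t / hbar)
    return U @ rho @ U.conj().T

rho_t = evolve(rho0, 2.0)
print(np.trace(rho_t).real)                # trace is preserved (close to 1.0)
print(np.sort(np.linalg.eigvalsh(rho_t)))  # same spectrum as rho0
```

Because the spectrum is invariant, any entropy built solely from the full *ρ*(*t*) is constant under Eq.(1); this is part of the motivation for introducing the relevant density *σ*(*t*) below.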

In addition to *ρ*(*t*), a relevant canonical-density operator *σ*(*t*) is constructed. Robertson chose to construct *σ*(*t*) by maximizing the information entropy subject to limited knowledge in the form of a finite set of constraints on the expected values of operators that are contained in the Hamiltonian at times *t*. In the equilibrium limit, the expected values of the relevant operators are the thermodynamic potentials, and the associated generalized forces can be identified as thermodynamic forces. The entropy summarizes our state of uncertainty in the expected values of the relevant variables at time *t* and is

$$S(t) = -k\_B \text{Tr} \left( \sigma(t) \ln \sigma(t) \right),\tag{2}$$

where *kB* is Boltzmann's constant and *Tr* denotes the trace, which in a classical analysis becomes an integration over phase variables. We note that forms of the entropy other than Eq.(2) could be used to construct *σ*(*t*) from the relevant variables contained in the Hamiltonian. The basic generalized thermodynamic quantities that we wish to determine are *< Fn*(**r**) *>*≡ *Tr*(*Fn*(**r**)*ρ*(*t*)). The constraints for the entropy are that the expectations of *Fn*(**r**) under both *σ*(*t*) and *ρ*(*t*) are equal for all times

$$\operatorname{Tr}(F\_{\mathfrak{n}}(\mathbf{r})\rho(t)) = \operatorname{Tr}(F\_{\mathfrak{n}}(\mathbf{r})\sigma(t)) = < F\_{\mathfrak{n}}(\mathbf{r})>.\tag{3}$$
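Both Eq.(2) and the constraint Eq.(3) can be checked directly in a finite-dimensional setting. A small sketch (matrix *σ*, with *kB* = 1; the two-level example values are illustrative assumptions):

```python
import numpy as np

def entropy(sigma, kB=1.0):
    """S = -kB Tr(sigma ln sigma), Eq.(2), for a density matrix sigma."""
    p = np.linalg.eigvalsh(sigma)   # sigma is Hermitian, positive semi-definite
    p = p[p > 1e-12]                # convention: 0 ln 0 -> 0
    return -kB * np.sum(p * np.log(p))

def expectation(F, rho):
    """<F> = Tr(F rho), the constraint functional of Eq.(3)."""
    return np.trace(F @ rho).real

sigma = np.eye(2) / 2           # maximally mixed two-level state
print(entropy(sigma))           # ln 2 ≈ 0.693, the maximum for two levels
F = np.diag([0.0, 1.0])         # an illustrative relevant variable
print(expectation(F, sigma))    # 0.5
```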

Note, however, and this is crucial, that the derivatives *dσ*(*t*)/*dt* and *dρ*(*t*)/*dt* are not equal, and the time derivatives of the expectations taken with respect to *σ*(*t*) and *ρ*(*t*) are likewise not equal; the approach is to find these derivatives in terms of each other and thereby develop an equation of motion. Note that at this stage, neither *λ* nor *< Fn >* is known. Throughout the paper, unless otherwise noted, the brackets *<>* indicate expectations with respect to *σ*(*t*).

Maximization by the common variational procedure leads to the generalized canonical density

$$\sigma(t) = \exp\left(-\lambda(t) \* F\right),\tag{4}$$

where we require


$$\operatorname{Tr}(\exp\left(-\lambda(t)\*F\right))=1.\tag{5}$$

We use the ∗ notation for scalar or vector *Fn* and *λn*:

$$
\lambda \ast F = \int d\mathbf{r} \sum\_{n} \lambda\_n(\mathbf{r}, t) \cdot F\_n(\mathbf{r}). \tag{6}
$$

*λn*(**r**, *t*) and *< Fn >* are determined in terms of each other from the simultaneous solution of Eqs.(3) and (4) and the equation of motion, Eq.(13), that will be developed later in this section. This highlights the difference between this approach and the Jaynesian maximum-entropy approach, in which *< Fn >* would be assumed known and the *λn* then determined. Even though both methods maximize entropy, the Robertson method does not assume that the *< Fn >* are known but determines them and the *λn* by requiring them to satisfy the equations of motion in addition to the constraint equations, thus incorporating the irrelevant information and the Liouville equation into the solution.
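As a minimal illustration of how a *λ* is pinned down by a constraint, consider a single scalar relevant variable in a two-level toy system: given a target *< F >*, solve Tr(*F* exp(−*λF*))/Tr(exp(−*λF*)) = *< F >* for *λ* by bisection. This is only the constraint half of the Robertson scheme (the coupling to the equation of motion, Eq.(13), is omitted), and the matrix and target value are invented for illustration:

```python
import numpy as np
from scipy.linalg import expm

F = np.diag([0.0, 1.0])    # single relevant variable (illustrative)
target = 0.3               # prescribed <F> (illustrative)

def sigma_of_lambda(lam):
    """Generalized canonical density, Eqs.(4)-(5), with explicit normalization."""
    w = expm(-lam * F)
    return w / np.trace(w)

def mean_F(lam):
    return np.trace(F @ sigma_of_lambda(lam)).real

# Bisection: for this F, <F>(lambda) decreases monotonically in lambda.
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean_F(mid) > target:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)
print(lam)                 # analytically ln(0.7/0.3) ≈ 0.847 for this toy case
```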

As an example involving electromagnetic driving, the relevant variables may be the microscopic internal-energy density operator *u*(**r**) and the polarizations **p**(**r**) and **m**(**r**), with associated intensive quantities (*λ*'s) 1/*kBT* and effective local fields −**E***p*/*kBT* and −**H***m*/*kBT*. The generalized temperature is 1/*β* = (*h*¯*ω*/2) coth (*h*¯*ω*/2*kBT*). In the high-temperature approximation this reduces to 1/*β* → *kBT*. We note that quantities such as the internal energy and polarization energies are defined at equilibrium, and as a system moves out of thermal equilibrium the interpretation of these quantities changes.
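The crossover of the generalized temperature between its quantum and classical limits is easy to check numerically (a dimensionless sketch with *h*¯ = *kB* = 1; the frequency and temperatures are arbitrary illustrative values):

```python
import numpy as np

hbar = kB = 1.0

def inv_beta(omega, T):
    """Generalized temperature 1/beta = (hbar*omega/2) coth(hbar*omega / (2*kB*T))."""
    x = hbar * omega / (2.0 * kB * T)
    return (hbar * omega / 2.0) / np.tanh(x)

omega = 1.0
# High-temperature limit: 1/beta -> kB*T (ratio approaches 1)
print(inv_beta(omega, 100.0) / (kB * 100.0))
# Low-temperature limit: 1/beta -> hbar*omega/2, the zero-point value
print(inv_beta(omega, 0.01) / (hbar * omega / 2.0))
```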

The dynamical variables we use are a set of operators or, classically, a set of functions of phase, *F*1(**r**), *F*2(**r**), ···. The expectations of these are the observable or measured quantities. For normalization, *F*<sup>0</sup> = 1 is included in the set. We will assume a normalization of the density function *σ* so that no *λ*<sup>0</sup> or *F*<sup>0</sup> is required. The operators *Fn*(**r**) are functions of **r** and the phase variables, but are not explicitly time dependent. The time dependence enters through the driving fields in the Hamiltonian and when the trace operation is performed. These operators are, for example, the microscopic internal-energy density *u*(**r**) and the electromagnetic polarizations **m**(**r**) and **p**(**r**), or the microscopic electromagnetic induction and electric fields **b**(**r**) and **e**(**r**). Associated with these operators is a set of generalized forces that represent generalized thermodynamic fields; these are not operators and do not depend on phase, examples being the generalized temperature and the local electromagnetic fields **E***p*(**r**, *t*) and **H***m*(**r**, *t*). In any complex system, in addition to the set of *Fn*(**r**), there may be other uncontrolled or unobserved variables that are categorized as irrelevant variables.

This Gibbsian form of the entropy appears to be very reasonable since the condition of maximal information entropy is the most unbiased and maximizes the uncertainty in *σ*(*t*) consistent with the constraints at each point in time. If we choose not to use the form of Eq.(2) to generate *σ*(*t*) from a set of constraints, then we must possess additional information beyond the set of constraints. The entropy in Eq.(2) will contain input from Liouville's equation and the Hamiltonian. The *λ*'s and *< Fn*(**r**) *>* can only be determined by solving Eq.(3) in conjunction with equations of motion developed later in the paper. In Eq.(5), *λ*(**r**, *t*) are generalized forces that are not functions of phase, are not operators, and are related to local nonquantized generalized forces, such as temperature and electromagnetic fields, and together with *< Fn*(**r**) *>* are determined by the simultaneous solution of Eqs.(3) and (13). Notice that the information that is assumed to be known is only a subset of the total information that would be required to totally characterize a system. The irrelevant variables are included in the theory exactly by means of a projection-like operator, and these effects are also manifest in the generalized forces.

The dynamical evolution of the relevant variables is the reversible evolution through the Hamiltonian and is denoted by

$$\dot{F}\_n(\mathbf{r}) \equiv i\mathcal{L} F\_n(\mathbf{r}) = -\frac{1}{i\hbar} [\mathcal{H}(t), F\_n(\mathbf{r})].\tag{7}$$

Note the difference in sign from that of the statistical density-operator evolution in Eq.(1). In addition to the dynamical evolution of the relevant variables, the full time derivative of the expected values of the relevant variables is also influenced by the irrelevant variables, and this is manifested as dissipation. For an electromagnetic system with internal energy and microscopic fields **d** and **b** interacting with applied fields **E** and **H**, the Hamiltonian is

$$\mathcal{H}(t) = \int d\mathbf{r} \left\{ u(\mathbf{r}) - \mathbf{d}(\mathbf{r}) \cdot \mathbf{E}(\mathbf{r}, t) - \mathbf{b}(\mathbf{r}) \cdot \mathbf{H}(\mathbf{r}, t) \right\}.$$

Robertson's approach is based on developing an exact integral equation for *ρ*(*t*) in terms of *σ*(*t*). If we use Oppenheim's extended initial condition, Oppenheim and Levine (1979), this relationship is

$$
\rho(t) = \sigma(t) + \mathcal{T}(t, t\_i)\chi(t\_i) - \int\_{t\_i}^{t} d\tau\, \mathcal{T}(t, \tau) \{1 - P(\tau)\} i\mathcal{L}(\tau)\sigma(\tau), \tag{8}
$$

where the initial condition at time *ti* is *χ*(*ti*) = *ρ*(*ti*) − *σ*(*ti*) (note that Oppenheim and Levine (1979) generalized the analysis of Robertson to include this more general initial condition). The easiest and generally valid approximation is to set T (*t*, *ti*)*χ*(*ti*) = 0, for example when *ρ*(*ti*) = *σ*(*ti*), or by letting *ti* → −∞ and assuming that the density operators coincide in the distant past. T is an evolution operator with T (*t*, *t*) = 1 that satisfies

$$\frac{\partial \mathcal{T}(t,\tau)}{\partial \tau} = \mathcal{T}(t,\tau)(1 - P(\tau))i\mathcal{L}(\tau),\tag{9}$$

where *P*(*t*) is a nonhermitian projection-like operator defined by the functional derivative

$$P(t)A \equiv \sum\_{n=1}^{m} \int d\mathbf{r} \frac{\delta \sigma(t)}{\delta < F\_n(\mathbf{r}) >} Tr(F\_n(\mathbf{r})A)$$

$$= \sum\_{m,n=1}^{p} \int d\mathbf{r}' \, \overline{F}\_m(\mathbf{r}')\sigma(t) < F\_m(\mathbf{r}')\overline{F}\_n(\mathbf{r}) >^{-1} Tr(F\_n(\mathbf{r})A)\, d\mathbf{r},\tag{10}$$

for any operator *A* (Robertson (1966)). As a consequence, *dσ*/*dt* = *P*(*t*)*dρ*/*dt*. In his pioneering work, Robertson showed that Eq.(10) is equivalent to the Kawasaki-Gunton and Grabert projection operators and is a generalization of the Mori and Zwanzig projection operators Robertson (1978). Although *P*<sup>2</sup> = *P*, the operator *P* is not necessarily hermitian; as a consequence it is not a true projection operator, and we term it a projection-like operator. In an open system, *ρ*(*t*) does not evolve unitarily and need not satisfy Eq.(1), but the theory developed in this paper can easily be modified by adding a source term for the interaction with a reservoir (Robertson and Mitchell (1971); Yu (2008)).

An important identity was proven previously in Oppenheim and Levine (1979); Rau and Müller (1996); Robertson (1993):

$$i\mathcal{L}\sigma(t) = -\lambda \ast \dot{\overline{F}}\sigma,\tag{11}$$

and


$$\text{Tr}(i\mathcal{L}\sigma(t)) = -\lambda \ast < \dot{\overline{F}} > = 0,\tag{12}$$

where the bar denotes the Kubo transform of an operator *A*(**r**),

$$\overline{A} = \int\_0^1 \sigma^x(t)\, A\, \sigma^{-x}(t)\, dx,$$

Oppenheim and Levine (1979); Robertson (1967a). In a classical analysis, $\overline{A} = A$.
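In the eigenbasis of *σ* (eigenvalues *pi*), the Kubo transform has a closed form: the *ij* matrix element of the transformed operator is the original element weighted by (*pi*/*pj* − 1)/ln(*pi*/*pj*), reducing to the identity weight on the diagonal; in particular the transform leaves *A* unchanged whenever *A* commutes with *σ*, recovering the classical statement. A small numerical sketch (the example matrices are illustrative assumptions):

```python
import numpy as np

def kubo_transform(A, sigma):
    """Kubo transform: integral over x in [0,1] of sigma^x A sigma^(-x)."""
    p, V = np.linalg.eigh(sigma)        # sigma = V diag(p) V^dagger
    Ab = V.conj().T @ A @ V             # A in the eigenbasis of sigma
    r = p[:, None] / p[None, :]         # ratios p_i / p_j
    with np.errstate(divide="ignore", invalid="ignore"):
        w = np.where(np.isclose(r, 1.0), 1.0, (r - 1.0) / np.log(r))
    return V @ (w * Ab) @ V.conj().T

sigma = np.diag([0.7, 0.3])             # illustrative relevant density
A = np.array([[0.0, 1.0], [1.0, 0.0]])  # illustrative observable
Abar = kubo_transform(A, sigma)
# If [A, sigma] = 0 the transform is the identity, as in the classical case.
print(np.allclose(kubo_transform(sigma, sigma), sigma))   # True
```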

It has been shown previously that the exact time evolution of the relevant variables for a dynamically driven system can be expressed as Baker-Jarvis (2005; 2008); Oppenheim and Levine (1979); Robertson (1966):

$$\frac{\partial < F\_n(\mathbf{r}) >}{\partial t} \equiv -\Delta\{F(\mathbf{r})\overline{F}(\mathbf{r}')\} \ast \frac{\partial \lambda(\mathbf{r}', t)}{\partial t} = < \dot{F}\_n(\mathbf{r}) >$$

$$+ \text{Tr}(\dot{F}\_n(\mathbf{r})\mathcal{T}(t, t\_i)\chi(t\_i)) - \int\_{t\_i}^{t} \text{Tr}\left(i\mathcal{L}F\_n(\mathbf{r})\mathcal{T}(t, \tau)(1 - P(\tau))i\mathcal{L}\sigma(\tau)\right) d\tau.\tag{13}$$

Δ{*FF*} can be related to material properties such as heat capacity and susceptibility; the ∗ operator is defined in Eq.(6), and Δ{*FF*} is defined as Δ{*FF*} = *< FF >* − *< F >< F >*, as shown in Appendix 10. Note that Eq.(13) is time symmetric (invariant under *t* → −*t*) and yet models dissipation by including the effects of the irrelevant variables. In many problems it is useful to use Eq.(11) in Eq.(13). The first term on the RHS is the reversible contribution, the second term is the initial-condition contribution, and the last term is due to dissipation. Equations (3) and (13) form a closed system of equations, and the procedure for determining the generalized forces in terms of *< Fn >* is to solve Eqs.(3) and (13) simultaneously. For operators that are odd under time reversal, such as the magnetic moment, the first term on the right-hand side of Eq.(13), the reversible term, is nonzero, whereas for functions even under time reversal, such as dielectric polarization and microscopic entropy, this term is zero. However, the third term in Eq.(13) is nonzero in any dissipative system. The relaxation correction term that appears in this formalism is essential and is a source of the time dependence in the entropy rate. Although these equations are nonlinear, in many cases linear approximations have been made successfully Baker-Jarvis et al. (2007). For open systems, where there is a source that is not in the Hamiltonian, Eq.(13) is modified only by adding a source term Robertson and Mitchell (1971). An exact entropy-evolution equation can be derived from Eq.(13), and this equation will be a key element in this paper.
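Although Eq.(13) is exact and generally nonlinear with memory, a common first approximation is linear and memoryless: for a time-reversal-even variable such as the polarization, the reversible term vanishes and the dissipative term collapses to a relaxation rate. The following caricature (the rate 1/*τ*, the static susceptibility, and the step drive are invented for illustration, not derived from the projection operator) shows the resulting exponential approach of the expectation toward its constrained value:

```python
import numpy as np

tau, chi0 = 1.0, 2.0    # illustrative relaxation time and static susceptibility
dt, nsteps = 1e-3, 10000

p = 0.0                 # polarization expectation <p>
E = 1.0                 # step field switched on at t = 0
for _ in range(nsteps):
    # Linear, memoryless caricature of Eq.(13): the reversible term is zero
    # (polarization is even under time reversal); dissipation ~ -(p - chi0*E)/tau.
    p += dt * (chi0 * E - p) / tau
print(p)                # relaxes toward chi0 * E
```

In the frequency domain this caricature is just the Debye form of the susceptibility, which is consistent with the near-equilibrium limits discussed later in the paper.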

where we have eliminated *P*(*t*) and avoided some of the complications that the projection-like operator *P* presents for simulations. In linear driving the projection operator can be neglected. We see that only the variables that change sign under time reversal, such as the magnetic moment,

*An Analysis of the Interaction of Electromagnetic and Thermal Fields with Materials Based on Fluctuations and Entropy Production*


#### **3. The entropy, entropy rate, and entropy production**

Due to the invariance of the trace operation under unitary transformations, the von Neumann entropy −*kBTr*(*ρ*(*t*)ln *ρ*(*t*)), formed from the full statistical-density operator *ρ*(*t*) that satisfies the Liouville Eq.(1) in an isolated system, is independent of time and cannot serve as a nonequilibrium entropy. However, the entropy −*kBTr*(*σ*(*t*)ln *σ*(*t*)) is not constant in time, has all the properties of a nonequilibrium entropy, and reduces to the thermodynamic entropy in the appropriate limit. It is important to note that, unlike energy, entropy is not a conserved quantity and can be produced in interactions.
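The invariance claim above is easy to check numerically. The following sketch (an illustrative finite-dimensional example; the 4×4 Hamiltonian, initial state, and coarse-graining basis `B` are all assumed) evolves a density matrix unitarily and compares the constant von Neumann entropy of the full ρ(*t*) with the entropy of a reduced, diagonal-populations description, which is never smaller.

```python
import numpy as np

# Under unitary (Liouville) evolution the von Neumann entropy of the full
# density operator rho(t) is constant, while a coarse-grained entropy built
# from a reduced set of "relevant" numbers (here: diagonal populations in a
# fixed, assumed basis B) is not.  The 4x4 model is arbitrary.

rng = np.random.default_rng(0)
kB = 1.0  # units with k_B = 1

def entropy(p):
    p = np.asarray(p)
    p = p[p > 1e-12]          # drop numerically zero populations
    return -kB * float(np.sum(p * np.log(p)))

A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho0 = A @ A.conj().T
rho0 /= np.trace(rho0).real   # mixed initial state, Tr(rho) = 1

H = rng.normal(size=(4, 4))
H = H + H.T                   # Hamiltonian (real symmetric)
w, V = np.linalg.eigh(H)

B = np.linalg.qr(rng.normal(size=(4, 4)))[0]  # fixed coarse-graining basis

S_full, S_diag = [], []
for t in np.linspace(0.0, 3.0, 7):
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T  # U = exp(-iHt)
    rho_t = U @ rho0 @ U.conj().T
    S_full.append(entropy(np.linalg.eigvalsh(rho_t)))
    S_diag.append(entropy(np.diag(B.conj().T @ rho_t @ B).real))
```

`S_full` stays constant to machine precision, while `S_diag` (the entropy of the retained, relevant populations) is bounded below by it, mirroring the role of −*kBTr*(*σ* ln *σ*) above.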

We define the entropy from Eqs. (2) and (4)

$$S(t) \equiv k\_B \lambda \* < F >, \tag{14}$$

and microscopic dynamically-driven entropy rate from Eq.(11) as

$$
\dot{s}(t) \equiv k\_B \lambda \* \dot{F} = k\_B \lambda \* i\mathcal{L}F. \tag{15}
$$

The expected value of the dynamical evolution of the entropy rate vanishes due to Eq.(12) and invariance of the trace under cyclic permutations (for bounded operators) Oppenheim and Levine (1979):

$$<\overline{\dot{s}}(t)> = k\_B \text{Tr}(\lambda \* \overline{\dot{F}} \sigma) = k\_B \lambda \* \text{Tr}(\dot{F}\sigma) = <\dot{s}(t)> = 0. \tag{16}$$

Equation (16) is a result of the microreversibility of the dynamics of the relevant variables and is what would be expected for reversible microscopic equations of motion in a dynamically driven, but otherwise isolated, system with dynamical evolution of the relevant variables. The total entropy evolution equation that contains both relevant and irrelevant effects can be formed from Eq.(13) by multiplying by the *λn*'s, summing, and integrating over space. The entropy evolution is

$$\frac{dS}{dt} - \operatorname{Tr}(\chi(t\_i)\dot{s}(t)\mathcal{T}(t, t\_i)) \equiv \Sigma(t) = \frac{1}{k\_B} \int\_{t\_i}^{t} \left< \dot{s}(t)\mathcal{T}(t, \tau)(1 - P(\tau))\overline{\dot{s}}(\tau) \right> d\tau. \tag{17}$$

This equation and Eq.(13) form the foundations of this paper. Note that the entropy rate is antisymmetric about the origin: *dS*/*dt*(*t* = *ti*) − *Tr*(*χ*(*ti*)*s*˙(*ti*)) = 0, and without the initial condition, *dS*(−*t*)/*dt* = −*dS*(*t*)/*dt*; Σ(*t*) is the entropy production rate. The RHS relates to dissipation and is equivalent to Eq.(49) in Reference Baker-Jarvis and Surek (2009). The macroscopic entropy rate can be expressed in two equivalent forms: *dS*(*t*)/*dt* ≡ −*kBλ* ∗ [Δ{*FF*} ∗ *∂λ*/*∂t*] (see Appendix 10 for the definition of Δ), or *dS*/*dt* ≡ *kBλ* ∗ *∂ < F >* /*∂t*. Rau and Müller (1996) note that for unbounded operators the cyclic invariance of the trace argument breaks down. The projection operator contribution in Eq.(17) can be re-expressed as (Robertson (1967b))

$$\Sigma(t) = \frac{dS}{dt} - \text{Tr}(\chi(t\_i)\dot{s}(t)\mathcal{T}(t, t\_i)) = \frac{1}{k\_B} \int\_{t\_i}^{t} \left< \dot{s}(t)\mathcal{T}(t, \tau)\overline{\dot{s}}(\tau) \right> d\tau - \int\_{t\_i}^{t} \left< \dot{s}(t)\mathcal{T}(t, \tau)\overline{F} \right> \* <\dot{F}> \* <FF>^{-1} d\tau, \tag{18}$$


where we have eliminated *P*(*t*) and avoided some of the complications that the projection-like operator *P* introduces in simulations. In linear driving the projection operator can be neglected. We see that only the variables that change sign under time reversal, such as the magnetic moment, have expected values $<\dot{F}\_n> \neq 0$. The even variables satisfy $<\dot{F}\_n> = 0$. Equation (18) is related to the energy FDR relation developed by Berne and Harp (1970), but is based on entropy and not energy. These FDR equations reduce to the traditional FDR relations, such as Nyquist's theorem, or conductivity as we approach equilibrium. Equation (17) will form the basis of our applications to various electromagnetic driving and measurement problems. The LHS of Eq.(17) represents the macroscopic dissipation, and the last term on the RHS represents the fluctuations in terms of the microscopic entropy rate *s*˙(*t*). For almost all many-body systems, due to incomplete information, there are contributions from the positive semi-definite relaxation terms in Eq.(17). The projection operator, which models the state of knowledge of the system described by the relevant variables, acts to decrease the entropy rate, and in the limit of no decoherence Eq.(17) becomes *dS*/*dt* → 0. For an open system Eq.(17) would be modified by adding a term for the entropy source.

To summarize, for a dynamically driven, but otherwise isolated, system, the expected value of the microscopic entropy rate (*< s*˙(*t*) *>*) is zero, but the fluctuations in this variable are not zero. This is due to the microscopic reversibility of the underlying equations of motion. However, in a complex system there are other uncontrolled variables in addition to the relevant ones and as a consequence there is dissipation and irreversibility and a net entropy evolution. Whereas *ρ*(*t*) satisfies Liouville's equation, *σ*(*t*) does not, and this is a consequence of irrelevant variables. Equation (17) can model systems away from equilibrium. Transport coefficients and the related FDR relations for conductivity, susceptibility, noise, and other quantities follow naturally from Eq.(13).

The paper is organized as follows. It begins with an argument that the author's previously developed entropy-based fluctuation-dissipation equation (EFDR) can be used to obtain classical FDRs and can be extended into the realm of nonequilibrium systems. The equations of motion are all time symmetric, in that if one transforms *t* → −*t*, the equations remain the same Robertson (1999). We show that if we apply the Euler-Lagrange equations to the difference between the entropy production and *dS*/*dt*, or equivalently the net flux through the system, we obtain the statistical-mechanical equations of motion. In the last sections of the paper, fluctuation-dissipation equations are derived for electrical conductivity, noise, thermal conductivity, and the determination of Boltzmann's constant.

#### **4. Entropy rate fluctuation relation and transport coefficients**

A generalized entropy rate EFDR relation, valid arbitrarily far from equilibrium, was developed in recent papers Baker-Jarvis (2005; 2008); Baker-Jarvis and Surek (2009); it is the equation of motion for the entropy in Eq.(17):

$$dS/dt = \underbrace{\frac{1}{k\_B} \int\_{t\_i}^{t} \left< \dot{s}(t) \mathcal{T}(t, \tau) (1 - P(\tau)) \overline{\dot{s}}(\tau) \right> d\tau}\_{\text{fluctuation}} + <\dot{s}(t)>$$

$$= k\_B \frac{\partial < F >}{\partial t} \* \lambda = -k\_B \lambda \* \left[ \Delta \{ F \overline{F} \} \* \frac{\partial \lambda}{\partial t} \right]. \tag{19}$$


Note that *< s*˙(*t*) *>*= 0 for the set of *λ* satisfying Eqs.(13) and (4). This equation has units of entropy rate and the notation Δ{*FF*} is defined above. The *λ<sup>n</sup>* correspond to applied quantities such as electromagnetic fields, temperature, etc. The relevant variables *Fn* correspond to generalized thermodynamic potentials such as currents, polarizations, momentum, etc. The LHS relates to fluctuations and the RHS relates to dissipation as entropy production. The RHS was put into two distinct, but equivalent forms. Use of the different forms may be advantageous for different applications. The last term on the RHS is derived using the expression derived in Eq.(70) in Appendix A. The physical interpretation of Eq.(19) is that the rate of production of entropy drives fluctuations in the microscopic entropy, and fluctuations in the microscopic entropy rate drive the rate of entropy production. Note that temperature does not appear explicitly in the equation. Equation (19) is exact in both classical and quantum-mechanical contexts when driven by non-quantized fields. However, in Eq.(19) we have neglected the effects of the decaying initial condition term. For an open system, where there is heat from a reservoir interacting with the system that is not treated as a term in the Hamiltonian, an entropy source term is added to the RHS of Eq.(19).

For conserved quantities, Eq.(19) can be cast into another form in terms of generalized microscopic currents **j***n*(**r**, *t*) that satisfy the conservation conditions ∇ · **j***n*(**r**, *t*) = −*i*L*Fn*(**r**). Note **j***n*(**r**, *t*) have an explicit time dependence since the Hamiltonian is time dependent. Substituting this relationship into Eq.(19), and integrating by parts and discarding surface terms (entropy fluxes), we obtain the following form for the entropy production rate

$$\Sigma(t) = k\_B \nabla \lambda(\mathbf{r}, t) \* \int\_{t\_i}^{t} \overset{\leftrightarrow}{K}\_{\mathbf{j}}(\mathbf{r}, t, \mathbf{r}', \tau) \* \nabla \lambda(\mathbf{r}', \tau) d\tau \equiv \underbrace{\mathbf{J}(\mathbf{r}, t)}\_{\text{current}} \* \underbrace{k\_B \nabla \lambda(\mathbf{r}, t)}\_{\text{generalized force}}, \tag{20}$$

where

$$\stackrel{\leftrightarrow}{K}\_{\mathbf{j}(mn)}(\mathbf{r},t,\mathbf{r}',\tau) = <\mathbf{j}\_m(\mathbf{r},t)\mathcal{T}(t,\tau)(1 - P(\tau))\mathbf{j}\_n(\mathbf{r}',\tau)>,\tag{21}$$

$$\mathbf{J}(\mathbf{r}, t) = \int\_{t\_i}^{t} \overset{\leftrightarrow}{K}\_{\mathbf{j}}(\mathbf{r}, t, \mathbf{r}', \tau) \* \nabla \lambda(\mathbf{r}', \tau) d\tau \approx -\overset{\leftrightarrow}{L}\_{mn}(\mathbf{r}) \cdot \nabla x, \tag{22}$$

where the last expression is in a linear approximation and ∇*x* is a driving force. Equation (20) is in the form of a flux times a generalized force, which is commonly identified with entropy production. Note we omitted a possible reversible term $<\dot{\mathbf{j}}\_n>$.

A very general expression for the transport coefficients in the linear approximation is

$$\overset{\leftrightarrow}{L}\_{mn}(\mathbf{r}) = \int\_0^\infty \int d\mathbf{r}' \theta(\mathbf{r}) \frac{<\mathbf{j}\_m(\mathbf{r},0)\mathbf{j}\_n(\mathbf{r}',s)>\_0}{k\_B T(\mathbf{r}')} ds. \tag{23}$$

To obtain the linear approximation, we write ∇*λ* = −*θ*∇*x*/*kBT*. For example, in heat transfer *λ* = 1/*kBT* and *θ* = 1/*T*, the driving force is the temperature gradient ∇*x* = ∇*T*, and the transport coefficient from Eq.(23) is the thermal conductivity defined in $\mathbf{J} = -\overset{\leftrightarrow}{\kappa} \cdot \nabla T$. As another example, if the electrical potential is *φ* and *λ* = −*φ*/*kBT*, where ∇*x* = ∇*φ* ≈ −**E** and *θ* = 1, the transport coefficient from Eq.(23) is the electrical conductivity defined in $\mathbf{J} = -\overset{\leftrightarrow}{\sigma}\_f \cdot \nabla \phi$.
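In a homogeneous, isotropic, scalar setting, Eq.(23) reduces to a Green-Kubo-type integral of the equilibrium current autocorrelation. The sketch below (with an assumed exponential autocorrelation; all numerical values are illustrative, not measured data) checks the truncated numerical integral against the closed form.

```python
import numpy as np

# Scalar, homogeneous sketch of the Green-Kubo-type formula in Eq.(23):
#   L = (1 / (k_B T)) * int_0^inf <j(0) j(s)>_0 ds.
# With an assumed exponentially decaying equilibrium autocorrelation
# <j(0)j(s)>_0 = j2 * exp(-s / tau_c), the integral is j2 * tau_c / (k_B T).

kB = 1.380649e-23            # J/K
T = 300.0                    # K
j2, tau_c = 2.5e-18, 1e-13   # assumed <j^2>_0 and correlation time

s = np.linspace(0.0, 20.0 * tau_c, 200001)  # truncate the infinite integral
acf = j2 * np.exp(-s / tau_c)
# trapezoid rule for the time integral of the autocorrelation
integral = np.sum(0.5 * (acf[1:] + acf[:-1]) * np.diff(s))

L_numeric = integral / (kB * T)
L_exact = j2 * tau_c / (kB * T)
```

The truncation at 20 correlation times contributes a relative error of order e^{−20}, so the numerical and closed-form coefficients agree to many digits; the same quadrature applies to autocorrelations estimated from measured or simulated current records.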

In the long-wavelength, short-relaxation-time limit, $\overset{\leftrightarrow}{K}\_{\mathbf{j}}(\mathbf{r}, t, \mathbf{r}', \tau) = \overset{\leftrightarrow}{C}(\mathbf{r}, \mathbf{r}', t, \tau)\delta(\mathbf{r} - \mathbf{r}')\delta(t - \tau)$, and Eq.(20) for the entropy production rate can be written as

$$\Sigma(\mathbf{r}, t) = k\_B \nabla \lambda(\mathbf{r}, t) \cdot \overset{\leftrightarrow}{C}(\mathbf{r}, t) \cdot \nabla \lambda(\mathbf{r}, t). \tag{24}$$

This is a positive semi-definite quantity.
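The positive semi-definiteness follows whenever the correlation tensor is symmetric positive semi-definite. A quick numerical spot check (random PSD tensors C = MMᵀ and random ∇λ; purely illustrative):

```python
import numpy as np

# Spot check of the positive semi-definiteness of Eq.(24):
# Sigma = k_B * grad_lambda . C . grad_lambda >= 0 whenever the correlation
# tensor C is symmetric positive semi-definite (here C = M M^T, random M).

rng = np.random.default_rng(1)
kB = 1.380649e-23

sigmas = []
for _ in range(1000):
    M = rng.normal(size=(3, 3))
    C = M @ M.T                    # symmetric PSD correlation tensor
    grad_lam = rng.normal(size=3)
    sigmas.append(kB * grad_lam @ C @ grad_lam)
```

Every sample equals *kB*|Mᵀ∇λ|², a sum of squares, so no draw is negative.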


#### **4.1 Steady-state approximation to entropy production**

In the case of a linear approximation to Eq.(19) with relevant variables *Fn* and $\dot{F}\_n = -\nabla \cdot \mathbf{j}\_n$, where the system is driven by a constant field, such as a temperature gradient or electric field, we can write the time-independent entropy production as

$$\Sigma(\mathbf{r}) = k\_B \lambda(\mathbf{r}) \* \int\_0^\infty \left< \dot{F}(\mathbf{r}, 0) \dot{F}(\mathbf{r}', \tau) \right>\_0 \* \lambda(\mathbf{r}') d\tau$$

$$= k\_B \nabla \lambda(\mathbf{r}) \* \int\_0^\infty < \mathbf{j}(\mathbf{r}, 0) \mathbf{j}(\mathbf{r}', \tau) >\_0 d\tau \* \nabla \lambda(\mathbf{r}'). \tag{25}$$

For a single variable this reduces to

$$\Sigma(\mathbf{r}) \approx k\_B \theta(\mathbf{r}) \nabla x(\mathbf{r}) \* \int\_0^\infty \frac{<\mathbf{j}(\mathbf{r}, 0) \mathbf{j}(\mathbf{r}', \tau) >\_0}{k\_B T} d\tau \* \theta(\mathbf{r}') \nabla x(\mathbf{r}')$$

$$= \int \frac{\theta(\mathbf{r})}{T(\mathbf{r})} \nabla x(\mathbf{r}) \cdot \overset{\leftrightarrow}{L}\_{mn}(\mathbf{r}) \cdot \nabla x(\mathbf{r}) d\mathbf{r}. \tag{26}$$

The expectation is under the equilibrium distribution so the correlation function is assumed to be for a stationary system. In the case of steady-state heat transfer where *θ* = 1/*T*, the entropy production density is

$$\Sigma\_d(\mathbf{r}) = k\_B \nabla \lambda \cdot \mathbf{J}\_h = \underbrace{\frac{\nabla T(\mathbf{r})}{T^2(\mathbf{r})}}\_{-\nabla \lambda} \cdot \underbrace{\overset{\leftrightarrow}{\kappa}(\mathbf{r}) \cdot \nabla T(\mathbf{r})}\_{-\mathbf{J}\_h}. \tag{27}$$

For steady-state electrical dissipation where *θ* = 1 (note the same equation would apply to the chemical potential *μc* instead of *φ*), we have

$$\Sigma\_d(\mathbf{r}) = k\_B \nabla \lambda \cdot \mathbf{J} = \frac{1}{T(\mathbf{r})} \left( \mathbf{E}(\mathbf{r}) - \frac{\phi(\mathbf{r}) \nabla T(\mathbf{r})}{T(\mathbf{r})} \right) \cdot \stackrel{\leftrightarrow}{\sigma}\_f(\mathbf{r}) \cdot \left( \mathbf{E}(\mathbf{r}) - \frac{\phi(\mathbf{r}) \nabla T(\mathbf{r})}{T(\mathbf{r})} \right) . \tag{28}$$
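For an isotropic material with scalar *κ* and *σf*, Eqs.(27) and (28) give simple closed forms: Σ*d* = *κ*|∇*T*|²/*T*² and, with *φ* = 0, Σ*d* = *σf*|**E**|²/*T*. The sketch below evaluates both with assumed, copper-like material values (illustrative numbers only).

```python
# Closed-form evaluation of the steady-state entropy production densities in
# Eqs.(27) and (28) for an isotropic material (scalar kappa and sigma_f) with
# a uniform gradient/field.  Material values are assumed, copper-like numbers.

kappa = 400.0       # thermal conductivity, W/(m K)
sigma_f = 5.8e7     # electrical conductivity, S/m
T = 300.0           # temperature, K

# Eq.(27): Sigma_d = (grad T / T^2) . kappa . grad T
gradT = 50.0        # K/m
Sigma_thermal = kappa * gradT**2 / T**2        # W/(m^3 K)

# Eq.(28) with phi = 0: Sigma_d = E . sigma_f . E / T  (Joule dissipation / T)
E = 0.01            # V/m
Sigma_joule = sigma_f * E**2 / T               # W/(m^3 K)
```

Both densities are manifestly non-negative, as the quadratic forms in Eqs.(27) and (28) require.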

#### **5. Exact equations of motion**

#### **5.1 Heat transfer**

For the first example let us consider heat transfer. For this case *λ* = 1/*kBT*, *F* = *u*(**r**), where *u*(**r**) is the internal energy density. The equation of continuity in this case is *i*L*u* = *u*˙ = −∇· **j***h*. *λ* ∗ (Δ{*FF*}) relates to material parameters, in this case the heat capacity. The exact equation

Here *τ<sup>h</sup>* is the characteristic relaxation time for heat transfer. We also use the approximation

<sup>215</sup> An Analysis of the Interaction of Electromagnetic

+ *ρcp*

This equation shows that the temperature can obtain wave properties due to the finite time it takes for the information to propagate through the material. This reverts to Fourier's heat

*∂T*

*<sup>∂</sup><sup>t</sup>* <sup>=</sup> ∇ · (

↔

, *τ*)

∇� *T*(**r**� , *τ*)

,*τ*) , (which in a linear approximation is

*T*(**r**�

, *<sup>τ</sup>*) <sup>=</sup> *<sup>g</sup>*(**r**, *<sup>t</sup>*) *T*(**r**, *t*) . (34)

. (35)

, *<sup>τ</sup>*) ·

*T*(**r**, *t*)

*Le*(*λ*(**r**, *t*), ∇*λ*(**r**, *t*),**r**)*d***r**. (36)

*kBT*(**r**�

*<sup>T</sup>* <sup>+</sup> *<sup>g</sup>*(**r**, *<sup>t</sup>*)

*κ* (**r**) · ∇*T*(**r**, *t*)). (33)

for the heat capacity, so that Eq.(32) becomes

*∂*2*T*

*<sup>∂</sup>t*<sup>2</sup> <sup>−</sup> *<sup>τ</sup>hρcp*

equation Eq.(31) in the short relaxation time limit *τ<sup>h</sup>* → 0.

<sup>0</sup> *dτ*

*<sup>∂</sup><sup>t</sup>* <sup>+</sup> ∇ · <sup>1</sup>

The Euler-Lagrange functional used to search for an extremum is

*I* = **r**<sup>2</sup> **r**1

2 *T*

and Thermal Fields with Materials Based on Fluctuations and Entropy Production

**5.2 Equation of motion for the entropy production in heat transfer**

, *<sup>t</sup>*) <sup>−</sup> <sup>∇</sup>*T*(**r**, *<sup>t</sup>*) *<sup>T</sup>*2(**r**, *<sup>t</sup>*) ·

> *d***r**� ↔ *Kh* (**r**,*t*,**r**� ,*τ*)

*kBT*(**r**�

*<sup>T</sup>* **<sup>J</sup>***h*(**r**, *<sup>t</sup>*)

**5.3 Using the Euler-Lagrange equation applied to entropy production extrema to obtain**

A function that satisfies the Euler-Lagrange condition makes Eq.(36) an extremum when *δI* = 0. In order to study the Euler-Lagrange extremum problem in entropy production rate we consider systems where the relevant variables satisfy a microscopic conservation condition ∇ · **<sup>j</sup>**(**r**, *<sup>t</sup>*) = <sup>−</sup>*F*˙(**r**). Examples of this are heat transfer ∇ · **<sup>j</sup>***h*(**r**, *<sup>t</sup>*) = <sup>−</sup>*u*˙(**r**) and charge transfer where ∇ · **j***c*(**r**, *t*) = −*ρ*˙(**r**). We set *I* to the entropy production minus the total change in entropy density: *<sup>d</sup>***r**(Σ*d*(**r**, *<sup>t</sup>*) <sup>−</sup> *dSd*(**r**, *<sup>t</sup>*)/*dt*), which is equivalent to the entropy flux through the system. Using the RHS of Eq.(19) with *λ* and ∇*λ* as variables for the case when the relevant

*<sup>n</sup>* = −∇ · **j***<sup>n</sup>* in the entropy production in Eq.(20) we have

*λt*(*n*)(**r**, *t*)

, *τ*) ∗ ∇*λ*(**r**

Δ{*Fn*(**r**)*F*(**r**

 -dS/dt

�

�

)} ∗ *∂λt*(**r**�

, *τ*)*dτ*. (38)

, *t*) *∂t*

 *d***r**

, (37)

+ ∑*n*

*ti* ↔ *K***<sup>j</sup>** (**r**, *t*,**r** �

**<sup>J</sup>**(**r**, *<sup>t</sup>*) = *<sup>t</sup>*

 *∂T ∂t*

<sup>2</sup>

For heat transfer the entropy production-rate density with a source term using Eq.(19)

 *t* 0 *dτ d***r** � ↔ *Kh* (**r**, *t*,**r**�

,*τ*) · <sup>∇</sup>�

*T*(**r**� ,*τ*) *T*(**r**�

*κ* (**r**) · ∇*T*/*T*), then we can write Eq.(34) as an entropy-density balance equation

<sup>=</sup> ∇ · **<sup>J</sup>***h*(**r**, *<sup>t</sup>*)

*τhρcp*

*u*(**r**� ) *T*(**r**� ,*t*) }

*∂T ∂t* (**r** �

*∂S*(**r**, *t*)

∇*λt*(*n*)(**r**, *t*) · **J***n*(**r**, *t*)*d***r**

 entropy production

*kBT*(**r**� , *t*)

If we define **<sup>J</sup>***h*(**r**, *<sup>t</sup>*) = <sup>−</sup> *<sup>t</sup>*

**exact equations of motion**

 *d***r** � <sup>Δ</sup>{ *<sup>u</sup>*(**r**) *T*(**r**,*t*)

equal to <sup>−</sup> <sup>↔</sup>

variables satisfy *F*˙

*kB* <sup>∑</sup>*<sup>n</sup>*

*I* = 

where

of motion, without a source, can be written using Eq.(13) as

$$\begin{split} \frac{\partial \left< u(\mathbf{r}) > \right>}{\partial t} &= \int d\mathbf{r}' \frac{1}{T(\mathbf{r}', t)} \frac{\Delta \{u(\mathbf{r}) \overline{u}(\mathbf{r}')\}}{k\_B T(\mathbf{r}', t)} \frac{\partial T}{\partial t}(\mathbf{r}', t) \\ &= \nabla \cdot \left( \int\_0^t d\tau \int d\mathbf{r}' \frac{\overset{\leftrightarrow}{K}\_h(\mathbf{r}, t, \mathbf{r}', \tau)}{k\_B T^2(\mathbf{r}', \tau)} \cdot \nabla' T(\mathbf{r}', \tau) \right), \end{split} \tag{29}$$

where <sup>↔</sup> *Kh* (**r**, *t*,**r**� , *τ*) =*<* **j***h*(**r**, *t*)T (*t*, *τ*)**j***h*(**r**� , *τ*) *>* and *<* **j***<sup>h</sup> >*= 0. Note that Eq.(29) is invariant to the transformation *t* → −*t*. This is not true for the normal heat transfer equation. In addition, due to the time integral, information is not transfered instantaneously, as it is in the normal heat equation where it is transfered without delay.

To see how Eq.(29) reverts to Fourier's equation, consider the case when <sup>1</sup> *kBT*<sup>2</sup>(**r**� ,*t*) *< u*(**r**)*u*(**r**� ) *>*≡ *ρd*(**r**� , *t*)*cp*(**r**� , *t*)*δ*(**r** − **r**� ) and <sup>↔</sup> *Kh* (**r**, *t*,**r**� , *<sup>τ</sup>*) <sup>≡</sup> *kBT*<sup>2</sup>(**r**� , *τ*) <sup>↔</sup> *κ* (**r**)*δ*(**r** − **r**� )*δ*(*t* − *τ*). This is a long wavelength and short relaxation time approximation. Note from Eq. (23) in the linear approximation we have an expression for the thermal conductivity as

$$\overset{\leftrightarrow}{\kappa}(\mathbf{r}) = \frac{\int d\mathbf{r}' \int_0^\infty < \mathbf{j}_h(\mathbf{r},0)\,\mathbf{j}_h(\mathbf{r}',\tau) >_0 d\tau}{k_B T^2}. \tag{30}$$

If we use this approximation for the heat capacity and thermal conductivity, we obtain the normal heat transport equation

$$
\rho_d c_p \frac{\partial T(\mathbf{r}, t)}{\partial t} = \nabla \cdot (\overset{\leftrightarrow}{\kappa}(\mathbf{r}) \cdot \nabla T(\mathbf{r}, t)). \tag{31}
$$

Note that the general equation for heat transfer, Eq.(29), is exact and time symmetric, but when the thermal conductivity tensor is defined as above, Eq.(29) loses time symmetry and is no longer invariant under the transformation *t* → −*t* (Robertson (1999)).
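Equation (30) has a Green-Kubo structure: the conductivity is the time integral of the equilibrium flux autocorrelation. A minimal numerical sketch (pure Python; the exponential autocorrelation and all numerical values are illustrative assumptions, not material data) checks the integral against its closed form:

```python
import math

# assumed equilibrium heat-flux autocorrelation <j_h(0) j_h(tau)>_0
C, tau_c = 2.0, 0.5      # amplitude and correlation time (illustrative)
kB_T2 = 1.3              # k_B * T^2 in arbitrary units

def acf(tau):
    return C * math.exp(-tau / tau_c)

# trapezoid integration of the autocorrelation, cf. Eq.(30)
dt, t_max = 1e-3, 10 * tau_c
n = int(t_max / dt)
integral = 0.5 * (acf(0.0) + acf(t_max)) * dt
integral += sum(acf(i * dt) for i in range(1, n)) * dt

kappa_numeric = integral / kB_T2
kappa_exact = C * tau_c / kB_T2   # exact integral of the exponential
assert abs(kappa_numeric - kappa_exact) / kappa_exact < 1e-3
```

A short correlation time τ_c concentrates the kernel near τ = 0, which is the delta-function limit used above to recover Eq.(31).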

Equation (29) can acquire wave-like properties. To see this, take the time derivative of Eq.(29), assuming the expectations are evaluated with an equilibrium distribution function:

$$\begin{split} &\int d\mathbf{r}' \frac{1}{T(\mathbf{r}',t)} \frac{\Delta\{u(\mathbf{r})\overline{u}(\mathbf{r}')\}_0}{k_B T(\mathbf{r}',t)} \frac{\partial^2 T}{\partial t^2}(\mathbf{r}',t) - 2\int d\mathbf{r}' \frac{1}{T^2(\mathbf{r}',t)} \frac{\Delta\{u(\mathbf{r})\overline{u}(\mathbf{r}')\}_0}{k_B T(\mathbf{r}',t)} \left(\frac{\partial T}{\partial t}(\mathbf{r}',t)\right)^2 \\ &= \nabla \cdot \left( \int d\mathbf{r}' \frac{\overset{\leftrightarrow}{K}_{h(0)}(\mathbf{r},t,\mathbf{r}',t)}{k_B T^2(\mathbf{r}',t)} \cdot \nabla' T(\mathbf{r}',t) \right) + \nabla \cdot \left( \int_0^t d\tau \int d\mathbf{r}' \frac{\partial \overset{\leftrightarrow}{K}_{h(0)}(\mathbf{r},t,\mathbf{r}',\tau)}{\partial t} \cdot \frac{\nabla' T(\mathbf{r}',\tau)}{k_B T^2(\mathbf{r}',\tau)} \right). \end{split} \tag{32}$$

On the first term on the RHS of Eq.(32) we used Eq.(31) to obtain $\partial T/\partial t$. In the long-wavelength, but not short-relaxation-time, special case, when the time dependence is modeled by an exponential decay, we can write $\overset{\leftrightarrow}{K}_h = k_B T^2\, \overset{\leftrightarrow}{\kappa}(\mathbf{r}')\, \delta(\mathbf{r}-\mathbf{r}')\, \exp(-(t-\tau)/\tau_h)/\tau_h$. Here $\tau_h$ is the characteristic relaxation time for heat transfer. We also use the approximation for the heat capacity, so that Eq.(32) becomes

$$
\tau_h \rho c_p \frac{\partial^2 T}{\partial t^2} - \tau_h \rho c_p \frac{2}{T}\left(\frac{\partial T}{\partial t}\right)^2 + \rho c_p \frac{\partial T}{\partial t} = \nabla \cdot \left(\overset{\leftrightarrow}{\kappa}(\mathbf{r}) \cdot \nabla T(\mathbf{r},t)\right). \tag{33}
$$

This equation shows that the temperature can acquire wave-like properties because of the finite time it takes for information to propagate through the material. It reverts to Fourier's heat equation, Eq.(31), in the short-relaxation-time limit $\tau_h \to 0$.
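With the quadratic $(\partial T/\partial t)^2$ term dropped, Eq.(33) is a Cattaneo-type damped wave equation. A finite-difference sketch (pure Python; the grid, the value $\kappa/\rho c_p = 1$, and the Gaussian initial profile are illustrative assumptions, and the linearization is this sketch's own simplification) shows that for small τ_h its solution tracks the Fourier solution of Eq.(31):

```python
import math

# 1-D comparison: Fourier diffusion vs. linearized Cattaneo equation
#   tau_h * T_tt + T_t = alpha * T_xx
N = 101
dx = 1.0 / (N - 1)
alpha = 1.0          # kappa / (rho * c_p), illustrative
tau_h = 2e-5         # small relaxation time -> should approach Fourier
dt, steps = 2e-5, 500

T0 = [math.exp(-((i * dx - 0.5) ** 2) / (2 * 0.05 ** 2)) for i in range(N)]

def lap(T, i):
    return (T[i + 1] - 2 * T[i] + T[i - 1]) / dx ** 2

# explicit update for the parabolic (Fourier) equation
Tf = T0[:]
for _ in range(steps):
    Tn = Tf[:]
    for i in range(1, N - 1):
        Tf[i] = Tn[i] + dt * alpha * lap(Tn, i)

# centered-in-time update for the hyperbolic (Cattaneo) equation,
# starting from zero initial "temperature velocity"
Tc, Tc_old = T0[:], T0[:]
coef = tau_h / dt ** 2 + 1.0 / (2 * dt)
for _ in range(steps):
    Tn = Tc[:]
    for i in range(1, N - 1):
        rhs = (alpha * lap(Tn, i) + (2 * tau_h / dt ** 2) * Tn[i]
               - (tau_h / dt ** 2 - 1.0 / (2 * dt)) * Tc_old[i])
        Tc[i] = rhs / coef
    Tc_old = Tn

diff = max(abs(u - v) for u, v in zip(Tf, Tc))
assert diff < 0.05   # for tau_h -> 0 the two solutions coincide
```

Increasing τ_h in this sketch makes the finite propagation speed $\sqrt{\alpha/\tau_h}$ visible, which is the wave-like behavior discussed above.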

#### **5.2 Equation of motion for the entropy production in heat transfer**

For heat transfer, the entropy-production-rate density with a source term, written using Eq.(19), is

$$\int d\mathbf{r}' \frac{\Delta\{\frac{u(\mathbf{r})}{T(\mathbf{r},t)} \frac{\overline{u}(\mathbf{r}')}{T(\mathbf{r}',t)}\}}{k_B T(\mathbf{r}',t)} \frac{\partial T}{\partial t}(\mathbf{r}',t) - \frac{\nabla T(\mathbf{r},t)}{T^2(\mathbf{r},t)} \cdot \int_0^t d\tau \int d\mathbf{r}' \frac{\overset{\leftrightarrow}{K}_h(\mathbf{r},t,\mathbf{r}',\tau)}{k_B T(\mathbf{r}',\tau)} \cdot \frac{\nabla' T(\mathbf{r}',\tau)}{T(\mathbf{r}',\tau)} = \frac{\mathbf{g}(\mathbf{r},t)}{T(\mathbf{r},t)}. \tag{34}$$

If we define $\mathbf{J}_h(\mathbf{r},t) = -\int_0^t d\tau \int d\mathbf{r}'\, \frac{\overset{\leftrightarrow}{K}_h(\mathbf{r},t,\mathbf{r}',\tau)}{k_B T(\mathbf{r}',\tau)} \cdot \frac{\nabla' T(\mathbf{r}',\tau)}{T(\mathbf{r}',\tau)}$ (which in a linear approximation is equal to $-\overset{\leftrightarrow}{\kappa}(\mathbf{r}) \cdot \nabla T/T$), then we can write Eq.(34) as an entropy-density balance equation

$$\frac{\partial S(\mathbf{r},t)}{\partial t} + \nabla \cdot \left(\frac{1}{T} \mathbf{J}_h(\mathbf{r},t)\right) = \frac{\nabla \cdot \mathbf{J}_h(\mathbf{r},t)}{T} + \frac{\mathbf{g}(\mathbf{r},t)}{T(\mathbf{r},t)}. \tag{35}$$

#### **5.3 Using the Euler-Lagrange equation applied to entropy production extrema to obtain exact equations of motion**

The Euler-Lagrange functional used to search for an extremum is

$$I = \int_{\mathbf{r}_1}^{\mathbf{r}_2} L_e(\lambda(\mathbf{r},t), \nabla\lambda(\mathbf{r},t), \mathbf{r})\, d\mathbf{r}. \tag{36}$$

A function that satisfies the Euler-Lagrange condition makes Eq.(36) an extremum when $\delta I = 0$. To study the Euler-Lagrange extremum problem for the entropy production rate, we consider systems where the relevant variables satisfy a microscopic conservation condition $\nabla \cdot \mathbf{j}(\mathbf{r},t) = -\dot{F}(\mathbf{r})$. Examples are heat transfer, $\nabla \cdot \mathbf{j}_h(\mathbf{r},t) = -\dot{u}(\mathbf{r})$, and charge transfer, where $\nabla \cdot \mathbf{j}_c(\mathbf{r},t) = -\dot{\rho}(\mathbf{r})$. We set $I$ to the entropy production minus the total change in entropy density, $\int d\mathbf{r}\,(\Sigma_d(\mathbf{r},t) - dS_d(\mathbf{r},t)/dt)$, which is equivalent to the entropy flux through the system. Using the RHS of Eq.(19), with $\lambda$ and $\nabla\lambda$ as variables, for the case when the relevant variables satisfy $\dot{F}_n = -\nabla \cdot \mathbf{j}_n$ in the entropy production in Eq.(20), we have

$$I = \underbrace{\int k_B \sum_n \nabla\lambda_{t(n)}(\mathbf{r},t) \cdot \mathbf{J}_n(\mathbf{r},t)\, d\mathbf{r}}_{\text{entropy production}} + \underbrace{\int \sum_n \lambda_{t(n)}(\mathbf{r},t)\left[\Delta\{F_n(\mathbf{r})\overline{F}(\mathbf{r}')\} * \frac{\partial\lambda_t(\mathbf{r}',t)}{\partial t}\right] d\mathbf{r}}_{dS/dt} \tag{37}$$

where


$$\mathbf{J}(\mathbf{r},t) = \int_{t_i}^{t} \overset{\leftrightarrow}{K}_j(\mathbf{r},t,\mathbf{r}',\tau) * \nabla\lambda(\mathbf{r}',\tau)\, d\tau. \tag{38}$$


An Analysis of the Interaction of Electromagnetic and Thermal Fields with Materials Based on Fluctuations and Entropy Production


If $\lambda_t(\mathbf{r},t)$ is a test function, the Euler-Lagrange equation for an extremum is

$$\frac{\partial L_e}{\partial \lambda_t} - \nabla \cdot \frac{\partial L_e}{\partial \nabla\lambda_t(\mathbf{r},t)} = 0. \tag{39}$$

We need the identity for functional derivatives

$$F = \int dr \int dr'\, n(r)\, B(r,r')\, n(r'), \tag{40}$$

$$\frac{\delta F}{\delta n(r)} = 2 \int B(r,r')\, n(r')\, dr', \tag{41}$$

where $B(r,r')$ is symmetric.
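The factor of 2 in Eq.(41) follows from the symmetry of the kernel in the quadratic form Eq.(40). A discretized check (pure Python; the kernel $B$ and test function $n$ are arbitrary illustrative choices) compares a central-difference functional derivative with $2\int B(r,r')n(r')\,dr'$:

```python
# grid and a symmetric kernel B_ij = B_ji (illustrative choice)
N, h = 20, 0.1
B = [[1.0 / (1.0 + (i - j) ** 2) for j in range(N)] for i in range(N)]
n = [0.3 + 0.1 * i for i in range(N)]

def F(n):
    """Discretized F = sum_ij n_i B_ij n_j h^2, cf. Eq.(40)."""
    return sum(n[i] * B[i][j] * n[j] * h * h
               for i in range(N) for j in range(N))

# functional derivative at grid point k: (dF/dn_k) / h by central difference
k, eps = 7, 1e-6
n_plus = n[:]; n_plus[k] += eps
n_minus = n[:]; n_minus[k] -= eps
numeric = (F(n_plus) - F(n_minus)) / (2 * eps * h)

# Eq.(41): 2 * int B(r_k, r') n(r') dr'
analytic = 2 * sum(B[k][j] * n[j] * h for j in range(N))
assert abs(numeric - analytic) < 1e-6
```

For a non-symmetric kernel the same check would instead return $\int (B(r,r') + B(r',r))\,n(r')\,dr'$, which is why the symmetry assumption matters.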

The Euler-Lagrange equations for this problem yield the statistical-mechanical equations of motion in Eq.(13), in analogy to Lagrange's equations of motion in mechanics and Hamilton's Principle. Therefore we conclude that an extremum (usually a minimum) of $I$, defined in Eq.(37), yields the statistical-mechanical equations of motion. Below we illustrate this with examples in heat transfer and electromagnetics.

a) As a first example, we apply the Euler-Lagrange equation to Eq.(20) for steady-state heat-transfer entropy production. In this case $dS/dt$ is constant and does not contribute. In Eq.(27), with $\lambda = 1/k_B T$, taking a variation with respect to $\nabla\lambda = -\nabla T/k_B T^2$, we obtain the equation of motion for steady-state heat transfer, $\nabla \cdot (\overset{\leftrightarrow}{\kappa} \cdot \nabla T) = \nabla \cdot \mathbf{J}_h = 0$. This problem was addressed previously by other researchers using $T$ and $\nabla T$ as the variational variables, but this led to inconsistencies. In our approach we use $\lambda$ and $\nabla\lambda$ as the variables, which leads to the expected results.

b) As another example, we apply the Euler-Lagrange equation for steady-state charge-transfer entropy production ($dS/dt$ constant) in Eq.(28), with $\lambda = -\phi/k_B T$. Taking a variation with respect to $\nabla\lambda = -\nabla\phi/k_B T + \phi\nabla T/k_B T^2$, we obtain the equation of motion for steady-state charge transfer, $\nabla \cdot \left[\overset{\leftrightarrow}{\sigma}_f \cdot \left(\mathbf{E}(\mathbf{r}) - \frac{\phi(\mathbf{r})\nabla T(\mathbf{r})}{T(\mathbf{r})}\right)\right] = \nabla \cdot \mathbf{J}_f = 0$. The time-dependent equation of motion is obtained if we include $-dS/dt = \int d\mathbf{r}\,(\phi/T)(\partial < \rho_f > /\partial t)$ and then take the variation to obtain $\partial < \rho_f > /\partial t = -\nabla \cdot \mathbf{J}_f$.

c) As another example, we consider the simple case of one variable for time-dependent heat transfer, $u(\mathbf{r})$, with $\lambda_t = 1/k_B T$, $\nabla\lambda_t = -\nabla T/k_B T^2$, and $L_e(\lambda, \nabla\lambda, \mathbf{r}) = k_B \nabla\lambda \cdot \mathbf{J}_h - k_B \rho c_p \lambda\, \partial T/\partial t$, where $\mathbf{J}_h = -\kappa \nabla T$. The Euler-Lagrange equation then yields Eq.(31).

d) If instead we use the exact expressions for the current and heat capacity developed above in the Euler-Lagrange condition, Eqs.(36) and (37), we obtain the very general heat transfer equation, Eq.(29):

$$\begin{split} &\int d\mathbf{r}' \frac{1}{T(\mathbf{r}',t)} \frac{\Delta\{u(\mathbf{r})\overline{u}(\mathbf{r}')\}}{k_B T(\mathbf{r}',t)} \frac{\partial T}{\partial t}(\mathbf{r}',t) \\ &= \nabla \cdot \left( \int_0^t d\tau \int d\mathbf{r}' \frac{\overset{\leftrightarrow}{K}_h(\mathbf{r},t,\mathbf{r}',\tau)}{k_B T^2(\mathbf{r}',\tau)} \cdot \nabla' T(\mathbf{r}',\tau) \right). \end{split} \tag{42}$$


With the definitions of the thermal conductivity and heat capacity in Section 5.1, this becomes the Fourier equation for heat transfer, Eq.(31). Therefore the extrema of Eq.(37) yield the equations of motion. The same approach could be carried out for other relevant variables, or for a collection of relevant variables and their generalized forces, to obtain equations of motion. We conclude that the test functions that yield an extremum are the set of $\lambda$ that satisfy the equations of motion for the actual dynamical trajectory. This is similar to Hamilton's Principle, but for statistical-mechanical evolution. Therefore we have a way of deriving equations of motion from the entropy production rate. There could be many more applications of this principle.

#### **5.4 Equation of motion for the charge density and microscopic definition of the electrical conductivity**

In our next example, we consider the equation for free-charge density conservation, which satisfies $\dot{\rho}_f = -\nabla \cdot \mathbf{j}_f$. In this example, $F = \rho_f$ and $\lambda = -\phi/k_B T$, where $\phi$ is the electric potential.

$$\begin{split} \frac{\partial < \rho_f(\mathbf{r}) >}{\partial t} &= \int d\mathbf{r}' \frac{\Delta\{\rho_f(\mathbf{r})\overline{\rho}_f(\mathbf{r}')\}}{k_B T(\mathbf{r}',t)} \frac{\partial \phi}{\partial t}(\mathbf{r}',t) \\ &= \nabla \cdot \left( \int_{t_i}^t d\tau \int d\mathbf{r}' \frac{\overset{\leftrightarrow}{K}_f(\mathbf{r},t,\mathbf{r}',\tau)}{k_B T(\mathbf{r}',\tau)} \cdot \left( \nabla' \phi(\mathbf{r}',\tau) - \frac{\nabla T(\mathbf{r},t)}{T(\mathbf{r},t)} \phi(\mathbf{r},t) \right) \right), \end{split} \tag{43}$$

where $\overset{\leftrightarrow}{K}_f(\mathbf{r},t,\mathbf{r}',\tau) = < \mathbf{j}_f(\mathbf{r},t)\mathcal{T}(t,\tau)\mathbf{j}_f(\mathbf{r}',\tau) >$. The relationship to the electrical conductivity is identified through

$$\frac{< \mathbf{j}_f(\mathbf{r},t)\mathcal{T}(t,\tau)\mathbf{j}_f(\mathbf{r}',\tau) >_0}{k_B T(\mathbf{r}',\tau)} = \overset{\leftrightarrow}{\sigma}_f(\mathbf{r},t)\,\delta(t-\tau)\,\delta(\mathbf{r}-\mathbf{r}'). \tag{44}$$

In a linear approximation and using **E** ≈ −∇*φ*, we can write this as the equation of continuity for charge conservation

$$\frac{\partial < \rho_f(\mathbf{r}) >}{\partial t} = C_d(\mathbf{r},t) \frac{\partial \phi(\mathbf{r},t)}{\partial t} = -\nabla \cdot \left( \overset{\leftrightarrow}{\sigma}_f(\mathbf{r},t) \cdot \left( \mathbf{E}(\mathbf{r},t) - \frac{\nabla T(\mathbf{r},t)}{T(\mathbf{r},t)} \phi(\mathbf{r},t) \right) \right), \tag{45}$$

where $C_d$ is the capacitance density. We see that charge transfer is driven by the electric field and by a temperature gradient. The generalized current density is $\mathbf{J}_f = \overset{\leftrightarrow}{\sigma}_f(\mathbf{r},t) \cdot \left( \mathbf{E}(\mathbf{r},t) - \frac{\nabla T(\mathbf{r},t)}{T(\mathbf{r},t)} \phi(\mathbf{r},t) \right)$.

In a linear approximation, for a stationary system without an impressed temperature gradient, integrating both sides over time and space for a system of volume $V$ reduces Eq.(44) to the fluctuation-dissipation relation for the conductivity

$$\stackrel{\leftrightarrow}{\sigma}\_{f}(\mathbf{r}) = V \int\_{0}^{\infty} d\tau \frac{<\mathbf{j}\_{f}(\mathbf{r}, \mathbf{0}) \mathbf{j}\_{f}(\mathbf{r}, \tau) >\_{0}}{k\_{B}T}. \tag{46}$$
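Equation (46) has the same Green-Kubo structure as Eq.(30). As a sketch (pure Python; the exponential current autocorrelation $\langle \mathbf{j}_f(0)\mathbf{j}_f(\tau)\rangle_0 = (n q^2 k_B T/mV)\,e^{-\tau/\tau_c}$ and all carrier parameters are illustrative assumptions, not data for any specific material), the time integral then reproduces the familiar Drude form $\sigma = n q^2 \tau_c/m$:

```python
import math

# illustrative carrier parameters
n_c = 1.0e28      # carrier density, 1/m^3
q = 1.602e-19     # carrier charge, C
m = 9.109e-31     # carrier mass, kg
tau_c = 1.0e-14   # momentum relaxation time, s
kB_T = 4.14e-21   # k_B * T near room temperature, J
V = 1.0e-6        # system volume, m^3

def acf(tau):
    """Assumed current autocorrelation <j_f(0) j_f(tau)>_0."""
    return (n_c * q ** 2 * kB_T / (m * V)) * math.exp(-tau / tau_c)

# Eq.(46): sigma = V * int_0^inf <j j>_0 dtau / (k_B T)
dt = tau_c / 1000.0
sigma_fdr = V * sum(acf(i * dt) * dt for i in range(20000)) / kB_T

sigma_drude = n_c * q ** 2 * tau_c / m
assert abs(sigma_fdr - sigma_drude) / sigma_drude < 1e-2
```

The same exponential autocorrelation also gives the Einstein relation, since the mobility $\mu = q\tau_c/m$ and diffusion coefficient $D = \mu k_B T/q$ follow from the identical velocity-correlation integral.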


If we multiply both sides of Eq.(43) by $\phi/T$ and integrate by parts, then we can write Eq.(43) as an entropy-density balance equation

$$\frac{\partial S_e(\mathbf{r},t)}{\partial t} + \nabla \cdot \left(\frac{\phi(\mathbf{r},t)}{T(\mathbf{r},t)} \mathbf{J}_f(\mathbf{r},t)\right) = \frac{\phi(\mathbf{r},t)}{T(\mathbf{r},t)} \nabla \cdot \mathbf{J}_f(\mathbf{r},t). \tag{47}$$

#### **6. Entropy production in electromagnetic driving**

#### **6.1 Electromagnetic entropy production**

We now consider Eq.(19) for electromagnetic entropy production when the $\lambda_n$ are the inverse temperature $1/k_B T$ and the local fields $-\mathbf{E}_p/k_B T$ and $-\mathbf{H}_m/k_B T$, and the relevant variables are the generalized internal energy density $U = < u >$, the electric displacement $\mathbf{D} = < \mathbf{d} >$, and the magnetic induction $\mathbf{B} = < \mathbf{b} >$. The macroscopic charge current density $\mathbf{J}_f$ is related to the conductivity $\overset{\leftrightarrow}{\sigma}_f(\mathbf{r},t)$ by $\mathbf{J}_f(\mathbf{r},t) = \overset{\leftrightarrow}{\sigma}_f(\mathbf{r},t) \cdot \mathbf{E}_p(\mathbf{r},t)$. With these definitions we can write the macroscopic entropy production rate in terms of dissipation and the heat flux $\mathbf{Q}(\mathbf{r},t)$ flowing through the boundary surfaces:

$$\begin{split} \Sigma(t) &= \int d\mathbf{r} \frac{1}{T(\mathbf{r},t)} \left\{ \frac{\partial \mathcal{U}(\mathbf{r},t)}{\partial t} - \frac{\partial \mathbf{D}(\mathbf{r},t)}{\partial t} \cdot \mathbf{E}_p(\mathbf{r},t) - \frac{\partial \mathbf{B}(\mathbf{r},t)}{\partial t} \cdot \mathbf{H}_m(\mathbf{r},t) \right\} \\ &= -\int \frac{1}{T} \mathbf{Q}(\mathbf{r},t) \cdot \mathbf{n}\, dS + \int d\mathbf{r} \frac{1}{T(\mathbf{r},t)} \mathbf{E}_p(\mathbf{r},t) \cdot \underbrace{\overset{\leftrightarrow}{\sigma}_f(\mathbf{r},t) \cdot \mathbf{E}_p(\mathbf{r},t)}_{\mathbf{J}_f}. \end{split} \tag{48}$$

Note that the interpretation of quantities such as internal energy, polarization, and temperature must be generalized when working with nonequilibrium systems, but we use these symbols in order to relate quantities to the thermodynamic limit.

When the frequency dependence of the fields is dominated by a narrow band around $\omega_0$ and there is no external heat source, then we can write (Jackson (1999))

$$\begin{split} \Sigma(t) &= \int d\mathbf{r} \frac{1}{T(\mathbf{r},t)} \left\{ \frac{\partial \mathcal{U}_{eff}(\mathbf{r},t)}{\partial t} - \left\langle \frac{\partial \mathbf{D}(\mathbf{r},t)}{\partial t} \cdot \mathbf{E}_p(\mathbf{r},t) \right\rangle_{\omega_0} - \left\langle \frac{\partial \mathbf{B}(\mathbf{r},t)}{\partial t} \cdot \mu_0 \mathbf{H}_m(\mathbf{r},t) \right\rangle_{\omega_0} \right\} \\ &= 2\omega_0 \epsilon''(\omega_0) \int d\mathbf{r} \frac{1}{T(\mathbf{r},t)} < \mathbf{E}(\mathbf{r},t)\mathbf{E}(\mathbf{r},t) >_{\omega_0} + 2\omega_0 \mu''(\omega_0) \int d\mathbf{r} \frac{1}{T(\mathbf{r},t)} < \mathbf{H}(\mathbf{r},t)\mathbf{H}(\mathbf{r},t) >_{\omega_0}. \end{split} \tag{49}$$

$\epsilon''$ and $\mu''$ are the loss components of the permittivity and permeability. This shows that the entropy production rate is due to dissipation. We assume that $\epsilon''$ also contains the effects of dc conductivity. In this equation $< >_{\omega_0}$ indicates time averaging of the fields over a period, and the effective internal energy is

$$\mathcal{U}_{eff} = \mathfrak{R}\left[ \frac{d(\omega\epsilon)}{d\omega}(\omega_0) < \mathbf{E}(\mathbf{r},t)\mathbf{E}(\mathbf{r},t) >_{\omega_0} \right] + \mathfrak{R}\left[ \frac{d(\omega\mu)}{d\omega}(\omega_0) < \mathbf{H}(\mathbf{r},t)\mathbf{H}(\mathbf{r},t) >_{\omega_0} \right]. \tag{50}$$
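The electric term of Eq.(49) gives an entropy-production-rate density $2\omega_0 \epsilon'' \langle E^2\rangle_{\omega_0}/T$, with $\langle E^2\rangle_{\omega_0} = E_0^2/2$ for a sinusoid of amplitude $E_0$. A numeric sketch (pure Python; $\omega_0$, the relative loss factor, $E_0$, and $T$ are illustrative values, $\epsilon''$ is taken as the absolute loss factor $\epsilon_0\epsilon_r''$, and the prefactor follows the convention of Eq.(49) as written):

```python
import math

eps0 = 8.854e-12                 # vacuum permittivity, F/m
omega0 = 2 * math.pi * 2.45e9    # angular frequency, rad/s (illustrative)
eps_r_loss = 10.0                # relative loss factor eps_r'' (illustrative)
E0 = 1.0e3                       # field amplitude, V/m (illustrative)
T = 300.0                        # temperature, K

E_sq_avg = 0.5 * E0 ** 2         # period average of E^2 for a sinusoid

# dissipated power density and entropy-production-rate density, cf. Eq.(49)
p_diss = 2 * omega0 * (eps0 * eps_r_loss) * E_sq_avg   # W/m^3
sigma_s = p_diss / T                                   # W/(K m^3)

assert p_diss > 0 and sigma_s > 0   # dissipation produces entropy
```

The same arithmetic with $\mu''$ and $\langle H^2\rangle_{\omega_0}$ gives the magnetic contribution; the sum is strictly non-negative, consistent with the statement that the entropy production rate is due to dissipation.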

For time-harmonic fields we have


*<sup>d</sup><sup>ω</sup>* (*ω*0) *<sup>&</sup>lt;* **<sup>E</sup>**(**r**, *<sup>t</sup>*)**E**(**r**, *<sup>t</sup>*) *<sup>&</sup>gt;ω*0*<sup>&</sup>gt;*

*Q* (**r**, *t*))

1

$$\stackrel{\sim}{\Sigma}(\omega,t) = \frac{\omega}{2} \int d\mathbf{r} \frac{1}{T(\mathbf{r},t)} \left[ \epsilon''(\mathbf{r},\omega) |\mathbf{E}(\mathbf{r},\omega)|^2 + \mu''(\mathbf{r},\omega) |\mathbf{H}(\mathbf{r},\omega)|^2 \right]. \tag{51}$$
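As a rough numerical illustration of Eq.(51), the sketch below evaluates the entropy production rate for uniform time-harmonic fields in a small lossy, nonmagnetic volume; the material values, field amplitude, volume, and temperature are illustrative assumptions, not values from this chapter.

```python
import math

# Rough numerical sketch of Eq. (51) for uniform fields:
#   Sigma ~ (omega/2) * V * eps'' * |E|^2 / T   (mu'' = 0, nonmagnetic)
# All numbers below are illustrative assumptions.
eps0 = 8.854e-12              # vacuum permittivity, F/m
omega = 2.0 * math.pi * 1e9   # 1 GHz angular frequency, rad/s
eps_loss = 0.02 * eps0        # eps'', loss part of the permittivity
T = 293.0                     # uniform temperature, K
E2 = (1.0e4) ** 2             # |E|^2 for a 10 kV/m field amplitude
volume = 1.0e-6               # 1 cm^3 in m^3

# Uniform fields and temperature reduce the integral to a product.
sigma_rate = (omega / 2.0) * volume * eps_loss * E2 / T
print(sigma_rate)  # entropy production rate, W/K
```

The same structure extends directly to a magnetic loss term by adding a `mu_loss * H2` contribution inside the product.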

By Eq.(19) the macroscopic entropy production in Eq.(49) must equal

$$
\Delta\Sigma(t) = \frac{1}{k\_B} \int\_{t\_l}^{t} \langle \dot{\mathbf{s}}(t)\mathcal{T}(t,\tau)(1 - P(\tau))\tilde{\mathbf{s}}(\tau) \rangle\_{\tau} d\tau. \tag{52}
$$

Using the microscopic Maxwell's equations with a source current **j***<sup>e</sup>* and including a possible external heat transport flux **j***<sup>h</sup>* we have

$$\dot{s}(t) = -\int \frac{1}{T(\mathbf{r}, t)} \mathbf{j}_h(\mathbf{r}, t) \cdot \mathbf{n} \, dS + \int d\mathbf{r} \frac{1}{T(\mathbf{r}, t)} \mathbf{j}_e(\mathbf{r}, t) \cdot \mathbf{E}_p(\mathbf{r}, t). \tag{53}$$

where *< ṡ*(*t*) *>* = ∫({*< u̇ >* − *<* **ḋ** *>* · **E***p* − *<* **ḃ** *>* · **H***m*}/*T*) *d***r** = 0.

For a thermally insulated system, we can rewrite Eq.(52) after using Eq.(49) for the entropy density as

$$2\omega_0 \epsilon''(\omega_0) \frac{1}{T(\mathbf{r}, t)} < \mathbf{E}(\mathbf{r}, t) \mathbf{E}(\mathbf{r}, t) >_{\omega_0} + \, 2\omega_0 \mu''(\omega_0) \frac{1}{T(\mathbf{r}, t)} < \mathbf{H}(\mathbf{r}, t) \mathbf{H}(\mathbf{r}, t) >_{\omega_0}$$

$$= \frac{1}{T(\mathbf{r}, t)} \mathbf{E}_p(\mathbf{r}, t) \cdot \underbrace{\int_{t_i}^{t} \int d\mathbf{r}' \frac{< \dot{\mathbf{j}}_e(\mathbf{r}, t) \mathcal{T}(t, \tau)(1 - P(\tau)) \tilde{\mathbf{j}}_e(\mathbf{r}', \tau) >}{k_B T(\mathbf{r}', \tau)} \cdot \mathbf{E}_p(\mathbf{r}', \tau) \, d\tau}_{\mathbf{J}_f(\mathbf{r}, t)}. \tag{54}$$

where the macroscopic current **J***f* is defined by the underbrace; this is a FDR.

#### **6.2 The permittivity from the entropy production rate equation**

If we consider only the internal energy and displacement terms in Eq.(48) and neglect magnetic effects, we have

$$\begin{split} \Sigma(t) &= \int d\mathbf{r} \frac{1}{T(\mathbf{r},t)} \left[ \frac{\partial \mathcal{U}(\mathbf{r},t)}{\partial t} - \frac{\partial \mathbf{D}(\mathbf{r},t)}{\partial t} \cdot \mathbf{E}_p(\mathbf{r},t) \right] \\ &= \int d\mathbf{r} \frac{1}{T(\mathbf{r},t)} \frac{\partial \mathbf{D}(\mathbf{r},t)}{\partial t} \cdot \mathbf{E}_{eff}(\mathbf{r},t) \\ &= \int d\mathbf{r} \frac{\mathbf{E}_{eff}(\mathbf{r},t)}{T(\mathbf{r},t)} \cdot \int_{-\infty}^{t} \int \frac{< \dot{\mathbf{d}}(\mathbf{r},t) \mathcal{T}(t,\tau)(1 - P(\tau)) \dot{\mathbf{d}}(\mathbf{r}',\tau) >}{k_B T(\mathbf{r}',\tau)} \cdot \mathbf{E}_{eff}(\mathbf{r}',\tau) \, d\mathbf{r}' \, d\tau. \end{split} \tag{55}$$

where we used an expression for the internal energy without an applied magnetic field, derived from the Hamiltonian in Eq.(58): *∂U*/*∂t* = (*∂***D**/*∂t*) · **E**. Expanding *<* **D** *>* through the use of Eq.(3) to first order, we obtain **E***p* ≈ **D**/*ε* + ↔*Dp* · **E***p*, where ↔*Dp* is the depolarization tensor; we also used (*∂***D**/*∂t*) · **D** ≈ 0, and the effective field is then **E***eff* = **E** − ↔*Dp* · **E***p*.
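To make the depolarization correction concrete, here is a minimal sketch of **E***eff* = **E** − ↔*Dp* · **E***p*, using the textbook depolarization tensor of a sphere, diag(1/3, 1/3, 1/3); the tensor choice and field values are illustrative assumptions, since the chapter leaves ↔*Dp* general.

```python
import numpy as np

# Minimal sketch of the effective-field correction E_eff = E - Dp . E_p.
# Dp is taken as the depolarization tensor of a sphere (illustrative choice).
Dp = np.diag([1.0 / 3.0, 1.0 / 3.0, 1.0 / 3.0])
E = np.array([1.0, 0.0, 0.0])    # applied field, arbitrary units
Ep = np.array([0.3, 0.0, 0.0])   # local (polarizing) field, arbitrary units

E_eff = E - Dp @ Ep
print(E_eff)  # [0.9, 0.0, 0.0]
```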


If we suppress the spatial dependence in a volume *V* and apply this near equilibrium we have

$$\frac{\partial \mathbf{D}(t)}{\partial t} = V \int_{-\infty}^{t} \underbrace{\frac{< \dot{\mathbf{d}}(t) \mathcal{T}_0(t, \tau) \dot{\mathbf{d}}(\tau) >_0}{k_B T(\tau)}}_{d f_d/dt} \cdot \mathbf{E}_{eff}(\tau) \, d\tau, \tag{56}$$

where *fd* is the impulse response function in the limit of a time-invariant, linear, isothermal, stationary system. In that approximation the kernel is a function of *t* − *τ* and the Laplace transform can be used. In this limit we can take the Laplace transform (L*L*) of this equation and obtain an expression for the permittivity for time-harmonic fields, *ε*(*ω*), where the transformed fields satisfy **D**(*ω*) = ↔*ε*(*ω*) **E***eff*(*ω*):

$$V\mathcal{L}_L\left[\frac{< \dot{\mathbf{d}}(0) \dot{\mathbf{d}}(\tau - t) >}{k_B T}\right] \to \mathrm{i}\omega \overset{\leftrightarrow}{\epsilon}(\omega). \tag{57}$$
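A minimal numerical sketch of Eq.(57), under our own simplifying assumption (not the chapter's) that the dipole correlation decays exponentially: the one-sided transform of such a correlation reproduces a Debye-type dispersion.

```python
import numpy as np

# Assume a correlation C(tau) = C0 * exp(-tau/tau_e) (illustrative choice).
# Its one-sided Laplace transform at s = i*omega is the Debye-like form
# C0 * tau_e / (1 + i*omega*tau_e), illustrating the structure of Eq. (57).
C0, tau_e = 1.0, 1e-9          # illustrative amplitude and relaxation time
omega = 2.0 * np.pi * 1e8      # evaluation frequency, rad/s

tau = np.linspace(0.0, 50.0 * tau_e, 200001)
f = C0 * np.exp(-tau / tau_e) * np.exp(-1j * omega * tau)

# Trapezoidal quadrature of the one-sided transform integral
numeric = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(tau))

analytic = C0 * tau_e / (1.0 + 1j * omega * tau_e)
assert abs(numeric - analytic) < 1e-15
```

The agreement between the quadrature and the closed form is the content of the Laplace-transform step leading from Eq.(56) to Eq.(57).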

#### **7. Applications**

#### **7.1 Conservation of electromagnetic energy in a nonequilibrium system**

Let us write the Hamiltonian under electromagnetic driving in terms of the microscopic electric and magnetic polarization operators **d**(**r**) and **b**(**r**)

$$\mathcal{H}(t) = \int d\mathbf{r} \{u(\mathbf{r}) - \mathbf{d}(\mathbf{r}) \cdot \mathbf{E}(\mathbf{r}, t) - \mathbf{b}(\mathbf{r}) \cdot \mathbf{H}(\mathbf{r}, t)\}. \tag{58}$$

The energy dissipated by heat in internal relaxation is

$$
\left\langle \frac{\partial \mathcal{H}(\mathbf{r}, t)}{\partial t} \right\rangle = \int d\mathbf{r} \left[ \frac{\partial \mathbf{E}(\mathbf{r}, t)}{\partial t} \cdot \mathbf{D}(\mathbf{r}, t) + \frac{\partial \mathbf{H}(\mathbf{r}, t)}{\partial t} \cdot \mathbf{B}(\mathbf{r}, t) \right]. \tag{59}
$$

Therefore if we take the time derivative of the expectation of Eq.(58) and subtract the internal relaxation energy that is given by Eq.(59), we have the conservation equation

$$\underbrace{\frac{\partial}{\partial t}\langle\mathcal{H}(t)\rangle}\_{\text{heat added}} - \left\langle\frac{\partial\mathcal{H}(t)}{\partial t}\right\rangle = \int d\mathbf{r} \left[\frac{\partial\mathcal{U}}{\partial t} - \frac{\partial\mathbf{D}(\mathbf{r},t)}{\partial t} \cdot \mathbf{E}(\mathbf{r},t) - \frac{\partial\mathbf{B}(\mathbf{r},t)}{\partial t} \cdot \mathbf{H}(\mathbf{r},t)\right].\tag{60}$$

This is a general energy conservation relation that is valid away from equilibrium. The LHS is the generalized external power delivered to the system beyond that due to the driving fields, such as an open system where heat enters the system. For a system that is isolated except for the dynamically driven fields, the LHS is zero and we obtain the normal energy conservation condition for the fields

$$\frac{\partial \mathcal{U}}{\partial t} = \frac{\partial \mathbf{D}(\mathbf{r}, t)}{\partial t} \cdot \mathbf{E}(\mathbf{r}, t) + \frac{\partial \mathbf{B}(\mathbf{r}, t)}{\partial t} \cdot \mathbf{H}(\mathbf{r}, t). \tag{61}$$

#### **7.2 Generalized equation of motion for the polarization and relation to the Debye equation**

The electric polarization evolution equation can be obtained using Eq.(13) for the case of **p**(**r**) and *u*(**r**) in the Hamiltonian H(*t*) = ∫ *d***r**{*u*(**r**) − **p**(**r**) · **E**(**r**, *t*)}, or by taking a variation with respect to **E***p*/*kBT*, to find (Baker-Jarvis et al. (2007))


$$\frac{\partial \mathbf{P}(\mathbf{r},t)}{\partial t} = -\int d^3r' \int_0^t \overset{\leftrightarrow}{\mathbf{K}}_e(\mathbf{r}, t, \mathbf{r}', \tau) \cdot \left(\mathbf{P}(\mathbf{r}', \tau) - \overset{\leftrightarrow}{\chi}_0 \cdot \mathbf{E}(\mathbf{r}', \tau)\right) d\tau. \tag{62}$$

Here <sup>↔</sup> *χ*<sup>0</sup> is the static susceptibility. The Debye relaxation differential equation is recovered from Eq.(62) when <sup>↔</sup> **K***<sup>e</sup>* (**r**, *t*,**r**� , *<sup>τ</sup>*) =<sup>↔</sup> *I δ*(*t* − *τ*)*δ*(**r** − **r**� )/*τe*.
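The Debye limit can be checked numerically: with the delta-function kernel, Eq.(62) collapses to *∂***P**/*∂t* = −(**P** − *χ*0**E**)/*τe*, whose step response is *χ*0*E*0(1 − *e*−*t*/*τe*). The sketch below integrates that ODE with explicit Euler; all parameter values are illustrative assumptions.

```python
import math

# Debye relaxation recovered from Eq. (62) with the delta-function kernel:
#   dP/dt = -(P - chi0*E)/tau_e
# For a field step E0 at t = 0 the analytic response is
#   P(t) = chi0*E0*(1 - exp(-t/tau_e)).  All numbers are illustrative.
chi0, E0, tau_e = 2.0, 1.0, 1e-9
dt, steps = tau_e / 1000.0, 5000     # explicit Euler out to t = 5*tau_e

P = 0.0
for _ in range(steps):
    P += dt * (-(P - chi0 * E0) / tau_e)

analytic = chi0 * E0 * (1.0 - math.exp(-steps * dt / tau_e))
assert abs(P - analytic) < 1e-3 * chi0 * E0
```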

#### **7.3 Generalized equation of motion for the magnetization**

The magnetic polarization can be obtained using Eq.(13) for the case of **m**(**r**) and *u*(**r**) in the Hamiltonian H(*t*) = ∫ *d***r**{*u*(**r**) − *μ*0**m**(**r**) · **H**(**r**, *t*)}, by taking a variation with respect to **H***m*/*kBT*, to find (Baker-Jarvis (2005; 2008); Baker-Jarvis and Kabos (2001); Baker-Jarvis et al. (2004); Robertson (1967b)):

$$\begin{split} \frac{\partial \mathbf{M}(\mathbf{r},t)}{\partial t} &= -|\gamma_g| \mathbf{M}(\mathbf{r},t) \times \mathbf{H}_{eff}(\mathbf{r},t) \\ &\quad - \int d^3r' \int_0^t \overset{\leftrightarrow}{\mathbf{K}}_m(\mathbf{r}, t, \mathbf{r}', \tau) \cdot \chi_0 \mathbf{H}_{eff}(\mathbf{r}', \tau) \, d\tau, \end{split} \tag{63}$$

where ↔**K***m* is a kernel that contains the microstructural interactions given in Baker-Jarvis and Kabos (2001), *γg* is the gyromagnetic ratio, *χ*0 is the static susceptibility, and **H***eff* is the effective magnetic field. Special cases of Eq.(63) reduce to constitutive relations such as the Landau-Lifshitz, Gilbert, and Bloch equations. The Landau-Lifshitz equation of motion is useful for ferromagnetic and ferrite solid materials (Lax and Button (1962)).

#### **7.4 Maxwell's equations**

Maxwell's equations can be derived from Eq.(19). For generality, the reversible term *< ṡ*(*t*) *>* has been included. Note that *< ṡ*(*t*) *>* = 0 only when the *λ*'s are the functions that satisfy the equations of motion, so we keep it in the variational principle for completeness. The Hamiltonian is H(*t*) = ∫ *d***r**{*u*(**r**) − **d**(**r**, *t*) · **E**(**r**, *t*) − **b**(**r**) · **H**(**r**, *t*)}, and the entropy rate satisfies

$$\begin{split} \Sigma(t) = \int d\mathbf{r} \frac{1}{T(\mathbf{r}, t)} \Big\{ \frac{\partial \mathcal{U}_{eff}(\mathbf{r}, t)}{\partial t} &- \left\langle \frac{\partial \mathbf{D}(\mathbf{r}, t)}{\partial t} \cdot \mathbf{E}_p(\mathbf{r}, t) \right\rangle_{\omega_0} \\ &- \left\langle \frac{\partial \mathbf{B}(\mathbf{r}, t)}{\partial t} \cdot \mu_0 \mathbf{H}_m(\mathbf{r}, t) \right\rangle_{\omega_0} \Big\} = \, < \dot{s}(t) > + \int d\mathbf{r} \frac{1}{T} \mathbf{J} \cdot \mathbf{E}_p, \end{split} \tag{64}$$

where *< ṡ*(*t*) *>* = ∫({*< u̇ >* − *<* **ḋ** *>* · **E***p* − *<* **ḃ** *>* · **H***m*}/*T*) *d***r**. To obtain Maxwell's equations we take variations of Eq.(64) with respect to −**E***p*/*T* to obtain the first Maxwell equation, *∂***D**/*∂t* = ∇ × **H** − **J**, and the second (*∂***B**/*∂t* = −∇ × **E**) by a variation with respect to −**H***m*/*T*. Here we used the commutation relations in Eq.(7), *< ṡ*(*t*) *>* = *<* [*ṡ*(*t*), H(*t*)] *>* /*i*ℏ: ∫ *d***r**′ *<* [**d**(**r**), **b**(**r**′) · **H**(**r**′, *t*)] *>* /*i*ℏ = −∇ × **H**(**r**, *t*) and ∫ *d***r**′ *<* [**b**(**r**), **d**(**r**′) · **E**(**r**′, *t*)] *>* /*i*ℏ = ∇ × **E**(**r**, *t*).


#### **7.5 Voltage fluctuations and Nyquist's theorem**

We consider the general problem of electrical noise and the related Nyquist problem of dissipation in a resistor. We will begin with a very general analysis that is valid away from thermal equilibrium, and then show how this reduces to Nyquist's result in the equilibrium limit.

The microscopic entropy rate for this electrical system is a function of the electromagnetic energy due to random charge motion. We write the microscopic entropy rate in terms of the microscopic charge current density, *ṡ*(*t*) = −∫ *d***r** *ρ̇f* *φ*/*T* = −∫ *d***r** **ḋ** · **E***p*/*T*, and since *ρ̇f* = −∇ · **j***f* we have *< ṡ*(*t*) *>* = ∫ *d***r** ∇ · *<* **j***f* *>* *φ*/*T* = 0. We also assume that the macroscopic entropy produced due to a constant bias current *I*0 in the resistor is Σ = *I*0²*R*/*T*. For a system with a resistance *R* driven by an applied electrical field, the RHS of Eq.(19) can be written as

$$\Sigma(t) = -\int d\mathbf{r} \frac{\phi(\mathbf{r}, t)}{T} \frac{\partial < \rho_f(\mathbf{r}) >}{\partial t} = -\int d\mathbf{r} \frac{1}{T} \nabla \phi(\mathbf{r}, t) \cdot \mathbf{J}_e(\mathbf{r})$$

$$= \int d\mathbf{r} \frac{1}{T} \nabla \phi \cdot \overset{\leftrightarrow}{\sigma}_f \cdot \nabla \phi = \int d\mathbf{r} \frac{\mathbf{E}(\mathbf{r}, t)}{T(\mathbf{r}, t)} \cdot \overset{\leftrightarrow}{\sigma}_f \cdot \mathbf{E} \to \frac{I(t)^2 R}{T}. \tag{65}$$

In this special case, the LHS of Eq.(19) can be written in the following equivalent forms

$$\begin{split} \Sigma(t) &= \int_0^t \int d\mathbf{r} \, d\mathbf{r}' \, \phi(\mathbf{r}, t) \frac{< \dot{\rho}_f(t) \mathcal{T}(t, \tau)(1 - P(\tau)) \dot{\rho}_f(\tau) >}{k_B T(\mathbf{r}, t)} \frac{\phi(\mathbf{r}', \tau)}{T(\mathbf{r}', \tau)} d\tau \\ &= \int_0^t \int d\mathbf{r} \, d\mathbf{r}' \left(\mathbf{E}(\mathbf{r}, t) - \frac{\nabla T(\mathbf{r}, t)}{T(\mathbf{r}, t)} \phi(\mathbf{r}, t)\right) \cdot \frac{< \mathbf{j}_f(\mathbf{r}) \mathcal{T}(t, \tau)(1 - P(\tau)) \mathbf{j}_f(\mathbf{r}') >}{k_B T(\mathbf{r}, t)} \\ &\qquad \cdot \frac{\left(\mathbf{E}(\mathbf{r}', \tau) - \frac{\nabla T(\mathbf{r}', \tau)}{T(\mathbf{r}', \tau)} \phi(\mathbf{r}', \tau)\right)}{T(\mathbf{r}', \tau)} d\tau \\ &\approx \frac{1}{k_B} \int_0^t \frac{I(t)}{T(t)} < v_f(t) \mathcal{T}(t, \tau) v_f(\tau) > \frac{I(\tau)}{T(\tau)} d\tau. \end{split} \tag{66}$$

Here we used ∫ *d***r** **j***f* · **E** = *I*0(*V*0 + *vf*(*t*)), where the constant driving voltage *V*0 doesn't contribute and *< vf*(*t*) *>* = 0. This equation is very general and valid away from equilibrium. This approach could be used to generalize the Nyquist result if the temperature were not assumed to be constant in time or space. As we approach a steady state for a constant driving current *I*0 and temperature, the time-domain fluctuation-dissipation form of Nyquist's theorem is recovered from the last expression in Eq.(66) when we equate it to *I*0²*R*:

$$R = \int_0^\infty \frac{< v(0) v(\tau) >_0}{k_B T} d\tau. \tag{67}$$

Equation (67) is an example of Kubo's second fluctuation-dissipation theorem where *< v >*0= 0.

For a transmission line with a noisy resistor *R* that generates a random voltage with zero mean, and a load resistor of resistance *R*, over a bandwidth Δ*f* Eq.(67) with a constant driving current yields *< v*² *>*0 = 4*kBTR*Δ*f*. We can interpret Nyquist's equation in terms of entropy production in the fluctuations. Since *kB*Δ*f* is the entropy production rate over the bandwidth of the black body due to the voltage fluctuations, this entropy production rate must equal the entropy produced in equilibrium fluctuations in the resistors, which is the noise power per temperature: *< v*² *>*0 /4*RT*. The emissivity for a material with a radiated power *P* at temperature *T* can then be defined as the ratio of the entropy rate produced in the body divided by that produced in a pure black body: *e* = (*P*/*T*)/*kB*Δ*f*.
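The Nyquist relation and its entropy reading can be put into numbers with a short sketch; the resistance, temperature, and bandwidth below are illustrative choices, not values from the chapter.

```python
# Numerical sketch of <v^2>_0 = 4*kB*T*R*Delta_f and the entropy
# interpretation above. All circuit values are illustrative assumptions.
kB = 1.380649e-23          # J/K, Boltzmann's constant
T, R, delta_f = 293.0, 50.0, 1e6   # room temperature, 50 ohm, 1 MHz

v2 = 4.0 * kB * T * R * delta_f    # mean-square noise voltage, V^2
v_rms = v2 ** 0.5                  # about 0.9 microvolt rms here
entropy_rate = v2 / (4.0 * R * T)  # noise power per temperature, W/K
# Per the interpretation above, this equals kB * delta_f:
assert abs(entropy_rate - kB * delta_f) < 1e-30
```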

#### **7.6 Estimation of Boltzmann's constant**

As noted in Baker-Jarvis (2008), Eq.(19) could be used to obtain values for *kB* from measurements of entropy production, and it is an exact equation. Note that Eq.(19) does not contain the temperature explicitly. This equation or Eq.(66) can be used to model noise in the limit as we approach equilibrium. Boltzmann's constant can in principle be determined by measurements of any of the transport coefficients, such as *κ*, ↔*σ*, or *R*, through the equation

$$k_B \overset{\leftrightarrow}{L}_{mn}(\mathbf{r}) = \int_0^\infty \int d\mathbf{r}' \, \theta(\mathbf{r}) \frac{< \mathbf{j}_m(\mathbf{r}, 0) \mathbf{j}_n(\mathbf{r}', s) >_0}{T(\mathbf{r}')} ds. \tag{68}$$
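In the resistance case the inversion is immediate: *kB* follows from a measured mean-square noise voltage through the Nyquist relation. The sketch below uses a synthetic "measured" value; nothing here is experimental data from the chapter.

```python
# Sketch of Sec. 7.6: extracting kB from a measured transport coefficient.
# Here we invert the Nyquist relation <v^2>_0 = 4*kB*T*R*Delta_f for kB,
# using a synthetic "measured" mean-square voltage (all values illustrative).
T, R, delta_f = 293.0, 50.0, 1e6
v2_measured = 8.09e-13             # hypothetical measurement, V^2

kB_est = v2_measured / (4.0 * T * R * delta_f)
print(kB_est)  # close to 1.38e-23 J/K
```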

#### **8. Conclusion**

<sup>0</sup>*R*/*T*.

18 Will-be-set-by-IN-TECH

**7.5 Voltage fluctuations and Nyquist's theorem**

We consider the general problem of electrical noise and the related Nyquist problem of dissipation in a resistor. We begin with a very general analysis that is valid away from thermal equilibrium and then show how it reduces to Nyquist's result in the equilibrium limit.

The microscopic entropy rate for this electrical system is a function of the electromagnetic energy due to random charge motion. We write the microscopic entropy rate in terms of the microscopic charge current density, $\dot{s}(t) = -\int d\mathbf{r}\,\dot{\rho}_f\,\phi/T = -\int d\mathbf{r}\,\dot{\mathbf{D}}\cdot\mathbf{E}_p/T$, and since $\dot{\rho}_f = -\nabla\cdot\mathbf{j}_f$ we have $<\dot{s}(t)> \,=\, \int d\mathbf{r}\,\nabla\cdot<\mathbf{j}_f>\phi/T = 0$. The related equation of motion for the expected charge density is

$$\frac{\partial <\rho_f(\mathbf{r})>}{\partial t} = -\int_0^t\!\!\int d\mathbf{r}'\,\frac{<\dot{\rho}_f(t)\,\mathcal{T}(t,\tau)(1-P(\tau))\,\dot{\rho}_f(\tau)>}{k_B T(\mathbf{r},t)}\,\frac{\phi(\mathbf{r}',\tau)}{T(\mathbf{r}',\tau)}\,d\tau.$$

We also assume that the macroscopic entropy produced by a constant bias current $I_0$ in the resistor is $\Sigma = I_0^2 R/T$.

For a system with a resistance $R$ driven by an applied electric field, the RHS of Eq.(19) can be written as

$$\Sigma(t) = -\int d\mathbf{r}\,\frac{1}{T}\nabla\phi(\mathbf{r},t)\cdot\mathbf{J}_e(\mathbf{r}) = \frac{I(t)^2 R}{T}.\tag{65}$$

In this special case, the LHS of Eq.(19) can be written in the following equivalent forms:

$$\begin{split} \Sigma(t) &= \int d\mathbf{r}\,\frac{1}{T}\nabla\phi\cdot\overset{\leftrightarrow}{\sigma}_f\cdot\nabla\phi = \int d\mathbf{r}\,\frac{\mathbf{E}(\mathbf{r},t)}{T(\mathbf{r},t)}\cdot\overset{\leftrightarrow}{\sigma}_f\cdot\mathbf{E} \\ &= \int_0^t\!\!\int\!\!\int d\mathbf{r}\,d\mathbf{r}'\left[\mathbf{E}(\mathbf{r},t)-\nabla T(\mathbf{r},t)\frac{\phi(\mathbf{r},t)}{T(\mathbf{r},t)}\right]\cdot\frac{<\mathbf{j}_f(\mathbf{r})\,\mathcal{T}(t,\tau)(1-P(\tau))\,\mathbf{j}_f(\mathbf{r}')>}{k_B T(\mathbf{r},t)} \\ &\qquad\qquad\cdot\left[\mathbf{E}(\mathbf{r}',\tau)-\nabla T(\mathbf{r}',\tau)\frac{\phi(\mathbf{r}',\tau)}{T(\mathbf{r}',\tau)}\right]d\tau. \end{split}\tag{66}$$

As we approach a steady state with a constant driving current $I_0$ and constant temperature, the time-domain fluctuation-dissipation form of Nyquist's theorem is recovered from the last expression in Eq.(66) when we equate it to $I_0^2 R$:

$$I_0^2 R \approx \frac{1}{k_B}\int_0^t \frac{I(t)}{T(t)}<v_f(t)\,\mathcal{T}(t,\tau)\,v_f(\tau)>\frac{I(\tau)}{T(\tau)}\,d\tau.$$

Here we used $\int d\mathbf{r}\,\mathbf{j}_f\cdot\mathbf{E} = I_0(V_0 + v_f(t))$, where the constant driving voltage $V_0$ does not contribute and $<v_f(t)> = 0$. This equation is very general and valid away from equilibrium; it could be used to generalize the Nyquist result if the temperature were not assumed to be constant in time or space. In the steady-state limit it reduces to

$$R = \int_0^\infty \frac{<v(0)v(\tau)>_0}{k_B T}\,d\tau.\tag{67}$$

Equation (67) is an example of Kubo's second fluctuation-dissipation theorem, where $<v>_0 = 0$.

For a transmission line with a noisy resistor $R$ that generates a random voltage with zero mean, and a load resistor of resistance $R$, over a bandwidth $\Delta f$, Eq.(67) with a constant driving current yields $<v^2>_0 = 4k_B T R\,\Delta f$. We can therefore interpret Nyquist's equation in terms of entropy production.
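The steady-state relation in Eq.(67) can be checked numerically. The sketch below is illustrative only: it generates a synthetic fluctuating voltage as an Ornstein-Uhlenbeck process (an assumed noise model with a known autocorrelation, not the chapter's microscopic dynamics), integrates the estimated autocorrelation as in Eq.(67) to recover the known "resistance", and also evaluates the Nyquist magnitude $<v^2>_0 = 4k_BTR\,\Delta f$ for a 1 kΩ resistor at room temperature.

```python
import numpy as np

# Illustrative check of Kubo's second FDT, Eq.(67):
#   R = (1/kB T) * integral_0^inf <v(0)v(tau)>_0 dtau.
# Assumed noise model (not the chapter's microscopic model): an
# Ornstein-Uhlenbeck voltage with <v(0)v(tau)>_0 = sigma2*exp(-tau/tau_c),
# so the exact integral gives R = sigma2*tau_c/(kB T). Natural units kB T = 1.
rng = np.random.default_rng(0)
kBT, sigma2, tau_c = 1.0, 2.0, 1.0
dt, n_steps = 0.05, 400_000

# Exact OU update (no time-discretization bias).
decay = np.exp(-dt / tau_c)
kick = np.sqrt(sigma2 * (1.0 - decay**2))
v = np.empty(n_steps)
v[0] = 0.0
noise = rng.standard_normal(n_steps - 1)
for i in range(n_steps - 1):
    v[i + 1] = v[i] * decay + kick * noise[i]
v = v[n_steps // 10:]  # discard the transient

# One-sided autocorrelation out to 10 correlation times,
# then a trapezoidal integral as in Eq.(67).
max_lag = int(10 * tau_c / dt)
acf = np.array([np.dot(v[: len(v) - k], v[k:]) / (len(v) - k)
                for k in range(max_lag + 1)])
R_est = dt * (0.5 * acf[0] + acf[1:].sum()) / kBT
R_true = sigma2 * tau_c / kBT
print(f"R_est ~ {R_est:.2f} (exact {R_true:.2f})")

# Nyquist magnitude in SI units: <v^2>_0 = 4 kB T R df for
# R = 1 kOhm, T = 300 K, df = 10 kHz -> v_rms ~ 0.4 microvolt.
kB = 1.380649e-23
v_rms = np.sqrt(4.0 * kB * 300.0 * 1.0e3 * 1.0e4)
print(f"v_rms ~ {v_rms:.2e} V")
```

Truncating the integral at ten correlation times is harmless here because the assumed autocorrelation decays exponentially; for real measured noise the truncation point must be chosen from the data.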

#### **8. Conclusion**

The goal of this paper was to develop, in the context of electromagnetic measurement science, an explanation of how a previously derived exact entropy-rate relationship relates to exact equations of motion, fluctuation-dissipation relations, and transport coefficients. Applications to measurable quantities such as thermal conductivity, electromagnetic response, electrical noise, and Boltzmann's constant were developed. We showed that the entropy production rate can be viewed as a basis for deriving electromagnetic equations of motion, including Maxwell's equations, and for extending FDRs. Unlike the classical FDRs, Eq.(19) is valid away from equilibrium. We developed expressions for thermal conductivity, electrical conductivity, and permittivity in terms of correlation functions using an analysis based on entropy-production fluctuations.

Nyquist noise can also be understood through an analysis based on entropy production instead of the standard argument based on power absorbed and emitted by resistors. Nyquist assumed that, for a waveguide in equilibrium terminated by resistors, detailed balance requires the power absorbed by the resistor at one end of the waveguide to equal the power in the emitted fields that travel down the waveguide and are absorbed by the resistor at the other end. Using the results of this paper we can interpret this as follows. In equilibrium the mean of the microscopically evolving entropy-production rate is zero, but fluctuations around the mean are nonzero. Detailed balance for equilibrium electrical noise then requires that any entropy production in the resistor at one end, induced by the microscopic fluctuating voltages, is balanced by the emitted electromagnetic power, with its corresponding entropy production, which travels down the waveguide to the other resistor and, once absorbed, causes an equivalent production of entropy at that end. Note that the Nyquist result falls naturally out of Eq.(19) without invoking the requirement of an energy of $(1/2)k_BT$ per mode. Using the entropy production rate to study blackbody processes appears to be more fundamental than using the concept of power alone, since it merges the concepts of power and temperature. Equation (19) gives us an important tool for further study and extension to nonequilibrium analysis.
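For comparison, the standard power-balance argument referred to above can be summarized in one step (a textbook sketch, not this chapter's entropy-based derivation). Detailed balance assigns each resistor an available noise power $k_BT\,\Delta f$; delivering that power to a matched load of resistance $R$ requires

$$\frac{<v^2>_0}{4R} = k_B T\,\Delta f \quad\Longrightarrow\quad <v^2>_0 = 4k_B T R\,\Delta f,$$

which is the same magnitude obtained from the entropy-production route of Section 7.5.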

The hope is that this paper is a step toward developing measurement metrology that can be extended to study processes out of equilibrium and to relate the measurements to theory.

#### **9. Acknowledgments**

We also acknowledge various discussions with members of the NIST Innovative Measurement Science research team for Detection of Corrosion in Steel-Reinforced Concrete by Antiferromagnetic Resonance.

#### **10. Appendix: Alternative expansion of the evolution of the relevant quantities**

We can re-express the LHS of Eq.(13) for the various relevant variables in terms of time derivatives of the generalized forces. This casts the LHS of Eq.(13) in terms of measurable quantities such as temperature and field rates and thereby produces generalized heat-transfer and polarization equations:

$$\begin{split} Tr\left(F(\mathbf{r})\frac{\partial \sigma}{\partial t}\right) &= Tr\left(F(\mathbf{r})\frac{\partial}{\partial t}e^{\left(-\sum_{n=1}\lambda_n(\mathbf{r},t)F_n(\mathbf{r})+\ln Z\right)}\right) \\ &= -\left(<F(\mathbf{r})\overline{F}_n(\mathbf{r})> - <F(\mathbf{r})><\overline{F}_n(\mathbf{r})>\right)\frac{\partial \lambda_n(\mathbf{r},t)}{\partial t} \end{split}\tag{69}$$

where $Z$ is the partition function. Therefore

$$\begin{split} \frac{\partial <F(\mathbf{r})>}{\partial t} &= -\int d\mathbf{r}'\,\{<F(\mathbf{r})\overline{F}(\mathbf{r}')> - <F(\mathbf{r})><\overline{F}(\mathbf{r}')>\} \ast \frac{\partial \lambda(\mathbf{r}',t)}{\partial t} \\ &\equiv -\int d\mathbf{r}'\,\Delta\{F(\mathbf{r})\overline{F}(\mathbf{r}')\} \ast \frac{\partial \lambda(\mathbf{r}',t)}{\partial t}. \end{split}\tag{70}$$

We defined $\Delta\{ab\} \equiv\, <ab> - <a><b>$. Using this equation, the internal energy density can be re-expressed in terms of time derivatives of the Lagrangian multipliers

$$\begin{split} \frac{\partial U(\mathbf{r},t)}{\partial t} &= \int d\mathbf{r}'\,\frac{1}{T}\frac{\Delta[u(\mathbf{r})\overline{u}(\mathbf{r}')]}{k_B T}\frac{\partial T(\mathbf{r}',t)}{\partial t} \\ &\quad - \int d\mathbf{r}'\,\frac{1}{T}\frac{\Delta[u(\mathbf{r})\overline{\mathbf{p}}(\mathbf{r}')]}{k_B T}\cdot\mathbf{E}_p\,\frac{\partial T(\mathbf{r}',t)}{\partial t} + \int d\mathbf{r}'\,\frac{\Delta[u(\mathbf{r})\overline{\mathbf{p}}(\mathbf{r}')]}{k_B T}\cdot\frac{\partial \mathbf{E}_p(\mathbf{r}',t)}{\partial t} \\ &\quad - \int d\mathbf{r}'\,\frac{1}{T}\frac{\Delta[u(\mathbf{r})\overline{\mathbf{m}}(\mathbf{r}')]}{k_B T}\cdot\mathbf{H}_m\,\frac{\partial T(\mathbf{r}',t)}{\partial t} + \int d\mathbf{r}'\,\frac{\Delta[u(\mathbf{r})\overline{\mathbf{m}}(\mathbf{r}')]}{k_B T}\cdot\frac{\partial \mathbf{H}_m(\mathbf{r}',t)}{\partial t} \\ &\equiv (c_{uu} + c_{up(1)} + c_{um(1)})\frac{\partial T(\mathbf{r}',t)}{\partial t} + \mathbf{c}_{up(2)}\cdot\frac{\partial \mathbf{E}_p(\mathbf{r}',t)}{\partial t} + \mathbf{c}_{um(2)}\cdot\frac{\partial \mathbf{H}_m(\mathbf{r}',t)}{\partial t}. \end{split}\tag{71}$$

The polarization satisfies

$$\begin{split} \frac{\partial \mathbf{P}(\mathbf{r},t)}{\partial t} &= \int d\mathbf{r}'\,\Delta[\mathbf{p}(\mathbf{r})\overline{\mathbf{p}}(\mathbf{r}')]\cdot\frac{\partial\, \mathbf{E}_p(\mathbf{r}',t)\beta(\mathbf{r}',t)}{\partial t} \\ &\quad + \int d\mathbf{r}'\,\Delta[\mathbf{p}(\mathbf{r})\overline{\mathbf{m}}(\mathbf{r}')]\cdot\frac{\partial\, \mathbf{H}_m(\mathbf{r}',t)\beta(\mathbf{r}',t)}{\partial t} + \int d\mathbf{r}'\,\frac{1}{T}\frac{\Delta[\mathbf{p}(\mathbf{r})\overline{u}(\mathbf{r}')]}{k_B T}\frac{\partial T(\mathbf{r}',t)}{\partial t} \\ &\equiv (\chi_{pu} + \chi_{pp(1)} + \chi_{pm(1)})\frac{\partial T(\mathbf{r}',t)}{\partial t} + \overset{\leftrightarrow}{\chi}_{pp(2)}\cdot\frac{\partial \mathbf{E}_p(\mathbf{r}',t)}{\partial t} + \overset{\leftrightarrow}{\chi}_{pm(2)}\cdot\frac{\partial \mathbf{H}_m(\mathbf{r}',t)}{\partial t}. \end{split}\tag{72}$$

There is a similar relation for **M**.

An alternative equation for the entropy evolution in terms of time derivatives of the Lagrangian multipliers can also be constructed from Eq.(70)

$$\begin{split} \frac{dS(t)}{dt} &\equiv -k_B\lambda \ast \Delta[F\overline{F}] \ast \frac{\partial \lambda}{\partial t} \approx \int d\mathbf{r}\,\frac{\nabla T\cdot\overset{\leftrightarrow}{\kappa}\cdot\nabla T}{T^2} + \int\!\!\int d\mathbf{r}\,d\mathbf{r}'\,\Big(k_B\beta^2\,\Delta[u(\mathbf{r})\overline{\mathbf{p}}(\mathbf{r}')]\cdot\frac{\partial \mathbf{E}_p}{\partial t} \\ &\quad + \frac{k_B}{T}\beta^2\,\mathbf{E}_p\cdot\Delta[\mathbf{p}(\mathbf{r})\overline{u}(\mathbf{r}')]\,\frac{\partial T}{\partial t} + k_B\beta^2\,\mathbf{E}_p\cdot\Delta[\mathbf{p}(\mathbf{r})\overline{\mathbf{p}}(\mathbf{r}')]\cdot\frac{\partial \mathbf{E}_p}{\partial t} \\ &\quad - \frac{k_B}{T}\beta^2\,\mathbf{E}_p\cdot\Delta[\mathbf{p}(\mathbf{r})\overline{\mathbf{p}}(\mathbf{r}')]\cdot\mathbf{E}_p\,\frac{\partial T}{\partial t}\Big), \end{split}\tag{73}$$

where we used the heat equation $c\,\partial T/\partial t = \nabla\cdot\overset{\leftrightarrow}{\kappa}\cdot\nabla T$.

#### **11. References**

Baker-Jarvis, J. (2005). Time-dependent entropy evolution in microscopic and macroscopic electromagnetic relaxation. *Phys. Rev. E*, Vol. 72, 066613.

Baker-Jarvis, J. (2008). Electromagnetic nanoscale metrology based on entropy production and fluctuations. *Entropy*, Vol. 10, 411–429.

Baker-Jarvis, J., Janezic, M. D., Riddle, B. (2007). Dielectric polarization equations and relaxation times. *Phys. Rev. E*, Vol. 75, 056612.

Baker-Jarvis, J., Kabos, P. (2001). Dynamic constitutive relations for polarization and magnetization. *Phys. Rev. E*, Vol. 64, 056127.

Baker-Jarvis, J., Kabos, P., Holloway, C. L. (2004). Nonequilibrium electromagnetics: Local and macroscopic fields using statistical mechanics. *Phys. Rev. E*, Vol. 70, 036615.

Baker-Jarvis, J., Surek, J. (2009). Transport of heat and charge in electromagnetic metrology based on nonequilibrium statistical mechanics. *Entropy*, Vol. 11, 748–765.

Berne, B. J., Harp, G. D. (1970). On the calculation of time correlation functions. In: Prigogine, I., Rice, S. A. (Eds.), *Advances in Chemical Physics*, Vol. XVII. Wiley, New York, p. 63.

Jackson, J. D. (1999). *Classical Electrodynamics (3rd Ed.)*. John Wiley and Sons, New York.

Lax, B., Button, K. J. (1962). *Microwave Ferrites and Ferrimagnetics*. McGraw-Hill, New York.

Mori, H. (1965). Transport, collective motion, and brownian motion. *Prog. Theor. Phys.*, Vol. 33, 423.

Nettleton, R. E. (1999). Perturbation treatment of non-linear transport via the Robertson statistical formalism. *Ann. Phys.*, Vol. 8, 425–436.

Oppenheim, I., Levine, R. D. (1979). Nonlinear transport processes: Hydrodynamics. *Physica*, Vol. 99A, 383–402.

Rau, J., Müller, B. (1996). From reversible quantum microdynamics to irreversible quantum transport. *Physics Reports*, Vol. 272, 1–59.

Robertson, B. (1966). Equations of motion in nonequilibrium statistical mechanics. *Phys. Rev.*, Vol. 144, 151–161.

Robertson, B. (1967a). Equations of motion in nonequilibrium statistical mechanics II, Energy transport. *Phys. Rev.*, Vol. 160, 175–183.

Robertson, B. (1967b). Equations of motion of nuclear magnetism. *Phys. Rev.*, Vol. 153, 391–403.

Robertson, B. (1978). Applications of maximum entropy to nonequilibrium statistical mechanics. In: *The Maximum Entropy Formalism*. M.I.T. Press, Cambridge, MA, p. 289.


**11**

**Risk Performance Index and Measurement System**

Seon-Gyoo Kim

*Kangwon National University, South Korea*

**1. Introduction**

A mega project can generally be defined as a project that costs more than 1 billion US dollars and includes many risk factors that can cause delays or failures during the project life cycle (Flyvbjerg et al. 2003). Thus, it is important to establish a method and system to manage these risk factors effectively in advance. Moreover, it is necessary to reduce the probability of such risk factors causing failures in the project by measuring the performance of projects from the point of view of risk management. This chapter defines a risk performance index (RPI) that measures the performance of projects by integrating cost/schedule/risk factors and by adding risk management activities to the EVMS, the existing integrated cost/schedule-based performance measurement system for construction projects. We also propose a method to produce and analyze RPIs to improve the accuracy and efficiency of general performance measurement for mega projects by extending the conventional cost/schedule-based performance measurement system to include risk management.

Performance management, which examines and manages whether projects implemented by either individuals or organizations are effectively executed, has four components: duty, strategy goal, performance goal, and performance index. A strategy goal is a major policy direction that promotes specific duties, including the goal, value, and function of an organization. A performance goal is subordinate to the strategy goal and shows major projects planned in a particular year or a specific goal covering multiple aspects of a business group. A performance index is a scale to measure the level of achievement of the performance goal. It is important to identify quantitative measures of the goals pursued in the project. The development of a performance index enables the efficiency of the project to be measured by comparing and evaluating quantitatively the achievement and level of the performance goal. This chapter surveys three methodologies of performance measurement systems used in existing construction businesses: EVMS, BSC, and KPI.

**2. Survey of existing performance measurement systems**

**2.1 EVMS**

The Earned Value Management System (EVMS) is the most widely used performance measurement system in construction businesses. The United States Department of Defense
