## **3.1 Quantum statistical mechanics in a nutshell**

The *state vectors* dealt with in Section 1 represent *pure* states. They are the ones which display the spectacular effects seen in recent experiments. Since in this section we will allow *creation* and *annihilation* of electron states, we must work in the framework of the *grand canonical ensemble*.<sup>3</sup> When one deals with *statistical ensembles* of quantum states, the object of interest is the Hermitian operator exp(−*βH*), called the *density matrix* operator (here *β* ≔ (*k<sub>B</sub>T*)<sup>−1</sup>, and *k<sub>B</sub>* = *R*/*N<sub>A</sub>* is Boltzmann's constant).

What drives our interest in the *density matrix* (namely, the matrix elements of exp(−*βH*) between pure states) is the fact that it can be used to find the ground state of many-body systems by stochastic methods. For *β* large enough, exp(−*βH*) acts effectively as a *projector* onto the lowest-lying energy eigenstate to which the initial (trial) state ∣*φ*⟩ is not exactly orthogonal. Let *E* be the corresponding eigenvalue, and consider another trial state ∣*χ*⟩ onto which we will project the result. Then we may numerically compute *E* from

$$\exp\left(-\Delta\beta E\right) = \lim_{\beta \to \infty} \frac{\langle \chi | \exp\left[-(\beta + \Delta\beta)H\right] | \varphi \rangle}{\langle \chi | \exp\left(-\beta H\right) | \varphi \rangle}. \tag{11}$$

But what is yet more interesting is that in the process, we find a good estimate of the eigenstate itself, namely, its composition in terms of a known basis.
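As a quick numerical illustration (our own sketch, with a small Hermitian matrix standing in for the many-body *H* and arbitrarily chosen trial vectors), the ratio in Eq. (11) indeed converges to exp(−Δ*β* *E*) for large *β*:

```python
# Illustrative sketch (not from the text): exp(-beta*H) acts as a
# projector onto the ground state, so the ratio of Eq. (11) recovers
# the lowest eigenvalue.  H, chi and phi are arbitrary choices.
import numpy as np
from scipy.linalg import expm

H = np.array([[0.0, -1.0,  0.0],
              [-1.0, 0.5, -1.0],
              [0.0, -1.0,  1.0]])
E0 = np.linalg.eigvalsh(H).min()      # exact ground-state energy (here -1.0)

phi = np.array([1.0, 1.0, 1.0])       # trial state, not orthogonal to ground state
chi = np.array([1.0, 0.0, 1.0])       # projection state

beta, dbeta = 30.0, 0.1
num = chi @ expm(-(beta + dbeta) * H) @ phi
den = chi @ expm(-beta * H) @ phi
E_est = -np.log(num / den) / dbeta    # Eq. (11): ratio -> exp(-dbeta*E)

print(E_est, E0)                      # the two agree to high accuracy
```

The corrections to *E* decay as exp(−*β*Δ), Δ being the gap to the first excited state reached by the trial vectors, so modest values of *β* already suffice here.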

## **3.2 Monte Carlo pursuit of the ground state**

The first step in this computation is to divide the interval [0, *β*] into *L* "time" slices of width Δ*τ* = *β*/*L*. Some comments are in order:


If we can decompose *H* into a sum of several terms *H<sub>i</sub>* which (although not commuting among themselves) are themselves *sums of commuting terms*, then for *L* large enough, the error of approximating

$$\begin{aligned} U &= \exp\left[-\Delta\tau (H_1 + H_2)\right] = \exp\left(-\Delta\tau H_1\right) \exp\left(-\Delta\tau H_2\right) \exp\left\{-(\Delta\tau)^2 [H_1, H_2]/2 + \dots \right\} \\ &= U_1 U_2 \left\{\mathbf{1} - (\Delta\tau)^2 [H_1, H_2]/2 + \dots \right\} \sim U_1 U_2 \end{aligned}$$

would be at most of order (Δ*τ*)<sup>2</sup>. Hence

<sup>3</sup> We have already stated that the Fermi level is the *chemical potential* of an ideal free-electron gas. This concept is peculiar to the *grand canonical ensemble*.

*Solid State Physics - Metastable, Spintronics Materials and Mechanics of Deformable…*

$$\langle \chi | \exp\left(-\beta H\right) | \varphi \rangle \sim \left\langle \chi | (U_1 U_2)^L | \varphi \right\rangle. \tag{12}$$
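The (Δ*τ*)<sup>2</sup> scaling of the single-slice error is easy to verify numerically. The sketch below is our own illustration, with Pauli matrices standing in for two non-commuting terms *H*<sub>1</sub> and *H*<sub>2</sub>: halving Δ*τ* should quarter the error.

```python
# A quick numerical check (our own illustration) that the single-slice
# Trotter error is O((dtau)^2).  H1 and H2 are arbitrary non-commuting
# Hermitian matrices (Pauli x and z).
import numpy as np
from scipy.linalg import expm

H1 = np.array([[0.0, 1.0], [1.0, 0.0]])   # sigma_x
H2 = np.array([[1.0, 0.0], [0.0, -1.0]])  # sigma_z

def trotter_error(dtau):
    exact = expm(-dtau * (H1 + H2))
    split = expm(-dtau * H1) @ expm(-dtau * H2)
    return np.linalg.norm(exact - split)  # Frobenius norm of the difference

e1, e2 = trotter_error(0.1), trotter_error(0.05)
print(e1 / e2)   # close to 4, confirming second-order scaling
```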



In order to evaluate expression (12), we introduce complete sets of states at each time slice.

The key to quantum Monte Carlo simulation of Eq. (11) lies in evaluating the sums over complete sets of states *by importance sampling*: to do that, observe first that we can rather arbitrarily decompose

$$\left\langle \psi_{j} | (U_1 U_2)^L | \psi_{i} \right\rangle = S_{ij} P_{ij} \tag{13}$$

as the product of a probability times a (complex) number which we will call a "score." The probability distribution *P<sub>ij</sub>* is at our disposal to optimize numerical convergence, minimize statistical error, etc. It can be shown that the way to achieve the latter goal is by assigning to every matrix element the same score: that is the basis for the so-called *population method*. Here the initial (trial) state is represented by a "population" in which there are *n<sub>i</sub>* copies of state ∣*ψ<sub>i</sub>*⟩. The latter corresponds to a definite assignment of occupation numbers both in coordinate and spin (always belonging to the Hilbert space of the problem, i.e., compatible with the conserved quantum numbers). To each individual in the population, we apply the evolution operator, thus obtaining a new state after one time slice. That particular matrix element can be decomposed as indicated in Eq. (13) (but now with *S<sub>ij</sub>* = *S* = const). The way in which we implement the *P<sub>ij</sub>* is by making as many copies of that particular resulting state as indicated by ⟨*ψ<sub>j</sub>*∣*U*<sub>1</sub>*U*<sub>2</sub>∣*ψ<sub>i</sub>*⟩/*S*. Proceeding this way, we will obtain a different population after each time slice, which we expect to converge to one representing the lowest reachable energy eigenstate.
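The copy-making step of the population method can be sketched as follows. This is a minimal illustration in our own notation, not the authors' code: the per-state weight ⟨*ψ<sub>j</sub>*∣*U*<sub>1</sub>*U*<sub>2</sub>∣*ψ<sub>i</sub>*⟩/*S* is supplied as a black-box function, and stochastic rounding makes the expected number of copies equal the weight.

```python
# A minimal sketch (our notation) of the population method's branching
# step: a walker with weight w = <psi_j|U1 U2|psi_i>/S survives as
# int(w + r) copies, r uniform in [0,1), so the expected copy count is w.
import random

def branch(population, weight_of):
    """Return the next-slice population given per-state weights."""
    new_population = []
    for state in population:
        w = weight_of(state)                 # w = matrix element / S
        n_copies = int(w + random.random())  # stochastic rounding: E[n] = w
        new_population.extend([state] * n_copies)
    return new_population

# Toy usage: states labelled 0/1; state 0 survives with weight 0.5,
# state 1 with weight exactly 1 (so it is always copied once).
random.seed(1)
pop = [0, 1] * 500
pop = branch(pop, lambda s: 0.5 if s == 0 else 1.0)
print(pop.count(0), pop.count(1))   # roughly 250, and exactly 500
```

In a real simulation the population size is kept under control by periodically rescaling *S*, a detail we omit here.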

We have not said anything yet about the way in which we evaluate the alluded matrix elements, besides the fact that we resort to the decomposition (13): if, as we have already assumed, each term *H<sub>i</sub>* can itself be decomposed into mutually commuting terms, we need only focus on the Hilbert space of that (much smaller) system. We can compute exactly the matrix elements of the evolution operator for that cluster, write them as the product of a probability times a score (now we can choose the probability distribution to minimize total computing time), and make transitions among cluster states according to those probabilities, then assigning the corresponding score to the particular transition.

## **3.3 The case of fermions**

Again within the *tight-binding approach* to crystalline solids, quantum *creation* (*c*<sup>†</sup><sub>*iσ*</sub>) and *annihilation* (*c<sub>iσ</sub>*) operators determine the existence of electrons with spin projection *σ* at site *i*. For the state vector of the whole set of electrons in the crystal to be totally antisymmetric under exchange, those operators must *anticommute* with each other, unless they refer to the same site and spin projection. In such a case, there can be *at most one* electron per site and spin projection, as required by Pauli's principle.
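These anticommutation relations can be checked with an explicit matrix representation. The sketch below is our own construction (a Jordan-Wigner mapping for two spinless sites, a standard device not spelled out in the text):

```python
# Checking the fermionic algebra numerically (our own illustration):
# Jordan-Wigner matrices for two spinless sites, verifying
# {c_i, c_j^+} = delta_ij and {c_i, c_j} = 0.
import numpy as np

a = np.array([[0.0, 1.0], [0.0, 0.0]])  # single-site annihilator |0><1|
Z = np.diag([1.0, -1.0])                # Jordan-Wigner string factor
I = np.eye(2)

c1 = np.kron(a, I)   # c_1
c2 = np.kron(Z, a)   # c_2 carries the string, enforcing anticommutation

def anti(x, y):
    return x @ y + y @ x

# Matrices are real, so .T is the adjoint.
assert np.allclose(anti(c1, c1.T), np.eye(4))   # {c1, c1^+} = 1
assert np.allclose(anti(c2, c2.T), np.eye(4))   # {c2, c2^+} = 1
assert np.allclose(anti(c1, c2.T), 0)           # different sites: zero
assert np.allclose(anti(c1, c2), 0)
assert np.allclose(c1 @ c1, 0)                  # Pauli principle: c^2 = 0
```

Without the string factor *Z*, the two site operators would commute rather than anticommute, which is precisely what the Jordan-Wigner construction repairs.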

In the case of the 1D Hubbard model, we chose the following decomposition of the Hamiltonian, which allows us to consider clusters of only two sites:

$$H_1 = -t\sum_{\text{odd } j} \sum_{\sigma} \left( c_{j+1\sigma}^\dagger c_{j\sigma} + \text{h.c.} \right) = -t \sum_{\text{odd } j} \sum_{\sigma} h_{j,j+1}$$

$$H_2 = -t \sum_{\text{even } j} \sum_{\sigma} \left( c_{j+1\sigma}^\dagger c_{j\sigma} + \text{h.c.} \right) = -t \sum_{\text{even } j} \sum_{\sigma} h_{j,j+1} \tag{14}$$

*Issues in Solid-State Physics DOI: http://dx.doi.org/10.5772/intechopen.84367*


$$H_3 = -t\sum_{\text{all } j} n_{j\uparrow} n_{j\downarrow}$$

The corresponding matrix elements are then $\langle \psi_{i+1} | U_3 U_2 U_1 | \psi_i \rangle$ with $U_1 = \prod_{\text{odd } j} \exp\left(-\Delta\tau\, h_{j,j+1}\right)$, $U_2 = \prod_{\text{even } j} \exp\left(-\Delta\tau\, h_{j,j+1}\right)$, $U_3 = \prod_{j} \exp\left(-\Delta\tau\, n_{j\uparrow} n_{j\downarrow}\right)$, and

$$\langle 01 | \exp\left(-\Delta\tau\, h_{j,j+1}\right) | 01 \rangle = \langle 10 | \exp\left(-\Delta\tau\, h_{j,j+1}\right) | 10 \rangle = \cosh \Delta\tau,$$

$$\langle 10 | \exp\left(-\Delta\tau\, h_{j,j+1}\right) | 01 \rangle = \langle 01 | \exp\left(-\Delta\tau\, h_{j,j+1}\right) | 10 \rangle = \sinh \Delta\tau, \tag{15}$$

$$\langle 00 | \exp\left(-\Delta\tau\, h_{j,j+1}\right) | 00 \rangle = \langle 11 | \exp\left(-\Delta\tau\, h_{j,j+1}\right) | 11 \rangle = 1,$$

from which we write up the (a priori) transition probabilities. Then, if there is only one occupied site in the block, we draw a random number *r* and compare it with the a priori transition probability *p* for the state to remain the same. If *r* > *p*, we make a hop, i.e., we exchange the empty and occupied states in the block.
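A minimal sketch of this block update follows. It is our implementation, not the authors' code: we set *t* = 1, normalize the cosh/sinh elements of Eq. (15) into a priori probabilities, and omit the score bookkeeping that accompanies each transition.

```python
# Sketch (our implementation) of the two-site Monte Carlo update built
# from the matrix elements of Eq. (15), for a block with exactly one
# occupied site.  t = 1 is assumed; scores are omitted.
import math, random

def update_block(occ, dtau, rng=random.random):
    """One MC step on a block occ = (1, 0) or (0, 1):
    stay with a-priori probability p = cosh / (cosh + sinh),
    otherwise hop (exchange empty and occupied sites)."""
    stay = math.cosh(dtau)                 # diagonal element of Eq. (15)
    hop = math.sinh(dtau)                  # off-diagonal element
    p = stay / (stay + hop)                # our normalization choice
    return occ if rng() < p else (occ[1], occ[0])

# Doubly occupied and empty blocks (matrix element 1) are left untouched.
random.seed(0)
moves = sum(update_block((1, 0), 0.1) != (1, 0) for _ in range(10000))
print(moves / 10000)   # close to sinh(0.1)/exp(0.1), about 0.09
```

Since cosh Δ*τ* + sinh Δ*τ* = e<sup>Δ*τ*</sup>, the hop probability in this normalization is sinh(Δ*τ*) e<sup>−Δ*τ*</sup>, which the sampled frequency reproduces.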

The a priori probabilities can be chosen better if we take into account the occupation of those same two sites by electrons with the other spin projection, thus anticipating the fact that doubly occupied sites will be penalized [4, 5]. This will certainly improve convergence.

## **4. Non-equilibrium routes to soft solids**

Up to now, we have dealt with crystalline solids. This means that, disregarding the *topology*<sup>4</sup> of the interaction network, we paid attention to the underlying *geometry* of the quantum problem. At present, a host of synthetic materials has outperformed metals at their original tasks. Some of them still display a varying degree of crystalline character, but others are not crystalline at all. Vulcanized rubbers are an example: created by forcing random chemical bonds in a melt (a "spaghetti dish"), they are prevented from flowing and are thus amorphous solids.<sup>5</sup> But they exhibit a varying degree of viscoelastic behavior. In recent decades, the vast discipline of *soft condensed matter* has been incorporated into mainstream research in solid-state physics, on an equal footing with crystalline solids. The scope of soft condensed matter is very wide. In particular, it considers many nonequilibrium routes to *self-assembled emergent* structures. Of huge interest is the neocortex (not just because understanding the brain's behavior is one of the "Holy Grails" of science, but because in doing so we may manage to master a computational strategy far more efficient than the present one).

We devote this section to the emergence of non-equilibrium routes to spatiotemporal patterns in an assembly of model "neurons" which keep their essential trait, namely, *excitability*. Admittedly, the interaction network here has the *topology* of a lattice, but it is not the underlying geometry that is at stake. What matters here is that the boundary condition be compatible with the interaction, a fact that contributes to the network's topology.

<sup>4</sup> It will be a lattice only if all interactions but nearest neighbor ones are neglected. Note that crystals may even have a Cayley tree structure, like the so-called "Bethe lattices."

<sup>5</sup> The electronic properties of amorphous solids are also of interest, e.g., in the photovoltaic (PV) industry.
