**2. Basic techniques of modeling and simulation**

Modeling and simulation, like any other field of science and technology, rests on certain basic techniques with which all its practices are carried out. These are the foundation stones on which the edifice of modeling and simulation practices and procedures is built.

© 2015 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### **2.1. Introduction**

Various techniques have evolved in modeling and simulation since its inception [6] for the solution of technical and engineering problems, ranging from ancient Roman military techniques to classical analog methods to the modern Runge-Kutta method and Monte Carlo techniques [7]. The history of modeling and simulation dates back to ancient times. It was first used by the ancient Romans, who simulated actual war conditions in areas of peace to train their soldiers to fight in areas where they had never been. These war games were based upon carefully and adequately designed models. Later, during the Renaissance (1200-1600 C.E.), artists and scientists used techniques of modeling and simulation to test their designs of statuary and edifices. The renowned Leonardo da Vinci made extensive use of modeling and simulation techniques to test and validate his models in art, military, and civil works [7]. Chess, also known as the world's first war game, and its evolution into a computer game are the result of rigorous use of modeling and simulation techniques [8]. Similarly, war games (a modeling and simulation technique) were used in Europe (Prussia, in modern-day northeastern Germany) and by the Army Corps of Engineers in the United States [9]. In technical fields, the first successful use is reported in the production and use of the "Link Flight Simulator", patented in 1929 by the American Edward Link [10]. SAGE, the Semi-Automatic Ground Environment (1949); MEW, Microwave Early Warning (1950) [11]; and MIT's "Whirlwind" and *Cape Cod System (1953)* were also important milestones in modeling and simulation. From the days of the Cold War to the war in Iraq (1991), ever more advanced techniques were used to develop more realistic, real-world-scenario war games.
Following this, increasingly well-designed simulation centers were opened at various universities and institutions in the United States and around the world to better research the areas of modeling and simulation, develop new models, improve existing ones, and develop applications; as a result, various new techniques and methods of modeling and simulation were formulated [11].

#### **2.2. Energy minimization**

Energy minimization (also called energy optimization or geometry optimization) methods are numerical procedures for finding a minimum on the potential energy surface starting from a higher-energy initial structure/state [1, 14]. These methods are extensively used in chemistry, mathematics, computer science, image processing, biology, metallurgical engineering, materials science, mechanical engineering, chemical engineering, electrical engineering, etc., to find the stable/equilibrium states of molecules, solids, and other systems. Extensive studies have been carried out in various fields making use of energy minimization techniques to formulate models, highlighting the importance, significance, and use of this method in modeling and simulation and in the solution of engineering problems.

Levitt [12] used energy minimization to formulate solutions of protein folding. The potential energy functions used are detailed and include terms that allow bond stretching, bond angle bending, bond twisting, van der Waals' forces, and hydrogen bonds. A unique feature of the methods used is an easy approach to restrained energy minimization (including all terms) to anneal the conformations and reduce their energies further. The methods were very versatile and were proposed to be applicable for building models of protein conformations that have low energy values and obey a wide variety of restraints. Recently, Micheletti and Maritan [13] also used energy minimization methods to formulate solutions of protein design. They went a step further in their approach, defining actual real-world scenarios and formulating alternative design strategies based upon a correct treatment of free energy. Sutton [14] presented the use of energy minimization methods to determine atomic structures and solute concentration profiles at defects in elemental solids and substitutional alloys as a function of temperature. He used the mean field approximation, rewrote the free energy, used Einstein models and the auto-correlation approximation, and showed that the better statistical averaging of the auto-correlation approximation leads to better temperature- and concentration-dependent pair interactions. His formula was fairly simple and effective. Lwin [15] used spreadsheets to solve chemical equilibrium problems by Gibbs energy minimization.


Similarly, Olga Veksler, in her PhD thesis at Cornell University [16], presented the use of energy minimization techniques in computer vision problems. She developed algorithms for several important classes of energy functions incorporating everywhere-smooth, piecewise-constant, and piecewise-smooth priors. These algorithms primarily rely on graph cuts as an optimization technique. For a certain everywhere-smooth prior, an algorithm based on finding the exact minimum by computing a single graph cut was developed. For piecewise-smooth priors, two approximate iterative algorithms, computing several graph cuts at each iteration, were developed, and for a certain piecewise-constant prior, the same algorithms were used along with a new one that finds a local minimum in yet another move space. The approach was quite effective for image restoration, stereo, and motion [16]. Similar studies were carried out later as well to further test and evaluate energy minimization in computer vision [17, 19]. Nikolova [20] explained the use of energy minimization methods in the field of image analysis and processing. Onofrio and Tubaro applied the same to the problem of three-dimensional (3D) face recognition [21]. Standard [22] explained the use of energy minimization to determine the states of a molecule in chemistry; he explained that the geometry of the molecule is changed in a stepwise fashion so that the energy is reduced to its lowest minimum.

**Figure 1.** Graphical representation of energy minimization process [22]

Figure 1 shows the energy minimization process for a molecule in steps. "*Most energy minimization methods proceed by determining the energy and the slope of the function at point 1. If the slope is positive, it is an indication that the coordinate is too large (as for point 1). If the slope is negative, then the coordinate is too small. The numerical minimization technique then adjusts the coordinate; if the slope is positive, the value of the coordinate is reduced as shown by point 2. The energy and the slope are again calculated for point 2. If the slope is zero, a minimum has been reached. If the slope is still positive, then the coordinate is reduced further, as shown for point 3, until a minimum is obtained"* [22].

There are other methods for actually varying the geometry to find the minimum [22]. Many of these, which are used to find a minimum on the potential energy surface of a molecule, use an iterative formula to work in a stepwise fashion. These are all based on formulas of the following type:

$$x_{new} = x_{old} + \text{correction} \tag{1}$$

where $x_{new}$ is the value of the geometry at the next step, $x_{old}$ is the geometry at the current step, and *correction* is some adjustment made to the geometry.

#### *2.2.1. Newton-Raphson method*

*"The Newton-Raphson method is the most computationally expensive per step of all the methods utilized to perform energy minimization. It is based on Taylor series expansion of the potential energy surface at the current geometry"* [22]. The equation for updating the geometry is a modification of eq. [1]:

$$x_{new} = x_{old} - \frac{E'(x_{old})}{E''(x_{old})} \tag{2}$$

The correction term depends both on the first derivative (also called the slope or gradient) of the potential energy surface at the current geometry and on the second derivative (also called the curvature). The Newton-Raphson method involves the fewest steps to reach the minimum.
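
The update of Eq. (2) is straightforward to sketch in code. The one-dimensional model potential below is an illustrative assumption, not an example from the chapter; the point of the sketch is that the curvature-corrected step reaches the minimum in very few iterations.

```python
def E_prime(x):
    """First derivative (slope) of an assumed model potential
    E(x) = (x - 1)**2 + 0.1 * (x - 1)**4, with its minimum at x = 1."""
    return 2.0 * (x - 1.0) + 0.4 * (x - 1.0) ** 3

def E_double_prime(x):
    """Second derivative (curvature) of the same model potential."""
    return 2.0 + 1.2 * (x - 1.0) ** 2

def newton_raphson(x0, tol=1e-10, max_steps=50):
    x = x0
    for step in range(max_steps):
        slope = E_prime(x)
        if abs(slope) < tol:               # slope ~ 0: minimum reached
            return x, step
        x = x - slope / E_double_prime(x)  # the update of Eq. (2)
    return x, max_steps

x_min, n_steps = newton_raphson(3.0)
```

Starting from $x_0 = 3$, the iteration converges quadratically once it is near the minimum, which is why so few steps are needed despite the high per-step cost of evaluating the second derivative.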

#### *2.2.2. Steepest descent method*

This method relies on an approximation: the second derivative is assumed to be constant.

$$x_{new} = x_{old} - \gamma \, E'(x_{old}) \tag{3}$$

where γ is a constant. The gradient is recalculated at each point. Because of the approximation, the method is not as efficient, so more steps are required to find the minimum [22].
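
A minimal sketch of Eq. (3), again on an assumed one-dimensional model potential; with the curvature replaced by the fixed constant γ, the error shrinks only geometrically, so many more iterations are typically needed than with Newton-Raphson.

```python
def E_prime(x):
    """Gradient of an assumed model potential E(x) = (x - 1)**2."""
    return 2.0 * (x - 1.0)

def steepest_descent(x0, gamma=0.1, tol=1e-8, max_steps=10_000):
    x = x0
    for step in range(max_steps):
        slope = E_prime(x)
        if abs(slope) < tol:       # slope ~ 0: minimum reached
            return x, step
        x = x - gamma * slope      # the update of Eq. (3)
    return x, max_steps

x_min, n_steps = steepest_descent(3.0)
```

From the same starting point $x_0 = 3$, this takes on the order of a hundred steps, compared to a handful for Newton-Raphson, illustrating the trade-off between cheap steps and the number of steps.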

#### *2.2.3. Conjugate gradient method*

*"In this method, the gradients of the current geometry are first computed. Then, the direction of the largest gradient is determined. The geometry is minimized along this one direction (this is called a line search). Then, a direction orthogonal to the first one is selected (a 'conjugate' direction). The geometry is minimized along this direction. This continues until the geometry is optimized in all the directions".* [22]
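
The procedure quoted above can be sketched on a small quadratic surface, where an exact line search has a closed form. The 2D model problem and the Fletcher-Reeves form of the conjugate-direction update are illustrative assumptions, not taken from reference [22].

```python
def conjugate_gradient(A, b, x, steps=2):
    """Minimize the assumed quadratic E(x) = 0.5 x·Ax - b·x in 2D
    using line searches along successive conjugate directions."""
    def grad(x):
        return [sum(A[i][j] * x[j] for j in range(2)) - b[i] for i in range(2)]

    g = grad(x)
    d = [-gi for gi in g]                       # first direction: downhill
    for _ in range(steps):
        Ad = [sum(A[i][j] * d[j] for j in range(2)) for i in range(2)]
        # exact line search along d: alpha = -(g·d) / (d·Ad)
        alpha = -sum(gi * di for gi, di in zip(g, d)) / \
                 sum(di * Adi for di, Adi in zip(d, Ad))
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = grad(x)
        # Fletcher-Reeves update picks the next, conjugate direction
        beta = sum(gi ** 2 for gi in g_new) / sum(gi ** 2 for gi in g)
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x_min = conjugate_gradient(A, b, [0.0, 0.0])
```

For an n-dimensional quadratic, exact line searches along conjugate directions reach the minimum in at most n steps, which is why two iterations suffice here.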

#### *2.2.4. Simplex method*


In the Simplex Method, the energies at the initial geometry and two neighboring geometries on the potential energy surface are calculated (points A, B, and C in Fig. 2).

**Figure 2.** Schematic of Simplex Method implementation (three points)

*"The point with the highest energy of the three is noted. Then, this point is reflected through the line segment connected to the other two (to move away from the region of high energy). For example, if the energy of point A is the highest out of the three points A, B, and C, then A is reflected through line segment BC to produce point D."* (Fig. 3)

**Figure 3.** Simplex Method (four points)

*"In the next step, the two original lowest energy points (B and C) along with the new point D are analyzed. The highest energy point of these is selected, and that point is reflected through the line segment connecting the other two. The process continues until a minimum is located"* [22]. As a result, it is the least expensive in CPU time per step. However, it often requires the most steps.
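
The reflection step described above can be sketched as follows. The 2D energy surface, the shrink fallback, and the iteration count are illustrative assumptions; production simplex optimizers such as Nelder-Mead add expansion and contraction moves as well.

```python
def E(p):
    """Assumed model energy surface with its minimum at the origin."""
    return p[0] ** 2 + p[1] ** 2

def reflect_simplex(simplex, n_steps=60):
    """Repeatedly reflect the highest-energy vertex of a triangle
    through the midpoint of the opposite edge."""
    for _ in range(n_steps):
        simplex = sorted(simplex, key=E)   # last point = highest energy
        lo, mid, hi = simplex
        centroid = [(lo[0] + mid[0]) / 2.0, (lo[1] + mid[1]) / 2.0]
        new = [2.0 * centroid[0] - hi[0], 2.0 * centroid[1] - hi[1]]
        if E(new) < E(hi):
            simplex = [lo, mid, new]       # accept the reflected point
        else:
            # reflection failed to improve: shrink toward the best point
            simplex = [lo,
                       [(lo[0] + mid[0]) / 2.0, (lo[1] + mid[1]) / 2.0],
                       [(lo[0] + hi[0]) / 2.0, (lo[1] + hi[1]) / 2.0]]
    return min(simplex, key=E)

best = reflect_simplex([[2.0, 1.0], [2.5, 1.5], [1.5, 2.0]])
```

Note that no derivatives are evaluated anywhere, only energies, which is why each step is so cheap in CPU time while many steps are usually required.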

#### **2.3. Molecular Dynamics (MD) simulations**

Molecular dynamics (MD) is a technique in which the physical movements of atoms and molecules are simulated using computers. The atoms and molecules are allowed to interact for a period of time, giving a view of the motion of the atoms. MD simulation circumvents the problem of finding the properties of complex molecular systems analytically by using numerical methods. In the most common version, the trajectories of molecules and atoms are determined by numerically solving Newton's equations of motion for a system of interacting particles [1, 23]. This is one of the two main families of simulation techniques [23]. The results of molecular dynamics simulations can be used in various fields such as thermodynamics, biology, chemistry, materials science and engineering, statistical mechanics, and nanotechnology [1, 24, 25].
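
The core of such a simulation, numerically integrating Newton's equations of motion, can be sketched for a single particle. The harmonic potential and the velocity-Verlet integrator below are illustrative assumptions (velocity Verlet is one common MD integrator, not necessarily the one used in the works cited here); production MD codes add many-body force fields, neighbor lists, and thermostats.

```python
def force(x, k=1.0):
    """Force from an assumed harmonic potential U = k*x**2/2: F = -dU/dx."""
    return -k * x

def velocity_verlet(x, v, dt=0.01, n_steps=1000, m=1.0):
    """Integrate Newton's equation of motion with the velocity-Verlet scheme."""
    a = force(x) / m
    trajectory = []
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * a * dt ** 2   # position update
        a_new = force(x) / m                 # force at the new position
        v = v + 0.5 * (a + a_new) * dt       # velocity update
        a = a_new
        trajectory.append(x)
    return x, v, trajectory

x, v, trajectory = velocity_verlet(x=1.0, v=0.0)
energy = 0.5 * v ** 2 + 0.5 * x ** 2   # total energy of the oscillator
```

For this oscillator the exact total energy is 0.5; velocity Verlet conserves it to within a small bounded oscillation, which is the main reason it is a standard choice for MD.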

van Gunsteren [26] explained in detail the methodology, applications, and prospects of molecular dynamics in chemistry. He effectively explained molecular dynamics in terms of choosing the unavoidable assumptions, approximations, and simplifications of the molecular model and computational procedure such that their contributions to the overall inaccuracy are of comparable size, without significantly affecting the property of interest. *"He further postulated and argued that the aim of computer simulation of molecular systems is to compute macroscopic behavior from microscopic interactions, giving the reason that the main contributions a microscopic consideration can offer are (1) the understanding and (2) interpretation of experimental results, (3) semi-quantitative estimates of experimental results, and (4) the capability to interpolate or extrapolate experimental data into regions that are only difficultly accessible in the laboratory*" [26]. His methodology was good, accurate, and detailed in explaining molecular dynamics. A similar study was also conducted by McKenzie [27]. Karplus and McCammon [28] extensively reviewed the use of molecular dynamics as applied to biomolecules. Their study encompasses all aspects of the application of computational techniques for solving the structure, folding, internal motion, conformational changes, etc., of biomolecules. A similar study was carried out by Kovalskyy et al. [29], in which they used molecular dynamics to study the structural stability of HIV-1 protease under physiological conditions.

Kupka [30] applied molecular dynamics on computer graphics accelerators. He proposed an algorithm consisting of CPU and GPU parts. The CPU part is responsible for preparing streams and running kernel functions from the GPU part, while the GPU part consists of two kernels and one reduce function.

A very thorough study of molecular dynamics simulation for heat transfer problems is given by Maruyama [31]. He also applied MD simulations to the problem of heat conduction in finite-length single-walled carbon nanotubes [32]. The measured thermal conductivity did not converge to a finite value with increasing tube length up to 404 nm; instead, an interesting power-law relation was observed.

Wang and Xu applied MD techniques to problems of heat transfer and phase change during laser-matter interaction [33]. They irradiated an argon crystal with a picosecond pulsed laser and investigated the phenomena using molecular dynamics simulations. The results reveal a transition region, superheating, and rapid movement of the solid-liquid interface and vapor during phase change. Lin and Hu [34] applied the same techniques to problems of ablation and bioheat transfer in biomolecular systems and biotissues and developed a new model.


Krivtsov [35] discussed the problems of heat conductivity in monocrystalline materials with defects via molecular dynamics simulation. "*It was shown that in ideal monocrystals the heat conductivity is not described by the classical conductivity theory. For the crystals with defects, for big enough specimens, the conductivity obeys the classical relations and the coefficient (β) describing the heat conductivity is calculated. The dependence of the heat conductivity on the defect density, number of particles in the specimen, and dimension of the space is investigated"* [35]. The obtained dependencies increase with time: almost linearly in the two-dimensional (2D) case and nonlinearly in the one-dimensional (1D) and three-dimensional (3D) cases (with positive time derivative in the 1D case and negative time derivative in the 3D case).

**Figure 4.** An element of 2D monocrystal with predefined distribution of defects.[35]

He had also applied the same technique earlier to determine and simulate the mechanical properties of polycrystals [36]. Recently, Steinhauser applied the molecular dynamics simulation technique to various forms of condensed matter [37]. He showed how the semiflexibility or stiffness of polymers can be included in the potentials describing the interactions of particles in proteins and biomolecules. For ceramics, he modeled the brittle failure behavior of a typical ceramic and explicitly simulated the set-up of corresponding high-speed impact experiments. It was shown that this multiscale particle model reproduces the macroscopic physics of shock wave propagation in brittle materials very well, while at the same time allowing for a resolution of the material on the microscale.

#### **2.4. Monte Carlo (MC) simulations**

Monte Carlo (MC) methods/simulations are a set of simulation techniques that rely on repeated random sampling to compute their results. They are often used in computer simulations of physical and mathematical systems, and also to complement theoretical derivations. Monte Carlo methods are especially useful for simulating systems with many coupled degrees of freedom, such as fluids, disordered materials, strongly coupled solids, and cellular structures. They are widely used in business (calculation of risk), mathematics (evaluation of multidimensional definite integrals), space exploration, and oil exploration (prediction of failures, cost overruns, and schedule overruns) [1, 38].
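
As a minimal illustration of repeated random sampling, the sketch below estimates the definite integral of x² over [0, 1], whose exact value is 1/3. The integrand, sample count, and fixed seed are illustrative assumptions; the seed merely makes the run reproducible.

```python
import random

def mc_integrate(f, a, b, n_samples=100_000, seed=42):
    """Estimate the integral of f over [a, b] by averaging f at
    uniformly drawn random points (mean-value Monte Carlo)."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n_samples))
    return (b - a) * total / n_samples

estimate = mc_integrate(lambda x: x * x, 0.0, 1.0)
```

The statistical error of such an estimate falls off as 1/√N regardless of dimension, which is what makes Monte Carlo attractive for the multidimensional integrals mentioned above, where grid-based quadrature becomes infeasible.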

Howell [39] explained in detail the use of the Monte Carlo method in radiative heat transfer problems. He used the method for computations of complex geometries, configurations, exchange factors, inverse design, packed beds, fiber layers, etc., and also explained the use of related algorithms (READ, REM, Markov chains, etc.). Similar studies were conducted by Zeeb [40] and Kersch [41]. Modest [42] used various implementations of the backward Monte Carlo method for problems with arbitrary radiation sources, his focus area being backward Monte Carlo simulation. He included small collimated beams, point sources, etc., in media of arbitrary optical thickness and solved the radiative heat transfer equation with specified internal source and boundary intensity.

Frijns et al. [43] used Monte Carlo simulation to discuss and solve problems of heat transfer in micro- and nanochannels. They proposed and utilized a combined algorithm of Monte Carlo and molecular dynamics simulation and argued for its effectiveness.

**Figure 5.** Schematic view of the coupling algorithm. Left: MD steps; right: MC steps. The particles that have been assigned to molecular dynamics have a light color, whereas the MC particles are dark [43]

The steps for performing the simulation are: (I) define an initial condition; (II) assign particles to the MD or MC part; (III) distribute them over the MD and MC codes; (IV) compute new positions and velocities; (V) update the particles in the buffer layer; (VI) start over with step III.
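
Steps I-VI can be sketched as a skeleton loop. All the names below (the particle container, the region test, the per-part update functions, and the buffer synchronization) are hypothetical placeholders, not the actual code of Frijns et al. [43].

```python
def coupled_md_mc(particles, n_iterations,
                  is_md_region, md_step, mc_step, sync_buffer):
    """Skeleton of the hybrid loop: steps I-VI of the coupling algorithm."""
    # I) the initial condition is given in `particles`
    # II) assign each particle to the MD or MC part
    md_part = [p for p in particles if is_md_region(p)]
    mc_part = [p for p in particles if not is_md_region(p)]
    for _ in range(n_iterations):
        # III-IV) distribute over the two codes and compute new states
        md_part = [md_step(p) for p in md_part]
        mc_part = [mc_step(p) for p in mc_part]
        # V) update the particles in the buffer layer; VI) repeat
        md_part, mc_part = sync_buffer(md_part, mc_part)
    return md_part + mc_part

# toy demonstration with trivial placeholder updates
demo = coupled_md_mc([0.0, 5.0], 3,
                     is_md_region=lambda p: p < 1.0,
                     md_step=lambda p: p + 1.0,
                     mc_step=lambda p: p - 1.0,
                     sync_buffer=lambda md, mc: (md, mc))
```

The buffer-layer synchronization (step V) is where the two descriptions exchange information; in a real implementation it would translate particle states between the MD and MC representations.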

An extensive use of Monte Carlo in gas flow problems is explained by Wang and co-workers [44, 45, 46]. They used direct simulation Monte Carlo (DSMC) to simulate gas flows in MEMS devices. They examined orifice and corner flow using modified DSMC codes and showed that the channel geometry significantly affects the micro gas flow [44]. For orifice flow, flow separation occurred at very small Reynolds numbers, while in corner flow, no flow separation occurred even with a high driving pressure. The results were found to be in good agreement with continuum theory and existing experimental data. In a later study, they used the same methods to discuss and solve the problem of gas mixing in microchannels [45]. Very high Knudsen numbers were used. The simulation results show that the wall characteristics have little effect on the mixing length. The mixing length is nearly inversely proportional to the gas temperature, and the dimensionless mixing coefficient is proportional to the Mach number and inversely proportional to the Knudsen number. They also extended the use of their codes to heat transfer and gas flow problems in vacuum-packaged MEMS devices [46], where the codes explained the heat transfer and gas flow behavior on chip surfaces well.

#### **2.5. Langevin dynamics**


Langevin dynamics is an approach to the mathematical modeling of the dynamics of molecular systems. The approach is characterized by the use of simplified models, while accounting for omitted degrees of freedom through stochastic differential equations [1]. In essence, the Langevin equation is a stochastic differential equation in which two force terms are added to Newton's second law to approximate the effects of neglected degrees of freedom: one term represents a frictional force, the other a *random* force [47]. These methods are used in biology, chemistry, engineering, etc., to formulate solutions of complex problems. Antonie [48] used LD methods to investigate the influence of confinement on protein folding. He used MATLAB to implement the equations developed using LD methods; both the model and its programming proved effective. A similar type of study was also conducted by Lange et al. [49].
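
The two added force terms can be sketched with a simple stochastic integrator for one particle in a harmonic well. All parameter values, and the semi-implicit Euler-type update, are illustrative assumptions, not the scheme of [48] or [49]; the noise amplitude follows the fluctuation-dissipation relation, so the sampled mean-square displacement should approach the equipartition value kT/k.

```python
import math
import random

def langevin(x=1.0, v=0.0, k=1.0, gamma=1.0, kT=0.5, m=1.0,
             dt=0.01, n_steps=50_000, seed=7):
    """Integrate m dv = (-k x - gamma m v) dt + random force,
    for an assumed harmonic potential U = k*x**2/2."""
    rng = random.Random(seed)
    # fluctuation-dissipation: noise amplitude per step
    sigma = math.sqrt(2.0 * gamma * kT / m * dt)
    positions = []
    for _ in range(n_steps):
        f = -k * x                                    # deterministic force
        v += (f / m - gamma * v) * dt                 # friction term
        v += sigma * rng.gauss(0.0, 1.0)              # random term
        x += v * dt
        positions.append(x)
    return positions

positions = langevin()
mean_x2 = sum(x * x for x in positions) / len(positions)
```

For these parameters, equipartition gives ⟨x²⟩ = kT/k = 0.5, and the long-run average from the trajectory should fluctuate around that value.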

Quigley [50] discussed the advantages of using LD in constant-pressure extended systems and showed it to be an effective technique for simulating the equilibrium isobaric–isothermal ensemble. They analyzed the canonical ensemble, the Hoover ensemble, and the Parrinello–Rahman ensemble, and showed that despite the presence of intrinsic probability gradients in this system, a Langevin dynamics approach samples the extended phase space in the correct fashion. Wu, Li, and Nies [51] applied the Langevin dynamics method to the problem of cross-linking into polymer networks. The commercially available software package GROMACS 4.0 was used for the simulations. Their study revealed that cross-linking is associated with effects such as changes in the thermodynamic stability of the reacting mixture or the presence of nanoparticles. This also facilitated the study of macromolecules.

**Figure 6.** "*Overlay of average neurotensin structures. The relative orientation of the structures minimizes the RMSD between the C \_ atoms. The green structure is obtained from state A, and the two yellow structures are obtained from state B. The parts of the side chains that were overly distorted due to the averaging were removed. The N terminus is oriented towards the upper right corner"*. [49]

#### **2.6. Normal mode (harmonic) analysis**

Normal mode (harmonic) analysis is a method of simulation in which the characteristic vibrations of an energy-minimized system and the corresponding frequencies are determined assuming its energy function is harmonic in all degrees of freedom. Normal mode analysis is less expensive than MD simulation, but requires much more memory [52]. It is extensively used in science and engineering to model, simulate, and solve engineering problems. Magyari [53] used this method to examine the convection model of fully developed flow in a differentially heated vertical slot with open to capped ends. He found that the method is quite transparent and has algebraic and computational efficiency. It is shown that the dimensionless temperature field and the velocity field scaled by the Grashof number are characterized by only two physical parameters, and that the capped slot is an ideal heat transfer device. Schuyler et al. [54] applied the same method to a C<sup>α</sup>-based elastic network model (C<sup>α</sup>-NMA) for protein analysis and "*present a new coarse grained rigid body based analysis (cluster NMA). This new cluster NMA represents a protein as a collection of rigid bodies interconnected with harmonic potentials. This produces reduced degree of freedom (DOF) equations of motion (EOMs), which even in the case of large structures enable the computation of normal modes to be done on a desktop PC"* [54]. This new cluster NMA proved to be very effective for protein analysis. A similar study by Hinson [55] in France showed that normal mode analysis is advantageous in that no sampling is required, calculations are fast, and it is simple to use. However, it suffers from the drawback of exhibiting inaccuracies in certain cases and is limited to single-well potentials, and thus offers no possibility to study conformational transitions explicitly.

**Figure 7.** Elastic network model [55]
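The essence of normal mode analysis can be shown on a toy system (our own example, unrelated to [53]–[55]): two equal masses joined to each other and to fixed walls by three identical springs. Diagonalizing the mass-weighted Hessian of the harmonic energy yields the characteristic frequencies directly, with no time integration or sampling, which is exactly why the method is fast.

```python
import math

def normal_modes_two_masses(k=1.0, m=1.0):
    """Normal mode analysis of two equal masses m connected to each other
    and to fixed walls by three identical springs of stiffness k.

    The mass-weighted Hessian is H = (1/m) * [[2k, -k], [-k, 2k]]; its
    eigenvalues are the squared angular frequencies of the in-phase and
    out-of-phase vibration modes."""
    a = 2.0 * k / m          # diagonal Hessian entry
    b = -k / m               # off-diagonal coupling entry
    # closed-form eigenvalues of the symmetric matrix [[a, b], [b, a]]
    lam_low, lam_high = a - abs(b), a + abs(b)
    return math.sqrt(lam_low), math.sqrt(lam_high)

w1, w2 = normal_modes_two_masses()  # sqrt(k/m) and sqrt(3 k/m)
```

For a real molecule the Hessian is the second-derivative matrix of the force field at the energy minimum, and its eigenvectors are the collective motions pictured in elastic network models such as Figure 7.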


#### **2.7. Simulated annealing**

*"Simulated annealing (SA) is a random-search technique which exploits an analogy between the way in which a metal cools and freezes into a minimum energy crystalline structure (the annealing process) and the search for a minimum in a more general system; it forms the basis of an optimization technique for combinatorial and other problems"* [56]. It has attracted significant attention as a method suited to large-scale optimization problems, especially those in which a desired global extremum is hidden among many poorer local extrema. The method has proved effective in solving problems such as the traveling salesman problem with N cities and the design of complex integrated circuits. In the latter case, it has proved effective in arranging several hundred thousand circuit elements on a tiny silicon substrate in an optimized way so as to avoid or minimize interference among their connecting wires. "*SA's major advantage over other methods is an ability to avoid becoming trapped in local minima. The algorithm employs a random search which not only accepts changes that decrease the objective function (assuming a minimization problem), but also some changes that increase it"* [57].
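The acceptance rule quoted from [57] can be sketched in a few lines (our own minimal implementation and test function, not taken from the references): uphill moves are accepted with probability exp(-Δ/T), so the search can climb out of local minima while the "temperature" T slowly cools. A one-dimensional Rastrigin function, whose many local minima would trap a pure descent method, serves as the objective.

```python
import math
import random

def rastrigin(x):
    # 1-D Rastrigin function: many local minima, global minimum f(0) = 0
    return x * x - 10.0 * math.cos(2.0 * math.pi * x) + 10.0

def simulated_annealing(f, x0=4.5, T0=10.0, cooling=0.999,
                        steps=20_000, seed=3):
    """Basic simulated annealing with a geometric cooling schedule.

    Downhill moves are always accepted; uphill moves are accepted with
    probability exp(-delta / T), which lets the search escape local
    minima early on, while cooling gradually freezes it into a minimum."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    T = T0
    for _ in range(steps):
        cand = x + rng.uniform(-1.0, 1.0)   # random neighboring state
        fc = f(cand)
        delta = fc - fx
        if delta <= 0 or rng.random() < math.exp(-delta / T):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        T *= cooling                        # geometric cooling
    return best_x, best_f

x_best, f_best = simulated_annealing(rastrigin)
```

Starting from x = 4.5, several local minima away from the optimum, the annealer still settles near the global minimum at x = 0; a greedy rule that only accepted downhill moves would typically freeze in whichever basin it started in.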
