**A Simulated Annealing Algorithm for the Satisfiability Problem Using Dynamic Markov Chains with Linear Regression Equilibrium**

Felix Martinez-Rios and Juan Frausto-Solis

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/46175

## **1. Introduction**


Since its appearance, the Simulated Annealing algorithm has shown itself to be an efficient method for solving combinatorial optimization problems such as the Boolean Satisfiability problem. New algorithms based on two cycles have emerged: an external cycle for temperatures and an internal one, named Metropolis. These algorithms usually use the same Markov chain length in the Metropolis cycle for each temperature. In this chapter we propose a method based on linear regression to find the Metropolis equilibrium. Experimentation shows that the proposed method is more efficient than the classical one, since it obtains the same quality of final solution with less processing time.

Today there is considerable interest in developing new and efficient algorithms to solve hard problems, mainly those considered in complexity theory (NP-complete or NP-hard) [8]. The Simulated Annealing algorithm proposed by Kirkpatrick et al. [18] and Cerny [5, 6] is an extension of the Metropolis algorithm [23] used for the simulation of the physical annealing process, and is especially applied to NP-hard problems where it is very difficult to find the optimal solution or even near-to-optimum solutions.

Efficiency and efficacy are given to the Simulated Annealing algorithm by the cooling scheme, which consists of the initial (*ci*) and final (*cf*) temperatures, the cooling function (*f*(*ck*)) and the length of the Markov chain (*Lk*) established by the Metropolis algorithm. For each value of the control parameter (*ck*) (temperature), the Simulated Annealing algorithm accomplishes a certain number of Metropolis decisions. In this regard, in order to get better performance from the Simulated Annealing algorithm, a relation between the temperature and Metropolis cycles may be enacted [13].

The Simulated Annealing algorithm can get optimal solutions in an efficient way only if its cooling scheme parameters are correctly tuned. Due to this, experimental and analytical parameter-tuning strategies are currently being studied; one of them, known as ANDYMARK [13], is an analytical method that has been shown to be more efficient. The objective of these methods is to find better ways to reduce the required computational resources and to increase the quality of the final solution. This is done by applying different accelerating techniques such as variations of the cooling scheme [3, 27], variations of the neighborhood scheme [26], and parallelization techniques [12, 26].

© 2012 Martinez-Rios and Frausto-Solis, licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


In this chapter an analytic adaptive method is presented that establishes the length of each Markov chain dynamically for the Simulated Annealing algorithm; the method determines equilibrium in the Metropolis cycle using a Linear Regression Method (LRM). LRM is applied to solve satisfiability problem instances and is compared against the classical ANDYMARK tuning method.
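The details of the LRM equilibrium test are developed later in the chapter; as a rough illustration of the idea only (our own sketch, not the authors' exact formulation), one can fit an ordinary least-squares line to the cost values observed in the current Metropolis cycle and declare equilibrium when the fitted slope is close to zero. The tolerance `tol` is an assumed parameter:

```python
def near_equilibrium(costs, tol=1e-3):
    """Fit a least-squares line to recent cost samples and report whether its
    slope is approximately zero -- an illustrative stand-in for a
    linear-regression equilibrium test (tol is an assumed knob)."""
    n = len(costs)                      # needs n >= 2 samples
    xs = range(n)
    mean_x = (n - 1) / 2.0
    mean_y = sum(costs) / n
    # Ordinary least squares: slope = cov(x, y) / var(x).
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, costs))
    var = sum((x - mean_x) ** 2 for x in xs)
    return abs(cov / var) < tol

print(near_equilibrium([9, 7, 5, 3, 1]))   # steep downward trend -> False
print(near_equilibrium([4, 4, 4, 4, 4]))   # flat -> True
```

A still-falling cost trend means the chain has not equilibrated, so the Metropolis cycle continues; a flat trend lets the algorithm cut the chain short instead of running a fixed length.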

## **2. Background**

In complexity theory, the satisfiability problem is a decision problem. The question is: given a Boolean expression, is there some assignment of TRUE and FALSE values to the variables that makes the entire expression true? A formula of propositional logic is said to be satisfiable if logical values can be assigned to its variables in a way that makes the formula true.

The propositional satisfiability problem, which decides whether a given propositional formula is satisfiable, is of critical importance in various areas of computer science, including theoretical computer science, algorithmics, artificial intelligence, hardware design, electronic design automation, and verification. The satisfiability problem was the first problem proven to be NP-complete [7] and is fundamental to the analysis of the computational complexity of many problems [28].

## **2.1. Boolean satisfiability problem (SAT)**

An instance of SAT is a Boolean formula which consists of the following components:

• A set *S* of *n* variables: *x*1, *x*2, *x*3, ..., *xn*.

• A set *L* of literals; a literal *li* is a variable *xi* or its negation *x̄i*.

• A set of *m* clauses: *C*1, *C*2, *C*3, ..., *Cm*, where each clause consists of literals *li* linked by the logical connective OR (∨).

This is:

$$\Phi = C_1 \land C_2 \land C_3 \land \dots \land C_m \tag{1}$$

where Φ, in Equation 1, is the SAT instance. Then we can state the SAT problem as follows:

**Definition 1.** *Given a finite set* {*C*1, *C*2, *C*3,..., *Cm*} *of clauses, determine whether there is an assignment of truth-values to the literals appearing in the clauses which makes all the clauses true.*
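For illustration (not part of the chapter), Definition 1 can be decided for tiny instances by exhaustive search over all truth assignments. The integer clause encoding below follows the common DIMACS convention (literal *k* means *xk*, −*k* its negation) and is an assumption of this sketch:

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Decide Definition 1 by exhaustive search over all 2^n assignments.
    clauses: list of clauses, each a list of non-zero ints (DIMACS-style)."""
    for bits in product([False, True], repeat=n_vars):
        # A literal k is true when bits[|k|-1] matches its sign.
        if all(any((lit > 0) == bits[abs(lit) - 1] for lit in clause)
               for clause in clauses):
            return True
    return False

# (x1 OR ~x2) AND (~x1 OR x2) is satisfiable (e.g. x1 = x2 = True),
# while (x1) AND (~x1) is not.
print(satisfiable([[1, -2], [-1, 2]], 2))   # True
print(satisfiable([[1], [-1]], 1))          # False
```

The exponential loop over assignments is exactly why the worst case is hard; the heuristics discussed next avoid it.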

NP-completeness of the SAT problem refers only to the running time on worst-case instances. Many of the instances that occur in practical applications can be solved much faster. For example, SAT is easier if the formulas are restricted to those in disjunctive normal form, that is, disjunctions (OR) of terms, where each term is a conjunction (AND) of literals. Such a formula is satisfiable if and only if at least one of its terms is satisfiable, and a term is satisfiable if and only if it does not contain both *x* and *x̄* for some variable *x*; this can be checked in polynomial time.

SAT is also easier if the number of literals in a clause is limited to 2, in which case the problem is called 2-SAT; this problem can also be solved in polynomial time [2, 10]. One of the most important restrictions of SAT is HORN-SAT, where the formula is a conjunction of Horn clauses (a Horn clause is a clause with at most one positive literal). This problem is solved by the polynomial-time Horn-satisfiability algorithm [9].

The 3-satisfiability problem (3-SAT) is the special case of k-satisfiability (k-SAT) in which each clause contains exactly *k* = 3 literals. 3-SAT is NP-complete and is used as a starting point for proving that other problems are also NP-hard [31]. This is done by a polynomial-time reduction from 3-SAT to the other problem [28].

## **3. Simulated Annealing algorithm**

Simulated Annealing improves on simple local search through the introduction of two elements. The first is the Metropolis algorithm [23], in which some states that do not improve the energy are accepted, since they allow the solver to explore more of the space of possible solutions. Such "bad" states are allowed using the Boltzmann criterion: *e*−Δ*J*/*T* > *rnd*(0, 1), where Δ*J* is the change of energy, *T* is a temperature, and *rnd*(0, 1) is a random number in the interval [0, 1). *J* is called a cost function and corresponds to the free energy in the case of annealing a metal. If *T* is large, many "bad" states are accepted, and a large part of the solution space is accessed.

The second is, again by analogy with annealing of a metal, to lower the temperature. After visiting many states and observing that the cost function declines only slowly, one lowers the temperature, and thus limits the size of allowed "bad" states. After lowering the temperature several times to a low value, one may then "quench" the process by accepting only "good" states in order to find the local minimum of the cost function.

The elements of Simulated Annealing are:

• A finite set *S*.

• An initial state *x*(0) ∈ *S*.

• A cost function *J* defined on *S*. Let *S*∗ ⊂ *S* be the set of global minima of *J*.

• For each *i* ∈ *S*, a set *S*(*i*) ⊂ *S* − {*i*}, called the set of neighbors of *i*.

• For every *i*, a collection of positive coefficients *qij*, *j* ∈ *S*(*i*), such that ∑*j*∈*S*(*i*) *qij* = 1. It is assumed that *j* ∈ *S*(*i*) if and only if *i* ∈ *S*(*j*).

• A nonincreasing function *T* : *N* → (0, ∞), called the cooling schedule. Here *N* is the set of positive integers, and *T*(*t*) is called the temperature at time *t*.
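A minimal sketch of how these elements fit together for SAT (our own illustration, not the chapter's tuned algorithm): *S* = {0, 1}*n*, *J* counts unsatisfied clauses, *S*(*i*) consists of single-bit flips chosen with uniform *qij*, and a geometric cooling schedule is an assumed choice:

```python
import math
import random

def simulated_annealing(clauses, n_vars, t_init=10.0, t_final=0.01,
                        alpha=0.95, chain_len=50, seed=1):
    """Sketch of the elements above for SAT: S = {0,1}^n, J = number of
    unsatisfied clauses, neighbors = single-bit flips (uniform q_ij), and an
    assumed geometric cooling schedule T_{k+1} = alpha * T_k."""
    rnd = random.Random(seed)

    def cost(assign):
        # J: count clauses with no true literal (DIMACS-style integers).
        return sum(not any((lit > 0) == assign[abs(lit) - 1] for lit in clause)
                   for clause in clauses)

    x = [rnd.random() < 0.5 for _ in range(n_vars)]    # initial state x(0)
    c = cost(x)
    t = t_init
    while t > t_final:
        for _ in range(chain_len):                     # fixed-length Markov chain
            j = rnd.randrange(n_vars)                  # uniform neighbor choice
            x[j] = not x[j]
            delta = cost(x) - c
            # Boltzmann criterion: accept improvements, and deteriorations
            # with probability e^(-delta/T).
            if delta <= 0 or math.exp(-delta / t) > rnd.random():
                c += delta
            else:
                x[j] = not x[j]                        # reject: undo the flip
        t *= alpha                                     # cooling function f(T_k)
    return x, c

# (x1 OR ~x2) AND (~x1 OR x2) is satisfied exactly when x1 == x2.
assignment, unsat = simulated_annealing([[1, -2], [-1, 2]], 2)
```

The fixed `chain_len` per temperature is precisely what the chapter's LRM proposal replaces with a dynamically determined Markov chain length.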

The Simulated Annealing algorithm consists of a discrete-time inhomogeneous Markov chain *x*(*t*) [4]. If the current state *x*(*t*) is equal to *i*, choose a neighbor *j* of *i* at random; the probability


that any particular *j* ∈ *S*(*i*) is selected is equal to *qij*. Once *j* is chosen, the next state *x*(*t* + 1) is determined as follows:

$$\begin{array}{l} \text{if } J(j) \le J(i) \text{ then } x(t+1) = j\\ \text{else}\\ \quad x(t+1) = j \text{ with probability } e^{-(J(j) - J(i))/T(t)}\\ \quad \text{otherwise } x(t+1) = i \end{array} \tag{2}$$

In a formal way:

$$P\left[x(t+1)=j \,\middle|\, x(t)=i\right]=\begin{cases} q_{ij}\, e^{-\frac{1}{T(t)}\max\{0,\,J(j)-J(i)\}} & j\neq i,\ j\in S(i)\\ 0 & j\neq i,\ j\notin S(i)\end{cases}\tag{3}$$

In the Simulated Annealing algorithm we consider a homogeneous Markov chain *xT*(*t*) whose temperature *T*(*t*) is held at a constant value *T*. Let us assume that the Markov chain *xT*(*t*) is irreducible and aperiodic and that *qij* = *qji* ∀*i*, *j*; then *xT*(*t*) is a reversible Markov chain, and its invariant probability distribution is given by:

$$
\pi_T(i) = \frac{1}{Z_T}\, e^{-\frac{J(i)}{T}} \tag{4}
$$


In Equation 4, *ZT* is a normalizing constant, and it is evident that as *T* → 0 the probability *πT* is concentrated on the set *S*∗ of global minima of *J*; this property remains valid if the condition *qij* = *qji* is relaxed [11].

In the optimization context we can generate an optimal element with high probability if we produce a random sample according to the distribution *πT*, known as the Gibbs distribution. Generating an element of *S* by simulating the Markov chain *xT*(*t*) until it reaches equilibrium is exactly the Metropolis algorithm [23].

The Simulated Annealing algorithm can also be viewed as a local search method that occasionally moves to higher values of the cost function *J*; these moves help Simulated Annealing escape from local minima. A proof of convergence of the Simulated Annealing algorithm can be found in [4].

#### **3.1. Traditional Simulated Annealing algorithms**

Figure 1 shows the classic Simulated Annealing algorithm. In the algorithm we can see the temperature cycle between steps 2 and 5. Within this temperature cycle are steps 3 and 4, which correspond to the Metropolis algorithm.

As described in the Simulated Annealing algorithm, the Metropolis cycle is repeated until thermal equilibrium is reached. We now use the formalism of Markov chains to estimate how many times it is necessary to repeat the Metropolis cycle so that we ensure (with some probability) that all solutions of the search space are explored.

Similarly we can estimate very good values for the initial and final temperatures of the temperature cycle. All these estimates are made prior to running the Simulated Annealing algorithm, using information from the SAT instance being solved.

```
1  Initializing:
       Initial solution Si
       Initial and final temperatures Ti and Tf
       T = Ti
2  Temperatures cycle:
3  Metropolis cycle:
       Generate Sj from Si
       dif = J(Sj) - J(Si)
       If dif < 0 then
           Si = Sj
       else if e^(-dif/T) > rnd(0,1) then    (Metropolis condition)
           Si = Sj
4  If thermal equilibrium is reached
       goto 5
   Else
       goto 3
5  Stop criterion:
       If the final temperature Tf is reached
           End
       Else
           Update T
           goto 2
```

**Figure 1.** Simulated Annealing algorithm

4 Will-be-set-by-IN-TECH

that any particular *j* ∈ *S*(*i*) is selectec is equal to *qij*. Once *j* is chosen, the next state *x*(*t* + 1) is

In Simulated Annealing algorithm we are considering a homogeneus Markov chain *xT*(*t*) wich temperature *T*(*t*) is held at a constant value *T*. Let us assume that the Markov chain *xT*(*t*) is irreducible and aperiodic and that *qij* = *xji*∀*i*, *j*, then *xT*(*t*) is a reversible Markov

> *ZT e* <sup>−</sup> *<sup>J</sup>*(*i*) *T*

In Equation 4 *ZT* is a normalized constant and is evident that as *T* → 0 the probability *π<sup>T</sup>* is concentrate on the set *S*∗ of global minima of *J*, this property remains valid if the condition

In the optimization context we can generate an optimal element with high probability if we produce a random sample according to the distribution *πT*, known as the Gibbs distribution. When is generated an element of *S* accomplished by simulating Markov chain *xT*(*t*) until it

The Simulated Annealing algorithm can also be viewed as a local search method occasionally moves to higher values of the cost function *J*, this moves will help to Simulated Annealing escape from local minima. Proof of convergence of Simulated Annealing algorithm can be

Figure 1 shows the classic algorithm simulated annealing. In the algorithm, we can see the cycle of temperatures between steps 2 and 5. Within this temperature cycle, are the steps 3

As described in the simulated annealing algorithm, Metropolis cycle is repeated until thermal equilibrium is reached, now we use the formalism of Markov chains to estimate how many times it is necessary to repeat the cycle metropolis of so that we ensure (with some probability)

Similarly we can estimate a very good value for the initial and final temperature of the temperature cycle. All these estimates were made prior to running the simulated annealing

*x*(*t* + 1) = *i*

 *qije* <sup>−</sup> <sup>1</sup>

*<sup>π</sup>T*(*i*) = <sup>1</sup>

*x*(*t* + 1) = *j* with probability *e*−(*J*(*j*)−*J*(*i*))/*T*(*t*)

*<sup>T</sup>*(*t*) *max*{0,*J*(*j*)−*J*(*i*)}

*j* �= *i*, *j* ∈ *S*(*i*) <sup>0</sup> *<sup>j</sup>* �<sup>=</sup> *<sup>i</sup>*, *<sup>j</sup>* <sup>∈</sup>/ *<sup>S</sup>*(*i*) (3)

(2)

(4)

if *J*(*j*) ≤ *J*(*i*) then *x*(*t* + 1) = *j*

if *J*(*j*) ≤ *J*(*i*) then

else

*P* [*x*(*t* + 1) = *j*|*x*(*t*) = *i*] =

chain, and its invariant probability distribution is given by:

reaches equilibrium we have a Metropolis algorithm [23].

**3.1. Traditional Simulated Annealing algorithms**

and 4 which correspond to the Metropolis algorithm.

that all solutions of the search space are explored.

algorithm, using data information SAT problem is solved.

determined as follows:

In a formal way:

*qij* = *qji* is relaxed [11].

revised [4].

It is well known that Simulated Annealing requires a well-defined neighborhood structure and other parameters such as the initial and final temperatures *Ti* and *Tf*. In order to determine these parameters we follow the method proposed in [30]; following the analysis made in [30], we give the basis of this method.

Let *PA*(*Sj*) be the probability of accepting a proposed solution *Sj* generated from the current solution *Si*, and *PR*(*Sj*) the probability of rejecting it. The probability of rejecting *Sj* can be established in terms of *PA*(*Sj*) as follows:

$$P_R(S_j) = 1 - P_A(S_j) \tag{5}$$

Accepting or rejecting *Sj* depends only on the size of the cost deterioration that this change would produce in the current solution; that is:

$$P_A(S_j) = g[J(S_i) - J(S_j)] = g(\Delta J_{ij}) \tag{6}$$

In Equation 6, *J*(*Si*) and *J*(*Sj*) are the costs associated with *Si* and *Sj* respectively, and *g*(Δ*Jij*) is the probability of accepting the cost difference Δ*Jij* = *J*(*Si*) − *J*(*Sj*).

The solution selected from *Si* may be any solution *Sj* defined by the following neighborhood scheme:

**Definition 2.** *Let* {∀*Si* ∈ *S*, ∃ *a set VSi* ⊂ *S*|*VSi* = *V* : *S* −→ *S*} *be the neighborhood of a solution Si, where VSi is the neighborhood set of Si, V* : *S* −→ *S is a mapping and S is the solution space of the problem being solved.*

It can be seen from Definition 2 that the neighbors of a solution *Si* depend only on the neighborhood structure *V* established for a specific problem. Once *V* is defined, the maximum and minimum cost deteriorations can be written as:

$$
\Delta J_{V_{\max}} = \max[J(S_i) - J(S_j)],\ \forall S_j \in V_{S_i},\ \forall S_i \in S \tag{7}
$$

$$
\Delta J_{V_{\min}} = \min[J(S_i) - J(S_j)],\ \forall S_j \in V_{S_i},\ \forall S_i \in S \tag{8}
$$

where Δ*JV*max and Δ*JV*min are the maximum and minimum cost deteriorations of the objective function through *J* respectively.
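Since Equations 7 and 8 range over the whole solution space *S*, in practice the extrema can only be estimated. The following sketch (our own assumption, not the chapter's estimation procedure) samples random SAT assignments and enumerates their single-flip neighborhoods:

```python
import random

def estimate_deterioration(clauses, n_vars, samples=200, seed=0):
    """Empirically estimate Delta_J_Vmax and Delta_J_Vmin of Equations 7-8 by
    sampling random states S_i and enumerating the single-flip neighborhood
    V_{S_i}; sampling the state space is an assumption of this sketch."""
    rnd = random.Random(seed)

    def cost(assign):
        # J: number of unsatisfied clauses (DIMACS-style integer literals).
        return sum(not any((lit > 0) == assign[abs(lit) - 1] for lit in clause)
                   for clause in clauses)

    diffs = []
    for _ in range(samples):
        si = [rnd.random() < 0.5 for _ in range(n_vars)]
        ci = cost(si)
        for j in range(n_vars):            # each neighbor S_j in V_{S_i}
            si[j] = not si[j]
            diffs.append(ci - cost(si))    # J(S_i) - J(S_j)
            si[j] = not si[j]
    return max(diffs), min(diffs)
```

For the toy formula (x1) AND (x1) with one variable, flipping the variable changes the cost by ±2, so the estimator returns (2, −2).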

#### **3.2. Markov Chains and Cooling Function**

The Simulated Annealing algorithm can be seen as a sequence of homogeneous Markov chains, where each Markov chain is constructed for descending values of the control parameter *T* > 0 [1]. The control parameter is set by a cooling function such as:

$$T\_{k+1} = f(T\_k) \tag{9}$$


and *Tk* must satisfy the following property:

$$\lim_{k \to \infty} T_k = 0, \qquad T_k \ge T_{k+1} \quad \forall k \ge 1 \tag{10}$$

At the beginning of the process *Tk* has a high value and the probability to accept one proposed solution is high. When *Tk* decreases this probability also decreases and only good solutions are accepted at the end of the process. In this regard every Markov chain makes a stochastic walk in the solution space until the stationary distribution is reached. Then a strong relation between the Markov chain length (*Lk*) and the cooling speed of Simulated Annealing exists: when *Tk* → ∞, *Lk* → 0 and when *Tk* → 0, *Lk* → ∞.
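One common choice of cooling function satisfying Equations 9 and 10 is geometric cooling (an assumed example; the factor α below is not mandated by the text):

```python
def geometric_cooling(t_init, alpha=0.85, t_final=1e-3):
    """Yield the sequence T, alpha*T, alpha^2*T, ... (Equation 9 with
    f(T_k) = alpha*T_k for an assumed 0 < alpha < 1), stopping near t_final.
    The sequence is nonincreasing and tends to 0, as Equation 10 requires."""
    t = t_init
    while t > t_final:
        yield t
        t *= alpha

temps = list(geometric_cooling(10.0))
# Strictly decreasing, starting at t_init and ending just above t_final.
assert temps[0] == 10.0
assert all(a > b for a, b in zip(temps, temps[1:]))
```

Smaller α cools faster (fewer Markov chains) at the risk of freezing into a local minimum; α close to 1 matches the slow cooling that the convergence theory assumes.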

Because the Markov chains are built through a neighborhood sampling method, the maximum number of different solutions rejected at *Tf*, when the current solution *Si* is the optimal one, is the neighborhood size |*VSi*|. In this regard the maximum Markov chain length is a function of |*VSi*|. In general *Lk* can be established as:

$$L_k \le L_{\max} = g(|V_{S_i}|) \tag{11}$$

In Equation 11, *L*max is the Markov chain length when *Tk* = *Tf* , and *g*(|*VSi* |) is a function that gives the maximum number of samples that must be taken from the neighborhood *VSi* in order to evaluate an expected fraction of different solutions at *Tf* . The value of *L*max only depends on the number of elements of *VSi* that will be explored at *Tf* .

Usually a Simulated Annealing algorithm uses a uniform probability distribution function *G*(*Tk*), given by a random replacement sampling method, to explore *VSi* at any temperature *Tk*, where *G*(*Tk*) is established as follows:


$$G(T\_k) = \begin{cases} \frac{1}{|V\_{S\_i}|} & \forall S\_j \in V\_{S\_i} \\ 0 & \forall S\_j \notin V\_{S\_i} \end{cases} \tag{12}$$

In this regard, the probability to get the solution *Sj* in *N* samples is:

$$P\_A(S\_j) = 1 - e^{-\frac{N}{|V\_{S\_i}|}} \tag{13}$$

Notice in Equation 13 that *PA*(*Sj*) may be understood as the expected fraction of different solutions obtained when *N* samples are taken. From Equation 13, *N* can be obtained as:

$$N = -\ln(1 - P\_A(S\_j))\, |V\_{S\_i}| \tag{14}$$

In Equation 14, we define:

$$C = -\ln(1 - P\_A(S\_j)) = -\ln(P\_R(S\_j)) \tag{15}$$

Notice that *PR*(*Sj*) = 1 − *PA*(*Sj*) is the rejection probability. The constant *C* establishes the level of exploration to be done; in this way, different levels of exploration can be applied. For example, if 99% of the solution space is going to be explored, the rejection probability will be *PR*(*Sj*) = 0.01, so from Equation 15 we obtain *C* = 4.60.

**Definition 3.** *The exploration set of the search space,* Φ*C, is defined as follows:*


Then in any Simulated Annealing algorithm the maximum Markov chain length (when *Tk* = *Tf* ) may be set as:

$$L\_{\max} = N = C\,|V\_{S\_i}| \tag{16}$$

Because a high percentage of the solution space should be explored, *C* varies in the range 1 ≤ *C* ≤ 4.6, which guarantees a good level of exploration of the neighborhood at *Tf*.
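The relation among the rejection probability, the constant *C* and *L*max can be checked numerically. The snippet below is an illustrative sketch (the neighborhood size is a made-up value, not from the chapter): it computes *C* from a chosen *PR* via Equation 15, *L*max via Equation 16, and verifies Equation 13 by sampling uniformly with replacement:

```python
import math
import random

def exploration_constant(p_reject):
    # Equation 15: C = -ln(P_R(Sj))
    return -math.log(p_reject)

def max_chain_length(neigh_size, p_reject=0.01):
    # Equation 16: L_max = C * |V_Si|
    return exploration_constant(p_reject) * neigh_size

neigh_size = 1000                 # |V_Si|, illustrative value
C = exploration_constant(0.01)    # about 4.6 for 99% exploration
L_max = max_chain_length(neigh_size)

# Empirical check of Equation 13: fraction of distinct neighbors
# actually seen after N uniform samples with replacement.
random.seed(1)
N = int(L_max)
seen = {random.randrange(neigh_size) for _ in range(N)}
print(len(seen) / neigh_size)     # close to 1 - exp(-N/|V_Si|) ≈ 0.99
```

As expected, taking *C*|*VSi*| samples with replacement visits roughly 99% of the neighborhood.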

When the process is at the beginning, the temperature *Ti* is very high. This is because in the Boltzmann distribution the acceptance probability is directly related to the cost increment, *PA* = *e*−(Δ*J*/*Tk*), where *Tk* is the temperature parameter; therefore:

$$T\_k = -\frac{\Delta J}{\ln(P\_A)} \tag{17}$$

At the beginning of the process, *PA* is close to one (normally 0.99 [21]) and the temperature is extremely high. Almost any solution is accepted at this temperature; as a consequence, the stochastic equilibrium of a Markov cycle is reached with the first guess solution. Similarly, when the process is ending, the acceptance probability is very small (typically 0.01) and the temperature is close to zero, but the Metropolis cycle is very long.

For SAT instances, the values of Δ*JV*max and Δ*JV*min, the maximum and minimum changes in the energy between states, can be estimated at the beginning of the execution of the simulated annealing algorithm. To estimate these values, we can count the maximum number of clauses containing any of the variables of the problem; the largest number of clauses that can change when we flip the value of a variable is an upper bound on the maximum energy change, and:

$$T\_i = -\frac{\Delta J\_{V\_{\max}}}{\ln(P\_A)} = -\frac{\text{max number of clauses}}{\ln(0.99)} \tag{18}$$


Similarly, the minimum energy change can be estimated by counting the clauses that change when creating a new neighbor and taking the lowest of these values:

$$T\_f = -\frac{\Delta J\_{V\_{\min}}}{\ln(P\_A)} = -\frac{\text{min number of clauses}}{\ln(0.01)} \tag{19}$$
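For a concrete CNF instance, Equations 18 and 19 reduce to counting clause memberships per variable. The following sketch (a toy DIMACS-style clause list, purely illustrative; not the authors' code) estimates both temperatures:

```python
import math
from collections import Counter

def temperature_bounds(clauses, p_accept_hot=0.99, p_accept_cold=0.01):
    """Estimate Ti and Tf (Equations 18 and 19) for a CNF instance.

    `clauses` is a list of clauses, each a list of signed integers
    (DIMACS style). Flipping a variable can change at most the number
    of clauses that contain it, which bounds the energy change.
    """
    counts = Counter(v for clause in clauses
                     for v in {abs(lit) for lit in clause})
    delta_max = max(counts.values())  # max clauses containing one variable
    delta_min = min(counts.values())  # min clauses containing one variable
    t_initial = -delta_max / math.log(p_accept_hot)
    t_final = -delta_min / math.log(p_accept_cold)
    return t_initial, t_final

# Toy 3-SAT instance (hypothetical, for illustration only)
cnf = [[1, -2, 3], [-1, 2, 4], [2, -3, -4], [1, 3, 4]]
ti, tf = temperature_bounds(cnf)
print(ti > tf)  # the hot temperature dominates the cold one
```

Because ln(0.99) is tiny in magnitude, *Ti* comes out large, while ln(0.01) makes *Tf* small, matching the discussion above.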

A frequent criticism of Simulated Annealing is the long execution time of the standard Boltzmann-type algorithm, which has often driven projects to use a temperature schedule too fast to satisfy the sufficiency conditions required to establish a true ergodic search. In this chapter we use an exponential temperature schedule that is consistent with the Boltzmann algorithm, as follows:

$$T\_k = T\_0 e^{(\alpha - 1)k}, \quad 0 < \alpha < 1 \tag{20}$$

From Equation 20 we can obtain:

$$\frac{\Delta T}{\Delta k} = T\_k(\alpha - 1), \quad k \gg 1 \tag{21}$$

and

$$\Delta T = T\_k(\alpha - 1)\Delta k, \quad k \gg 1 \tag{22}$$

If in the previous expression Δ*k* equals 1, we obtain the equation for two successive values of the temperature:

$$T\_{k+1} = \alpha T\_k, \quad 0 < \alpha < 1, \ k \gg 1 \tag{23}$$

where *Tk* is the "temperature," *k* is the "time" index of annealing [16, 17].

#### **3.3. Simulated Annealing algorithm with dynamic Markov chain length**

In [13, 20, 21] the authors show that a strong relation between the cooling function and the length of the Markov chain exists. For the Simulated Annealing algorithm, the stationary distribution for each Markov chain is given by the Boltzmann probability distribution, which is a family of curves that vary from a uniform distribution to a pulse function.

At the very beginning of the process (with *Tk* = *Ti*), Simulated Annealing has a uniform distribution, hence any guess would be accepted as a solution. Besides, any neighbor of the current solution is also accepted as a new solution. In this way, when Simulated Annealing is just at the beginning, the Markov chain length is really small, *Lk* = *Li* ≈ 1. When running the temperature cycle of simulated annealing, for values of *k* greater than 1, the value of *Tk* is decremented by the cooling function [16], until the final temperature is reached (*Tk* = *Tf*):

$$T\_{k+1} = \alpha T\_k \tag{24}$$

In Equation 24, *α* is normally in the range [0.7, 0.99] [1].

In this regard, the length of each Markov chain must be incremented at every temperature cycle, in a similar but inverse way to how *Tk* is decremented. This means that *Lk* must be incremented until *Lmax* is reached at *Tf* by applying a Markov chain increment factor (*β*). The cooling function given by Equation 24 is applied many times until the final temperature *Tf* is reached. Because the Metropolis cycle is finished when the stochastic equilibrium is reached, it can also be modeled as a Markov chain as follows:

$$L\_{k+1} = \beta L\_k \tag{25}$$

In Equation 25, *Lk* represents the length of the current Markov chain at a given temperature, that is, the number of iterations of the Metropolis cycle for temperature *Tk*, and *Lk*+1 represents the length of the next Markov chain. In this Markov model, *β* produces an increment of the number of iterations in the next Metropolis cycle.

If the cooling function given by Equation 24 is applied over and over, *n* times, until *Tk* = *Tf* , the following geometric function is obtained:

$$T\_f = \alpha^n T\_i \tag{26}$$

Knowing the initial (*Ti*) and the final (*Tf* ) temperature and the cooling coefficient (*α*), the number of times that the Metropolis cycle is executed can be calculated as:

$$n = \frac{\ln T\_f - \ln T\_i}{\ln \alpha} \tag{27}$$

If we make a similar process for the increment equation of the Markov chain length, another geometric function is obtained:

$$L\_{\text{max}} = \beta^n L\_1 \tag{28}$$

Once *n* is known by Equation 27, the value of the increment coefficient (*β*) is calculated as:

$$\beta = e^{\left(\frac{\ln L\_{\max} - \ln L\_1}{n}\right)} \tag{29}$$

Once *Lmax* (calculated from Equation 16), *L*<sup>1</sup> and *β* are known, the length of each Markov chain for each temperature cycle can be calculated using Equation 25. In this way *Lk* is computed dynamically from *L*<sup>1</sup> = 1 at *Ti* until *Lmax* at *Tf*. First the algorithm obtains *Ti* from Equation 18 and *Tf* from Equation 19; with both values, *n* is obtained from Equation 27, and Equation 29 then gives *β* [30].

In Figure 2 we can see the modifications of the simulated annealing algorithm using the Markov chains described above. Below we explain how we use linear regression to make the simulated annealing algorithm run more efficiently without losing quality in the solution.
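Equations 26-29 can be wired together in a few lines. The sketch below (with illustrative values for *Ti*, *Tf*, *α* and *L*max; not the authors' code) computes the number of temperature steps *n*, the increment coefficient *β*, and the resulting schedule of chain lengths:

```python
import math

def markov_schedule(t_initial, t_final, alpha, l_max, l_1=1.0):
    # Equation 27: number of times the Metropolis cycle is executed
    n = (math.log(t_final) - math.log(t_initial)) / math.log(alpha)
    # Equation 29: increment coefficient for the chain length
    beta = math.exp((math.log(l_max) - math.log(l_1)) / n)
    # Equations 24 and 25: cooling and chain-length growth, side by side
    temps, lengths = [t_initial], [l_1]
    while temps[-1] > t_final:
        temps.append(alpha * temps[-1])
        lengths.append(beta * lengths[-1])
    return n, beta, temps, lengths

# Illustrative values only (e.g. Ti/Tf as estimated by Eqs. 18 and 19)
n, beta, temps, lengths = markov_schedule(
    t_initial=300.0, t_final=0.65, alpha=0.95, l_max=4605.0)
print(round(n), round(lengths[-1]))  # steps needed; final length near l_max
```

The final chain length lands close to *L*max at *Tf*, as Equation 28 predicts (it can overshoot slightly because the loop runs an integer number of steps).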

#### **4. Linear Regression Method (LRM)**


In Section 3.2 we explained how to estimate the initial and final temperatures for the SAT instances provided to the simulated annealing algorithm to determine whether they are satisfiable or not.

As shown in Figure 3, during the Metropolis cycle the algorithm finds various configurations with different energies at a given temperature.

The typical behavior of the energy for a given temperature can be observed in Figure 3. We set out to determine when the Metropolis cycle reaches equilibrium, even though not all of the iterations required by the Markov chain have been executed. In order to determine this zone in an adaptive way, we fit a straight line by least squares and stop the Metropolis cycle if the slope of this line is less than or equal to zero. This Linear Regression Method (LRM) is a well-known method, but it had never been applied to detect Metropolis equilibrium in Simulated Annealing.

Suppose that the data set consists of the points:

$$(x\_i, y\_i), \quad i = 1, 2, 3, \ldots, n \tag{30}$$

**Figure 3.** Energy of different states, explored in the Metropolis cycle, for a fixed temperature


We want to find a function *f* such that *f*(*xi*) ≈ *yi*. To attain this goal, we suppose that the function *f* has a particular form containing some parameters (*a*1, *a*2, ..., *am*) which need to be determined.


**Figure 2.** Simulated Annealing algorithm with dynamic Markov chain length

    1 Initializing: initial solution Si
        initial and final temperatures: Ti and Tf
        T = Ti ;  L = L1
        calculate n, β, Lmax
    2 Temperatures cycle:
    3 Metropolis cycle:
        generate Sj from Si ;  dif = J(Sj) − J(Si)
        If dif < 0 then Si = Sj
        else if e^(−dif/T) > rnd(0,1) then Si = Sj   (Metropolis condition)
        goto 3 (until L iterations are completed)
    4 L = βL
        if L = Lmax goto 5, else goto 3
    5 Stop criterion: if Tk = Tf then End
        else Tk = αTk ; goto 2

In our problem:

$$y\_i \approx f(x\_i, a, b) = a x\_i + b \tag{31}$$

In Equation 31 *a* and *b* are not yet known. In our problem *f*(*xi*, *a*, *b*) = *f*(*i*, *a*, *b*) = *Ji*.

As usual, we now seek the values of *a* and *b* that minimize the sum of the squares of the residuals, as follows:

$$S = \sum\_{i=1}^{n} \left[ y\_i - (a\mathbf{x}\_i + b) \right]^2 \tag{32}$$

As is well known, the regression equations are obtained by differentiating *S* in Equation 32 with respect to each parameter *a* and *b*; we obtain this system of linear equations:

$$a\sum\_{i=1}^{n} \mathbf{x}\_i^2 + b\sum\_{i=1}^{n} \mathbf{x}\_i = \sum\_{i=1}^{n} \mathbf{x}\_i y\_i \tag{33}$$

$$a\sum\_{i=1}^{n} \mathbf{x}\_{i} + b\sum\_{i=1}^{n} \mathbf{1} = \sum\_{i=1}^{n} y\_{i} \tag{34}$$

In Equation 33 and Equation 34 we can define the following constants:

$$A = \sum\_{i=1}^{n} x\_i^2, \quad B = \sum\_{i=1}^{n} x\_i, \quad C = \sum\_{i=1}^{n} x\_i y\_i, \quad D = \sum\_{i=1}^{n} y\_i \tag{35}$$


Then the system of equations (Equation 33 and 34) can be rewritten as:

$$aA + bB = C \tag{36}$$

$$aB + bn = D \tag{37}$$

**Figure 4.** Simulated Annealing algorithm with dynamic Markov chain length and LRM

    1 Initializing: initial solution Si
        initial and final temperatures: Ti and Tf
        T = Ti ;  L = L1
        calculate n, β, Lmax
    2 Temperatures cycle:
    3 Metropolis cycle:
        generate Sj from Si ;  dif = J(Sj) − J(Si)
        If dif < 0 then Si = Sj
        else if e^(−dif/T) > rnd(0,1) then Si = Sj   (Metropolis condition)
        If L ≥ L^(C−1)max then calculate the slope a
            if a ≈ 0 then goto 4
        goto 3 (until L iterations are completed)
    4 L = βL
        if L = Lmax goto 5, else goto 3
    5 Stop criterion: if Tk = Tf then End
        else Tk = αTk ; goto 2

We recall that the parameter *a*, the slope of the line in Equation 31, is:

$$a = \frac{Cn - BD}{An - B^2} \tag{38}$$

In our data *xi* = 1, 2, 3, ..., *n*, then we can write:

$$A = \sum\_{i=1}^{n} x\_i^2 = \sum\_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6} \tag{39}$$

and

$$B = \sum\_{i=1}^{n} x\_i = \sum\_{i=1}^{n} i = \frac{n(n+1)}{2} \tag{40}$$

in the same way:

$$C = \sum\_{i=1}^{n} i\, J\_i \tag{41}$$

and

$$D = \sum\_{i=1}^{n} J\_i \tag{42}$$

Substituting Equations 39, 40, 41 and 42 into Equation 38 and dropping positive constant factors (only the sign of the slope matters for the equilibrium test), we finally get:

$$a = \frac{\mathcal{C}n - BD}{n^3 - n} \tag{43}$$
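As a sanity check on Equation 43 (an illustrative sketch with made-up energy values, not data from the chapter), the closed form can be compared with the textbook least-squares slope for the same points; the two differ only by a positive factor, so they always agree in sign, which is all the equilibrium test needs:

```python
def slope_eq43(j):
    # Slope from Equation 43, with x_i = 1, 2, ..., n
    n = len(j)
    b = n * (n + 1) / 2                                  # Equation 40
    c = sum(i * ji for i, ji in enumerate(j, start=1))   # Equation 41
    d = sum(j)                                           # Equation 42
    return (c * n - b * d) / (n ** 3 - n)

def slope_lsq(j):
    # Standard least-squares slope for the same points
    n = len(j)
    xs = range(1, n + 1)
    xbar, ybar = sum(xs) / n, sum(j) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, j))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

energies = [9.0, 7.5, 7.0, 7.2, 6.1, 6.0]  # made-up Metropolis energies
print(slope_eq43(energies) < 0, slope_lsq(energies) < 0)  # both negative
```

A decreasing energy trace gives a negative slope under both formulas, so the sign-based stopping rule is unaffected by the dropped factors.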


In order to apply LRM to traditional Simulated Annealing, we apply the following strategy:

1. The Metropolis cycle runs as usual, just as explained in Section 3.3, using the maximum value of the Markov chain length, L<sup>C</sup><sub>max</sub>, calculated by Equation 28; *C* is calculated with P<sup>i</sup><sub>A</sub> ∈ Φ<sub>PA</sub>.

2. When the repeats of Metropolis, *L*, are equal to L<sup>C−1</sup><sub>max</sub> (L<sup>C−1</sup><sub>max</sub> is calculated by Equation 28 with P<sup>i−1</sup><sub>A</sub> ∈ Φ<sub>PA</sub>), the slope *a* of the fitted line is computed.

3. If the value of the slope *a* in the equation of the line (Equation 31), found by Equation 43, is close to zero, then the Metropolis cycle is stopped although it has not reached the value *L*max.

Notice from Equation 43 and from Figure 4 that the computation of LRM is *O*(*n*), where *n* is the number of points taken to compute the slope. So the complexity of Simulated Annealing with LRM is not affected [19, 22].
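Combining the dynamic Markov chains of Section 3.3 with the LRM stopping rule gives the overall scheme of Figure 4. The sketch below is a simplified illustration only (the energy function, neighbor move and all parameter values are toy stand-ins, not the authors' implementation); the slope is computed with Equation 43 and the cycle stops when it is less than or equal to zero:

```python
import math
import random

def sa_lrm(energy, neighbor, s, t_i, t_f, alpha, beta, l_max, l_check):
    """Simulated Annealing with dynamic chain length and LRM early stop."""
    t, length = t_i, 1.0
    while t > t_f:
        history = []  # energies J_i visited in this Metropolis cycle
        for _ in range(max(1, int(length))):
            candidate = neighbor(s)
            dif = energy(candidate) - energy(s)
            # Metropolis acceptance criterion
            if dif < 0 or math.exp(-dif / t) > random.random():
                s = candidate
            history.append(energy(s))
            n = len(history)
            if n >= l_check:
                # Slope of the least-squares line (Equation 43);
                # only its sign matters for the equilibrium test.
                b = n * (n + 1) / 2
                c = sum(i * j for i, j in enumerate(history, start=1))
                d = sum(history)
                if (c * n - b * d) / (n ** 3 - n) <= 0:
                    break  # equilibrium detected: stop the cycle early
        length = min(beta * length, l_max)  # Equation 25, capped at L_max
        t = alpha * t                       # Equation 24
    return s

# Toy usage: minimize Hamming distance to a hidden bit vector
random.seed(0)
target = [1, 0, 1, 1, 0, 1, 0, 0]
energy = lambda s: sum(a != b for a, b in zip(s, target))
def neighbor(s):
    i = random.randrange(len(s))
    return s[:i] + [1 - s[i]] + s[i + 1:]

best = sa_lrm(energy, neighbor, [0] * 8, t_i=5.0, t_f=0.05,
              alpha=0.9, beta=1.2, l_max=50, l_check=5)
print(energy(best))  # close to 0 for this toy instance
```

For SAT, the energy would be the number of unsatisfied clauses and a neighbor would flip one variable; the early break is what saves the runtime reported in Section 5.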

#### **5. Experimental results**

In order to test the LRM algorithm we used the SAT instances in Table 1 and Table 2. Some of these instances were generated using the programs proposed by Horie et al. in 1997 [15], and others are from SATLIB [14]. We generated several instances that had the same relation of clauses and variables, *σ* [24, 25].

The measurement of efficiency of this algorithm was based on the execution time; we also obtained a solution quality measure (*SQM*), taken as the number of "true" clauses in an instance at the end of the program execution.


| SAT problem | Id | Variables | Clauses | *σ* | SAT? |
|---|---|---|---|---|---|
| g2\_V100\_C400\_P4\_I1 | g3 | 100 | 400 | 4.00 | Yes |
| hole8 | h4 | 72 | 297 | 4.13 | No |
| uuf225-045 | u4 | 225 | 960 | 4.27 | No |
| RTI\_k3\_n100\_m429\_150 | r1 | 100 | 429 | 4.29 | Yes |
| uf175-023 | u1 | 175 | 753 | 4.30 | Yes |
| uuf100-0789 | u8 | 100 | 430 | 4.30 | No |
| uf50-01 | u7 | 50 | 218 | 4.36 | Yes |
| uuf50-01 | u9 | 50 | 218 | 4.36 | No |
| ii8a2 | i3 | 180 | 800 | 4.44 | Yes |
| g2\_V50\_C250\_P5\_I1 | g19 | 50 | 250 | 5.00 | Yes |
| hole10 | h1 | 110 | 561 | 5.10 | No |
| ii32e1 | i2 | 222 | 1186 | 5.34 | Yes |
| anomaly | a6 | 48 | 261 | 5.44 | Yes |
| aim-50-6\_0-yes1-1 | a5 | 50 | 300 | 6.00 | Yes |
| g2\_V50\_C300\_P6\_I1s | g20 | 50 | 300 | 6.00 | Yes |
| jnh201 | j1 | 100 | 800 | 8.00 | Yes |
| jnh215 | j3 | 100 | 800 | 8.00 | No |
| medium | m1 | 116 | 953 | 8.22 | Yes |
| jnh301 | j2 | 100 | 900 | 9.00 | Yes |

**Table 2.** SAT instances for testing algorithms, continuation


| Computer | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| Floating Point Math (MOS) | 168.1 | 168.3 | 168.0 | 168.0 | 168.2 | 168.1 | 167.9 | 168.2 | 168.2 |
| Integer Maths (MOS) | 33.61 | 33.56 | 33.60 | 33.59 | 33.65 | 33.62 | 33.56 | 33.65 | 33.62 |
| Search for prime numbers (MOS) | 126.6 | 126.6 | 126.4 | 126.5 | 126.7 | 126.7 | 126.3 | 126.6 | 126.6 |
| String Sorting (MSS) | 415.6 | 416.0 | 415.3 | 408.3 | 415.9 | 415.8 | 415.7 | 415.6 | 415.7 |
| Memory blocks transfer (MbTS) | 470.9 | 476.6 | 475.9 | 465.7 | 466.4 | 467.4 | 468.7 | 469.0 | 459.7 |
| Cache memory read (MbTS) | 1141 | 1142 | 1141 | 1141 | 1142 | 1141 | 1141 | 1142 | 1088 |
| Non cache memory read (MbTS) | 831.8 | 831.7 | 832.0 | 831.2 | 830.7 | 831.1 | 830.6 | 831.6 | 831.2 |
| Memory write (MbTS) | 377.3 | 379.7 | 378.2 | 378.3 | 378.0 | 379.1 | 378.1 | 378.1 | 377.0 |
| Overall calculation speed | 232.5 | 232.6 | 232.1 | 232.0 | 232.4 | 232.5 | 232.1 | 232.5 | 232.6 |
| Overall memory speed | 209.9 | 210.8 | 210.4 | 209.6 | 209.5 | 210.0 | 209.8 | 209.8 | 209.9 |

**Table 3.** Performance of computers used for experiments


Both algorithms, the Simulated Annealing algorithm with dynamic Markov chain length (*SA*\_*C*) and Simulated Annealing with the Linear Regression Method (*SA*\_*LRM*), were implemented on a Dell Latitude with 1 GB of RAM and a Pentium 4 processor running at 2.13 GHz.

## **5.1. Experiment design**

The experiments with these algorithms require considerable run-time, because each SAT instance is solved several times to take average performance values.

Another important element to consider is to guarantee that the execution conditions for the various algorithms are similar (because we measure execution time). In this regard, the first thing we did was to evaluate a set of computers with similar hardware and software conditions.

To run this task, we used programs available on the Internet that perform different computer tests and report the result of the evaluation [29, 32, 33].

In Table 3, MOS means million operations per second, MSS million strings per second, and MbTS millions of bytes transferred per second. As we can see in Table 3, the differences between computers are at most 1.6 percent, so we can infer that we obtain similar results on any of these computers.



| SAT problem | Id | Variables | Clauses | *σ* | SAT? |
|---|---|---|---|---|---|
| aim-100-1\_6-yes1-1 | a1 | 100 | 160 | 1.60 | Yes |
| aim-50-1\_6-yes1-3 | a2 | 50 | 80 | 1.60 | Yes |
| aim-200-1\_6-no-1 | a7 | 200 | 320 | 1.60 | No |
| aim-50-1\_6-no-2 | a8 | 50 | 80 | 1.60 | No |
| g2\_V100\_C200\_P2\_I1 | g1 | 100 | 200 | 2.00 | Yes |
| aim-50-2\_0-no-4 | a10 | 50 | 100 | 2.00 | No |
| aim-50-2\_0-yes1-1 | a3 | 50 | 100 | 2.00 | Yes |
| aim-50-2\_0-no-3 | a9 | 50 | 100 | 2.00 | No |
| dubois21 | d2 | 63 | 168 | 2.67 | No |
| dubois26 | d1 | 78 | 208 | 2.67 | No |
| dubois27 | d3 | 81 | 216 | 2.67 | No |
| BMS\_k3\_n100\_m429\_161 | b1 | 100 | 283 | 2.83 | Yes |
| g2\_V300\_C900\_P3\_I1 | g15 | 300 | 900 | 3.00 | Yes |
| g2\_V50\_C150\_P3\_I1 | g17 | 50 | 150 | 3.00 | Yes |
| BMS\_k3\_n100\_m429\_368 | b2 | 100 | 308 | 3.08 | Yes |
| hole6 | h2 | 42 | 133 | 3.17 | No |
| par8-1 | p1 | 350 | 1149 | 3.28 | Yes |
| aim-50-3\_4-yes1-2 | a4 | 50 | 170 | 3.40 | Yes |
| hole7 | h3 | 56 | 204 | 3.64 | No |
| par8-3-c | p2 | 75 | 298 | 3.97 | Yes |
| par8-5-c | p3 | 75 | 298 | 3.97 | Yes |

**Table 1.** SAT instances for testing algorithms


Each SAT instance was executed 100 times with a slow cooling function (0.99 in Equation 20), and we obtained the average execution time and the average quality of the solution.


The *SQM* is established by the next expression:

$$SQM = \frac{\text{clauses true}}{\text{total clauses}} \times 100\tag{44}$$

| Instance | *Qquality* | *Qtime* | *Qquality* | *Qtime* | *Qquality* | *Qtime* |
|---|---|---|---|---|---|---|
| g3 | 99.8 | 20.8 | 99.6 | 21.2 | 99.6 | 18.4 |
| h1 | 100.1 | 54.5 | 99.6 | 53.5 | 99.9 | 47.1 |
| h2 | 99.7 | 22.0 | 99.3 | 23.2 | 99.5 | 19.1 |
| h3 | 100.1 | 30.5 | 99.7 | 29.7 | 99.5 | 24.9 |
| h4 | 98.9 | 41.0 | 99.3 | 38.5 | 99.0 | 32.7 |
| i2 | 99.1 | 55.0 | 98.9 | 54.4 | 99.7 | 45.7 |
| i3 | 100.0 | 65.0 | 100.2 | 65.4 | 100.3 | 58.7 |
| j1 | 99.7 | 11.6 | 99.7 | 12.8 | 99.6 | 11.1 |
| j2 | 99.7 | 11.6 | 99.8 | 12.3 | 99.8 | 10.4 |
| j3 | 99.5 | 11.3 | 99.7 | 12.3 | 99.8 | 10.2 |
| m1 | 100.5 | 74.5 | 99.9 | 70.9 | 99.7 | 66.8 |
| p1 | 99.9 | 66.0 | 99.9 | 65.4 | 99.9 | 59.7 |
| p2 | 99.4 | 13.8 | 99.0 | 13.3 | 99.3 | 11.8 |
| p3 | 99.3 | 14.0 | 99.5 | 12.4 | 99.5 | 11.0 |
| r1 | 99.4 | 24.0 | 99.7 | 22.6 | 99.5 | 21.1 |
| u1 | 99.6 | 49.6 | 100.0 | 49.8 | 100.0 | 44.5 |
| u4 | 99.7 | 53.3 | 100.0 | 50.9 | 99.3 | 48.1 |
| u7 | 99.9 | 14.7 | 100.0 | 15.2 | 99.5 | 13.1 |
| u8 | 99.7 | 26.0 | 100.0 | 23.9 | 99.6 | 20.0 |
| u9 | 99.1 | 13.8 | 99.5 | 14.0 | 99.2 | 11.9 |

**Table 5.** Experimental results, continuation. The three column pairs correspond to L<sup>C</sup><sub>max</sub> = 4.61, 3.00 and 2.30 (L<sup>C−1</sup><sub>max</sub> = 3.00, 2.30 and 1.96, respectively).

From the Table 4 and Table 5 experimental results we can obtain the average values for the magnitudes *Qtime* and *Qquality*, as shown in Table 6.

As you can see in Table 6, the quality factor of the solutions, *Qquality*, is very close to 100%, which implies that the *SA*\_*LRM* algorithm finds solutions as good as the *SA*\_*C* algorithm. It is important to note that 37% of the SAT instances used for the experiments are not satisfiable, which implies that their respective *SQM* cannot be equal to 100%, and therefore the quality factor must be less than 100%.

Also in Table 6, we see that the factor *Qtime* diminishes to values less than 30%, showing that our algorithm, *SA*\_*LRM*, is 70% faster than the *SA*\_*C* algorithm but maintains the same quality of the solution.

As shown in Figure 5, there are some instances in which the reduction of run time is only 25%, while in others the runtime is reduced by up to 90%.

*max* = 2.30

A Simulated Annealing Algorithm for the Satis ability Problem Using Dynamic Markov Chains with Linear Regression Equilibrium

37

Both results, *SA*\_*C* and *SA*\_*LRM*, were compared using two quotients which we denominated time improvement *Qtime* and quality improvement *Qquality* defined by:

$$Q\_{time} = \frac{AverageTime\_{SA\\_LRM}}{AverageTime\_{SA\\_C}} \times 100\tag{45}$$

$$Q\_{quality} = \frac{SQM\_{SA\\_LRM}}{SQM\_{SA\\_C}} \times 100\tag{46}$$

If *Qquality* is close to 100% this means that both algorithm found good solutions, however *Qquality* factor must decrease, which implies that the new algorithm *SA*\_*LRM* is faster than *SA*\_*C*.


**Table 4.** Experimentals results


**Table 5.** Experimentals results, continuation

16 Will-be-set-by-IN-TECH

Both results, *SA*\_*C* and *SA*\_*LRM*, were compared using two quotients which we

*AverageTimeSA*\_*<sup>C</sup>*

*SQMSA*\_*<sup>C</sup>*

*max* = 3.00 *L<sup>C</sup>*

*LC*−<sup>1</sup> *max* = 3.00 *LC*−<sup>1</sup> *max* = 2.30 *LC*−<sup>1</sup> *max* = 1.96

If *Qquality* is close to 100% this means that both algorithm found good solutions, however *Qquality* factor must decrease, which implies that the new algorithm *SA*\_*LRM* is faster than

> Instance *Qquality Qtime Qquality Qtime Qquality Qtime* a1 99.4 13.0 99.3 12.9 99.2 11.7 a10 99.0 11.0 99.2 11.6 99.6 9.7 a2 99.5 11.4 98.7 11.2 99.0 9.8 a3 99.9 11.3 99.4 11.6 99.2 9.6 a4 99.2 9.8 99.6 10.4 98.9 8.8 a5 99.5 10.1 99.1 10.6 99.5 9.0 a6 99.0 34.5 99.3 32.1 99.2 27.8 a7 98.7 13.3 99.1 13.9 98.8 11.8 a8 99.6 11.3 99.2 12.2 99.4 9.8 a9 99.2 10.9 98.7 11.6 99.4 9.5 b1 99.6 54.0 99.7 51.0 99.9 46.0 b2 100.2 57.7 99.8 53.5 99.9 49.2 d1 99.2 11.8 99.4 11.6 99.2 10.1 d2 99.5 11.6 99.5 11.5 99.4 10.1 d3 100.0 11.5 99.1 11.3 99.1 9.5 g1 99.8 71.7 99.9 69.7 99.8 64.7 g15 99.7 28.7 99.7 28.3 99.9 29.0 g17 99.3 11.7 99.2 12.7 99.5 10.5 g19 99.6 11.5 98.6 11.5 99.3 9.8.0 g20 99.5 10.5 99.5 11.1 99.4 9.5.0

total clauses <sup>×</sup> <sup>100</sup> (44)

*max* = 2.30

× 100 (45)

× 100 (46)

*SQM* <sup>=</sup> clauses true

denominated time improvement *Qtime* and quality improvement *Qquality* defined by:

*Qtime* <sup>=</sup> *AverageTimeSA*\_*LRM*

*Qquality* <sup>=</sup> *SQMSA*\_*LRM*

*max* = 4.61 *L<sup>C</sup>*

*LC*

The *SQM* is established by the next expression:

*SA*\_*C*.

**Table 4.** Experimentals results

From Table 4 and Table 5 experimental results we can obtain the average values for the magnitudes *Qtime* and *Qquality*, as shown in the following Table 6.

As you can see, in Table 6, the quality factor of the solutions, *Qquality* is very close to 100%, which implies that the *SA*\_*LRM* algorithm finds solutions as good as the *SA*\_*C* algorithm, it is important to note that 37% of SAT instances used for the experiments, are not-SAT, which implies that their respective *SQM* can not be equal to 100% and therefore the quality factor must be less than 100%.

Also in Table 6, we see that the factor *Qtime* diminishes values less than 30%, showing that our algorithm, *SA*\_*LRM*, is 70% faster than the *SA*\_*C* algorithm but maintains the same quality of the solution.

As shown in Figure 5 are some instances in which the reduction of run time is only 25% while other reducing runtime up to 90%.
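The early-abort mechanism evaluated in these experiments can be sketched as follows: fit a least-squares line to the most recent Metropolis costs and leave the cycle once the slope is essentially flat (the equilibrium zone), even if the dynamic Markov chain length is not exhausted. The window size, tolerance, and function names below are illustrative assumptions, not the chapter's actual code:

```python
import math
import random

def lr_slope(ys):
    """Least-squares slope of ys against 0..n-1 (the regression equilibrium test)."""
    n = len(ys)
    mx = (n - 1) / 2.0
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in enumerate(ys))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

def metropolis(cost, neighbor, state, temperature, max_length, window=30, tol=1e-3):
    """Metropolis cycle that aborts early when the regression slope of the
    last `window` costs is flat, even if the dynamic Markov chain length
    `max_length` has not been completed."""
    c = cost(state)
    history = []
    for _ in range(max_length):
        candidate = neighbor(state)
        cc = cost(candidate)
        # Standard Metropolis acceptance criterion.
        if cc <= c or random.random() < math.exp(-(cc - c) / temperature):
            state, c = candidate, cc
        history.append(c)
        if len(history) >= window and abs(lr_slope(history[-window:])) < tol:
            break  # equilibrium zone detected: abort the Metropolis cycle
    return state, c
```

The saved iterations at each temperature are what produce the *Qtime* reductions reported above, while acceptance behavior, and hence solution quality, is untouched.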

| *L<sub>C</sub><sup>max</sup>* | *L<sub>C−1</sub><sup>max</sup>* | *Qquality*(%) | *Qtime*(%) |
|------|------|------|------|
| 4.61 | 3.00 | 99.6 | 27.3 |
| 3.00 | 2.30 | 99.5 | 26.8 |
| 2.30 | 1.96 | 99.5 | 23.8 |

**Table 6.** *Qtime* and *Qquality* averages for all instances tested

**Figure 5.** *Qtime* for SAT instances

## **6. Conclusions**

In this paper a new adaptive Simulated Annealing algorithm named *SA*\_*LRM* was presented, which uses the least squares method to detect the equilibrium zone in the Metropolis cycle. When this zone is found, the algorithm aborts the Metropolis cycle, even if the iterations prescribed by the dynamic Markov chains have not been completed. Our experiments show that *SA*\_*LRM* is more efficient than algorithms tuned using only an analytical method.

## **Author details**

Felix Martinez-Rios *Universidad Panamericana, México*

Juan Frausto-Solis *UPMOR, México*

## **7. References**

[1] Aarts, E. & Korst, J. [1989]. *Simulated annealing and Boltzmann machines: A stochastic approach to combinatorial optimization and neural computing*, John Wiley and Sons.

[2] Aspvall, B., Plass, M. F. & Tarjan, R. E. [1979]. A linear-time algorithm for testing the truth of certain quantified boolean formulas, *Information Processing Letters* 8(3).

[3] Atiqullah, M. [2004]. An efficient simple cooling schedule for simulated annealing, *Lecture Notes in Computer Science* 3045: 396–404.

[4] Bertsimas, D. & Tsitsiklis, J. [1993]. Simulated annealing, *Statistical Science* 8: 10–15.

[5] Cerny, V. [1982]. A thermodynamical approach to the travelling salesman problem: An efficient simulation algorithm, *Comenius University*.

[6] Cerny, V. [1985]. Thermodynamical approach to the traveling salesman problem: an efficient simulation algorithm, *Journal of Optimization Theory and Applications* 45(1).

[7] Cook, S. A. [1971]. The complexity of theorem proving procedures, *Proceedings of the third annual ACM symposium on theory of computing*.

[8] Crescenzi, P. & Kann, V. [1998]. How to find the best approximation results - a follow-up to Garey and Johnson, *ACM SIGACT News* 29(4): 90–97.

[9] Dowling, W. F. & Gallier, J. H. [1984]. Linear-time algorithms for testing the satisfiability of propositional Horn formulae, *Journal of Logic Programming* 1(3): 267–284.

[10] Even, S., Itai, A. & Shamir, A. [1976]. On the complexity of timetable and multicommodity flow problems, *SIAM Journal on Computing* 5(4): 691–703. URL: *http://link.aip.org/link/?SMJ/5/691/1*

[11] Faigle, U. & Kern, W. [1991]. Note on the convergence of simulated annealing algorithms, *SIAM Journal on Control and Optimization* 29(1): 153–159. URL: *http://e-archive.informatik.uni-koeln.de/67/*

[12] Fleischer, M. A. [1996]. Cybernetic optimization by simulated annealing: Accelerating convergence by parallel processing and probabilistic feedback control, *Journal of Heuristics* 1: 225–246.

[13] Frausto-Solis, J., Sanvicente, H. & Imperial, F. [2006]. Andymark: An analytical method to establish dynamically the length of the Markov chain in simulated annealing for the satisfiability problem, *Springer Verlag*.

[14] Hoos, H. H. & Stutzle, T. [2000]. SATLIB: An online resource for research on SAT, *SAT 2000*, pp. 283–292. Available at http://www.satlib.org/.

[15] Horie, S. & Watanabe, O. [1997]. Hard instance generation for SAT, *ISAAC '97: Proceedings of the 8th International Symposium on Algorithms and Computation*, Springer-Verlag, pp. 22–31.

[16] Ingber, L. [1993]. Simulated annealing: Practice versus theory, *Mathematical and Computer Modelling* 18(11): 29–57.

[17] Ingber, L. [1996]. Adaptive simulated annealing (ASA): Lessons learned, *Control and Cybernetics* 25: 33–54.

[18] Kirkpatrick, S., Gelatt, C. D. & Vecchi, M. P. [1983]. Optimization by simulated annealing, *Science* 220(4598): 671–680.

[19] Martinez-Rios, F. & Frausto-Solis, J. [2007]. A hybrid simulated annealing threshold accepting algorithm for satisfiability problems using dynamically cooling schemes, *Electrical and Computer Engineering Series WSEAS* pp. 282–286.

[20] Martinez-Rios, F. & Frausto-Solis, J. [2008a]. Golden annealing method for job shop scheduling problem, *Mathematics and Computers in Science and Engineering, ISSN 1790-2769*.

[21] Martinez-Rios, F. & Frausto-Solis, J. [2008b]. Golden ratio annealing for satisfiability problems using dynamically cooling schemes, *Lecture Notes in Computer Science* 4994: 215–224.

[22] Martinez-Rios, F. & Frausto-Solis, J. [2008c]. Simulated annealing for SAT problems using dynamic Markov chains with linear regression equilibrium, *MICAI 2008, IEEE* pp. 182–187.

**Chapter 3**

**Beyond Classical Simulated Annealing**

Masayuki Ohzeki

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/50636

©2012 Ohzeki, licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## **1. Introduction**

**Optimization by Use of Nature in Physics**

We prefer to find the most appropriate choice in daily life for convenience and efficiency. When we go to a destination, we often use a searching program to find the fastest way, the minimum-length path, or the most reasonable one in cost. In such a searching problem, we mathematically design our benefit as a multivariable function (cost function) depending on many candidates and intend to maximize it. Such a mathematical issue is called the optimization problem. Simulated annealing (SA) is one of the generic solvers for the optimization problem [14]. We design the lowest-energy state in a physical system, which corresponds to the minimizer/maximizer of the cost function. The cost function describing the instantaneous energy of the system is called the Hamiltonian *H*<sub>0</sub>(*σ*<sub>1</sub>, *σ*<sub>2</sub>, ··· , *σ<sub>N</sub>*), where the *σ<sub>i</sub>* are the degrees of freedom in the system and *N* is the number of components, related to the problem size. The typical instance of the Hamiltonian is a form of the spin glass, which is a disordered magnetic material, since most of the optimization problems with discrete variables can be rewritten in terms of such a physical system,

$$H\_0(\sigma\_1, \sigma\_2, \cdots, \sigma\_N) = -\sum\_{\langle ij \rangle} J\_{ij}\sigma\_i\sigma\_j\tag{1}$$

where *σ<sub>i</sub>* indicates the direction of the spin located at site *i* in the magnetic material, as *σ<sub>i</sub>* = ±1. The summation is taken over all the connected bonds ⟨*ij*⟩ through the interaction *J<sub>ij</sub>*. The configuration of *J<sub>ij</sub>* depends on the details of the optimization problem.

Then we introduce an artificial design of stochastic dynamics governed by the master equation,

$$\frac{d}{dt}P(\sigma; t) = \sum\_{\sigma'} M(\sigma|\sigma'; t)P(\sigma'; t)\tag{2}$$

where *P*(*σ*; *t*) is the probability of a specific configuration of the *σ<sub>i</sub>*, simply denoted as *σ*, at time *t*. The transition matrix is written as *M*(*σ*|*σ*′; *t*), which satisfies the conservation of probability ∑<sub>*σ*</sub> *M*(*σ*|*σ*′; *t*) = 1 and the detailed balance condition *M*(*σ*|*σ*′; *t*)*P*<sub>eq</sub>(*σ*′; *t*) = *M*(*σ*′|*σ*; *t*)*P*<sub>eq</sub>(*σ*; *t*).
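Numerically, the Hamiltonian of Eq. (1) and a transition rule satisfying the detailed balance condition can be sketched as follows. The bond dictionary `J` and the single-spin-flip Metropolis rule are illustrative choices for this sketch, not the chapter's specific construction:

```python
import math

def energy(J, spins):
    """Spin-glass Hamiltonian of Eq. (1): H0 = -sum over bonds <ij> of J_ij * s_i * s_j.

    J: dict mapping a bond (i, j) to its coupling J_ij.
    spins: list of +1/-1 values, one per site.
    """
    return -sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())

def flip_acceptance(J, spins, i, T):
    """Metropolis acceptance probability for flipping spin i at temperature T.

    This transition matrix satisfies the detailed balance condition of the
    text with respect to the Gibbs distribution P_eq proportional to exp(-H0/T).
    """
    trial = list(spins)
    trial[i] = -trial[i]
    dE = energy(J, trial) - energy(J, spins)
    return min(1.0, math.exp(-dE / T))
```

For two configurations *σ*, *σ*′ differing by one spin flip, one can check that exp(−*H*<sub>0</sub>(*σ*)/*T*)·`flip_acceptance`(*σ* → *σ*′) equals exp(−*H*<sub>0</sub>(*σ*′)/*T*)·`flip_acceptance`(*σ*′ → *σ*), which is exactly the detailed balance relation stated above.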
