**1.1 Basic principle**

TS uses a memory that allows it to remember the current best solution, to recall previously visited solutions, and to direct the search moves. These features of memory adaptation and exploration help TS find better solutions and discover new promising regions of the search space: adaptation guides its course of action so the search space is exploited efficiently, while exploration of good solutions lets it apply intelligent search mechanisms to uncover new potential regions. The use of adaptive memory helps TS learn as the search proceeds, creating a more flexible and effective search strategy than memoryless methods such as simulated annealing (SA) and genetic algorithms (GA).

#### **1.2 Components of TS**

The components of TS are explained as follows.

#### *1.2.1 Neighbor generation and neighborhood search*

To optimize the function *f*(*x*) globally over all feasible solutions *x*∈*X* in the space *X*, one must specify a neighborhood structure on the solution space and a starting solution. The search proceeds by altering the current solution to create a set of promising solutions in its neighborhood. During the search, the number of solutions traversed by TS is the product of the number of solutions in the neighborhood, *N*(*xi*), and the number of iterations, *k*. At each iteration, the function is evaluated for all *N*(*xi*) solutions and the best move in the neighborhood is chosen. The search then proceeds to the next iteration, seeking a solution in the neighborhood of the accepted move. In this way, TS builds a set of viable solutions by using a history record of the search.
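The neighbor-generation step described above can be sketched as follows. This is a minimal illustration for a continuous search space; the uniform-perturbation scheme, the step size, and the function names are assumptions, not prescribed by TS itself.

```python
import random

def generate_neighbors(x, n_neighbors, step=0.5):
    """Create candidate solutions around x by random perturbation.

    Illustrative neighborhood N(x) for a continuous space: each
    coordinate is shifted by a uniform amount in [-step, step].
    """
    return [[xi + random.uniform(-step, step) for xi in x]
            for _ in range(n_neighbors)]

def best_neighbor(f, neighbors):
    """Evaluate f for every neighbor and return the best move."""
    return min(neighbors, key=f)

# Example: one neighborhood search step on a simple quadratic.
f = lambda x: sum(xi ** 2 for xi in x)
x0 = [2.0, -1.0]
neighbors = generate_neighbors(x0, n_neighbors=10)
x1 = best_neighbor(f, neighbors)  # accepted move for the next iteration
```

Each iteration would repeat this around the accepted move, so the total number of evaluated solutions is *N*(*xi*) times the number of iterations.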

#### *1.2.2 Tabu list*

The tabu list in TS maintains data on previously visited solutions. The list holds the most recent moves and is altered dynamically as the search proceeds. The data in the tabu list help direct the move from the present solution to the next. At each iteration, the search process is maintained by updating the tabu list. The tabu list also prevents revisiting the recent neighbors recorded in it and thus saves computational time.
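A fixed-length queue is one natural way to realize such a list: the newest moves are appended and the oldest fall off automatically. The tabu tenure of 5 below is an illustrative assumption.

```python
from collections import deque

# Recency-based tabu list: bounded length (the tabu tenure),
# newest moves appended, oldest dropped automatically.
tabu_list = deque(maxlen=5)

def is_tabu(move):
    """A move recorded in the list may not be revisited yet."""
    return move in tabu_list

def record_move(move):
    tabu_list.append(move)  # when full, the oldest entry is discarded

# Recording six moves into a list of tenure 5 evicts the first one.
for m in ["m1", "m2", "m3", "m4", "m5", "m6"]:
    record_move(m)
```

After these updates, `m1` is no longer tabu while the five most recent moves are, which is exactly the dynamic updating described above.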

#### *1.2.3 Short-term memory and long-term memory*

The information is stored in the tabu list as recency-based short-term memory (RSM) and frequency-based long-term memory (FRM). As the search proceeds, the newly accepted solution near the present solution is marked as tabu and added to the recency-based tabu list. As fresh solutions enter the list, older solutions are removed from the bottom. Long-term memory relies on the frequency with which a solution is visited. When the frequency-based tabu list reaches its maximum number of elements, the solution with the smallest frequency index is replaced.
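The two memory structures can be sketched side by side as follows. The recency tenure of 3 and the sample visit sequence are illustrative assumptions.

```python
from collections import Counter, deque

recency = deque(maxlen=3)   # RSM: most recent solutions; oldest drop off
frequency = Counter()       # FRM: how often each solution was visited

def record(solution):
    recency.append(solution)     # fresh entries push older ones out
    frequency[solution] += 1     # long-term visit count

for s in ["a", "b", "a", "c", "d", "a"]:
    record(s)

# When the frequency-based list is full, the solution with the
# smallest frequency index is the one that would be replaced first.
least_frequent = min(frequency, key=frequency.get)
```

Here the short-term memory retains only the last three solutions, while the long-term memory shows `a` was visited most often and `b` would be the first candidate for replacement.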

#### *1.2.4 Intensification and diversification*

These strategies are used to create neighbors that have a higher likelihood of finding optimal solutions based on the data in the tabu list. Intensification strategies search in detail around areas found to be good in the past. They are generally employed based on long-term memory, whose components are used to create neighbors for search intensification [4, 6, 10]. Diversification strategies are used to search the complete viable region, thus preventing the search from getting trapped in local optima. These strategies promote probing unvisited regions by creating solutions radically different from those searched earlier [4]. A frequency-based tabu list is used to keep track of the search area.

During the generation of neighbor solutions, the difference between the present solution and a fresh neighbor is managed using a coefficient, *α*. The change from the current point is multiplied by *α* during the course of building new neighbors. The coefficient *α* takes the form of a sine function [10].

$$\alpha = \frac{1}{2} \left[ 1 + \sin \left( \frac{i\theta\pi}{N_{neigh}} \right) \right] \tag{1}$$

Here *i* is the index of the neighbor, *Nneigh* is the total number of neighbor solutions generated at each iteration, and *θ* is a parameter that controls the oscillation period of *α*.
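Eq. (1) keeps *α* oscillating within [0, 1] as the neighbor index runs through the neighborhood. A direct sketch, with illustrative values for *Nneigh* and *θ*:

```python
import math

def alpha(i, n_neigh, theta):
    """Oscillating step coefficient of Eq. (1): scales the change
    from the current point for neighbor i out of n_neigh."""
    return 0.5 * (1.0 + math.sin(i * theta * math.pi / n_neigh))

# alpha stays in [0, 1] for every neighbor index.
values = [alpha(i, n_neigh=10, theta=2) for i in range(1, 11)]
```

With *θ* = 2 the coefficient completes a full oscillation over the ten neighbors, so successive neighbors are perturbed by alternately larger and smaller steps.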

#### *1.2.5 Aspiration criterion*

The tabu conditions at times prevent moves leading to attractive unvisited solutions. An aspiration criterion is a condition that can override the tabu status of a certain move. To avoid missing good solutions during the search, the aspiration criterion may in certain cases invalidate the tabu property, maintaining an appropriate balance between diversification and intensification [4, 10, 11].

An aspiration criterion can be designed based on a sigmoid function, as given by

$$S(k) = \frac{1}{1 + e^{-\sigma(k - k_{center} M)}} \tag{2}$$

where *k* is the current iteration number, *M* is the maximum number of iterations, and *kcenter* and *σ* are tuning parameters. The value of *kcenter* can be in the range 0.30–0.70, and *σ* in the range 5/*M* to 10/*M*. A random number *P* between 0 and 1 is generated from a uniform distribution at each iteration. If *P* is greater than *S*(*k*), the tabu property remains active and the best non-tabu neighbor is used as a fresh starting point. If *P* is less than or equal to *S*(*k*), the aspiration criterion overrides the tabu property.
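The rule above can be sketched directly from Eq. (2). The defaults *kcenter* = 0.5 and *σ* = 5/*M* are taken from the ranges quoted in the text; everything else is an illustrative assumption.

```python
import math
import random

def aspiration_prob(k, M, k_center=0.5, sigma=None):
    """Sigmoid S(k) of Eq. (2); k_center is a fraction of M."""
    if sigma is None:
        sigma = 5.0 / M          # lower end of the quoted range
    return 1.0 / (1.0 + math.exp(-sigma * (k - k_center * M)))

def tabu_overridden(k, M, rng=random.random):
    """True when P <= S(k): the aspiration criterion ignores tabu."""
    p = rng()                    # P ~ Uniform(0, 1)
    return p <= aspiration_prob(k, M)

M = 1000
early = aspiration_prob(1, M)        # tabu almost always enforced
mid = aspiration_prob(M // 2, M)     # 50/50 at k_center * M
late = aspiration_prob(M, M)         # tabu usually overridden
```

Because S(k) grows with the iteration count, the tabu restriction is enforced strictly early on (favoring diversification) and relaxed late in the search (favoring intensification).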

#### *1.2.6 Stopping criteria*

A stopping criterion is needed to terminate the search when the optimum is reached. It can take the form of a fixed number of iterations or a threshold for convergence of the solution. Criteria such as maximum-time termination [6] and termination-on-convergence [10] are also used to stop the search process. The termination-on-convergence criterion is expressed by Lin and Miller [10] as

$$\left| \frac{f_k(\mathbf{x}) - f_{k-\Gamma}(\mathbf{x})}{f_{k-\Gamma}(\mathbf{x})} \right| < \delta \tag{3}$$

where *δ* is a threshold on the relative change in the objective function value, *Γ* = *ηM*, and *η* is the fraction of the maximum iterations (*M*) over which the change in the objective function is compared. Under this stopping criterion, if the improvement over *Γ* iterations is no larger than the threshold *δ*, further iterations are likely to be ineffective and the search should be discontinued.
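Eq. (3) translates into a simple check over a history of objective values. The values of *η* and *δ* below are illustrative assumptions, as is the guard against a zero denominator.

```python
def converged(f_history, k, M, eta=0.05, delta=1e-6):
    """Termination-on-convergence test of Eq. (3): stop when the
    relative change in f over Gamma = eta * M iterations is below delta."""
    gamma = max(1, int(eta * M))
    if k < gamma:
        return False             # not enough history yet
    f_old = f_history[k - gamma]
    if f_old == 0:
        return False             # avoid division by zero
    return abs((f_history[k] - f_old) / f_old) < delta

# A history that improves quickly, then flattens out.
history = [100.0 / (i + 1) for i in range(50)] + [2.0] * 60
```

Early in this history the relative change over *Γ* iterations is large, so the search continues; once the tail is flat, the criterion fires and the search is discontinued.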

#### **1.3 TS implementation procedure**

The flow chart of the TS algorithm is shown in **Figure 1**. Tabu Search begins with an initial solution *x*0. Neighbor solutions are created by altering the existing solution through a sequence of moves. The best new neighbor is used as the starting point for the next iteration, unless it is in the tabu list. Thus, even if no neighbor solution is better than the current solution, the best neighbor is still selected as the next starting point. A record of the best solution ever found, *x*\*, is maintained separately. The adaptive memory in the tabu lists guides the search by exploiting past information, enabling TS to make strategic choices and accomplish responsive exploration.

TS is implemented using the following steps.


**Figure 1.** *Flow chart of the TS algorithm.*

*A Metaheuristic Tabu Search Optimization Algorithm: Applications to Chemical… DOI: http://dx.doi.org/10.5772/intechopen.98240*

1. Tabu list update:
	- a. The most recent solutions are kept in the tabu list
	- b. Older solutions are discarded
2. Memory structures:
	- a. Recency-based short-term memory (RSM)
	- b. Frequency-based long-term memory (FRM)
3. Intensification and diversification:
	- a. Intensification searches promising areas in detail
	- b. Diversification searches the whole feasible region
4. Aspiration criterion:
	- a. To explore unvisited solutions
	- b. To avoid missing feasible solutions
5. Stopping criteria:
	- a. A fixed number of iterations
	- b. Termination on convergence
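The overall loop described by the flow chart can be sketched in a few lines. All parameter values (tabu tenure, step size, neighborhood size, iteration budget) and the continuous perturbation scheme are illustrative assumptions, not prescribed by the chapter; the essential TS behavior shown is accepting the best non-tabu neighbor even when it is worse, while keeping a separate record of the best solution ever found.

```python
import random
from collections import deque

def tabu_search(f, x0, n_iter=200, n_neigh=20, tenure=7, step=0.5, seed=0):
    """Minimal TS loop for minimizing f over a continuous space."""
    rng = random.Random(seed)
    x = list(x0)
    best, best_f = list(x), f(x)         # best-ever record, kept separately
    tabu = deque(maxlen=tenure)          # recency-based tabu list
    for _ in range(n_iter):
        neighbors = [tuple(xi + rng.uniform(-step, step) for xi in x)
                     for _ in range(n_neigh)]
        # Skip tabu neighbors; fall back to all if every move is tabu.
        candidates = [n for n in neighbors if n not in tabu] or neighbors
        x = list(min(candidates, key=f)) # best move, even if worse than x
        tabu.append(tuple(x))            # forbid revisiting it soon
        if f(x) < best_f:
            best, best_f = list(x), f(x)
    return best, best_f

sphere = lambda x: sum(xi ** 2 for xi in x)
best, best_f = tabu_search(sphere, [3.0, -2.0])
```

Accepting the best candidate regardless of whether it improves on the current point is what lets the search climb out of local optima; the tabu list prevents it from immediately sliding back.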
