**3.1 Multi-strategy** *MAX-MIN* **ant system**

*MM*AS uses Eq. (3) to compute the probability that an ant moves from vertex *i* to vertex *j*. In addition, it incorporates improvements to avoid search stagnation and a pheromone update rule that limits pheromone concentration rates. Eq. (7) presents the pheromone update rule; the limits *τmax* and *τmin* prevent stagnation of pheromone values.

$$\tau_{ij} = \max \left\{ \tau_{\min}, \min \left\{ \tau_{\max}, (1 - \rho) \times \tau_{ij} + \rho \times \Delta \tau_{ij}^{\text{best}} \right\} \right\}, \qquad \rho \in [0, 1] \tag{7}$$

$$\Delta \tau_{ij}^{\text{best}} = \begin{cases} \dfrac{1}{Cost(W_{best})}, & \text{if } \operatorname{arc}(i, j) \in W_{best}, \\[4pt] 0, & \text{otherwise.} \end{cases} \tag{8}$$
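Read together, Eqs. (7) and (8) evaporate all trails, deposit pheromone only on the arcs of the chosen best route, and clamp every value to [*τmin*, *τmax*]. A minimal sketch in Python (the function and variable names are illustrative, not from the chapter):

```python
def pheromone_update(tau, best_route, best_cost, rho, tau_min, tau_max):
    """Apply Eqs. (7)/(8): evaporate every trail, deposit pheromone on
    the arcs of the best route, and clamp to [tau_min, tau_max]."""
    n = len(tau)
    # Arcs of the best route, as consecutive vertex pairs.
    best_arcs = set(zip(best_route, best_route[1:]))
    for i in range(n):
        for j in range(n):
            delta = 1.0 / best_cost if (i, j) in best_arcs else 0.0  # Eq. (8)
            value = (1.0 - rho) * tau[i][j] + rho * delta            # Eq. (7)
            tau[i][j] = max(tau_min, min(tau_max, value))
    return tau
```

Clamping to the interval is what keeps any single arc from becoming so dominant (or so weak) that the colony stagnates.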

*Multi-Strategy* MAX-MIN *Ant System for Solving Quota Traveling Salesman…*

*DOI: http://dx.doi.org/10.5772/intechopen.93860*
*Operations Management - Emerging Trend in the Digital Era*

There are three possibilities for the best route (*Wbest*) considered in the algorithm: the best route in the current iteration, the best route found so far, and the best route since the last time pheromone trails were reinitiated. In the *MM*AS original design [21], these routes were chosen alternately. The initial value of the pheromone trails was *τmax*. If the algorithm reached stagnation, i.e., the best current route remained the same for several iterations, the pheromone values were reinitialized to *τmax*. Assigning *τmax* to pheromone trails produces a small variability among pheromone levels at the start of the search [21].

The implementation of the *MM*AS for the QTSP-PIC extends the original proposal [21] with the following adaptations:

• Ants start at vertex *s*;
• Ants include vertices in the route until the minimum quota is reached;
• Solution *S<sup>k</sup>*, built by the *k*-th ant, is computed by assigning passengers to route *W<sup>k</sup>* with the *RMH* algorithm [5];
• Use of the MS concept.

The ants in the *MM*AS use arc costs to compute heuristic information. In the MS-*MM*AS, ants use four sources for this task, listed in the following.

• Cost oriented: uses *cij* as heuristic information, such that *ηij* = 1/*cij*;
• Time oriented: uses *tij* as heuristic information, such that *ηij* = 1/*tij*. This heuristic information guides ants to vertices that lead to travel time savings;
• Quota oriented: *qj* is used as heuristic information, with *ηij* = *qj*/*cij*. This heuristic information guides ants to vertices that lead to the maximization of the quota collected;
• Passenger oriented: the heuristic information is *ηij* = |*Lj*|/*cij*. This strategy orients ants to maximize the number of travel requests fulfilled.

In the MS concept proposed in [5], every ant decides which strategy to use at random with uniform distribution. A roulette wheel selection improves this concept. The proportion of the wheel assigned to each heuristic information source is directly related to the quality of the solutions achieved with it. Thus, ants learn, at each iteration, which heuristic information performs best. In the final iterations, ants tend to use the heuristic information that proved most promising.

Algorithm 1 presents the pseudo-code of the MS-*MM*AS. It has the following parameters: maximum number of iterations (*maxIter*), number of ants (*m* ∈ Z<sup>>0</sup>), pheromone coefficient (*α* ∈ R<sup>>0</sup>), heuristic coefficient (*β* ∈ R<sup>>0</sup>), evaporation factor (*ρ* ∈ [0, 1]), and pheromone limits (*τmax*, *τmin* ∈ R<sup>>0</sup>). It also has the following parameters and variables:

• *N*: set of vertices;
• *ξ*: index of the heuristic information source;
• *W<sup>k</sup>*: route built by the *k*-th ant;
• *S<sup>k</sup>*: solution produced after applying the *RMH* heuristic [5] to route *W<sup>k</sup>*;
• *W<sup>i</sup>*: the best route built in the *i*-th iteration;
• *S<sup>i</sup>*: the best solution produced in the *i*-th iteration;
• *W*<sup>∗</sup>: the best route found so far;
• *S*<sup>∗</sup>: the best solution found so far;
• *Wbest*: route used as input to the pheromone updating procedure;
• Π: hash table that stores every solution *S<sup>i</sup>* constructed and used as initial solution to the local search algorithm.
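The four heuristic-information sources and the roulette-wheel choice among them can be sketched as follows. The per-source quality scores, and how they are accumulated, are assumptions made for illustration; the chapter only states that the wheel proportions track the quality of the solutions each source produced:

```python
import random

def heuristic_value(xi, c_ij, t_ij, q_j, l_j):
    """eta_ij for the four information sources of the MS-MMAS."""
    if xi == 0:                  # cost oriented: 1 / c_ij
        return 1.0 / c_ij
    if xi == 1:                  # time oriented: 1 / t_ij
        return 1.0 / t_ij
    if xi == 2:                  # quota oriented: q_j / c_ij
        return q_j / c_ij
    return len(l_j) / c_ij       # passenger oriented: |L_j| / c_ij

def choose_heuristic_information(scores, rng=random):
    """Roulette wheel: draw a source index with probability
    proportional to the quality score of the solutions it produced."""
    r = rng.uniform(0.0, sum(scores))
    acc = 0.0
    for xi, score in enumerate(scores):
        if score <= 0.0:
            continue             # a source with no credit gets no slice
        acc += score
        if r <= acc:
            return xi
    return len(scores) - 1       # guard against floating-point round-off
```

Because scores grow with solution quality, early iterations behave close to uniform selection, while late iterations concentrate on the most promising source, matching the learning behavior described above.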


**Algorithm 1:** MS-*MM*AS(*maxIter*, *m*, *α*, *β*, *ρ*, *τmax*, *τmin*)

```
1.  Π ← ∅
2.  Initialize pheromone trails
3.  For k = 1 to m
4.      Wk[2] ← random_city(N \ {s})
5.  For i = 1 to maxIter
6.      For k = 1 to m
7.          ξ ← choose_heuristic_information()
8.          Wk ← build_route(α, β, ξ)
9.          Sk ← assign_passengers(Wk)
10.         Update(Wi, Si)
11.     If Si ∉ Π
12.         Si ← MnLS(Si)
13.         Store(Π, Si)
14.     Update(W*, S*)
15.     Wbest ← alternate(maxIter, i, Wi, W*)
16.     Pheromone_update(Wbest, ρ, τmax, τmin)
17. Return S*
```
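Steps 11-13 can be read as a memoized local search: the hash table Π records solutions already improved, so MnLS never repeats work on a solution it has seen. A sketch, under the assumption that a solution can be keyed by a tuple of its elements (the `local_search` argument stands in for MnLS):

```python
def improve_if_new(pi, solution, local_search):
    """Run the local search only when the solution has not been seen
    before (steps 11-13); pi plays the role of the hash table Π."""
    key = tuple(solution)          # assumes a solution is a sequence
    if key in pi:
        return solution            # skip redundant local-search work
    improved = local_search(solution)
    pi.add(tuple(improved))        # store the improved solution in Π
    return improved
```

Storing the improved solution (rather than the raw one) is an assumption; the chapter only says the solution is stored after the local search.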
The algorithm sets *τmax* as the initial value of the pheromone trails (step 2). Since ants begin at vertex *s*, the second vertex is selected randomly with uniform distribution (steps 3 and 4). The *k*-th ant decides which heuristic information, *ξ*, to use (step 7) and builds a route (step 8). The algorithm uses the *RMH* heuristic to assign passengers to *W<sup>k</sup>*, completing a solution (step 9), and then updates *W<sup>i</sup>* and *S<sup>i</sup>* (step 10). If *S<sup>i</sup>* ∉ Π, the MnLS algorithm is applied to *S<sup>i</sup>* (step 12). After the local search, the algorithm stores *S<sup>i</sup>* in the hash table Π (step 13). In later iterations, the current *S<sup>i</sup>* is used as the starting solution of the local search only if it is not in Π; this prevents redundant work. The algorithm updates the best route and the best solution found so far, *W*<sup>∗</sup> and *S*<sup>∗</sup> (step 14). Similar to the original design of the *MM*AS, *W<sup>i</sup>* is assigned to *Wbest* during the first 25% of the iterations or when *i* lies in [50%, 75%] of *maxIter*; *W*<sup>∗</sup> is assigned to *Wbest* when *i* lies in [25%, 50%] of *maxIter* or when it is greater than or equal to 75% of *maxIter* (step 15). This procedure improves diversification by shifting the emphasis over the search space. *Wbest* is used to update the pheromones (step 16). Finally, the algorithm returns *S*<sup>∗</sup>.
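The alternation rule of step 15 described above can be sketched as follows; the exact boundary handling at the 25/50/75% marks is an assumption, since the chapter does not pin it down:

```python
def alternate(max_iter, i, w_iter, w_star):
    """Select W_best for the pheromone update (step 15).
    The iteration-best route is used in the first and third quarters
    of the run; the best-so-far route is used in the second and last."""
    progress = i / max_iter
    if progress <= 0.25 or 0.50 < progress <= 0.75:
        return w_iter    # emphasize diversification
    return w_star        # emphasize intensification
```

Alternating the deposit target this way shifts the emphasis between exploring new regions and reinforcing the overall best route.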

to adjust the parameters. Those instances were selected at random. IRACE uses the *maxExperiments* and *maxTime* parameters as stopping criteria. These parameters were set as follows: *maxExperiments* = 10<sup>3</sup> and *maxTime* = ∞.

For the asymmetric instance set, the parameters were defined as follows: *maxIter* = 31; *m* = 51; *α* = 3.08; *β* = 10.31; *ρ* = 0.52; *τmax* = 0.8; and *τmin* = 0.2. For the symmetric instance set, the parameters were: *maxIter* = 29; *m* = 57; *α* = 2.92; *β* = 9.53; *ρ* = 0.67; *τmax* = 0.7; and *τmin* = 0.2.

**4.3 Results**

In this section, the results of the MS-*MM*AS are compared to those produced by the other three ACO variants proposed in [5]: AS, ACS, and MS-ACS. **Table 1** presents the comparison between the ant algorithms. The best results obtained by the MS-*MM*AS were compared with those achieved by each ant algorithm proposed in [5]. The results are in the *X* x *Y* format, where *X* and *Y* stand for the number of instances in which ant algorithm *X* found the best solution and the number of instances in which ant algorithm *Y* found the best solution, respectively.

| | Asymmetric | | | Symmetric | | |
| --- | --- | --- | --- | --- | --- | --- |
| | **AS** | **ACS** | **MS-ACS** | **AS** | **ACS** | **MS-ACS** |
| MS-*MM*AS | 68 x 0 | 68 x 1 | 45 x 17 | 66 x 1 | 68 x 2 | 48 x 14 |

**Table 1.**
*Comparison between the ant algorithms.*

**Table 1** shows that the MS-*MM*AS was the algorithm that reported the best solution for most instances. This algorithm performed better than the other ACO variants due to its enhanced pheromone update procedures. The MS implementation with roulette wheel selection proved to be effective at finding the best heuristic information used by the ants during the run. **Table 1** also shows that the MS-*MM*AS provides results of better quality than the MS-ACS in most symmetric cases. The MS-ACS was superior to the MS-*MM*AS in seventeen asymmetric cases and fourteen symmetric instances. It was observed that the pseudo-random action choice rule of MS-ACS [20], which allows for a greedier solution construction, proved to be a good algorithmic strategy for solving large instances.

**Tables 2** and **3** show the ranks of the ant algorithms based on the Friedman test [23] with the Nemenyi [24] post-hoc test. The first column of these tables presents the subsets of instances grouped according to their sizes. The other columns present the p-values of the Friedman test and the ranks from the Nemenyi post-hoc test. In the post-hoc test, the ranks range from *a* to *c*. The *c* rank indicates that the algorithm achieved the worst performance in comparison to the others; the *a* rank indicates the opposite. If the performances of two or more algorithms are similar, the test assigns the same rank to them. In this experiment, the significance level was set to 0.05.

The p-values presented in **Tables 2** and **3** show that the performance of the ant algorithms was not similar, i.e., the null hypothesis [24] is rejected in all cases. These tables show that the MS-*MM*AS ranks higher than AS and ACS for all subsets. The ranks of the MS-ACS and the MS-*MM*AS were the same in most cases. This implies that the performance of only these two algorithms was similar, i.e., the relative distance between the results achieved by these two algorithms is small. To analyze the variability of the results provided by each ant algorithm compared to the best results so far for the benchmark set, three metrics regarding the
