**7. Experimental results**

This section presents the experimental design and the results derived from testing the approach described in Section 6. To show the performance of the SA algorithm, two experiments were developed. The first experiment aimed to fine-tune the probabilities with which the neighborhood functions are selected. The second experiment evaluated the performance of SA over a new benchmark proposed in this chapter. The results were compared against two well-known tools from the literature that construct CAs: TConfig<sup>1</sup> (recursive constructions) and ACTS<sup>2</sup> (a greedy algorithm named IPOG-F), respectively.

In all the experiments the following parameters were used for our SA implementation:


1. Initial temperature *Ti* = 4.0
2. Final temperature *Tf* = 1.0*E* − 10
3. Cooling factor *α* = 0.99
4. Maximum neighboring solutions per temperature *L* = (*N* × *k* × *v*)<sup>2</sup>
5. Frozen factor *φ* = 11
6. According to the results shown in Section 7.1, the neighborhood function N3(*s*, *x*) is applied with a probability *P* = 0.3

<sup>1</sup> TConfig: http://www.site.uottawa.ca/~awilliam/TConfig.jar

<sup>2</sup> ACTS: http://csrc.nist.gov/groups/SNS/acts/index.html
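Under an assumed reading of these parameters (in particular, that the frozen factor *φ* = 11 means the search stops after 11 consecutive temperature levels without improving the best solution), the overall SA loop might be sketched as follows. This is only an illustration: `cost`, `neighbor_a` and `neighbor_b` are placeholders, not the chapter's actual evaluation and neighborhood functions.

```python
import math
import random

def simulated_annealing(s0, cost, neighbor_a, neighbor_b,
                        Ti=4.0, Tf=1.0e-10, alpha=0.99,
                        L=100, phi=11, p_a=0.3):
    """Minimal SA loop wired to the parameter values listed above.

    `cost`, `neighbor_a` and `neighbor_b` are placeholders; the
    frozen-factor semantics (phi temperature levels without
    improvement) is our assumption, not the chapter's definition.
    """
    s = s0
    best, best_cost = s, cost(s)
    T, idle = Ti, 0
    while T > Tf and idle < phi:
        improved = False
        for _ in range(L):                       # L neighbors per temperature
            f = neighbor_a if random.random() < p_a else neighbor_b
            s_new = f(s)
            delta = cost(s_new) - cost(s)
            # accept improvements always, worse moves with Boltzmann probability
            if delta <= 0 or random.random() < math.exp(-delta / T):
                s = s_new
                if cost(s) < best_cost:
                    best, best_cost = s, cost(s)
                    improved = True
        idle = 0 if improved else idle + 1       # "frozen" after phi idle levels
        T *= alpha                               # geometric cooling
    return best
```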

At this point, the master can be started using the specification described above. After checking that everything is in order, the master will wait for incoming connections from the workers. Workers are generic jobs, submitted to the Grid, that can perform any operation requested by the master. In addition, these workers must be submitted to the selected CEs in the pre-processing stage. When a worker registers with the master, the master will automatically assign it a task.

This schema has several advantages, derived from the fact that a worker can execute more than one task. Only when a worker has successfully completed a task will the master assign it a new one. In addition, when a worker demands a new task it is not necessary to submit a new job. In this way, the queuing time of the tasks is greatly reduced. Moreover, the dynamic behavior of this schema achieves better performance results in comparison to the asynchronous schema.

However, there are also some disadvantages that must be mentioned. The first issue refers to the unidirectional connectivity between the master host and the worker hosts (Grid nodes). While the master host needs inbound connectivity, the worker nodes need outbound connectivity. The connectivity problem in the master can be solved easily by opening a port on the local host; however, the connectivity of a worker depends on the remote system configuration (the CE). This extra detail must therefore be taken into account when selecting the computing resources. Another issue is defining an adequate timeout value. If, for some reason, a task working correctly suffers from temporary connection problems and exceeds the timeout threshold, the worker will be removed by the master. Finally, a key factor is identifying the right number of worker agents and tasks. If the number of workers is on the order of thousands (i.e. when *N* is about 1000), bottlenecks could appear, with the master being overwhelmed by the excessive number of connections.
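As an illustration of this master-worker schema, the following sketch (hypothetical Python; the class and method names are our own, not the chapter's actual Grid implementation) shows a master that assigns a task on registration, reassigns only on successful completion, and re-queues the task of a worker that exceeds the timeout:

```python
import queue
import threading
import time

class Master:
    """Hypothetical sketch of the master's bookkeeping (not the actual Grid code)."""

    def __init__(self, tasks, timeout=5.0):
        self.pending = queue.Queue()
        for t in tasks:
            self.pending.put(t)
        self.assigned = {}              # worker id -> (task, assignment time)
        self.results = []
        self.timeout = timeout
        self.lock = threading.RLock()   # re-entrant: complete() calls request_task()

    def register(self, worker_id):
        # when a worker registers, the master automatically assigns it a task
        return self.request_task(worker_id)

    def request_task(self, worker_id):
        with self.lock:
            self._drop_stalled()
            try:
                task = self.pending.get_nowait()
            except queue.Empty:
                return None             # no work left for this worker
            self.assigned[worker_id] = (task, time.time())
            return task

    def complete(self, worker_id, result):
        # only after a successful completion does the master reassign a new task
        with self.lock:
            self.assigned.pop(worker_id, None)
            self.results.append(result)
            return self.request_task(worker_id)

    def _drop_stalled(self):
        # a worker that exceeds the timeout is removed and its task re-queued
        now = time.time()
        for wid, (task, t0) in list(self.assigned.items()):
            if now - t0 > self.timeout:
                del self.assigned[wid]
                self.pending.put(task)
```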




Moreover, the characteristics of the Grid infrastructure employed for carrying out the experiments are:


#### **7.1 Fine tuning the probability of execution of the neighborhood functions**

It is well known that the performance of an SA algorithm is sensitive to parameter tuning. For this reason, we followed a methodology to fine-tune the probabilities of executing the two neighborhood functions used in our SA algorithm. The fine tuning was based on the following linear Diophantine equation,

$$P_1 x_1 + P_2 x_2 = q$$

where *xi* represents a neighborhood function and its value is set to 1, *Pi* is a value in {0.0, 0.1, ..., 1.0} that represents the probability of executing *xi*, and *q* is set to 1.0, the maximum total probability of executing any *xi*. A solution to the given linear Diophantine equation must satisfy

$$\sum_{i=1}^{2} P_i x_i = 1.0$$

This equation has 11 solutions; each solution is an experiment that tests the degree of participation of each neighborhood function in our SA implementation in the construction of a CA. Every combination of the probabilities was applied by SA to construct the set of CAs shown in Table 5(a), and each experiment was run 31 times; from the data obtained for each experiment we calculated the median. A summary of the performance of SA with the probabilities that solved 100% of the runs is shown in Table 5(b).
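The 11 solutions of the Diophantine equation can be enumerated mechanically; a small illustrative sketch:

```python
# Enumerate the 11 probability pairs (P1, P2) with P1 + P2 = 1.0 and
# each Pi taken from {0.0, 0.1, ..., 1.0}; each pair defines one tuning
# experiment for the two neighborhood functions.
solutions = [(i / 10, (10 - i) / 10) for i in range(11)]
for p1, p2 in solutions:
    assert abs(p1 + p2 - 1.0) < 1e-9
print(solutions)
```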

Finally, given the results shown in Fig. 5, the best configuration of probabilities was *P*<sub>1</sub> = 0.3 and *P*<sub>2</sub> = 0.7, because it found the CAs in the smallest time (median value). The values *P*<sub>1</sub> = 0.3 and *P*<sub>2</sub> = 0.7 were kept fixed in the second experiment.

In the next subsection, we present further computational results obtained from a performance comparison among our SA algorithm, a well-known greedy algorithm (IPOG-F) and a tool named TConfig that constructs CAs using recursive constructions.

#### **7.2 Comparing SA with the state-of-the-art algorithms**

For our second experiment we obtained the ACTS and TConfig software and created a new benchmark composed of 60 ternary CA instances where 5 ≤ *k* ≤ 100 and 2 ≤ *t* ≤ 4.

The SA implementation reported by (Cohen et al., 2003) for solving the CAC problem was intentionally omitted from this comparison because, as its authors recognize, this algorithm fails to produce competitive results when the strength of the arrays is *t* ≥ 3.



Table 5. (a) A set of 7 CA configurations; (b) performance of SA with the 11 combinations of probabilities that solved 100% of the runs to construct the CAs listed in (a).

(a)

| Id | CA description |
|------|--------------------|
| *ca*<sub>1</sub> | *CA*(19; 2, 30, 3) |
| *ca*<sub>2</sub> | *CA*(35; 3, 5, 3) |
| *ca*<sub>3</sub> | *CA*(58; 3, 10, 3) |
| *ca*<sub>4</sub> | *CA*(86; 4, 5, 3) |
| *ca*<sub>5</sub> | *CA*(204; 4, 10, 3) |
| *ca*<sub>6</sub> | *CA*(243; 5, 5, 3) |
| *ca*<sub>7</sub> | *CA*(1040; 5, 15, 3) |

(b)

| *p*<sub>1</sub> | *p*<sub>2</sub> | ca1 | ca2 | ca3 | ca4 | ca5 | ca6 | ca7 |
|-----|-----|---------:|-------:|-------:|-------:|---------:|--------:|------:|
| 0.0 | 1.0 | 4789.763 | 3.072 | 46.989 | 12.544 | 3700.038 | 167.901 | 0.102 |
| 0.1 | 0.9 | 1024.635 | 0.098 | 0.299 | 0.236 | 344.341 | 3.583 | 0.008 |
| 0.2 | 0.8 | 182.479 | 0.254 | 0.184 | 0.241 | 173.752 | 1.904 | 0.016 |
| 0.3 | 0.7 | 224.786 | 0.137 | 0.119 | 0.222 | 42.950 | 1.713 | 0.020 |
| 0.4 | 0.6 | 563.857 | 0.177 | 0.123 | 0.186 | 92.616 | 3.351 | 0.020 |
| 0.5 | 0.5 | 378.399 | 0.115 | 0.233 | 0.260 | 40.443 | 1.258 | 0.035 |
| 0.6 | 0.4 | 272.056 | 0.153 | 0.136 | 0.178 | 69.311 | 2.524 | 0.033 |
| 0.7 | 0.3 | 651.585 | 0.124 | 0.188 | 0.238 | 94.553 | 2.127 | 0.033 |
| 0.8 | 0.2 | 103.399 | 0.156 | 0.267 | 0.314 | 81.611 | 5.469 | 0.042 |
| 0.9 | 0.1 | 131.483 | 0.274 | 0.353 | 0.549 | 76.379 | 4.967 | 0.110 |
| 1.0 | 0.0 | 7623.546 | 15.905 | 18.285 | 23.927 | 1507.369 | 289.104 | 2.297 |

Fig. 5. Performance of our SA with the 11 combinations of probabilities.

The results from this experiment are summarized in Table 6, which presents in its first two columns the strength *t* and the degree *k* of the selected benchmark instances. The best sizes *N* found by the TConfig tool, the IPOG-F algorithm and our SA algorithm are listed in columns 3, 4 and 5, respectively. Next, Fig. 6 compares the results shown in Table 6.

From Table 6 and Fig. 6 we can observe that our SA algorithm gets solutions of better quality than the other two tools. Finally, each of the 60 ternary CAs constructed by our SA algorithm has been verified by the algorithm described in Section 3. In order to minimize the execution time required by our SA algorithm, the following rule has been applied when choosing the right Grid execution schema: experiments involving a value of the parameter *N* equal to or less than 500 have been executed with the synchronous schema, while the rest have been performed using the asynchronous schema.
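The verification step can be illustrated with a brute-force check (a naive sketch, not the verification algorithm of Section 3): an *N* × *k* array over {0, ..., *v* − 1} is a *CA*(*N*; *t*, *k*, *v*) if every *t*-subset of columns covers all *v<sup>t</sup>* symbol combinations.

```python
from itertools import combinations

def is_covering_array(rows, t, v):
    """Naive check that `rows` (an N x k list of lists over {0..v-1})
    covers every t-way interaction; exponential cost, illustration only."""
    k = len(rows[0])
    for cols in combinations(range(k), t):
        seen = {tuple(row[c] for c in cols) for row in rows}
        if len(seen) < v ** t:      # some t-tuple of symbols never appears
            return False
    return True
```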


Table 6. Comparison among TConfig, IPOG-F and our SA to construct ternary CAs when 5 ≤ *k* ≤ 100 and 2 ≤ *t* ≤ 4.

Fig. 6. Graphical comparison of the performance among TConfig, IPOG-F and our SA to construct ternary CAs when 5 ≤ *k* ≤ 100 and 2 ≤ *t* ≤ 4.

#### **8. Conclusions**

In large problem domains, testing is limited by cost. Every test adds to the cost, so CAs are an attractive option for testing.

Simulated annealing (SA) is a general-purpose stochastic optimization method that has proven to be an effective tool for approximating globally optimal solutions to many types of NP-hard combinatorial optimization problems. However, the sequential implementation of the SA algorithm has a slow convergence, which can be improved using Grid or parallel implementations.

This work focused on constructing ternary CAs with a new SA approach, which integrates three key features that largely determine its performance.

The empirical evidence presented in this work showed that SA improved the size of many CAs in comparison with tools that are among the best found in the state of the art of the construction of CAs.

To make up for the time the algorithm takes to converge, we proposed an implementation of our SA algorithm for Grid Computing. The main conclusion extracted from this point was the possibility of using two different schemas (asynchronous and synchronous) depending on the size of the experiment. On the one hand, the synchronous schema achieves better performance but is limited by the maximum number of slave connections that the master can keep track of. On the other hand, the asynchronous schema is slower, but experiments with a huge value of *N* can be seamlessly performed.

As future work, we aim to extend the experiment where 100 ≤ *k* ≤ 20000 and 2 ≤ *t* ≤ 12, and compare our results against the best upper bounds found in the literature (Colbourn, 2011).

Finally, the new CAs are available in the CINVESTAV Covering Array Repository (CAR), which is available upon request at http://www.tamps.cinvestav.mx/~jtj/CA.php.

**9. Acknowledgments**

The authors thankfully acknowledge the computer resources and assistance provided by the Spanish Supercomputing Network (TIRANT-UV). This research work was partially funded by the following projects: CONACyT 58554, Calculo de Covering Arrays; 51623 Fondo Mixto CONACyT y Gobierno del Estado de Tamaulipas.

**10. References**

Aarts, E. H. L. & Van Laarhoven, P. J. M. (1985). Statistical Cooling: A General Approach to Combinatorial Optimization Problems, *Philips Journal of Research* 40: 193–226.

Almond, J. & Snelling, D. (1999). Unicore: Uniform access to supercomputing as an element of electronic commerce, *Future Generation Computer Systems* 613: 1–10. http://dx.doi.org/10.1016/S0167-739X(99)00007-2.

Atiqullah, M. (2004). An efficient simple cooling schedule for simulated annealing, *Proceedings of the International Conference on Computational Science and its Applications - ICCSA 2004*, Vol. 3045 of *Lecture Notes in Computer Science*, Springer-Verlag, pp. 396–404. http://dx.doi.org/10.1007/978-3-540-24767-8_41.

Avila-George, H., Torres-Jimenez, J., Hernández, V. & Rangel-Valdez, N. (2010). Verification of general and cyclic covering arrays using grid computing, *Proceedings of the Third international conference on Data management in grid and peer-to-peer systems - GLOBE 2010*, Vol. 6265 of *Lecture Notes in Computer Science*, Springer-Verlag, pp. 112–123. http://dx.doi.org/10.1007/978-3-642-15108-8_10.

Bryce, R. C. & Colbourn, C. J. (2007). The density algorithm for pairwise interaction testing, *Softw Test Verif Rel* 17(3): 159–182. http://dx.doi.org/10.1002/stvr.365.

Bush, K. A. (1952). Orthogonal arrays of index unity, *Ann Math Stat* 23(3): 426–434. http://dx.doi.org/10.1214/aoms/1177729387.

Calvagna, A., Gargantini, A. & Tramontana, E. (2009). Building T-wise Combinatorial Interaction Test Suites by Means of Grid Computing, *Proceedings of the 18th IEEE International Workshops on Enabling Technologies: Infrastructures for Collaborative Enterprises - WETICE 2009*, IEEE Computer Society, pp. 213–218. http://dx.doi.org/10.1109/WETICE.2009.52.
