**5.1 Time to convergence**

To study the convergence of the w-GA algorithm, we perform three levels of experiments according to the size of the workflow. At each level, we use the w-GA to map workflows to the RMSs. The maximum number of generations is 1000. The best *makespan* found so far is recorded at 0, 100, 200, 400, 600, 800, and 1000 generations. The results are presented in Table 4.

Table 4. w-GA convergence experiment results

From the data in Table 4, we see a trend that the w-GA algorithm needs more generations to converge as the size of the workflow increases.

In the simple-level experiment, we map workflows having from 7 to 13 sub-jobs to the RMSs. From these data, we can see that the w-GA converges to the same value after fewer than 200 generations in most cases.

At the intermediate level of the experiment, where we map workflows having from 14 to 20 sub-jobs to the RMSs, the situation is slightly different from the simple level. While many cases again show the w-GA converging to the same value after fewer than 200 generations, in some cases the algorithm finds a better solution only after 600 or 800 generations.

When the size of the workflow increases to between 21 and 32 sub-jobs, as in the advanced-level experiment, convergence after fewer than 200 generations happens in only one case. In the other cases, the w-GA needs from 400 to more than 800 generations.

**5.2 Performance comparison**
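The checkpoint sampling described above can be sketched as follows. This is a minimal, hypothetical harness only: the fitness function, population, and mutation operator are stand-ins, not the actual w-GA components.

```python
import random

# Checkpoints at which the best makespan found so far is recorded,
# mirroring the sampling used for Table 4.
CHECKPOINTS = (0, 100, 200, 400, 600, 800, 1000)

def track_convergence(evaluate, population, mutate, generations=1000, seed=0):
    """Run a toy evolutionary loop and record the best value found so far
    at each checkpoint. `evaluate`, `population`, and `mutate` are
    placeholders for the real w-GA operators."""
    rng = random.Random(seed)
    best = min(evaluate(ind) for ind in population)
    history = {}
    for gen in range(generations + 1):
        if gen in CHECKPOINTS:
            history[gen] = best
        # Toy generation: mutate one individual, keep it if it improves.
        child = mutate(rng.choice(population), rng)
        if evaluate(child) < best:
            best = evaluate(child)
            population.append(child)
    return history
```

Plotting `history` per workflow size reproduces the kind of convergence trend discussed for Table 4: the recorded best makespan is non-increasing across checkpoints.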

We are not aware of a resource model or workflow model similar to the one stated in Section 2. To perform the performance evaluation, in previous work we adapted the w-DCP, GRASP, min-min, max-min, and sufferage algorithms to our problem Quan (2007). The extensive experiment results are shown in Figure 12.

Fig. 12. Overall performance comparison among w-Tabu and other algorithms Quan (2007)
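Of the baseline heuristics named above, min-min is representative. For reference, here is the textbook min-min heuristic for independent tasks; the SLA-aware workflow adaptation used in the comparison is more involved, and the ETC (expected time to compute) matrix here is a hypothetical input.

```python
def min_min(etc):
    """Textbook min-min list scheduling for independent tasks.

    etc[t][m] is the estimated execution time of task t on machine m.
    Repeatedly pick the (task, machine) pair with the earliest completion
    time over all unscheduled tasks, and assign it. Returns the
    task-to-machine assignment and the resulting makespan."""
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines          # machine ready times
    unscheduled = set(range(n_tasks))
    assignment = [None] * n_tasks
    while unscheduled:
        # For each task, its minimum completion time over all machines;
        # then take the task whose minimum is smallest ("min of mins").
        finish, t, m = min(
            (ready[m] + etc[t][m], t, m)
            for t in unscheduled for m in range(n_machines)
        )
        assignment[t] = m
        ready[m] = finish
        unscheduled.remove(t)
    return assignment, max(ready)
```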

The experiment results in Figure 12 show that the w-Tabu algorithm has the highest performance. For that reason, we only need to consider the w-Tabu algorithm in this work. To compare the performance of the w-GA algorithm with other algorithms, we map 18 workflows to RMSs using the w-GA, w-Tabu, and n-GA algorithms. As in the experiment studying the convergence of the w-GA algorithm, this experiment is also divided into three levels according to the size of the workflow. We run the n-GA algorithm with 1000 generations. We run the w-GA algorithm with both 120 generations and 1000 generations, yielding the w-GA120 and w-GA1000 configurations respectively. Running the w-GA for 1000 generations serves a theoretical purpose: we want to see the limiting performance of the w-GA and the n-GA given a sufficiently long period. From this theoretical perspective, we compare the performance of the w-GA1000, w-Tabu, and n-GA1000 algorithms. Running the w-GA for 120 generations serves a practical purpose: we want to compare the performance of the w-Tabu and w-GA algorithms given the same runtime. For each mapping instance, the *makespan* of the solution and the runtime of the algorithm are recorded. The experiment results are presented in Table 5.

Across all three levels of the experiment, we can see the dominance of the w-GA1000 algorithm. Over the whole experiment, the w-GA1000 found 14 better and 3 worse solutions than both the n-GA1000 and the w-Tabu algorithms. The overall performance comparison, in average relative value, is presented in Figure 13. From this figure, we can see that the w-GA1000 is about 21% better than the w-Tabu and n-GA1000 algorithms. The data in Table 5 and Figure 13 also show roughly equal performance between the w-Tabu and the n-GA1000 algorithms.
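The chapter does not spell out the exact normalization behind the "average relative value" in Figure 13. One plausible reading, shown here as an illustrative assumption, normalizes each algorithm's makespan per instance by the best makespan any algorithm achieved on that instance, then averages per algorithm.

```python
def average_relative_makespan(makespans_by_alg):
    """Assumed 'average relative value' metric: per instance, divide each
    algorithm's makespan by the best makespan achieved by any algorithm
    on that instance, then average the ratios per algorithm. A value of
    1.0 means the algorithm was best on every instance."""
    algs = list(makespans_by_alg)
    n = len(makespans_by_alg[algs[0]])
    totals = {a: 0.0 for a in algs}
    for i in range(n):
        best = min(makespans_by_alg[a][i] for a in algs)
        for a in algs:
            totals[a] += makespans_by_alg[a][i] / best
    return {a: totals[a] / n for a in algs}
```

Under this reading, "about 21% better" would correspond to the other algorithms' averages sitting roughly 21% above the w-GA1000's.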


Regarding runtime, the w-GA1000 algorithm is slightly slower than the n-GA1000 algorithm because the w-GA is more complicated than the n-GA. However, the runtimes of both the w-GA1000 and the n-GA1000 are far longer than the runtime of the w-Tabu algorithm: on average, the w-GA1000 and the n-GA1000 take 10 times longer than the w-Tabu algorithm.

w-TG: A Combined Algorithm to Optimize the Runtime of the Grid-Based Workflow Within an SLA Context

The long runtime of the w-GA1000 and the n-GA1000 is a great disadvantage when they are employed in a real environment. In practice, though, a broker that spends 1 or 2 minutes scheduling a workflow is not acceptable. As the w-Tabu algorithm needs only 1 to 10 seconds, we run the w-GA algorithm for 120 generations so that it has roughly the same runtime as the w-Tabu algorithm. As the n-GA algorithm does not perform well even at 1000 generations, we do not consider it within the practical framework. In particular, we focus on comparing the performance of the w-GA120 and the w-Tabu algorithm.

From the data in Table 5, we see a trend that the performance of the w-GA120 decreases relative to the w-Tabu as the size of the workflow increases.

At the simple and intermediate levels of the experiment, the solution quality of the w-GA120 is better than that of the w-Tabu algorithm: the w-GA found 3 worse and 11 better solutions than the w-Tabu algorithm.

However, at the advanced-level experiment, the quality of the w-GA120 is not acceptable. Apart from one equal solution, the w-GA120 mostly found worse solutions than the w-Tabu algorithm. This is because of the large search space: with a small number of generations, the w-GA cannot find high-quality solutions.

Table 5. Performance comparison among the w-GA and other algorithms

Fig. 13. Overall performance comparison among the w-GA and other algorithms

**6. The combined algorithm**

From the experiment results of the w-GA120 and w-Tabu algorithms, we have noted the following observations.

• The w-Tabu algorithm has a runtime of 1 to 10 seconds, and this range is generally acceptable. Thus, the mapping algorithm could use the maximum allowed time period, i.e., 10 seconds in this case, to find the highest-quality solution possible.

• Both the w-GA and the w-Tabu found solutions of greatly differing quality in some cases. This means that in some cases the w-GA found a very high-quality solution while the w-Tabu found a very low-quality one, and vice versa.

• When the size of the workflow is very large and the runtime of the w-GA and the w-Tabu to find a solution also reaches the limit, the quality of the w-GA120 is not as good as that of the w-Tabu algorithm.

From these observations, we propose another algorithm, combining the w-GA120 and the w-Tabu algorithm. The new algorithm, called w-TG, is presented in Algorithm 6.

From the experiment data in Table 5, the runtime of the w-TG algorithm is from 4 to 10 seconds. We run the w-GA with 120 generations in all cases for two reasons.

• If the size of the workflow is large, increasing the number of generations significantly increases the runtime of the algorithm; this runtime may then exceed the acceptable range.
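Algorithm 6 itself is not reproduced in this excerpt. A minimal sketch of the simplest combination consistent with the observations above — run both solvers on the same instance and keep the better solution — might look like the following; the two solver callables and their `(solution, makespan)` return shape are assumptions, not the chapter's actual interfaces.

```python
def w_tg(instance, run_w_ga120, run_w_tabu):
    """Hypothetical combination step behind a w-TG-style algorithm: run
    both the w-GA (at 120 generations) and the w-Tabu solver on the same
    mapping instance and return whichever solution has the lower
    makespan. Each solver callable is assumed to return a
    (solution, makespan) pair."""
    ga_solution, ga_makespan = run_w_ga120(instance)
    tabu_solution, tabu_makespan = run_w_tabu(instance)
    if ga_makespan <= tabu_makespan:
        return ga_solution, ga_makespan
    return tabu_solution, tabu_makespan
```

This structure matches the reported behavior: the combined runtime is bounded by the two component runtimes, and the result is never worse than the better of the w-GA120 and the w-Tabu on any instance.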

