**4. Computational results**

In this section, a comparative study of the effectiveness of the proposed SPPSO algorithm is carried out. SPPSO is tested against two other recently introduced PSO algorithms: the PSOspv algorithm of Tasgetiren et al. [17] and the DPSO algorithm of Pan et al. [21]. Two experimental frameworks, E1 and E2, are considered; they differ in the discrete uniform distribution used to generate the job processing times. That is, the processing time of each job is drawn from the uniform distribution U[1,100] for experiment E1 and from U[100,800] for experiment E2. The SPPSO, PSOspv and DPSO algorithms are all coded in C and run on a PC with a 2.6 GHz CPU and 512 MB of memory. The population size used by all algorithms is the number of jobs (*n*).

For SPPSO and DPSO, the social and cognitive parameters are set to *c*1 = *c*2 = 0.5, the initial inertia weight is set to 0.9 and never decreased below 0.40, and the decrement factor β is fixed at 0.999. For the PSOspv algorithm, the social and cognitive parameters are fixed at *c*1 = *c*2 = 2, with the same inertia-weight schedule (initial weight 0.9, lower bound 0.40, decrement factor β = 0.999). Each algorithm is run for 20000/*n* iterations. All three algorithms are applied without embedding any kind of local search.

Problem instances are generated for 3, 4, 5, 10, 20, 30, 40 and 50 machines and 20, 50, 100, 200 and 500 jobs. To allow for variation, 10 instances are generated for each problem size, so the overall number of instances adds up to 350. The measures considered in this chapter mainly concern solution quality.

The new solution is selected among *s1*, *s2* and *s3* based on their fitness values. The selected particle may be worse than the current solution, which keeps the swarm diverse. Convergence is obtained by updating the personal best of each new particle and the global best.

Fig. 2. Pseudo code of the proposed SPPSO algorithm for PMS problem


The performance measure is the relative quality C/LB, where C is the makespan achieved by the algorithm and LB is the lower bound of the instance calculated by Eq. (3). When C reaches LB the ratio equals 1.0; otherwise it remains larger.


Table 2. Results for experiment E1:p~U(1,100)

A Stochastically Perturbed Particle Swarm Optimization for Identical Parallel Machine Scheduling Problems

E1: *p*~U(1,100); E2: *p*~U(100,800); nopt: number of the 10 instances in each group solved to the lower bound; CPU: average CPU time in seconds.

| *m* | *n* | PSOspv nopt E1 | PSOspv CPU E1 | PSOspv nopt E2 | PSOspv CPU E2 | DPSO nopt E1 | DPSO CPU E1 | DPSO nopt E2 | DPSO CPU E2 | SPPSO nopt E1 | SPPSO CPU E1 | SPPSO nopt E2 | SPPSO CPU E2 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 3 | 20 | 10 | 0.008 | 10 | 0.266 | 10 | 0.014 | 10 | 0.308 | 10 | 0.005 | 10 | 0.241 |
| | 50 | 10 | 0.015 | 10 | 0.571 | 10 | 0.008 | 10 | 0.077 | 10 | 0.003 | 10 | 0.029 |
| | 100 | 10 | 0.038 | 9 | 2.020 | 10 | 0.010 | 10 | 0.091 | 10 | 0.005 | 10 | 0.023 |
| | 200 | 10 | 0.310 | 9 | 8.054 | 10 | 0.044 | 10 | 0.239 | 10 | 0.019 | 10 | 0.062 |
| | 500 | 10 | 3.172 | 10 | 57.143 | 10 | 0.259 | 10 | 1.437 | 10 | 0.083 | 10 | 0.180 |
| 4 | 20 | 10 | 0.112 | 1 | 1.007 | 10 | 0.201 | 3 | 0.383 | 10 | 0.096 | 4 | 0.406 |
| | 50 | 10 | 0.013 | 2 | 0.836 | 10 | 0.055 | 7 | 0.294 | 10 | 0.024 | 10 | 0.202 |
| | 100 | 10 | 0.027 | 9 | 1.676 | 10 | 0.059 | 8 | 0.355 | 10 | 0.019 | 10 | 0.126 |
| | 200 | 10 | 0.202 | 9 | 4.391 | 10 | 0.115 | 7 | 0.865 | 10 | 0.053 | 10 | 0.239 |
| | 500 | 10 | 3.169 | 10 | 11.438 | 10 | 1.085 | 10 | 3.635 | 10 | 0.234 | 10 | 0.485 |
| 5 | 20 | 7 | 0.206 | 0 | 0.603 | 8 | 0.218 | 0 | 0.363 | 9 | 0.233 | 0 | 0.430 |
| | 50 | 9 | 0.084 | 8 | 0.678 | 10 | 0.134 | 1 | 0.274 | 10 | 0.052 | 5 | 0.286 |
| | 100 | 8 | 0.028 | 5 | 2.308 | 10 | 0.199 | 3 | 0.424 | 10 | 0.072 | 9 | 0.255 |
| | 200 | 9 | 0.408 | 9 | 4.877 | 10 | 0.397 | 2 | 1.023 | 10 | 0.127 | 9 | 0.357 |
| | 500 | 6 | 3.177 | 9 | 15.739 | 10 | 2.502 | 3 | 4.576 | 10 | 0.453 | 9 | 0.720 |
| 10 | 20 | 0 | 0.414 | 0 | 0.429 | 0 | 0.374 | 0 | 0.401 | 0 | 0.559 | 0 | 0.449 |
| | 50 | 5 | 0.799 | 0 | 0.922 | 0 | 0.322 | 0 | 0.329 | 8 | 0.344 | 0 | 0.399 |
| | 100 | 4 | 0.778 | 1 | 2.853 | 0 | 0.512 | 0 | 0.542 | 8 | 0.354 | 0 | 0.435 |
| | 200 | 0 | 0.208 | 1 | 10.314 | 0 | 1.189 | 0 | 1.259 | 5 | 0.630 | 0 | 0.673 |
| | 500 | 0 | 3.194 | 5 | 52.414 | 0 | 4.869 | 0 | 5.207 | 2 | 1.347 | 0 | 1.439 |
| 20 | 50 | 0 | 0.960 | 0 | 1.514 | 0 | 0.438 | 0 | 0.446 | 0 | 0.450 | 0 | 0.471 |
| | 100 | 0 | 2.840 | 0 | 2.883 | 0 | 0.627 | 0 | 0.650 | 0 | 0.510 | 0 | 0.551 |
| | 200 | 0 | 10.385 | 0 | 10.671 | 0 | 1.397 | 0 | 1.451 | 0 | 0.806 | 0 | 0.862 |
| | 500 | 0 | 52.525 | 0 | 67.284 | 0 | 5.334 | 0 | 5.643 | 0 | 1.750 | 0 | 1.853 |
| 30 | 50 | 0 | 1.636 | 0 | 1.631 | 0 | 0.459 | 0 | 0.469 | 0 | 0.485 | 0 | 0.504 |
| | 100 | 0 | 2.842 | 0 | 2.898 | 0 | 0.643 | 0 | 0.674 | 0 | 0.561 | 0 | 0.607 |
| | 200 | 0 | 10.495 | 0 | 11.330 | 0 | 1.455 | 0 | 1.532 | 0 | 0.906 | 0 | 0.972 |
| | 500 | 0 | 59.247 | 0 | 66.154 | 0 | 5.550 | 0 | 5.940 | 0 | 1.978 | 0 | 2.324 |
| 40 | 50 | 0 | 1.684 | 0 | 1.636 | 0 | 0.497 | 0 | 0.522 | 0 | 0.518 | 0 | 0.590 |
| | 100 | 0 | 2.984 | 0 | 2.873 | 0 | 0.699 | 0 | 0.742 | 0 | 0.620 | 0 | 0.726 |
| | 200 | 0 | 10.625 | 0 | 10.531 | 0 | 1.568 | 0 | 1.667 | 0 | 1.022 | 0 | 1.164 |
| | 500 | 0 | 59.573 | 0 | 65.551 | 0 | 5.829 | 0 | 6.292 | 0 | 2.244 | 0 | 2.548 |
| 50 | 100 | 0 | 3.658 | 0 | 3.626 | 0 | 0.813 | 0 | 0.861 | 0 | 0.697 | 0 | 0.745 |
| | 200 | 0 | 10.702 | 0 | 10.556 | 0 | 1.680 | 0 | 1.763 | 0 | 1.140 | 0 | 1.247 |
| | 500 | 0 | 65.759 | 0 | 65.793 | 0 | 6.117 | 0 | 6.465 | 0 | 2.521 | 0 | 2.844 |
| **Total** | | **148** | | **117** | | **148** | | **94** | | **172** | | **126** | |
| **Average** | | | **8.922** | | **14.385** | | **1.305** | | **1.634** | | **0.598** | | **0.727** |

Table 4. Results for both experiments




| *m* | *n* | PSOspv min | PSOspv avg | PSOspv max | DPSO min | DPSO avg | DPSO max | SPPSO min | SPPSO avg | SPPSO max |
|---|---|---|---|---|---|---|---|---|---|---|
| 3 | 20 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| | 50 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| | 100 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| | 200 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| | 500 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| 4 | 20 | 1.000 | 1.001 | 1.001 | 1.000 | 1.000 | 1.001 | 1.000 | 1.000 | 1.001 |
| | 50 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| | 100 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| | 200 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| | 500 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| 5 | 20 | 1.001 | 1.002 | 1.003 | 1.001 | 1.002 | 1.003 | 1.001 | 1.001 | 1.002 |
| | 50 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| | 100 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| | 200 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| | 500 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| 10 | 20 | 1.046 | 1.071 | 1.128 | 1.040 | 1.068 | 1.128 | 1.040 | 1.068 | 1.128 |
| | 50 | 1.001 | 1.003 | 1.005 | 1.003 | 1.006 | 1.010 | 1.001 | 1.002 | 1.003 |
| | 100 | 1.000 | 1.001 | 1.001 | 1.003 | 1.004 | 1.004 | 1.001 | 1.001 | 1.001 |
| | 200 | 1.000 | 1.000 | 1.001 | 1.001 | 1.002 | 1.003 | 1.000 | 1.001 | 1.001 |
| | 500 | 1.000 | 1.000 | 1.000 | 1.000 | 1.001 | 1.002 | 1.000 | 1.000 | 1.001 |
| 20 | 50 | 1.022 | 1.067 | 1.113 | 1.026 | 1.037 | 1.054 | 1.011 | 1.019 | 1.025 |
| | 100 | 1.012 | 1.016 | 1.021 | 1.012 | 1.023 | 1.029 | 1.006 | 1.006 | 1.007 |
| | 200 | 1.002 | 1.005 | 1.010 | 1.011 | 1.014 | 1.017 | 1.003 | 1.003 | 1.004 |
| | 500 | 1.000 | 1.001 | 1.002 | 1.005 | 1.007 | 1.009 | 1.001 | 1.002 | 1.003 |
| 30 | 50 | 1.080 | 1.122 | 1.195 | 1.096 | 1.128 | 1.195 | 1.080 | 1.123 | 1.195 |
| | 100 | 1.016 | 1.029 | 1.043 | 1.038 | 1.055 | 1.065 | 1.012 | 1.015 | 1.016 |
| | 200 | 1.012 | 1.017 | 1.022 | 1.027 | 1.033 | 1.037 | 1.008 | 1.010 | 1.012 |
| | 500 | 1.005 | 1.006 | 1.007 | 1.012 | 1.015 | 1.017 | 1.005 | 1.007 | 1.008 |
| 40 | 50 | 1.268 | 1.378 | 1.534 | 1.268 | 1.378 | 1.534 | 1.268 | 1.378 | 1.534 |
| | 100 | 1.024 | 1.069 | 1.095 | 1.077 | 1.093 | 1.102 | 1.022 | 1.029 | 1.036 |
| | 200 | 1.016 | 1.022 | 1.028 | 1.046 | 1.057 | 1.066 | 1.015 | 1.019 | 1.021 |
| | 500 | 1.009 | 1.010 | 1.011 | 1.022 | 1.025 | 1.027 | 1.011 | 1.012 | 1.014 |
| 50 | 100 | 1.034 | 1.052 | 1.084 | 1.121 | 1.154 | 1.166 | 1.047 | 1.060 | 1.084 |
| | 200 | 1.007 | 1.011 | 1.022 | 1.076 | 1.086 | 1.099 | 1.026 | 1.032 | 1.035 |
| | 500 | 1.001 | 1.003 | 1.007 | 1.034 | 1.039 | 1.044 | 1.015 | 1.019 | 1.022 |
| **Average** | | **1.016** | **1.025** | **1.038** | **1.026** | **1.035** | **1.046** | **1.016** | **1.023** | **1.033** |

Table 3. Results for experiment E2: *p*~U(100,800)



**References**

[2] Van de Velde, S. L. (1993). "Duality-based algorithms for scheduling unrelated parallel machines". ORSA Journal on Computing, 5, 192-205.

[3] Dell'Amico, M., Martello, S. (1995). "Optimal scheduling of tasks on identical parallel processors". ORSA Journal on Computing, 7, 191-200.

[4] Mokotoff, E. (2004). "An exact algorithm for the identical parallel machine scheduling problem". European Journal of Operational Research, 152, 758-769.

[5] Graham, R. L. (1969). "Bounds on multiprocessor timing anomalies". SIAM Journal of Applied Mathematics, 17, 416-429.

[6] Blazewicz, J., Ecker, K., Pesch, E., Schmidt, G., Weglarz, J. (1996). Scheduling Computer and Manufacturing Systems. Berlin: Springer.

[7] Coffman, E. G., Garey, M. R., Johnson, D. S. (1978). "An application of bin-packing to multiprocessor scheduling". SIAM Journal on Computing, 7, 1-17.

[8] Yue, M. (1990). "On the exact upper bound for the MULTIFIT processor algorithm". Annals of Operations Research, 24, 233-259.

[9] Gupta, J. N. D., Ruiz-Torres, A. J. (2001). "A LISTFIT heuristic for minimizing makespan on identical parallel machines". Production Planning & Control, 12, 28-36.

[10] Min, L., Cheng, W. (1999). "A genetic algorithm for minimizing the makespan in the case of scheduling identical parallel machines". Artificial Intelligence in Engineering, 13, 399-403.

[11] Lee, W. C., Wu, C. C., Chen, P. (2006). "A simulated annealing approach to makespan minimization on identical parallel machines". International Journal of Advanced Manufacturing Technology, 31, 328-334.

[12] Tang, L., Luo, J. (2006). "A New ILS Algorithm for Parallel Machine Scheduling Problems". Journal of Intelligent Manufacturing, 17(5), 609-619.

[13] Eberhart, R. C., Kennedy, J. (1995). "A new optimizer using particle swarm theory". Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 39-43.

[14] Onwubolu, G. C., Clerc, M. (2004). "Optimal Operating Path for Automated Drilling Operations by a New Heuristic Approach Using Particle Swarm Optimisation". International Journal of Production Research, 42(3), 473-491.

[15] Akjiratikarl, C., Yenradee, P., Drake, P. R. (2007). "PSO-based algorithm for home care worker scheduling in the UK". Computers & Industrial Engineering, 53(4), 559-583.

[16] Van den Bergh, F., Engelbrecht, A. P. (2000). "Cooperative Learning in Neural Networks Using Particle Swarm Optimizers". South African Computer Journal, 26, 84-90.

[17] Tasgetiren, M. F., Liang, Y. C., Sevkli, M., Gencyilmaz, G. (2007). "Particle Swarm Optimization Algorithm for Makespan and Total Flowtime Minimization in Permutation Flowshop Sequencing Problem". European Journal of Operational Research, 177(3), 1930-1947.

[18] Sha, D. Y., Hsu, C.-Y. (2006). "A hybrid particle swarm optimization for job shop scheduling problem". Computers & Industrial Engineering, 51(4), 791-808.

[19] Salman, A., Ahmad, I., Al-Madani, S. (2003). "Particle Swarm Optimization for Task Assignment Problem". Microprocessors and Microsystems, 26, 363-371.

[20] Kennedy, J., Eberhart, R. C., Shi, Y. (2001). Swarm Intelligence. San Mateo, CA: Morgan Kaufmann.

The results for instances of different sizes are shown in Table 2 and Table 3, where the minimum, average and maximum of the C/LB ratio are presented. Each line summarizes the values for the 10 instances of each problem size, with 10 replications performed for each instance.

The results for experiment E1, in which processing times are generated using U(1,100), are summarized in Table 2. In this experiment, the minimum, average and maximum values of the ratios are quite similar for SPPSO and PSOspv. On the other hand, both SPPSO and PSOspv perform better than DPSO.

The results for experiment E2, in which processing times are generated using U(100,800), are summarized in Table 3. In this experiment there is again no significant difference between SPPSO and PSOspv, although in terms of the maximum ratio SPPSO performs slightly better than PSOspv. In addition, PSOspv and SPPSO are better than DPSO on all three ratios in this experiment.

Table 4 shows the number of times the optimum is reached within each group (nopt) for each algorithm, together with the average CPU times in seconds for each experiment. The total numbers of optimum solutions obtained by PSOspv, DPSO and SPPSO are (148, 148, 172) for experiment E1 and (117, 94, 126) for experiment E2. Here, the superiority of SPPSO over PSOspv and DPSO is most pronounced in the total number of optimum solutions obtained.

In terms of average CPU time, SPPSO outperforms both PSOspv and DPSO. SPPSO (0.598, 0.727) is roughly 15 to 20 times faster than PSOspv (8.922, 14.385) and about twice as fast as DPSO (1.305, 1.634) across the two experiments.
