**A Stochastically Perturbed Particle Swarm Optimization for Identical Parallel Machine Scheduling Problems**

Mehmet Sevkli1 and Aise Zulal Sevkli2

*1King Saud University, Faculty of Engineering, Department of Industrial Engineering, Riyadh 2King Saud University, College of Computer and Information Sciences, Department of Information Technology, Riyadh Kingdom of Saudi Arabia* 

#### **1. Introduction**


The identical parallel machine scheduling (PMS) problem with the objective of minimizing the makespan (*Cmax*) is one of the well-known NP-hard [1] combinatorial optimization problems, and it is unlikely that an optimal schedule can be obtained by polynomial time-bounded algorithms. Small instances of the PMS problem can be solved within reasonable computational time by exact methods such as branch-and-bound [2, 3] and the cutting plane algorithm [4]. However, as the problem size increases, the computation time of exact methods grows exponentially. Heuristic algorithms, on the other hand, generally have acceptable time and memory requirements but do not guarantee an optimal solution; they return a feasible solution that is likely to be optimal or near-optimal. The well-known longest processing time (LPT) rule of Graham [5] is a so-called list scheduling algorithm, and it is known to work very well when the makespan is taken as the single criterion [6]. Later, Coffman et al. [7] proposed the MULTIFIT algorithm, which exploits the relation between bin-packing and maximum completion time problems. Yue [8] showed that the MULTIFIT heuristic is not guaranteed to perform better than LPT on every problem. Gupta and Ruiz-Torres [9] developed the LISTFIT algorithm, which combines the bin-packing method of the MULTIFIT heuristic with multiple lists of jobs. Min and Cheng [10] introduced a genetic algorithm (GA) that outperformed a simulated annealing (SA) algorithm. Lee et al. [11] proposed an SA algorithm for the PMS problem and compared their results with the LISTFIT algorithm. Tang and Luo [12] developed a new iterated local search (ILS) algorithm based on a varying number of cyclic exchanges.

Particle swarm optimization (PSO) is based on the metaphor of social interaction and communication in nature, such as bird flocking and fish schooling. It differs from other evolutionary methods in that it does not use genetic operators (such as crossover and mutation), and the members of the entire population are maintained throughout the search procedure. Thus, information is socially shared among individuals to direct the search towards the best position in the search space.




In a PSO algorithm, each member is called a particle, and each particle moves around in the multidimensional search space with a velocity that is constantly updated by the particle's own experience, the experience of its neighbours, and the experience of the whole swarm. PSO was first introduced to optimize various continuous nonlinear functions by Eberhart and Kennedy [13]. It has since been successfully applied to a wide range of applications such as automated drilling [14], home care worker scheduling [15], neural network training [16], permutation flow shop sequencing problems [17], job shop scheduling problems [18], and task assignment [19]. More information about PSO can be found in Kennedy et al. [20].

The organization of this chapter is as follows: Section 2 introduces the PMS problem, its solution representation, its lower bound, and an overview of the classical PSO algorithm. Section 3 presents the proposed heuristic algorithm. The computational results are reported and discussed in Section 4, and Section 5 gives the concluding remarks.

#### **2. Background**

#### **2.1 Problem description**

The identical parallel machine scheduling problem consists of creating schedules for a set *J* = {*J1*, *J2*, *J3*,..., *Jn*} of *n* independent jobs to be processed on a set *M* = {*M1*, *M2*, *M3*,..., *Mm*} of *m* identical machines. Each job must be carried out on one of the machines, and the time required for processing job *i* on any machine is denoted by *pi*. The subset of jobs assigned to machine *Mi* in a schedule is denoted by *SMi*. Once a job begins processing, it must be completed without interruption. Furthermore, each machine can process only one job at a time, and there is no precedence relation between the jobs. The aim is to find an assignment of the *n* jobs to the machines of set *M* that minimizes the maximum completion time, in other words the makespan. The problem is denoted *P||Cmax*, where *P* represents identical parallel machines, the jobs are unconstrained, and the objective is to obtain the minimum-length schedule. An integer programming formulation of the problem that minimizes the makespan is as follows [5]:

$$\min y$$

subject to:

$$\sum\_{j=1}^{m} x\_{ij} = 1, \quad 1 \le i \le n \tag{1}$$

$$y - \sum\_{i=1}^{n} p\_i x\_{ij} \ge 0 \quad 1 \le j \le m \tag{2}$$

where the optimal value of y is *Cmax* and *xij*=1 when job *i* is assigned to machine *j*, otherwise *xij*=0.
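For concreteness, the formulation (1)–(2) can be stated with an off-the-shelf MILP modelling library. The use of PuLP and the small 9-job, 4-machine instance below are illustrative assumptions and are not part of the chapter.

```python
# Minimal sketch of the P||Cmax integer program (1)-(2); PuLP is assumed here.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

p = [7, 7, 6, 6, 5, 5, 4, 4, 4]        # processing times of a small instance
n, m = len(p), 4                        # 9 jobs, 4 identical machines

prob = LpProblem("P_Cmax", LpMinimize)
x = LpVariable.dicts("x", [(i, j) for i in range(n) for j in range(m)], cat=LpBinary)
y = LpVariable("y", lowBound=0)         # the makespan variable

prob += y                                                        # objective: min y
for i in range(n):                                               # constraint (1)
    prob += lpSum(x[(i, j)] for j in range(m)) == 1
for j in range(m):                                               # constraint (2)
    prob += y - lpSum(p[i] * x[(i, j)] for i in range(n)) >= 0

prob.solve()
print("Cmax =", y.value())
```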

#### **2.2 Solution representation and lower bound**


The solution for the PMS problem is represented as a permutation of integers Π*= {1,..., n}* where Π defines the processing order of the jobs. As mentioned in the text above, three versions of the PSO algorithm are compared in terms of solution quality and CPU time.

In the continuous PSO of Tasgetiren et al. [17], denoted PSOspv, the particles themselves do not represent permutations; instead, the SPV rule is used to derive a permutation from the position values of the particle. In the discrete PSO of Pan et al. [21] and in the proposed algorithm (SPPSO), on the other hand, the particles represent permutations themselves.


| Jobs | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|------|---|---|---|---|---|---|---|---|---|
| *pi* | 7 | 7 | 6 | 6 | 5 | 5 | 4 | 4 | 4 |

Table 1. An example of 9-job × 4-machine PMS problem

For all three algorithms, the process of finding the makespan value of a particle can be illustrated by an example. Assume a permutation vector Π = {1 8 3 4 5 6 7 2 9}. Considering 4 parallel machines and 9 jobs, whose processing times are given in Table 1, the makespan of this vector is depicted in Figure 1.

Fig. 1. Schedule generated from a random sequence

According to this schedule, each value of the vector is iteratively assigned to the earliest available machine. The first four elements of the permutation vector (1, 8, 3, 4) are assigned to the four machines, respectively. The remaining jobs are assigned one by one to the first machine that becomes available. For instance, job 5 goes to the second machine (M2), since it is the first machine released. If more than one machine is available at that time, ties are broken arbitrarily. The makespan value of the given sequence is *Cmax*(Π) = 14, as can easily be seen in Figure 1.
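The assignment rule just described can be sketched as a small helper; the function name `makespan` and the use of a heap to pick the earliest released machine are illustrative assumptions.

```python
# Sketch: evaluate the makespan of a permutation by assigning each job,
# in order, to the machine that is released first (ties broken arbitrarily).
import heapq

def makespan(perm, p, m):
    """perm: jobs in processing order, p: dict of processing times, m: number of machines."""
    loads = [0] * m                      # completion time of each machine (min-heap)
    heapq.heapify(loads)
    for job in perm:
        earliest = heapq.heappop(loads)  # first machine to become available
        heapq.heappush(loads, earliest + p[job])
    return max(loads)

p = {1: 7, 2: 7, 3: 6, 4: 6, 5: 5, 6: 5, 7: 4, 8: 4, 9: 4}    # Table 1
print(makespan([1, 8, 3, 4, 5, 6, 7, 2, 9], p, 4))             # -> 14, as in Figure 1
```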

The lower bound for *P||Cmax* is calculated as follows [22]:

$$LB(C\_{\max}) = \max\left\{ \left\lceil \frac{1}{m} \sum\_{i=1}^{n} p\_i \right\rceil \; ; \; \max\_i \{ p\_i \} \right\} \tag{3}$$

It is obtained by assuming that preemption is not allowed. If *Cmax*(Π) = *LB*(*Cmax*), the current solution Π is optimal, so the lower bound is used as one of the termination criteria throughout this chapter.


The lower bound of the example presented in Table 1 can be calculated as:

$$LB(C\_{\max}) = \max\left\{ \left\lceil \frac{1}{4} \sum\_{i=1}^{9} p\_i \right\rceil \; ; \; \max\_i \{ p\_i \} \right\} = \max(12;7) = 12$$
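As a quick numerical check of Eq. (3), a small helper can be used (illustrative only; the function name is an assumption):

```python
def lower_bound(p, m):
    """Lower bound of Eq. (3): max(ceil(sum(p)/m), max(p))."""
    return max((sum(p) + m - 1) // m, max(p))   # integer ceiling of sum(p)/m

print(lower_bound([7, 7, 6, 6, 5, 5, 4, 4, 4], 4))   # -> 12, matching the value above
```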

#### **2.3 Classic Particle Swarm Optimization**

In PSO, each single solution, called a particle, is considered as an individual; the group becomes a swarm (population), and the search space is the area to explore. Each particle has a fitness value calculated by a fitness function and a velocity with which it flies towards the optimum. All particles fly across the problem space following the particle that is nearest to the optimum. PSO starts with an initial population of solutions, which is updated iteration by iteration. The principles that govern the PSO algorithm can be stated as follows:

- Each *i*th particle starts with a random *n*-dimensional position vector *Xi* = (*xi1*, *xi2*,..., *xin*) and velocity vector *Vi* = (*vi1*, *vi2*,..., *vin*).
- Each particle knows its position and the value of the objective function for that position. The best position visited by the *i*th particle is denoted *Pi* = (*pi1*, *pi2*,..., *pin*), and the best position of the whole swarm is denoted *G* = (*g1*, *g2*,..., *gn*). The PSO algorithm is governed by the following main equations:


$$\begin{aligned} v\_{in}^{t+1} &= w v\_{in}^{t} + c\_{1} r\_{1} (p\_{in}^{t} - x\_{in}^{t}) + c\_{2} r\_{2} (g\_{n}^{t} - x\_{in}^{t}) \\ x\_{in}^{t+1} &= x\_{in}^{t} + v\_{in}^{t+1} \end{aligned} \tag{4}$$

where *t* represents the iteration number and *w* is the inertia weight, a coefficient that controls the impact of the previous velocity on the current one. *c1* and *c2* are called learning factors, and *r1* and *r2* are uniformly distributed random variables in [0,1].
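As an illustration of Eq. (4), the following sketch performs one velocity and position update for a single particle. NumPy, the parameter values and the random initialization are assumptions chosen for the example, not values taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 9                                   # dimension of the search space
w, c1, c2 = 0.9, 2.0, 2.0               # inertia weight and learning factors (typical values)

x = rng.uniform(0.0, 1.0, n)            # current position of particle i
v = np.zeros(n)                         # current velocity
p_best = x.copy()                       # best position found by this particle
g_best = rng.uniform(0.0, 1.0, n)       # best position found by the swarm

r1, r2 = rng.uniform(size=n), rng.uniform(size=n)
v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)    # velocity update, Eq. (4)
x = x + v                                                       # position update, Eq. (4)
```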

The original PSO algorithm optimizes problems in which the elements of the solution space are continuous real numbers. The major obstacle to applying PSO successfully to combinatorial problems is this continuous nature. To remedy this drawback, Tasgetiren et al. [17] presented the smallest position value (SPV) rule, which derives a job permutation from the continuous position values, as illustrated below. Another approach to tackling combinatorial problems with PSO is that of Pan et al. [21], who define a similar PSO equation to update the particle's velocity and position vectors using one-cut and two-cut genetic crossover operators.
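A minimal illustration of the SPV rule (the position values below are made up): the job with the smallest position value is placed first in the permutation.

```python
# SPV rule: sort job indices by increasing position value of the particle.
positions = [0.62, -1.30, 0.05, 2.10, -0.45]     # continuous position vector of a particle
perm = sorted(range(len(positions)), key=lambda j: positions[j])
print(perm)   # [1, 4, 2, 0, 3]: job 1 has the smallest value, so it is scheduled first
```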

#### **3. The proposed Stochastically Perturbed Particle Swarm Optimization algorithm**

In this chapter, a stochastically perturbed particle swarm optimization algorithm (SPPSO) is proposed for the PMS problem. The initial population is generated randomly. Initially, each individual, with its position and fitness value, is assigned as its personal best (i.e., the best value that the individual has found so far). The best individual in the whole swarm, with its position and fitness value, is assigned as the global best (i.e., the best particle in the whole swarm). Then, the position of each particle is updated based on the personal best and the global best. These operations in SPPSO are similar to those of the classical PSO algorithm.




However, the search strategy of SPPSO is different: each particle in the swarm moves based on the following equations.

$$\begin{aligned} s\_1 &= w^t \oplus \eta(X\_i^t) \\\\ w^{t+1} &= w^t \cdot \beta \\\\ s\_2 &= c\_1 \oplus \eta(P\_i^t) \\\\ s\_3 &= c\_2 \oplus \eta(G^t) \\\\ X\_i^{t+1} &= best(s\_1; s\_2; s\_3) \end{aligned} \tag{5}$$

At each iteration, the position vector of each particle, its personal best, and the global best are considered. First, a random number drawn from U(0,1) is generated and compared with the inertia weight to decide whether or not to apply the *Insert* function (η) to the particle.

The *Insert* function (η) moves a randomly chosen job in front of (or, sometimes, behind) another randomly chosen job. For instance, for the PMS problem, suppose a sequence {3, 5, 6, 7, 8, 9, 1, 2, 4}. To apply the *Insert* function, two random numbers are needed: one determines the job that changes place, and the other the position at which it is re-inserted. Let's say those numbers are 3 and 5; that is, the third job (job no. 6) is removed and re-inserted at the fifth position, so that it now follows job no. 8. The new sequence will be {3, 5, 7, 8, 6, 9, 1, 2, 4}.
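A minimal sketch of such an insert move; the function name and the convention that the insertion position is counted after the removal are assumptions made for illustration.

```python
import random

def insert_move(seq):
    """Remove a randomly chosen job and re-insert it at a random position."""
    s = list(seq)
    a = random.randrange(len(s))         # index of the job to relocate
    job = s.pop(a)
    b = random.randrange(len(s) + 1)     # position at which it is re-inserted
    s.insert(b, job)
    return s

# Reproducing the worked example: remove the 3rd element (job 6) and
# re-insert it at the 5th position of the reduced sequence.
s = [3, 5, 6, 7, 8, 9, 1, 2, 4]
s.insert(4, s.pop(2))
print(s)   # [3, 5, 7, 8, 6, 9, 1, 2, 4]
```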

If the random number is less than the inertia weight, the particle is manipulated with this *Insert* function and the resulting solution, say *s1*, is obtained. Meanwhile, the inertia weight is discounted by a constant factor β at each iteration in order to gradually reduce the probability of perturbing the particle itself, that is, to diminish the impact of the randomly perturbed solutions on the swarm evolution.

The next step is to generate another U(0,1) random number to be compared with the cognitive parameter *c1*, to decide whether to apply the *Insert* function to the personal best of the particle under consideration. If the random number is less than *c1*, the personal best is manipulated and the resulting solution is stored as *s2*. Likewise, a third U(0,1) random number is generated to decide whether to manipulate the global best with the *Insert* function: if it is less than the social parameter *c2*, *Insert* is applied to the global best to obtain a new solution *s3*. Unlike the inertia weight, the values of *c1* and *c2* are not increased or decreased iteratively but are fixed at 0.5, so the probability of applying the *Insert* function to the personal and global bests remains the same. The new replacement solution is selected among *s1*, *s2* and *s3* based on their fitness values. This solution may not always be better than the current solution; this is to keep the swarm diverse. Convergence is traced by checking the personal best of each new particle and the global best. As can be seen, the proposed equations have all the major characteristics of the classical PSO equations. The following pseudo-code describes the steps of the SPPSO algorithm in detail.

Fig. 2. Pseudo-code of the proposed SPPSO algorithm for the PMS problem

It can be seen from the pseudo-code that the algorithm has all the major characteristics of the classical PSO; its search strategy differs in that the new solution is selected among *s1*, *s2* and *s3* based on their fitness values. The selected particle may be worse than the current solution, which keeps the swarm diverse. Convergence is obtained by updating the personal best of each new particle and the global best.
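As a complement to the pseudo-code of Fig. 2, the following is a minimal sketch of one SPPSO iteration as described above. It reuses the illustrative `insert_move` and `makespan` helpers sketched earlier; the handling of the case where no perturbation is applied and the way the bests are updated are assumptions, not the authors' exact pseudo-code.

```python
import random

def sppso_step(x, p_best, g_best, w, beta, c1, c2, p, m):
    """One SPPSO move for a single particle, following Eq. (5)."""
    fitness = lambda s: makespan(s, p, m)
    candidates = []
    if random.random() < w:              # s1: perturb the particle itself
        candidates.append(insert_move(x))
    if random.random() < c1:             # s2: perturb the personal best
        candidates.append(insert_move(p_best))
    if random.random() < c2:             # s3: perturb the global best
        candidates.append(insert_move(g_best))
    new_x = min(candidates, key=fitness) if candidates else x   # best of s1, s2, s3
    w = w * beta                         # discount the inertia weight
    if fitness(new_x) < fitness(p_best): # trace convergence through the bests
        p_best = new_x
    if fitness(new_x) < fitness(g_best):
        g_best = new_x
    return new_x, p_best, g_best, w
```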


#### **4. Computational results**

The reported performance index is a relative quality measure, C/LB, where C is the makespan achieved by the algorithm and LB is the lower bound of the instance calculated in Eq. (3). Once C reaches LB, the index equals 1.0; otherwise it remains larger.

| *m* | *n* | PSOspv min | PSOspv avg | PSOspv max | DPSO min | DPSO avg | DPSO max | SPPSO min | SPPSO avg | SPPSO max |
|---|---|---|---|---|---|---|---|---|---|---|
| 3 | 20 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| 3 | 50 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| 3 | 100 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| 3 | 200 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| 3 | 500 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| 4 | 20 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| 4 | 50 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| 4 | 100 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| 4 | 200 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| 4 | 500 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| 5 | 20 | 1.000 | 1.001 | 1.005 | 1.000 | 1.001 | 1.005 | 1.000 | 1.001 | 1.005 |
| 5 | 50 | 1.000 | 1.000 | 1.002 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| 5 | 100 | 1.000 | 1.000 | 1.001 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| 5 | 200 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| 5 | 500 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| 10 | 20 | 1.050 | 1.091 | 1.168 | 1.050 | 1.091 | 1.168 | 1.050 | 1.091 | 1.168 |
| 10 | 50 | 1.000 | 1.002 | 1.004 | 1.004 | 1.005 | 1.008 | 1.000 | 1.001 | 1.004 |
| 10 | 100 | 1.000 | 1.001 | 1.002 | 1.002 | 1.003 | 1.005 | 1.000 | 1.000 | 1.002 |
| 10 | 200 | 1.001 | 1.002 | 1.002 | 1.001 | 1.002 | 1.002 | 1.000 | 1.001 | 1.001 |
| 10 | 500 | 1.001 | 1.001 | 1.002 | 1.001 | 1.001 | 1.001 | 1.000 | 1.000 | 1.001 |
| 20 | 50 | 1.015 | 1.026 | 1.050 | 1.033 | 1.043 | 1.053 | 1.009 | 1.024 | 1.050 |
| 20 | 100 | 1.007 | 1.009 | 1.013 | 1.025 | 1.029 | 1.037 | 1.004 | 1.009 | 1.013 |
| 20 | 200 | 1.006 | 1.007 | 1.010 | 1.013 | 1.015 | 1.018 | 1.004 | 1.006 | 1.008 |
| 20 | 500 | 1.004 | 1.006 | 1.007 | 1.006 | 1.007 | 1.009 | 1.002 | 1.003 | 1.005 |
| 30 | 50 | 1.066 | 1.154 | 1.266 | 1.076 | 1.161 | 1.266 | 1.066 | 1.154 | 1.266 |
| 30 | 100 | 1.013 | 1.022 | 1.028 | 1.043 | 1.061 | 1.072 | 1.019 | 1.029 | 1.039 |
| 30 | 200 | 1.009 | 1.017 | 1.021 | 1.032 | 1.037 | 1.043 | 1.014 | 1.017 | 1.020 |
| 30 | 500 | 1.009 | 1.011 | 1.015 | 1.011 | 1.016 | 1.021 | 1.008 | 1.009 | 1.011 |
| 40 | 50 | 1.282 | 1.538 | 1.707 | 1.282 | 1.538 | 1.707 | 1.282 | 1.538 | 1.707 |
| 40 | 100 | 1.033 | 1.047 | 1.067 | 1.084 | 1.115 | 1.142 | 1.042 | 1.055 | 1.061 |
| 40 | 200 | 1.021 | 1.028 | 1.034 | 1.054 | 1.067 | 1.075 | 1.028 | 1.035 | 1.042 |
| 40 | 500 | 1.016 | 1.019 | 1.022 | 1.025 | 1.030 | 1.031 | 1.016 | 1.020 | 1.026 |
| 50 | 100 | 1.070 | 1.088 | 1.114 | 1.156 | 1.184 | 1.220 | 1.070 | 1.097 | 1.140 |
| 50 | 200 | 1.036 | 1.044 | 1.053 | 1.081 | 1.096 | 1.106 | 1.049 | 1.057 | 1.065 |
| 50 | 500 | 1.023 | 1.027 | 1.030 | 1.034 | 1.043 | 1.046 | 1.028 | 1.032 | 1.035 |
| **Average** | | **1.019** | **1.033** | **1.046** | **1.029** | **1.044** | **1.058** | **1.020** | **1.034** | **1.048** |

Table 2. Results for experiment E1: p ~ U(1,100)

