Clerc [8] suggested the use of a different velocity update rule, which introduces a parameter *K* called the constriction factor. The role of the constriction factor is to ensure convergence of the swarm, at which point all the particles have stopped their movement. The velocity update rule is then given by:

$$u_{G+1,ni} = K\left[u_{G,ni} + c_1 \, rand_1(0,1)\left(pbest_{G,ni} - x_{G,ni}\right) + c_2 \, rand_2(0,1)\left(gbest_{G,ni} - x_{G,ni}\right)\right] \tag{4}$$

The constriction factor *K* is computed as

$$K = \frac{2}{\left|2 - \varphi - \sqrt{\varphi^{2} - 4\varphi}\right|} \tag{5}$$

where *φ* = *c*<sub>1</sub> + *c*<sub>2</sub> and *φ* > 4. This PSO algorithm variant is known as Constriction Factor PSO (CFPSO).

#### **2.2. Barebones PSO**

Kennedy [20] proposed a new PSO approach, the BB PSO, where the standard PSO velocity equation is replaced with samples from a normal distribution. In this method, the position update rule for the *i*th particle in the *n*th dimension becomes

$$x_{G+1,ni} = N\left(\frac{pbest_{G,ni} + gbest_{G,ni}}{2}, \left|pbest_{G,ni} - gbest_{G,ni}\right|\right) \tag{6}$$

where *N*(·, ·) denotes the normal distribution with the given mean and standard deviation. The method allows particles whose *pbest* is significantly different from *gbest* to make large steps towards it. When *pbest* is close to *gbest*, the step size decreases, which limits exploration in favor of exploitation.

In [20], a variation of BB PSO, the BBExp PSO, was also proposed. In this method, approximately half of the time the velocity is based on samples from a normal distribution; for the rest of the time, the velocity is derived from the particle's personal best position. The position update rule, (6), is modified into

$$x_{G+1,ni} = \begin{cases} N\left(\dfrac{pbest_{G,ni} + gbest_{G,ni}}{2}, \left|pbest_{G,ni} - gbest_{G,ni}\right|\right), & \text{if } U_{ni}(0,1) > 0.5 \\ pbest_{G,ni}, & \text{otherwise} \end{cases} \tag{7}$$

where *U*(·, ·) denotes the uniform distribution. In BBExp PSO, position updates equal *pbest<sub>n</sub>* for half of the time, resulting in improved exploitation of *pbest<sub>n</sub>* compared to the BB PSO. One may notice that the barebones PSO algorithms do not require parameter tuning. More details can be found in [20, 21].
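As a concrete illustration, the constriction-factor velocity update and the barebones position updates can be sketched for a single dimension. This is a minimal sketch, not the chapter's reference implementation; the defaults *c*<sub>1</sub> = *c*<sub>2</sub> = 2.05 are a commonly used choice assumed here for the example, not values prescribed by the text.

```python
import math
import random

def constriction_factor(c1, c2):
    # Clerc's constriction factor K; requires phi = c1 + c2 > 4
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi ** 2 - 4.0 * phi))

def cfpso_velocity(u, x, pbest, gbest, c1=2.05, c2=2.05):
    # CFPSO velocity update (4) for one particle in one dimension
    K = constriction_factor(c1, c2)
    return K * (u
                + c1 * random.random() * (pbest - x)
                + c2 * random.random() * (gbest - x))

def bb_position(pbest, gbest):
    # BB PSO (6): sample the new coordinate from a normal distribution
    # centred between pbest and gbest, with spread equal to their distance
    return random.gauss((pbest + gbest) / 2.0, abs(pbest - gbest))

def bbexp_position(pbest, gbest):
    # BBExp PSO (7): roughly half the time keep pbest unchanged,
    # otherwise draw a barebones sample
    return bb_position(pbest, gbest) if random.random() > 0.5 else pbest
```

With *c*<sub>1</sub> = *c*<sub>2</sub> = 2.05 (so *φ* = 4.1), the formula yields *K* ≈ 0.7298, the value most often quoted for CFPSO.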

#### **2.3. Artificial bee colony optimization**
The ABC algorithm models and simulates the honey bee behavior in food foraging. In the ABC algorithm, a potential solution to the optimization problem is represented by the position of a food source, while the nectar amount of a food source corresponds to the quality (objective function fitness) of the associated solution. In order to find the best solution, the algorithm defines three classes of bees: employed bees, onlooker bees and scout bees. The employed bees search for food sources, the onlooker bees decide which food sources to choose based on the information shared by the employed bees, and the scout bees determine a new food source whenever one is abandoned by the employed and onlooker bees. Only one employed bee exists for each food source (i.e., the number of employed bees is equal to the number of solutions). Each employed bee searches for a new food source in the neighborhood of its current one. A new candidate position for the solution *x̄<sub>i</sub>* = (*x*<sub>*i*,1</sub>, .., *x*<sub>*i*,*j*</sub>, .., *x*<sub>*i*,*D*</sub>), where *D* is the problem dimension, is generated using

$$u_{i,j} = x_{i,j} + \varphi_{i,j}\left(x_{i,j} - x_{k,j}\right) \tag{8}$$

where *k* ∈ {1, 2, .., *SN*}, *k* ≠ *i*, and *j* ∈ {1, 2, .., *D*} are randomly chosen indices, *SN* is the number of food sources, and *φ*<sub>*i*,*j*</sub> is a uniformly distributed random number within [-1, 1]. ABC uses a greedy selection operator, which for minimization problems is defined by

$$\bar{x}_i' = \begin{cases} \bar{u}_i, & \text{if } f(\bar{u}_i) < f(\bar{x}_i) \\ \bar{x}_i, & \text{otherwise} \end{cases} \tag{9}$$

where *x̄<sub>i</sub>*′ is the new position of the food source.
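The employed bee's neighbor search and the greedy selection step can be sketched as follows; a minimal Python illustration in which `foods`, `lower`, `upper` and the objective `f` are hypothetical names introduced for the example, not taken from the text.

```python
import random

def abc_neighbor(foods, i, lower, upper):
    """Candidate position u_i near food source i, as in eq. (8)."""
    D = len(foods[i])
    j = random.randrange(D)                                      # random dimension j
    k = random.choice([s for s in range(len(foods)) if s != i])  # partner k != i
    phi = random.uniform(-1.0, 1.0)                              # phi_{i,j} in [-1, 1]
    u = list(foods[i])
    u[j] = foods[i][j] + phi * (foods[i][j] - foods[k][j])
    u[j] = min(max(u[j], lower[j]), upper[j])                    # clip to the bounds
    return u

def greedy_select(x, u, f):
    """Greedy selection, eq. (9): keep the position with the lower f value."""
    return u if f(u) < f(x) else x
```

Clipping the perturbed coordinate to the search bounds is an implementation detail assumed here; the equations themselves do not specify how out-of-bounds candidates are handled.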

An onlooker bee chooses a food source depending on the probability value associated with that food source, *p<sub>i</sub>*, given by

$$p_i = \frac{fit_i}{\sum_{m=1}^{SN} fit_m} \tag{10}$$

where *fit<sub>i</sub>* is the fitness value of the *i*th solution, which is proportional to the nectar amount of the food source in the *i*th position. When a food source (solution) cannot be improved any further, the scout bee helps the colony by randomly generating a new solution:

$$x_{i,j} = rand_j(0,1)\left(x_{j,U} - x_{j,L}\right) + x_{j,L}, \quad j = 1, 2, \ldots, D \tag{11}$$

where *x*<sub>*j*,*L*</sub> and *x*<sub>*j*,*U*</sub> are the lower and upper bounds of the *j*th dimension, respectively, and *rand<sub>j</sub>*(0,1) is a uniformly distributed random number within (0,1).
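The onlooker's probability assignment (10) and the scout's re-initialization (11) can be sketched as below; a minimal Python illustration, with `fitness`, `lower` and `upper` as hypothetical names for the example.

```python
import random

def selection_probabilities(fitness):
    """Eq. (10): probability p_i of each food source, proportional to fit_i."""
    total = sum(fitness)
    return [fit / total for fit in fitness]

def scout_solution(lower, upper):
    """Eq. (11): uniform re-initialization of an abandoned food source."""
    return [random.random() * (up - lo) + lo for lo, up in zip(lower, upper)]
```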

#### **2.4. Ant colony optimization**

Ant colony optimization (ACO) [6, 12, 22] is a meta-heuristic inspired by the ants' foraging behavior. At the core of this behavior is the indirect communication between the ants by means of chemical pheromone trails, which enables them to find short paths between their nest and food sources. Ants can sense pheromone. When deciding which path to follow, they tend to choose the ones with stronger pheromone intensities on the way back to the nest or to the food source. Therefore, shorter paths accumulate more pheromone than longer ones. This feature of real ant colonies is exploited in ACO algorithms in order to solve combinatorial optimization problems considered to be NP-hard.
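The pheromone-following behavior described above can be illustrated with a roulette-wheel path choice; a toy sketch only, not a full ACO algorithm, with the path intensities invented for the example.

```python
import random

def choose_path(pheromone):
    """Pick a path index with probability proportional to its pheromone level."""
    total = sum(pheromone)
    r = random.random() * total        # spin the roulette wheel
    acc = 0.0
    for i, tau in enumerate(pheromone):
        acc += tau
        if r < acc:
            return i
    return len(pheromone) - 1          # guard against floating-point round-off

# Paths with more pheromone are chosen more often, so shorter paths that
# accumulate pheromone faster come to dominate the colony's choices.
counts = [0, 0, 0]
random.seed(42)
for _ in range(10000):
    counts[choose_path([1.0, 2.0, 7.0])] += 1
```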
