Evolutionary Algorithms for Wireless Communications — A Review of the State-of-the-art. http://dx.doi.org/10.5772/59147. In: Contemporary Issues in Wireless Communications.

successfully applied to several problems in wireless communications [10]. ABC variants that improve the original algorithm have also been proposed [11].

Ant Colony Optimization (ACO) is a population-based metaheuristic introduced by Marco Dorigo [12]. The algorithm was inspired by the behaviour of real ants: ant colonies can find the shortest path between their nest and a food source simply by depositing and reacting to pheromones while they explore their environment. ACO is suitable for solving combinatorial optimization problems, which are common in wireless communications.

Differential evolution (DE) [13, 14] is a population-based stochastic global optimization algorithm that has been used in several real-world engineering problems. Several DE variants or strategies exist. One of DE's advantages is that very few control parameters have to be adjusted in each algorithm run. However, the control parameters involved in DE are highly dependent on the optimization problem, and selecting the appropriate strategy for trial vector generation requires additional computational time in a trial-and-error search procedure. Therefore, it is not always an easy task to fine-tune the control parameters and strategy. Since finding suitable control parameter values and a strategy in such a way is often very time-consuming, there has been increasing interest among researchers in designing new adaptive and self-adaptive DE variants. Self-adaptive DE (SaDE), a DE algorithm that self-adapts both control parameters and strategy based on learning experience from previous generations, is presented in [15-17]. SaDE has been applied to microwave filter design [18] and to antenna array synthesis [19].

The purpose of this chapter is to briefly describe the above algorithms and present their application to wireless communications optimization problems found in the literature. This chapter also presents results from different cases using PSO, ABC, ACO and DE. These include the cell-to-switch assignment problem in cellular networks using PSO algorithms, peak-to-average power ratio (PAPR) reduction of OFDM signals with the partial transmit sequences (PTS) approach using ABC and ACO algorithms [7, 11], and dual-band microwave filter design for wireless communications using SaDE.

This chapter is subdivided into four sections. Section 2 presents the different evolutionary algorithms. Section 3 reviews the related work in wireless communications problems from the literature. Section 4 describes the design cases and presents the numerical results. Finally, section 5 contains the discussion about the advantages of using an EA-based approach and the conclusions.

**2. Methods**

A population (or swarm) in PSO, ABC, ACO and DE consists of *NP* vectors (or particles) $\bar{x}_{G,i}$, $i = 1, 2, \dots, NP$, where *G* is the generation number. The population is initialized randomly from a uniform distribution. Each *D*-dimensional vector represents a possible solution, which is expressed as:

$$\bar{x}_{G,i} = \left( x_{G,1i}, x_{G,2i}, \dots, x_{G,ji}, \dots, x_{G,Di} \right) \tag{1}$$

The population is initialized as follows:

$$x_{0,ji} = rand_j[0,1) \left( x_{j,U} - x_{j,L} \right) + x_{j,L}, \quad j = 1, 2, \dots, D \tag{2}$$

where $x_{j,L}$ and $x_{j,U}$ are the *j*th components of the *D*-dimensional lower and upper bound vectors respectively and $rand_j[0,1)$ is a uniformly distributed random number within [0,1). The stopping criterion for PSO, ABC and DE is usually the generation number or the number of objective-function evaluations.
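As a minimal sketch of the initialization in (2), assuming NumPy (the population size, dimension, and bound vectors below are illustrative, not from the chapter):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

NP, D = 20, 4                 # illustrative population size and dimension
x_L = np.zeros(D)             # hypothetical lower-bound vector
x_U = np.full(D, 10.0)        # hypothetical upper-bound vector

# Eq. (2): each component is drawn uniformly within [x_L, x_U)
population = rng.random((NP, D)) * (x_U - x_L) + x_L
```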

#### **2.1. Particle Swarm Optimization (PSO)**

In PSO, the particles move in the search space, where each particle's position is updated using two optimum values. The first is the best solution (fitness) the particle has achieved so far, called *pbest*. The other is the global best value obtained so far by any particle in the swarm, called *gbest*. After finding *pbest* and *gbest*, each particle updates its velocity; the velocity update rule is a key component of a PSO algorithm. In the most commonly used variant, the velocity of each particle in every problem dimension is updated with the following equation:

$$u_{G+1,ni} = w u_{G,ni} + c_1 rand_1(0,1) \left( pbest_{G+1,ni} - x_{G,ni} \right) + c_2 rand_2(0,1) \left( gbest_{G+1,ni} - x_{G,ni} \right) \tag{3}$$

where $u_{G+1,ni}$ is the *i*th particle velocity in the *n*th dimension, *G*+1 denotes the current iteration and *G* the previous one, $x_{G,ni}$ is the particle position in the *n*th dimension, $rand_1(0,1)$ and $rand_2(0,1)$ are uniformly distributed random numbers in (0,1), *w* is a parameter known as the inertia weight, and $c_1$ and $c_2$ are the learning factors.

The parameter *w* (inertia weight) is a constant between 0 and 1. This parameter represents the particle's tendency to keep flying in its current direction without any external influence: the higher the value of *w*, or the closer it is to one, the less the particle is affected by *pbest* and *gbest*. The inertia weight thus controls the impact of the previous velocity: a large inertia weight favors exploration, while a small inertia weight favors exploitation. The parameter $c_1$ represents the influence of the particle's memory of its best position, while $c_2$ represents the influence of the swarm's best position. Therefore, in the Inertia Weight PSO (IWPSO) algorithm the parameters to be determined are: the swarm size (or population size), usually 100 or less; the cognitive learning factor $c_1$ and the social learning factor $c_2$ (usually both are set equal to 2.0); the inertia weight *w*; and the maximum number of iterations. It is common practice to linearly decrease the inertia weight from 0.9 or 0.95 down to 0.4.
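The IWPSO loop described above can be sketched as follows. This is a minimal NumPy implementation; the swarm size, iteration budget, and sphere objective are illustrative choices, not from the chapter:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

NP, D, iters = 10, 2, 50               # illustrative swarm size and budget
c1, c2 = 2.0, 2.0                      # learning factors
f = lambda p: np.sum(p ** 2, axis=1)   # sphere objective (illustrative)

x = rng.uniform(-5.0, 5.0, (NP, D))    # positions
u = np.zeros((NP, D))                  # velocities
pbest, pbest_f = x.copy(), f(x)
gbest = pbest[np.argmin(pbest_f)].copy()

for G in range(iters):
    w = 0.9 - (0.9 - 0.4) * G / iters  # linearly decreasing inertia weight
    r1, r2 = rng.random((NP, D)), rng.random((NP, D))
    # Eq. (3): inertia-weight velocity update, then position update
    u = w * u + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + u
    fx = f(x)
    better = fx < pbest_f              # keep personal and global bests
    pbest[better], pbest_f[better] = x[better], fx[better]
    gbest = pbest[np.argmin(pbest_f)].copy()
```

Here the stopping criterion is the iteration count, matching the generation-number criterion mentioned earlier.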

Clerc [8] suggested the use of a different velocity update rule, which introduced a parameter *K* called constriction factor. The role of the constriction factor is to ensure convergence when all the particles have stopped their movement. The velocity update rule is then given by:

$$u_{G+1,ni} = K \left[ u_{G,ni} + c_1 rand_1(0,1) \left( pbest_{G+1,ni} - x_{G,ni} \right) + c_2 rand_2(0,1) \left( gbest_{G+1,ni} - x_{G,ni} \right) \right] \tag{4}$$

$$K = \frac{2}{\left| 2 - \varphi - \sqrt{\varphi^2 - 4\varphi} \right|} \tag{5}$$

where $\varphi = c_1 + c_2$ and $\varphi > 4$. This PSO algorithm variant is known as Constriction Factor PSO (CFPSO).
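As a quick numerical check of (5): for the common choice $c_1 = c_2 = 2.05$, so that $\varphi = 4.1$, the formula yields the widely used constriction factor $K \approx 0.7298$:

```python
import math

c1, c2 = 2.05, 2.05               # illustrative learning factors
phi = c1 + c2                     # phi = 4.1 > 4, as eq. (5) requires
K = 2.0 / abs(2.0 - phi - math.sqrt(phi ** 2 - 4.0 * phi))
print(round(K, 4))                # → 0.7298
```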

#### **2.2. Barebones PSO**

Kennedy [20] proposed a new PSO approach, the BB PSO, where the standard PSO velocity equation is replaced with samples from a normal distribution. In this method, the position update rule for the *i*th particle in the nth dimension becomes

$$x_{G+1,ni} = N \left( \frac{pbest_{G+1,ni} + gbest_{G+1,ni}}{2}, \left| pbest_{G+1,ni} - gbest_{G+1,ni} \right| \right) \tag{6}$$

$N(\,,\,)$ denotes the normal distribution. The method allows particles whose *pbest* is significantly different from *gbest* to make large steps towards it. When *pbest* is close to *gbest*, the step size decreases, which limits exploration in favor of exploitation.

In [20], a variation of BB PSO, the BBExp PSO, was also proposed. In this method, approximately half of the time the new position is based on samples from a normal distribution; for the rest of the time, it is taken from the particle's personal best position. The position update rule, (6), is modified into

$$x_{G+1,ni} = \begin{cases} N \left( \dfrac{pbest_{G+1,ni} + gbest_{G+1,ni}}{2}, \left| pbest_{G+1,ni} - gbest_{G+1,ni} \right| \right) & \text{if } U(0,1) > 0.5 \\ pbest_{G+1,ni} & \text{otherwise} \end{cases} \tag{7}$$

where $U(\,,\,)$ denotes the uniform distribution. In BBExp PSO, position updates equal $pbest_{ni}$ for half of the time, resulting in improved exploitation of *pbest* compared to the BB PSO. One may notice that the barebones PSO algorithms do not require parameter tuning. More details can be found in [20, 21].
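The two barebones position updates, (6) and (7), can be sketched for a single particle as follows (assuming NumPy; the *pbest* and *gbest* vectors are illustrative):

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# Illustrative pbest/gbest vectors for one particle, D = 3
pbest = np.array([1.0, 2.0, 3.0])
gbest = np.array([0.0, 0.5, 1.0])

mean = (pbest + gbest) / 2.0       # midpoint between the two attractors
std = np.abs(pbest - gbest)        # spread grows with their distance

# Eq. (6): BB PSO draws the whole new position from N(mean, std)
x_bb = rng.normal(mean, std)

# Eq. (7): BBExp PSO keeps pbest in roughly half of the dimensions
x_bbexp = np.where(rng.random(3) > 0.5, rng.normal(mean, std), pbest)
```

Note that neither update involves a velocity, an inertia weight, or learning factors, which is why these variants need no parameter tuning.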
