238 Renewable Energy – Trends and Applications

$$CRF(ir,R) = \frac{ir\,(1+ir)^{R}}{(1+ir)^{R}-1} \tag{11}$$

$$K\_i = \sum\_{n=1}^{y\_i} \frac{1}{(1+ir)^{\,n \cdot L\_i}} \tag{12}$$

$$ir = \frac{ir\_{nominal}-f}{1+f} \tag{13}$$

where *CRF* is the capital recovery factor, *ir* is the real interest rate, *R* is the project lifetime (in years), *yi* and *Li* are the number of replacements and the lifetime (in years) of component *i*, *irnominal* is the nominal interest rate, and *f* is the inflation rate.

**4.1 The cost of loss of load**

In this study, the cost of electricity interruptions is considered. The values found for this parameter are in the range of 5–40 US\$/kWh for industrial users and 2–12 US\$/kWh for domestic users [Garcia et al., 2006]. In this study, the cost of customers' dissatisfaction caused by loss of load is assumed to be 5.6 US\$/kWh [Garcia et al., 2006].

The annual cost of loss of load is calculated by:

$$NPC\_{loss} = LOEE \cdot C\_{loss} \cdot PWA(ir,R) \tag{14}$$

where *Closs* is the cost of customers' dissatisfaction (in this study, US\$5.6/kWh). Now, the objective function, whose aim is to minimize the total cost of the system, is described:

$$Cost = \sum\_{i} NPC\_i + NPC\_{loss} \tag{15}$$

where *i* indicates the type of source: wind, PV, or battery. To solve the optimization problem, all the constraints below have to be considered:

$$0 \le N\_i, \quad H\_{hub}^{min} \le H\_{hub} \le H\_{hub}^{max}, \quad \theta\_{PV}^{min} \le \theta\_{PV} \le \theta\_{PV}^{max}, \quad E\_{bat}^{min} \le E\_{bat} \le E\_{bat}^{max}, \quad ELF \le ELF\_{max} \tag{16}$$

The last constraint is the reliability constraint. The Equivalent Loss Factor (ELF) is the ratio of effective load outage hours to the total number of hours. In rural areas and stand-alone applications (as in this study), ELF < 0.01 is acceptable [Tina, 2006]. For solving the optimization problem, the particle swarm algorithm has been exploited.

**5. Operation strategy**

The system is simulated for each hour over a period of one year. In each time step, depending on the balance between generation and load, one of several operating states can occur; any remainder of the available power, beyond what the load and the battery can absorb, is consumed in the dump load.

**6. Optimization method**
For size optimization of the components, the PSO algorithm is used. The direct search method (a traditional optimization method) depends heavily on good starting points and may fall into local optima. In contrast, as a global method for solving both constrained and unconstrained optimization problems based on natural evolution, PSO can be applied to a variety of optimization problems that are not well suited to standard optimization algorithms. The genetic algorithm (GA) can also be employed to solve such problems; compared to GA, however, PSO is easier to implement and has fewer parameters to adjust. PSO has been successfully applied in many areas.

#### **6.1 The PSO algorithm**

Particle swarm optimization (PSO) was introduced in 1995 by Kennedy and Eberhart; the following is a brief introduction to the operation of the algorithm. PSO is a member of the wide category of swarm intelligence methods for solving global optimization problems. It is an evolutionary computation technique that combines individual improvement with population cooperation and competition, based on the simulation of simplified social models such as bird flocking, fish schooling, and swarm theory [Jahanbani et al., 2008].

Each individual in PSO, referred to as a particle, represents a potential solution. In analogy with evolutionary computation paradigms, a swarm is similar to a population, while a particle is similar to an individual.

In simple terms, each particle is flown through a multidimensional search space, where the position of each particle is adjusted according to its own experience and that of its neighbors.

Assume *x* and *v* denote a particle's position and velocity in the search space. The *i*th particle can then be represented as $x\_i = [x\_{i\_1}, x\_{i\_2}, \dots, x\_{i\_d}, \dots, x\_{i\_N}]$ in the *N*-dimensional space. Each particle continuously records the best solution it has achieved so far during its flight; the fitness value of this solution is called *pbest*. The best previous position of the *i*th particle is memorized and represented as:

$$pbest\_i = \left[ pbest\_{i\_1}, pbest\_{i\_2}, \dots, pbest\_{i\_d}, \dots, pbest\_{i\_N} \right] \tag{17}$$

The global best *gbest* is also tracked by the optimizer; it is the best value achieved so far by any particle in the swarm, and the best particle of all the particles is denoted by *gbestd*. The velocity of particle *i* is represented as $v\_i = [v\_{i\_1}, v\_{i\_2}, \dots, v\_{i\_d}, \dots, v\_{i\_N}]$.

The velocity and position of each particle are continuously adjusted based on the current velocity and the distances from the particle's position to *pbestid* and *gbestd*:

$$
\upsilon\_i(t+1) = \alpha(t)\upsilon\_i(t) + c\_1 r\_1\left(P\_i(t) - X\_i(t)\right) + c\_2 r\_2\left(G(t) - X\_i(t)\right)\tag{18}
$$

$$X\_i(t+1) = X\_i(t) + \chi v\_i(t+1) \tag{19}$$

where *c1* and *c2* are acceleration constants and *r1* and *r2* are random real numbers drawn from [0,1]. Thus the particle flies through potential solutions toward $P\_i(t)$ (its own best position) and $G(t)$ (the global best) in a navigated way, while still exploring new areas through the stochastic mechanism in order to escape from local optima.

Since there is no inherent mechanism for controlling the velocity of a particle, it is necessary to impose a maximum value *Vmax*, which limits the maximum travel distance in each iteration to avoid a particle flying past good solutions. Also, after updating the positions, it must be checked that no particle violates the boundaries of the search space; if a particle has violated a boundary, it is set at that boundary of the search space [Jahanbani et al., 2008].

In Eq. (18), $\alpha(t)$ is the inertia weight, employed to control the impact of the previous history of velocities on the current one; it is extremely important for ensuring convergent behavior and is discussed completely in the following section. In Eq. (19), $\chi$ is the constriction factor, which is used to restrain the velocity; here $\chi = 0.7$.
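The full update cycle of Eqs. (17)–(19), including the *Vmax* clamp and the boundary handling described above, can be sketched as follows. The swarm size, bounds, linearly decreasing inertia schedule, and toy fitness function are illustrative assumptions, not the chapter's settings.

```python
import random

# Minimal PSO sketch of Eqs. (17)-(19) with velocity clamping and boundary
# handling; parameter values and the toy quadratic are assumptions.

def pso(fitness, dim, bounds, n_particles=20, iters=100,
        c1=2.0, c2=2.0, chi=0.7):
    lo, hi = bounds
    vmax = 0.2 * (hi - lo)                       # Vmax: cap on travel per iteration
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                    # Eq. (17): best position per particle
    pbest_f = [fitness(x) for x in X]
    g = pbest[min(range(n_particles), key=lambda i: pbest_f[i])][:]  # gbest
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters                # assumed linearly decreasing inertia
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Eq. (18): velocity update toward pbest and gbest
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (g[d] - X[i][d]))
                V[i][d] = max(-vmax, min(vmax, V[i][d]))   # enforce Vmax
                # Eq. (19): position update with constriction factor chi
                X[i][d] = X[i][d] + chi * V[i][d]
                X[i][d] = max(lo, min(hi, X[i][d]))        # set violators at boundary
            f = fitness(X[i])
            if f < pbest_f[i]:                   # update pbest
                pbest_f[i], pbest[i] = f, X[i][:]
                if f < fitness(g):               # update gbest
                    g = X[i][:]
    return g, fitness(g)

best, best_f = pso(lambda x: sum(v * v for v in x), dim=3, bounds=(-5.0, 5.0))
```

In the sizing problem, each particle position would encode the decision variables of Eq. (16) (component counts, hub height, etc.) and the fitness would be the total cost of Eq. (15), with constraint-violating particles penalized or repaired.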
