204 Applications and Theory of Analytic Hierarchy Process - Decision Making for Strategic Decisions

**4.3. Optimal selection goals**

Finally, in the proposed selection criteria model, the final optimal selection goal is defined as the determination of the most preferable combination of available options, based on both measurements representing stakeholders' requirements and preferences. Thus:

**Definition 2** (**optimal selection goal**). *Given a set of user demands* $\{d_i\}_{i=1,\ldots,n}$ *with interdependencies defined by* $q$ *logical statements, each demand associated with an available set of options* $\{o_{ij}\}_{i=1\ldots n,\, j=1\ldots m_i}$, *where* $m_i$ *is the number of available options for the* $i$*th demand, and a CS-AHP selection criteria model* $(C, QT, P)$, *where* $C$ *is a set of concerns,* $QT$ *is a set of qualifier tags over aggregated intervals* $[Q_{agg}^{LB}, Q_{agg}^{UB}]$ *for* $k$ *selection-making criteria, and* $P$ *represents the set of specified preferences, the* **optimal selection goal** *is to find a valid combination of options that maximizes the overall selection criteria fitting degree* $r^S(o_1, \ldots, o_n)$, *subject to its affiliation to the most preferable combination of selection criteria and the hard constraints satisfaction.*

It is necessary to notice that, in comparison to standard optimization goals widely used in the literature [21–23], defined as *maximization of the overall aggregated values*, Definition 2 additionally requires affiliation to the most preferable combination of selection criteria.

In the following, we propose a meta-heuristic search approach that overcomes the aforementioned complexities.

**5. Optimal selection framework (***OptSelectionAHP***)**

In this section we present our approach, called *OptSelectionAHP*, for optimal selection problems, using GAs adapted to the proposed selection criteria model. GAs are adaptive heuristic search algorithms that simulate the processes of natural selection, natural evolution, and genetics, in full accordance with Charles Darwin's principle of "survival of the fittest" [11]. GAs start with a set of solutions (represented by "chromosomes") called a "population". The relative success of each individual on the problem is considered its "fitness" and is used to selectively reproduce the fittest individuals, producing similar but not identical offspring for the next generation. To this end, a crossover operator (which combines parts of two individuals to produce one or more new individuals) or a mutation operator (which makes random modifications to an individual's genome) is applied. By iterating this process, the population efficiently samples the space of potential individuals and eventually converges to the fittest one.

#### **5.1. GA adoption with optimization steps**

In our approach, we use GAs to evaluate different combinations of options in order to optimize stakeholders' preferences while satisfying the constraints defined among certain demands. Since the proposed two-layered decision criteria model lacks optimality in execution, we propose a parallelized implementation based on GAs. The quality measurements are dynamically calculated as each valid configuration is created (//4 and //7 in the algorithm), which imposes dynamism (i.e., adaptivity) on some other parts of the GA, in accordance with well-accepted and widely recommended approaches in optimization techniques [11, 24, 25]. The complexity benefits are estimated and discussed in Section 5.2, while the adoption of all GA elements is presented in **Figure 4**.

**Figure 4.** *OptSelectionAHP* framework.
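Before detailing each GA element, the overall generational cycle can be sketched as a minimal, library-free loop. This is a generic illustration, not the *OptSelectionAHP* implementation: fitness is minimized (matching the fitness definition used later in this section), and truncation selection is an assumed simplification.

```python
import random

def genetic_search(init, fitness, crossover, mutate, generations=100, pop_size=20):
    """Minimal generational GA: score the population, keep the fitter half
    as parents, refill with mutated offspring, repeat, return the best."""
    population = [init() for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(population, key=fitness)[: pop_size // 2]
        offspring = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
        population = parents + offspring
    return min(population, key=fitness)
```

In the full framework, `init`, `fitness`, `crossover`, and `mutate` are the adapted elements described below.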

**Service chromosome encoding.** An array encoding *e*1, …, *en* is used to represent a potential solution (i.e., one combination of available options) as a chromosome. The set of possible values for the *i*th element of the array, *ei*, is the set of available options of demand *di*. Additionally, if the *i*th demand is optional, the set of possible values for the *i*th element also includes 0 (representing that the demand is not fulfilled).

**Initial population.** To start the GA search process, an initial population must be created. It is generated randomly, representing random combinations of options, and hence may not be valid with respect to the *q* logical statements that define the interdependencies between demands. To solve the problem of generating invalid elements in the population, we use a simple *optionsTransform* algorithm as introduced in reference [20].
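The encoding and random initialization can be sketched as follows. This is illustrative Python, not the paper's implementation; the demands, option pools, and population size are hypothetical, and the *optionsTransform* repair step is not reproduced.

```python
import random

# Hypothetical instance: four demands; the third demand is optional.
# options[i] lists the available option ids (1..m_i) for demand d_i.
options = [[1, 2, 3], [1, 2], [1, 2, 3, 4], [1, 2]]
optional = [False, False, True, False]

def random_chromosome():
    """Encode one candidate combination: position i holds the chosen
    option for demand d_i, or 0 if an optional demand is unfulfilled."""
    genome = []
    for opts, is_opt in zip(options, optional):
        pool = opts + [0] if is_opt else opts
        genome.append(random.choice(pool))
    return genome

# Random initial population; genomes violating the q logical statements
# would then be repaired (optionsTransform in the paper [20]).
population = [random_chromosome() for _ in range(20)]
```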

**Fitness evaluation.** The fitness function quantifies the performance of an individual solution. There is a variety of studies regarding the definition of appropriate fitness functions [24, 26], and the major recommendation is to use fitness functions that penalize individuals that do not meet the problem constraints, which eventually drives the evolution towards constraint satisfaction [27]. The main advantage of this approach is that we can incorporate any constraint into the fitness function, along with an appropriate penalty measure for it, and we can expect the GA to take this constraint into account during optimization. Relative penalty values may be chosen to reflect intuitive judgments of the relative importance of satisfying different kinds of constraints [22]. Our fitness function takes two types of information into account, namely the structural constraints and the stakeholders' preferences, as follows.

Structural constraints may be defined for special demands to limit the corresponding selection criteria properties, and their violation should directly eliminate the corresponding combination of options. The penalty factor is defined as the weighted distance from constraint satisfaction, which is measured as:

$$D(e_1, \ldots, e_n) = \sum_{i=1}^{l} c_i^l(e_1, \ldots, e_n) \cdot y_i, \quad \text{where} \quad y_i = \begin{cases} 0, & c_i^l(e_1, \ldots, e_n) \le u_i \\ 1, & c_i^l(e_1, \ldots, e_n) > u_i \end{cases}$$
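The distance $D$ can be rendered directly from this definition. The sketch below is illustrative Python; the constraint functions and thresholds are hypothetical stand-ins for the model's structural constraints $c_i^l$ and limits $u_i$.

```python
def constraint_distance(genome, constraints, limits):
    """D(e_1,...,e_n): sum the value of every violated constraint c_i
    (those exceeding their threshold u_i, i.e. y_i = 1); satisfied
    constraints (y_i = 0) contribute nothing."""
    total = 0.0
    for c, u in zip(constraints, limits):
        value = c(genome)
        if value > u:        # y_i = 1: the constraint is violated
            total += value   # the violation contributes its magnitude
    return total
```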

In the literature [25], the penalty weight in a fitness function is dynamically increased with the number of generations, with higher weight values assigned to the more important requirements. If the weight for the penalty factor is too low, there is a risk that individuals will not be discarded although they violate the constraints [27].
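A minimal illustration of such a generation-dependent weight follows. This is a hypothetical schedule, not the one used in the paper; the base and growth values are assumptions.

```python
def penalty_weight(gen, base=1.0, growth=1.05):
    """Penalty weight w(gen) that grows with the generation number, so
    infeasible individuals are tolerated early but discarded later."""
    return base * growth ** gen
```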

However, the optimal selection goal is to find the combination of options which maximizes both kinds of stakeholders' preferences about selection criteria, as follows:

2(a) In order to ensure falling into the most preferable combination of covering subintervals, we use a dynamic penalty factor defined as the ratio between the absolute distance separating the decision criteria fitting degree $r^{QT}(QT_{i_1}^{1} \times \ldots \times QT_{i_n}^{n})$ of the combination of covering subintervals reached by the running configuration of services from the most preferable combination of covering subintervals reached by the current population, denoted $r_{MAX-gen}^{QT}()$, and the latter value itself. The penalty is updated in every generation according to the information gathered from the population, which is known in the literature as an *adaptive penalty* [17]. The time complexity needed to calculate this penalty factor is small, since it involves only comparing the decision criteria fitting degrees of new elements in the population with the previous most preferable combination of covering subintervals.

2(b) The overall quality of a combination of options is measured by the decision criteria fitting degree $r^S()$, but in order to keep the fitness function positive [11], we use its reciprocal value, so that higher values correspond to less preferable combinations of options. Thus, the optimal selection process is driven by finding the minimal value of the fitness function, defined as:


$$Fitness(e_1, \ldots, e_n) = \frac{1}{r^{S}(e_1, \ldots, e_n)} + w(gen) \cdot D(e_1, \ldots, e_n) + \frac{\left| r_{MAX-gen}^{QT}() - r^{QT}\left(QT_{i_1}^{1} \times \ldots \times QT_{i_n}^{n}\right) \right|}{r_{MAX-gen}^{QT}()}$$
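Assembled from the three terms described in 2(a) and 2(b), the fitness computation can be sketched as follows. This is illustrative Python; the measures $r^S$, $r^{QT}$, the distance $D$, and the weight schedule $w$ are passed in as callbacks, since their definitions live elsewhere in the model, and the parameter names are assumptions.

```python
def fitness(genome, gen, r_S, r_QT, r_QT_max_gen, D, w):
    """Minimized fitness: reciprocal overall quality r^S, plus the
    generation-weighted constraint distance D, plus the adaptive penalty
    measuring how far the genome's qualifier-tag combination is from the
    best one in the current generation (r_QT_max_gen)."""
    adaptive = abs(r_QT_max_gen - r_QT(genome)) / r_QT_max_gen
    return 1.0 / r_S(genome) + w(gen) * D(genome) + adaptive
```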

The proposed modification of the penalty factors changes the defined stopping criterion: once all hard constraints are met [i.e., *D*(*g*) = 0], the process continues for a fixed number of iterations in order to reach lower values of the fitness function. Alternatively, the search iterates until the best-fitness individual remains unchanged for a given number of iterations.
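The alternative stopping criterion (best individual unchanged for a given number of iterations) can be sketched as below; this is a hypothetical helper, and `patience` is an assumed parameter name.

```python
def should_stop(best_fitness_history, patience=30):
    """Halt once the best (minimal) fitness has not improved during
    the last `patience` generations."""
    if len(best_fitness_history) <= patience:
        return False
    recent = min(best_fitness_history[-patience:])
    earlier = min(best_fitness_history[:-patience])
    return recent >= earlier
```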

**Crossover and mutation.** Traditional schemes utilize two operators that combine one or more chromosomes to produce a new chromosome: mutation and crossover [11]. We use the *k*-point crossover operator, as is common for non-binary genomes: it splits the genome at *k* randomly selected crossover points and pastes together parts that alternate between the parental genomes. After the crossover operator, a random point mutation operator is applied, which selects a random position in the genome and inserts a randomly generated value. As a result of both operators, invalid chromosomes might be generated, so we employ the *optionsTransform* algorithm as a repair method that restores the feasibility of the chromosome.
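The two operators can be sketched under the array encoding as follows. This is illustrative Python, not the paper's implementation; it assumes a genome length greater than *k* and a per-position pool of valid values, and the repair step is omitted.

```python
import random

def k_point_crossover(p1, p2, k=2):
    """Split the genome at k random cut points and alternate the
    segments taken from each parent."""
    n = len(p1)
    cuts = sorted(random.sample(range(1, n), k)) + [n]
    child, take_p1, prev = [], True, 0
    for cut in cuts:
        child.extend((p1 if take_p1 else p2)[prev:cut])
        take_p1, prev = not take_p1, cut
    return child

def point_mutation(genome, option_pools):
    """Replace one randomly chosen gene with a random value drawn from
    the set of values valid at that position."""
    g = list(genome)
    i = random.randrange(len(g))
    g[i] = random.choice(option_pools[i])
    return g
```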

#### **5.2. Complexity analysis**


The algorithm complexity of *OptSelectionAHP* can be decomposed as follows. First, let us use the following notation: *k* is the number of selection criteria, *n* the number of demands in the selection process, *m* the maximal number of available options per demand, *s* the maximal number of covering subintervals per selection criterion, and *r* the number of preferences over the two-layered selection criteria model.

Step //1 in the algorithm requires $O(knm)$ time to estimate the ranges of each selection criteria dimension for each demand in the selection process. The propagation of selection criteria ranges (step //2) costs $O(k \cdot \log n)$, as explained in reference [13]. Thus, the creation of the two-layered selection criteria structure has polynomial time complexity.

The complexity of the adopted GA can be decomposed as follows.

Step //3 requires $O(P \cdot n \cdot T(optionsTransform))$ time, where $T(optionsTransform)$ is the time needed for the *optionsTransform* algorithm. In reference [27], it is estimated as $O(cnk \log^2 k)$, where *c* is the maximum number of constraints. The calculation of selection criteria measurements for the population (step //4) takes $O(P(r + k^2 + n \cdot s^2))$ operations [2].

The following steps are repeated *G* times:

(step //5) The parent selection operation costs $O(1)$.

(step //6) The crossover operation costs $O(n)$ and the mutation operator costs $O(1)$.

(step //7) The replace operator costs $O(P)$; the validity of each element of the population is checked with the *optionsTransform* algorithm, which takes $T(optionsTransform)$. For each element of the population, the selection criteria measurement is calculated, which takes $O(r + k^2 + n \cdot s^2)$ operations, as given earlier.

Thus, the iteration steps of the GA take $O_{GA} = O(G(r + k^2 + n \cdot s^2 + n + P + cnk \cdot \log^2 k))$. This is a significantly reduced complexity compared to our previous work [20], where the complexity was exponential in the size of the selection criteria model.
