In Phase 1, the criteria are aggregated at each period and for each pair of actions. Then, in Phase 2 and for each pair of actions, a measure of distance between preference relations is used for temporal aggregation of the preference relations obtained in Phase 1. A graph showing the relations between all pairs of actions illustrates the results of this aggregation. Next, in Phase 3, an exploitation procedure is used to compute the performance of each action $a_i$. The following subsections provide details on the three phases. A full version of the mathematical details of the method is provided in [7]. **Figure 2** graphs the steps of the MUPOM method.

**4.1 Phase 1: multi-criteria aggregation**

Multi-criteria aggregation relies on pairwise comparisons and concordance-discordance principles. For each pair of actions, we compute the concordance index (resp. discordance index), which evaluates the extent to which each criterion agrees (resp. disagrees) with the assertion "action $a_i$ is at least as good as action $a_k$." Then, if a majority of the criteria support this assertion and if the opposition of the other criteria (the minority) is not "too strong," action $a_i$ is declared to be at least as good as action $a_k$. Strong and weak outranking relations are constructed at this step. Next, the obtained outranking relations are transformed into preference relations $P$, $Q$, $I$, $R$, $Q^{-1}$, $P^{-1}$. Thus, for each pair of actions and for each period, we obtain either a strict preference ($P$), weak preference ($Q$), indifference ($I$), incomparability ($R$), inverse weak preference ($Q^{-1}$), or inverse strict preference ($P^{-1}$). The multi-criteria aggregation is a four-step phase [7]:

**Figure 2.** *Steps of the MUPOM method [7].*

*Temporal MCDA Methods for Decision-Making in Sustainable Development Context DOI: http://dx.doi.org/10.5772/intechopen.90698*

**Step 1.1**: For each period $t$ and for each pair of actions $(a_i, a_k)$, compute the concordance index $C^t(a_i, a_k)$.

**Step 1.2**: For each period $t$, for each pair of actions $(a_i, a_k)$, and for each criterion $j$, compute the discordance index $D_j^t(a_i, a_k)$.

**Step 1.3**: Construct the relational preference systems $S^t(a_i, a_k)$ for each pair of actions $(a_i, a_k)$ and for each period $t$ using concordance and discordance thresholds. We deduce that action $a_i$ strongly outranks $a_k$ ($a_i S^F a_k$) or that $a_i$ weakly outranks $a_k$ ($a_i S^f a_k$).

**Step 1.4**: For each period $t$ and for each pair of actions $(a_i, a_k)$, convert the obtained outranking relations into a preference relation $R^t(a_i, a_k) \in \{P, Q, I, R, Q^{-1}, P^{-1}\}$, where $P$, $Q$, $I$, $R$ refer, respectively, to strict preference, weak preference, indifference, and incomparability. We note $a_i P^{-1} a_k$ for $a_k P a_i$, and $a_i Q^{-1} a_k$ for $a_k Q a_i$.
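As an illustration of Steps 1.1 and 1.2, concordance and per-criterion discordance indices can be sketched as follows. The criterion names, weights, evaluations, and the normalizing scale are illustrative assumptions, not the exact formulas of [7]:

```python
# Illustrative sketch of ELECTRE-style concordance/discordance indices.
# All data and the normalization are assumptions for demonstration only.

def concordance(a_i, a_k, weights):
    """C^t(a_i, a_k): weighted share of criteria for which a_i is
    at least as good as a_k."""
    total = sum(weights.values())
    agree = sum(w for j, w in weights.items() if a_i[j] >= a_k[j])
    return agree / total

def discordance(a_i, a_k, j, scale):
    """D_j^t(a_i, a_k): how strongly criterion j opposes 'a_i S a_k',
    normalized by an assumed value range for the criterion."""
    return max(0.0, (a_k[j] - a_i[j]) / scale[j])

# Evaluations of two actions on three criteria at one period t.
a1 = {"cost": 7, "co2": 5, "jobs": 9}
a2 = {"cost": 6, "co2": 8, "jobs": 4}
w = {"cost": 0.5, "co2": 0.3, "jobs": 0.2}
scale = {"cost": 10, "co2": 10, "jobs": 10}

print(concordance(a1, a2, w))             # 0.7: 'cost' and 'jobs' support a1 S a2
print(discordance(a1, a2, "co2", scale))  # 0.3: only 'co2' opposes the assertion
```

If the concordance exceeds a concordance threshold and no discordance exceeds a discordance threshold, the strong outranking $a_1 S^F a_2$ would be deduced in Step 1.3.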

#### **4.2 Phase 2: temporal aggregation**


This phase aggregates the preference relations obtained for each pair of actions at each period (the results of Phase 1). The aggregation uses a measure of distance between preorders [19]: the aggregated preference relation that minimizes the distance to the per-period preorders is retained. The temporal aggregation phase consists of three steps [7]:

**Step 2.1**: For each pair of actions $(a_i, a_k)$ and at each period $t$, compute the distance between the preference relation $R^t(a_i, a_k)$ resulting from Step 1.4 and each possible preference relation $H \in \{P, Q, I, R, Q^{-1}, P^{-1}\}$. This distance is noted $\Delta(H, R^t(a_i, a_k))$.

**Step 2.2**: Aggregate the distances obtained at each period into a mean distance $\Phi^H(a_i, a_k)$: $\Phi^H(a_i, a_k) = \sum_{t=1}^{T} \alpha_t \, \Delta(H, R^t(a_i, a_k))$, where $\alpha_t$ is the relative importance of period $t$.

**Step 2.3**: Assign to the pair of actions $(a_i, a_k)$ the preference relation $H^*$ such that:

$$H^* = \left\{ H^* \,/\, \Phi^{H^*}(a_i, a_k) = \min_{H \in \{P,\, Q,\, I,\, R,\, Q^{-1},\, P^{-1}\}} \Phi^H(a_i, a_k) \right\}.$$

A graph representing relations between all pairs of actions displays the results.
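Steps 2.1 to 2.3 can be sketched as follows. The numeric scale standing in for the distance between preference relations is a crude illustrative assumption (it even places incomparability $R$ at the same point as indifference $I$); the actual distance of [19] is more refined:

```python
# Sketch of temporal aggregation: pick the relation H* minimizing the
# alpha-weighted mean distance to the per-period relations R^t(a_i, a_k).
# SCALE is an assumed stand-in for the preorder distance of [19].

RELATIONS = ["P", "Q", "I", "R", "Q-1", "P-1"]
SCALE = {"P": 2, "Q": 1, "I": 0, "R": 0, "Q-1": -1, "P-1": -2}

def delta(h1, h2):
    """Assumed distance between two preference relations (Step 2.1)."""
    return abs(SCALE[h1] - SCALE[h2])

def aggregate(period_relations, alphas):
    """Step 2.3: H* = argmin_H sum_t alpha_t * delta(H, R^t)."""
    def phi(h):  # Step 2.2: mean distance Phi^H(a_i, a_k)
        return sum(a * delta(h, r) for a, r in zip(alphas, period_relations))
    return min(RELATIONS, key=phi)

# a_i vs a_k: strictly preferred in periods 1 and 3, weakly in period 2.
print(aggregate(["P", "Q", "P"], [0.5, 0.3, 0.2]))  # P
```

With these weights, strict preference dominates the horizon, so the aggregated relation is $P$; shifting more weight $\alpha_t$ to the middle period would pull $H^*$ toward $Q$.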

#### **4.3 Phase 3: exploitation**

This phase computes the performance of each action $a_i$. The performance calculation is based on the number of actions that are preferred (strictly or weakly) to $a_i$ and the number of actions to which $a_i$ is preferred (strictly or weakly). The set of "best compromise" action(s) is then deduced from the computed performances: it contains the actions with the highest performance together with those that are incomparable to them. Details on the exploitation phase are provided in [19].
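A sketch of this performance computation, assuming a small hypothetical relation table (the exact performance formula of [19] may differ):

```python
# Sketch: score each action by the number of actions it (strictly or
# weakly) outranks minus the number that outrank it.
# The stored relation table REL is hypothetical example data.

REL = {("a1", "a2"): "P", ("a1", "a3"): "Q", ("a2", "a3"): "R"}

def relation(a, b):
    """Read the relation 'a ? b', inverting if it is stored as (b, a)."""
    inv = {"P": "P-1", "Q": "Q-1", "P-1": "P", "Q-1": "Q", "I": "I", "R": "R"}
    return REL[(a, b)] if (a, b) in REL else inv[REL[(b, a)]]

def performance(a, actions):
    others = [b for b in actions if b != a]
    preferred = sum(1 for b in others if relation(a, b) in ("P", "Q"))
    dominated = sum(1 for b in others if relation(a, b) in ("P-1", "Q-1"))
    return preferred - dominated

actions = ["a1", "a2", "a3"]
scores = {a: performance(a, actions) for a in actions}
print(scores)  # a1 scores highest; a2 and a3 are tied and incomparable
```

Here the "best compromise" set would contain only `a1`, since no action is incomparable to it; an action incomparable ($R$) to the top scorer would also be retained.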

The MUPOM method makes important contributions. First, it generalizes outranking methods based on ELECTRE principles (concordance, discordance, and credibility indices) to multi-period, temporal settings. The method therefore supports partial preferences and partial rankings and confirms that outranking methods can be generalized to a temporal context. In practical terms, MUPOM is valuable for researchers and practitioners concerned with decision-making processes under sustainability requirements. Beyond the financial dimension, it enables the integration of social and environmental impacts in the short, medium, and long term. By taking into account both the immediate and the future consequences of actions, it helps ensure that decisions do not compromise future generations.

**5.1 Phase 1: multi-criteria aggregation and Monte Carlo simulations**
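The core idea of this phase, detailed in the steps that follow, is to sample evaluations uniformly from intervals (one draw per Monte Carlo scenario) and to compute PROMETHEE-style flows for each scenario. A minimal sketch, in which the intervals, weights, and simple "usual" preference function are illustrative assumptions rather than the chapter's data:

```python
import random

# Sketch: one Monte Carlo scenario s of evaluations drawn uniformly from
# intervals [g_j^-(a), g_j^+(a)], followed by PROMETHEE-style outgoing
# flows. Actions, criteria, and weights are illustrative assumptions.

random.seed(42)

intervals = {
    "a1": {"wood": (60, 80), "biodiv": (40, 60)},
    "a2": {"wood": (50, 70), "biodiv": (55, 75)},
}
weights = {"wood": 0.6, "biodiv": 0.4}
actions = list(intervals)

def outgoing_flow(a, evals):
    """phi+(a): weighted share of pairwise wins of a over the other
    actions, using the 'usual' (strict-dominance) preference function."""
    others = [b for b in actions if b != a]
    return sum(
        w * (1 if evals[a][j] > evals[b][j] else 0)
        for b in others
        for j, w in weights.items()
    ) / len(others)

# One uncertainty scenario s: draw each evaluation in its interval.
evals = {
    a: {j: random.uniform(lo, hi) for j, (lo, hi) in crits.items()}
    for a, crits in intervals.items()
}
for a in actions:
    print(a, round(outgoing_flow(a, evals), 2))
```

Repeating the draw over many scenarios yields a distribution of flows per action and period, whose mean, standard deviation, and interval limits are then used as described in Steps 1.4 and 1.5.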

In Phase 1, the criteria at each period of the horizon are aggregated. The method represents uncertainty with probability distributions for the uncertain parameters (evaluations and weights) and uses Monte Carlo simulation to generate numerical values for each uncertainty scenario. In this illustration, and without loss of generality, uniform distributions over intervals are simulated for each parameter and for each period $t$. For each uncertainty scenario $s$, we generate from the interval a specific value for the evaluations and weights. Then, at each period $t$ and for each scenario $s$, we use the PROMETHEE method and compute the outgoing flows $\phi^{+}_{t,s}(a_i)$ and incoming flows $\phi^{-}_{t,s}(a_i)$ for each action $a_i$. As part of the model, we propose a generalization of PROMETHEE III that associates an interval to the outgoing and incoming flows of each action and deduces a partial preorder of the actions. The multi-criteria aggregation and Monte Carlo simulation phase consists of these steps [8]:

**Steps 1.1 and 1.2**: For each period $t$, we conduct a Monte Carlo simulation $s$. Each simulation generates, for each criterion $j$, a specific evaluation of action $a_i$, noted $g^{t,s}_{j}(a_i)$, in the interval $[g^{-}_{j}(a_i), g^{+}_{j}(a_i)]$. Each simulation also considers a different value of the weight of each criterion $j$, noted $\pi^{t,s}_{j}$.

**Step 1.3**: For each scenario $s$, action $a_i$, and period $t$, we apply PROMETHEE and compute the outgoing and incoming flows $\phi^{+}_{t,s}(a_i)$ and $\phi^{-}_{t,s}(a_i)$.

**Step 1.4**: In this step, the outgoing and incoming flow distributions are defined by computing the means $\phi^{+}_{t}(a_i)$ and $\phi^{-}_{t}(a_i)$ and the standard deviations $\sigma(\phi^{+}_{t}(a_i))$ and $\sigma(\phi^{-}_{t}(a_i))$ (see [8]).

**Step 1.5**: The resulting interval limits of the outgoing and incoming flows, $\phi^{+}_{\min,t}(a_i)$, $\phi^{+}_{\max,t}(a_i)$, $\phi^{-}_{\min,t}(a_i)$, and $\phi^{-}_{\max,t}(a_i)$, are deduced.

**Step 1.6**: Preference relations $S^t(a_i, a_k) \in \{I, P, Q, R\}$ are deduced, depending on the values of these interval limits.

#### **5.2 Phase 2: temporal aggregation**

Here the temporal aggregation procedure of MUPOM (Section 4.2) is used to aggregate the preference relations obtained over the periods in Step 1.6. As with the MUPOM method, the measure of distance between preorders developed in [19] is used.

#### **5.3 Phase 3: exploitation**

The temporal exploitation procedure of MUPOM (Section 4.3) is used in this phase. It computes the performance of each action $a_i$ based on the number of actions that are preferred (strictly or weakly) to $a_i$ and the number of actions to which $a_i$ is preferred (strictly or weakly).

**6. Case study**

In this section, MUPOM and PROMETHEE-MP are applied in the context of sustainable forest management. Sustainable forest management is a well-suited application context since it involves conflicting and heterogeneous criteria that must be assessed over a horizon of about 150 years. Indeed, the selection of sustainable forest management options should achieve a balance between biodiversity, soil and
