**2. Methodology for integrated multicriteria decision-making with uncertainty (MIMDU)**

This section presents the process defined by MIMDU, detailing its three phases along with an example case to ease understanding. The three phases are modeling opinions, ranking alternatives, and interpreting results. For clarity, the example considers a small number of alternatives (three) and criteria (three), although real case studies can involve larger numbers of both, as seen in Section 3. The application of the third phase in the example case is complemented with a sensitivity analysis that shows the potential of MIMDU to assist decision-making. The three phases are discussed next:

**P1. Modeling Opinions**: Triangular fuzzy numbers (TFNs) are used in the form of fuzzy rating scales, so that the shape of each TFN is defined continuously through intuitive questions that capture the experts' hesitance [13]. Two steps are defined in this phase:

Step 1: The expert must choose a value on a 0–5 scale to rate the importance of a criterion (high value means high importance of the criterion) and to evaluate an alternative according to a criterion (high value means high adequacy of the alternative to the criterion).

Step 2: The expert must express his/her confidence in the reference value given in Step 1, choosing among the five options presented in **Table 1**. The more confident the expert is, the narrower the "*support*" (base of the TFN) of the quantified answer will be, and the less vague the quantification.
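The two steps above can be sketched as a small function that builds a TFN from a reference value and a confidence level. Since Table 1 is not reproduced here, the half-widths assigned to each confidence level below are hypothetical placeholders, not the values from the chapter:

```python
def tfn_from_rating(reference, confidence, scale_max=5.0):
    """Build a triangular fuzzy number (a, b, c) from a 0-5 reference
    value (Step 1) and a confidence level (Step 2).

    The support half-widths are illustrative stand-ins for the
    quantification in Table 1 (CS = completely sure ... VU = very unsure).
    """
    half_width = {"CS": 0.0, "S": 0.5, "I": 1.0, "U": 1.5, "VU": 2.0}[confidence]
    a = max(0.0, reference - half_width)        # left end of the support
    c = min(scale_max, reference + half_width)  # right end of the support
    return (a, reference, c)                    # (left, mode, right)

# E1 is completely sure of a 3; E3 rates a 1 but is very unsure (cf. Figure 2)
print(tfn_from_rating(3, "CS"))  # (3.0, 3, 3.0) -- a crisp answer
print(tfn_from_rating(1, "VU"))  # (0.0, 1, 3.0) -- a wide, vague TFN
```

Note how a low confidence level widens the base of the TFN, while full confidence collapses it to a single crisp value.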

Example case:

This example considers three experts assessing the importance of the three criteria shown in **Table 2**. The adequacy of each alternative according to each criterion is


#### **Table 1.**

*Options to express the level of confidence and quantify the support of the TFN [7].*


#### **Table 2.**

*Experts' evaluations of the importance of the criteria [7].*

presented in **Table 3**. In both processes, namely rating the importance of the criteria and evaluating the alternatives according to the criteria, each expert first provides a reference value on a 0–5 scale and the associated confidence level, from the options in **Table 1**.

For instance, **Figure 2** illustrates the importance given by the three experts (E1, E2, E3) to C1: E1 is sure of the importance, with a score of 3 out of 5; E2 is indecisive, with a score of 4; and E3 rates it a 1 but is very unsure.

This approach establishes a more precise modeling of opinions than is common in the literature, since the TFNs are not defined beforehand. Such flexibility makes it possible to quantify the experts' level of confidence (Step 2) and to define several confidence levels associated with their answers, i.e., concrete or vague answers. The confidence levels have a decisive influence on the final ranking of the alternatives, as shown in Phase 3 (P3) below. They also allow experts to express a potential lack of confidence, which may reduce the pressure they feel when answering, especially in scenarios of limited knowledge or high uncertainty.

**P2. Alternatives Ranking**: The Compromise Ranking Method (CRM) is used to calculate a final FN for each alternative as an indicator of how good that alternative is compared with the others. For the CRM, this indicator is the distance of each alternative to an ideal solution, which determines the best of all the alternatives across the selected criteria [14]. The best alternative is the one with the lowest distance to the ideal solution, a utopian solution that performs optimally (achieving the best evaluations) on all the criteria considered [7]. In particular, MIMDU includes a

*Perspective Chapter: A Novel Method for Integrated Multicriteria… DOI: http://dx.doi.org/10.5772/intechopen.106589*


#### **Table 3.**

*Experts' evaluations of the adequacy of alternatives according to criteria [7].*

**Figure 2.** *Modeling of the importance given by the three experts to C1.*

fuzzy version of the CRM (F-CRM), represented in Eqs. (1) and (2), using α-cut intervals. Thus, each FN is represented as a sequence of a discrete number of intervals, one per cut, from the bottom (*α* = 0) to the top (*α* = 1). The reader is referred to [7] for an exhaustive explanation of α-cut arithmetic.
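For a TFN (a, b, c), the α-cut interval has the closed form [a + α(b − a), c − α(c − b)]. A minimal sketch of discretizing a TFN into 11 such intervals, as done in the example case:

```python
def alpha_cuts(tfn, levels=11):
    """Discretize a triangular fuzzy number (a, b, c) into a list of
    (alpha, (low, high)) intervals, from alpha = 0 (the full support)
    up to alpha = 1 (the mode)."""
    a, b, c = tfn
    cuts = []
    for k in range(levels):
        alpha = k / (levels - 1)
        low = a + alpha * (b - a)    # left bound rises toward the mode
        high = c - alpha * (c - b)   # right bound falls toward the mode
        cuts.append((alpha, (low, high)))
    return cuts

cuts = alpha_cuts((0.0, 1.0, 3.0))
print(cuts[0])    # (0.0, (0.0, 3.0)) -- the support at alpha = 0
print(cuts[-1])   # (1.0, (1.0, 1.0)) -- degenerates to the mode
```

The F-CRM then propagates the interval bounds for each α-level through Eqs. (1) and (2).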

$${}^{\alpha}L_{i,p} = \left[\sum_{j=1}^{n} \left({}^{\alpha}W_{j}\right)^{p} \cdot \left(\frac{{}^{\alpha}F_{j}^{*} - {}^{\alpha}f_{ij}}{{}^{\alpha}F_{j}^{*} - {}^{\alpha}f_{j}^{*}}\right)^{p}\right]^{1/p} \tag{1}$$

$${}^{\alpha}L_{i} = 0.5 \cdot {}^{\alpha}L_{i,1} + 0.5 \cdot {}^{\alpha}L_{i,\infty} \tag{2}$$

where $^{\alpha}L_{i,p}$ is the standardized distance of alternative *i* to the ideal solution for a given metric *p*, and $^{\alpha}L_{i}$ is the final distance to the ideal solution, which constitutes the final score of alternative *i* and allows it to be ranked. In particular, for each value of *α*, $^{\alpha}W_{j}$ is the weight of criterion *j* (an average of the opinions on the importance of criterion *j* from all the experts consulted); $^{\alpha}f_{ij}$ is the evaluation of alternative *i* according to criterion *j* (also an average over all the experts consulted); $^{\alpha}F_{j}^{*}$ and $^{\alpha}f_{j}^{*}$ are, respectively, the best and the worst values obtained by any alternative on criterion *j*; and *p* is a metric used to calculate different distances to the ideal solution (as mentioned above, the one that ideally achieves the best values for all the criteria). The average $^{\alpha}L_{i}$ is calculated from the two usual, extreme metrics: *p* = 1 for the maximum global utility ($^{\alpha}L_{i,1}$) and *p* = ∞ for the minimum individual regret ($^{\alpha}L_{i,\infty}$) [15].
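As a simplified illustration of Eqs. (1) and (2), the following sketch computes the crisp case (the *α* = 1 cut only), where each evaluation is a single number; the full F-CRM would repeat this for the low and high bounds of every α-cut interval. The weights and evaluations are hypothetical:

```python
def crm_distance(weights, evals, i, p):
    """Standardized distance of alternative i to the ideal solution,
    Eq. (1), for crisp values (the alpha = 1 cut).

    evals[i][j] is the evaluation of alternative i on criterion j;
    higher is better, so F*_j = max over i and f*_j = min over i."""
    terms = []
    for j in range(len(weights)):
        best = max(row[j] for row in evals)   # F*_j
        worst = min(row[j] for row in evals)  # f*_j
        ratio = (best - evals[i][j]) / (best - worst)
        terms.append(weights[j] * ratio)      # (W_j)^p * (ratio)^p = (W_j * ratio)^p
    if p == float("inf"):                     # minimum individual regret
        return max(terms)
    return sum(t ** p for t in terms) ** (1 / p)

def final_score(weights, evals, i):
    """Eq. (2): average of the p = 1 and p = infinity metrics."""
    return 0.5 * crm_distance(weights, evals, i, 1) \
         + 0.5 * crm_distance(weights, evals, i, float("inf"))

# Hypothetical data: 3 alternatives x 3 criteria, normalized weights
weights = [0.5, 0.3, 0.2]
evals = [[3, 4, 2], [1, 2, 5], [4, 3, 3]]
scores = [final_score(weights, evals, i) for i in range(3)]
print(scores)  # the lowest distance identifies the best alternative
```

The same arithmetic, applied independently to the bounds of each α-cut interval, yields the fuzzy distances $^{\alpha}L_{i}$ of the example case.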

Example case:

Applying Eqs. (1) and (2) for 11 values of α (from 0 to 1 with a step size of 0.1), the resulting distances to the ideal solution for each alternative ($^{\alpha}L_{i}$) are shown in **Figure 3**. As can be seen, all the alternatives have distances to the ideal solution above 0. Intuitively, A1 and A3 seem to achieve lower distances than A2, so the latter could be discarded. However, the "Results Interpretation" phase is needed to discuss which of the remaining two is the best (smallest distance), since their fuzzy numbers clearly overlap in this example.

**P3. Results Interpretation**: As mentioned in P2, ranking alternatives from their fuzzy values might be misleading (e.g., in the above example, it is not clear whether A1 or A3 achieves the lower fuzzy distance to the ideal solution). Thus, a comparison of a crisp ranking and a fuzzy-based ranking is proposed:

Crisp ranking: determined by the results of $^{1}L_{i}$, which does not consider the experts' level of confidence, only the numerical values given by the experts in Step 1. This deterministic ranking is intrinsically meaningful and was the only decision-aid source in relevant studies in the literature [2, 16].

Fuzzy-based ranking: the Middle Point of the Mean Interval (MPMI), described in Eq. (3), is used to calculate the best non-fuzzy performance value of each final FN ($^{\alpha}L_{i}$) [17].

**Figure 3.** *FN for the distance of A1–A3 to the ideal solution [7].*

This method integrates, for each alternative, the average of the lowest and highest values of each *α*-cut interval of the alternative's distance to the ideal solution:

$$\mathrm{MPMI}_{i} = \int_{0}^{1} \frac{\min \, {}^{\alpha}L_{i} + \max \, {}^{\alpha}L_{i}}{2} \, \mathrm{d}\alpha \tag{3}$$
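With the distances available only at 11 discrete α-levels, the integral in Eq. (3) can be approximated by the trapezoidal rule. A minimal sketch, assuming a list of (alpha, (low, high)) intervals for the final distance (a hypothetical fuzzy distance is used as input):

```python
def mpmi(cuts):
    """Middle Point of the Mean Interval, Eq. (3): integrate the midpoint
    of each alpha-cut interval over alpha in [0, 1], using the
    trapezoidal rule on the discrete cuts."""
    total = 0.0
    for (a0, (lo0, hi0)), (a1, (lo1, hi1)) in zip(cuts, cuts[1:]):
        mid0 = (lo0 + hi0) / 2   # interval midpoint at alpha_k
        mid1 = (lo1 + hi1) / 2   # interval midpoint at alpha_{k+1}
        total += (mid0 + mid1) / 2 * (a1 - a0)
    return total

# 11 alpha-cuts of a hypothetical triangular distance (0.2, 0.3, 0.5)
cuts = [(k / 10, (0.2 + 0.01 * k, 0.5 - 0.02 * k)) for k in range(11)]
print(round(mpmi(cuts), 4))  # 0.325 (exact here: the midpoint is linear in alpha)
```

A single crisp number per alternative results, so the fuzzy-based ranking is simply the ordering of the MPMI values.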

Example case:

**Table 4** shows both the crisp ranking and the fuzzy-based values that can be used to rank the alternatives. As shown in **Table 4**, the two rankings diverge. According to the crisp ranking, the best alternative would be A1 (lowest distance to the ideal solution), followed by A3 and A2, i.e., A1-A3-A2. These results can be observed at the top of **Figure 3** (values for *α* = 1 only). However, for the fuzzy-based ranking, which considers the experts' level of confidence, the ranking of the alternatives would be A3-A1-A2. The difference arises because the experts gave A1 better numerical evaluations than A3 for its adequacy to each criterion (Step 1) but, at the same time, expressed a lower level of confidence in those evaluations (Step 2). Conversely, A3 was evaluated slightly worse but more confidently, which can eventually make A3 the more reliable choice when the experts' opinions are assessed as a whole.

In line with this discussion, and to fully show the potential of MIMDU to assist decision-making, a sensitivity analysis was carried out by modifying the evaluation of A3 according to C3 performed by expert E3. In particular, it is assumed that this expert evaluates A3 according to C3 with the same reference value of 2 shown in the last row of **Table 3**, but with each of the five confidence levels: CS, S, I, U, VU (i.e., five scenarios), instead of only the original I. **Table 5** shows the non-fuzzy performance value of the distance of alternative A3 to the ideal solution ($MPMI_{A3}$) for all the confidence scenarios. The lowest distance, and thus the best ranking value of alternative A3, is obtained when E3 is completely sure (CS) of the evaluation of A3, while the worst ranking value of A3 results when he/she is very unsure (VU). This result is consistent with the process detailed above and is understandable, since more confidently evaluated alternatives achieve better ranking results.

These two extreme values ($MPMI_{A3}$ when the expert is CS and when he/she is VU) can be compared with $MPMI_{A1}$, which remains unchanged: while $MPMI_{A3}$ in the VU confidence case is 9.42% lower than the value for A1 (0.298 against 0.329; **Tables 5** and **4**, respectively), it becomes 11.85% lower in the CS case (0.290 against 0.329; **Tables 5** and **4**, respectively). This means there is a difference of


#### **Table 4.**

*Crisp and fuzzy rankings of the alternatives in the example case [7].*


#### **Table 5.**

*Sensitivity analysis on the hesitance on the evaluation of A3 for E3.*

25.80% (11.85 vs. 9.42%) between the distances of A1 and A3 when only one expert changes the confidence level of one evaluation. The level of confidence therefore plays an important role in the final ranking of the alternatives, in which more confidently evaluated alternatives, such as A3, stand out.
