**3. DANP-evaluation of AHP-DSS**

As the relevance of the AHP for scientists and practitioners is proved in bibliometric studies [5, 15], a need for AHP-adequate software support for these groups of persons is comprehensible. The question is, first of all, whether such software products should be evaluated and selected by AHP or by ANP.

Even though AHP-based evaluations of AHP-software exist [16–18], it is to be considered that some criteria relevant for the quality estimation of AHP-oriented software products in academic departments do not seem to be independent from each other. Therefore, it seems to be an appropriate option to use the ANP for our evaluation.

As there is no ANP-based evaluation of AHP-software to be found in the literature which might support our software evaluation problem, an own tailor-made process of selecting AHP software, adequate to the research and teaching requirements and demands of a Management Science Department, was developed.

#### **3.1. Selection of AHP-DSS**

Owing to the wide range of software solutions supporting AHP application [17], a preselection of software had to be conducted first. Initially, all products were considered that are freeware or ensure free trial access for evaluation purposes (at least for a limited time period). As Questfox is a Software as a Service (SaaS) product which has to be accessed by users via a web browser, this solution was excluded from our list of potential evaluation products. For a mutual evaluation of software solutions, it is important to determine a manageable number of evaluation alternatives. By random selection, *MakeItRational, Qualica Decision Suite, SelectPro, easy-mind* and *SuperDecisions* were determined for our evaluation.

#### **3.2. Evaluation criteria derived from an international standard**

Evaluating software demands consideration of commonly understood quality criteria. The international standard ISO/IEC 25010:2011 "Systems and software engineering. Systems and software Quality Requirements and Evaluation (SQuaRE). System and software quality models" provides a first foundation for our software evaluation. The scope of this international standard is the definition of a product quality model composed of eight characteristics that relate to static properties of software and dynamic properties of the computer system.

With respect to the evaluation of AHP software products and the underlying standard, the focus of interest lies on product quality from a user's point of view. The quality criteria provided by ISO/IEC are postulated for software evaluations in general. On account of a certain lack of concreteness with respect to AHP software products, however, these criteria had to be customized for the teaching and research requirements of the members of the Management Science Department. Thereby, the standard is used as a starting point to develop relevant criteria for the evaluation of AHP software products. The results of the transition process are shown in **Figure 1**.

**Figure 1.** Transition process for criteria identification.


Partial pretests suggested that all software alternatives worked efficiently and securely enough for the department's purposes. Furthermore, there was no need for us to modify the software. Therefore, the criteria performance efficiency, security and maintainability could be disregarded in our model in order to avoid an unnecessarily high level of complexity. Moreover, we added costs and advanced functions as clusters to our model. Within costs, only the initial investment was regarded. The criteria within advanced functions are AHP/ANP-specific, as they are derived from special requirements towards group decision making [19–21], transparency, Benefits-Opportunities-Costs-Risks (BOCR) modelling [22, 23] and more general AHP advancements. Group decision modelling is an important characteristic, as decisions with uncertain attributes often have to be solved in a group context to achieve a broader base of intersubjectivity or objectivity. Furthermore, transparency is necessary for performing sensitivity analyses as well as for interpreting the results. In this context, the possibility of BOCR modelling is indispensable for structuring complex strategic decision settings. Apart from structuring, it moreover makes it possible to cope with scale incommensurability [22, 24–26]. The possibilities of considering horizontal (inter-)dependencies (ANP extension) or the multiplicative AHP [27] are subsumed under AHP advancements.

#### **3.3. DANP-evaluation framework**

#### *3.3.1. Application of DANP in literature*

As (inter-)dependencies can be assumed between the derived criteria in **Figure 1**, an approach has to be used that is able to cope with this kind of criteria structure. Thus, the ANP moves into focus and is therefore considered an adequate evaluation method.

In order to meet our group decision requirements, we additionally use the DEMATEL approach [28–30] for identifying criteria (inter-)dependencies within the ANP evaluation model. Ranked in the literature [31] in frequency of use just behind fuzzy set theory, DEMATEL belongs to the most common auxiliary tools of the ANP; the combination is denoted as DANP.

In order to highlight the importance of DEMATEL in the field of the ANP literature, we analysed the bibliometric study of Kaspar [15], who used the leading databases EBSCOhost Business Source® Complete, SciVerse® ScienceDirect and Thomson Reuters Web of Knowledge and thus exceeded other bibliometric studies on the ANP [32, 33] in order to achieve a maximum scope of the literature. The procedure covers three databases and a time horizon from 1998 to 2012 (database accesses for 1998–2011: July 16, 2012 and for 2012: January 28, 2013 [15]). In total, Kaspar found 4187 AHP publications and 613 ANP publications within the databases, using the keywords "analytic hierarchy process" and "analytical hierarchy process", respectively "analytic network process" and "analytical network process", in titles, keywords or abstracts [15]. About 52 publications dealt with DEMATEL [15].

**Figure 2** [15] gives an overview of AHP and ANP publications from 1998 to 2012. Both methods show a clear upward trend in the timeline, especially in the last few years. Although there is a clear growth of ANP-related publications, the comparison of the total numbers of publications points out that the AHP seems to be more popular in research and practice. This might be due to a lack of software support for the ANP and its more complex cognitive requirements. Thereby, evaluating and selecting adequate DSS is an important task to improve the chances of these MCDM methods to be accepted and implemented in real multi-personal decision contexts. As existing evaluations [16, 18, 34] do not work with advanced approaches such as the ANP and/or DEMATEL, and since the 52 publications using DEMATEL within an ANP procedure did not supply any evaluation of AHP software, a new study on this was motivated.

**Figure 2.** Overview of AHP and ANP publications (1998–2012).

#### *3.3.2. Formal description of DEMATEL for ANP*


In order to achieve a better understanding of our evaluation, we start with a short formal explanation of the DEMATEL approach [28]. The initial step within DEMATEL is the determination of the influence values $\alpha_{ij}^{r}$ for each decision maker $DM_r$ ($r = 1, \dots, R$) for all criteria elements (software alternatives are excluded). The influence values are given on an ordinal scale from "0" (no influence) to "4" (extremely strong influence). For synthesizing the group judgements, the next step is to calculate the average matrix $\hat{\mathbf{F}}$:

$$\hat{\mathbf{F}} = \frac{1}{R} \sum_{r=1}^{R} \left[ \alpha_{ij}^{r} \right]_{m \times m}, \tag{1}$$

which has to be normalized by the scalar $\hat{a}$ in the following way:

$$\hat{\mathbf{F}}_{\text{norm}} = \frac{1}{\hat{a}}\,\hat{\mathbf{F}}, \tag{2}$$

$$\hat{a} = \max \left( \max_{1 \le i \le m} \sum_{j=1}^{m} \hat{\alpha}_{ij},\; \max_{1 \le j \le m} \sum_{i=1}^{m} \hat{\alpha}_{ij} \right), \tag{3}$$

where $\hat{\alpha}_{ij}$ denotes the elements of $\hat{\mathbf{F}}$.

As a next step, the total-influence matrix $\hat{\mathbf{T}}$ is to be calculated in order to consider the indirect effects as well. Since

$$\lim_{k \to \infty} \hat{\mathbf{F}}_{\text{norm}}^{\,k} = \left[\,0\,\right]_{m \times m} \tag{4}$$

and

$$\lim_{k \to \infty} \left( \hat{\mathbf{F}}_{\text{norm}} + \hat{\mathbf{F}}_{\text{norm}}^{\,2} + \dots + \hat{\mathbf{F}}_{\text{norm}}^{\,k} \right) = \hat{\mathbf{F}}_{\text{norm}} \left( \mathbf{E} - \hat{\mathbf{F}}_{\text{norm}} \right)^{-1}, \tag{5}$$

it follows that

$$\hat{\mathbf{T}} = \left( \mathbf{E} - \hat{\mathbf{F}}_{\text{norm}} \right)^{-1} - \mathbf{E} \quad \text{or} \quad \hat{\mathbf{T}} = \hat{\mathbf{F}}_{\text{norm}} \left( \mathbf{E} - \hat{\mathbf{F}}_{\text{norm}} \right)^{-1}, \tag{6}$$

where $\mathbf{E}$ denotes the $m \times m$ identity matrix.

Having constructed $\hat{\mathbf{T}}$, it is necessary to set a threshold value for the required influence level. Only those elements of $\hat{\mathbf{T}}$ whose influence level is higher than the threshold value are chosen and converted into the impact-digraph map, respectively into the ANP network model [28, 29].

For the setting of a threshold value, there is no fixed determination rule. It can be decided by experts through discussions [35] or brainstorming [28]. Another possibility would be the exogenous determination by a meta-decision maker. Regardless of the variant of determination, the threshold value should not be set too high (too low), as otherwise only a few (too many) dependencies would be considered within the ANP model [36].
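To make these computational steps concrete, the following minimal Python sketch traces Eqs. (1)–(6) and the threshold rule. The two small rating matrices and all variable names are illustrative assumptions, not data from our study.

```python
import numpy as np

# Hypothetical 0-4 influence ratings of two decision makers for m = 3
# criteria (diagonal = 0); these numbers are invented for illustration.
ratings = [
    np.array([[0, 3, 1],
              [2, 0, 4],
              [1, 2, 0]], dtype=float),
    np.array([[0, 2, 2],
              [3, 0, 3],
              [0, 1, 0]], dtype=float),
]

F = sum(ratings) / len(ratings)                    # Eq. (1): average matrix
a = max(F.sum(axis=1).max(), F.sum(axis=0).max())  # Eq. (3): scaling scalar
F_norm = F / a                                     # Eq. (2): normalization

E = np.eye(F.shape[0])
T = F_norm @ np.linalg.inv(E - F_norm)             # Eq. (6): total-influence matrix

threshold = 0.1                                    # cf. Ou Yang [28]
dependencies = T > threshold                       # links kept in the ANP model
print(np.round(T, 3))
print(dependencies)
```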

#### *3.3.3. Construction of the evaluation network model: clustering and dependencies*

Before identifying the dependencies within the model, all evaluation criteria have to be assigned to clusters. To cope with scale incommensurability [26], our evaluation model fundamentally consists of the two subnets *Benefits* (with *performance quality* criteria which should be assessed by ordinal judgements) and *Costs* (priorities are derived from monetary values). The overlapping alternatives cluster consisting of the five AHP-DSS A1 to A5 is an integral part of both subnets. The subnet costs contains only one cluster with the element initial investment. The subnet benefits further contains the clusters usability (US), compatibility (CO), functional suitability (FS), reliability (RE) and advanced functions (AF), which are further explained in the evaluation process. All suggested characteristics have to be individually specified to ensure the principle of preferential independence.

Having derived and clustered the relevant criteria and (inter-)dependencies with DEMATEL, the evaluation model is created in a network structure. In order to reach a result representing the different needs of the department's members, the estimations should be the output of a multi-personal assessment, reconciling different personal requirements through weighted and assessed estimations and thus caring for a higher level of intersubjectivity/objectivity. The members of this group, an advanced Master student (tutor), academic lecturers and professors, had graduations in Management Science/Business, Informatics and Mathematics, and rated the strength of the dependencies between all model elements within the benefits subnet. For assessing the strength of model influences, the DEMATEL standard scale with "0 = no interdependency", "1 = low influence", "2 = medium influence", "3 = high influence" and "4 = very high influence" is used. The results in form of $\hat{\mathbf{T}}$ are shown in **Table 1** (see the Appendix for the individual direct relation matrices $[\alpha_{ij}^{r}]_{m \times m}$), whereby the number in each cell indicates the influence of the row element on the column element. Following Ou Yang [28], we interpret an influence as essential and considerable if it exceeds a threshold value of 0.1. Such an exceedance characterises an influence as significant and therefore to be subsequently considered in the model of (inter-)dependencies, whereas influences not surpassing this threshold are neglected in the model as insignificant. Influences regarded as significant and therefore to be considered are highlighted in the table by bold numbers. Thereupon, the influences are transferred as (inter-)dependencies to the evaluation model to complete the network structure. For improving the overall view of the model, within the ANP approach, the (inter-)dependencies are aggregated by clusters and then visualized by directional arrows. Arrows with double tips are used for representing interdependencies. **Figure 3** shows the final evaluation model for AHP software.

#### **3.4. Assessments and results**

#### *3.4.1. Preliminary remarks*

Users' individual requirements towards adequate/appropriate software solutions can vary strongly. Therefore, our aim is not primarily to determine a "best DSS" as a general recommendation from the point of view of the members of a Management Science Department. Instead, our evaluation focuses on a transparent confrontation with the five heterogeneous products, displaying their dependencies and interdependencies within the network of our evaluation criteria.

At an expert workshop, the pairwise comparisons regarding the software alternatives' fulfilment of the quality criteria were performed by the authors' mutual agreement to derive a consensus [37]. So, there was no necessity to aggregate the results with the support of a group decision rule. Within a larger department with more experts sharing the evaluation procedure, however, such a rule might have made sense. To evaluate the alternative software products, we constructed a multi-criteria standard problem to be handled by the different DSS. The matrices representing the judgements on Saaty's 1 ("equal importance") to 9 ("extreme importance") scale [13] are listed below. In the tables, C.R. stands for consistency ratio. The more inconsistent the pairwise judgements, the higher the consistency ratio. Theory suggests that if the consistency ratio of a matrix is not smaller than 0.1, the judgements should be adjusted to make them more consistent. In our evaluation, there was no need for additional adjustments.



**Table 1.** Total influence matrix with final model influences (threshold value = 0.1).

**Figure 3.** Final DANP-evaluation model for AHP software.

The local priorities are shown at the bottom of each table, providing evidence of the alternatives' importance with respect to each criterion. The priorities in our study are derived by Saaty's principal eigenvalue method [6, 38, 39].
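As an illustration, the following Python sketch applies the principal eigenvalue method to the US4 matrix from **Table 3** below. The random index (RI) values are Saaty's standard ones; the rounded outputs in the comments are our own cross-check, not part of the original study.

```python
import numpy as np

# Pairwise comparison matrix US4 (user error protection) from Table 3.
A = np.array([
    [1,   1/3, 2, 1/2, 1],
    [3,   1,   5, 2,   3],
    [1/2, 1/5, 1, 1/3, 1/2],
    [2,   1/2, 3, 1,   2],
    [1,   1/3, 2, 1/2, 1],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # principal eigenvalue lambda_max
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                             # local priorities

n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)     # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
CR = CI / RI                             # consistency ratio

print(np.round(w, 5))  # ~ [0.13500 0.41428 0.07427 0.24145 0.13500]
print(round(CR, 5))    # ~ 0.00385
```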

The next subsection deals with the direct assessments for the alternatives cluster, followed by the presentation of the benefits subnet's unweighted supermatrix $\hat{S}_U$ and the cluster matrix $\hat{C}$ showing the indirect influences.

#### *3.4.2. Assessments for alternatives cluster*

Subsequently, the pairwise comparisons of the DSS alternatives are presented by clusters. Regarding functional completeness (FS1, see **Table 2**), all software alternatives except *Qualica Decision Suite* (which cannot handle different alternatives) are equipped with a large and detailed set of functions to handle problems with the support of AHP, including dynamic sensitivity analysis, direct data entry and consistency calculation. With respect to the hierarchy to be modelled, the number of criteria, levels and alternatives is not limited in any of the software products. Highlighting distinctive features, it can be pointed out that *MakeItRational* contains a special alert feature which proposes steps for trouble-shooting when inconsistency reaches 0.1. *SuperDecisions* and *SelectPro* comparatively provide the best sensitivity analysis and the largest set of functions for direct data entry. In addition, *SuperDecisions* provides a broader range of rating possibilities (direct priorities, graphically or numbers on the 1–9 scale).


**Table 2.** Judgements of the alternatives I (FS).


Within all DSS, the provided functions work correctly (functional correctness, FS2, see **Table 2**). Due to *Qualica Decision Suite's* inability to handle alternatives, it is not possible to achieve a final AHP calculation result with it. *MakeItRational, SuperDecisions* and *SelectPro* provide a large number of possibilities to show the results either in a graphical way or as data tables. Unfortunately, the given results of *SuperDecisions* and *SelectPro* are sometimes not exact in the fourth or fifth decimal place (*SuperDecisions*, e.g., sometimes curiously displays 3.0003 instead of the original value of 3.0 inside the evaluation matrices). Furthermore, *MakeItRational* has slight advantages because of its advanced visualization possibilities for the results, including alternatives, criteria and ranking comparisons as well as the handling of local and global weights.

With respect to appropriateness recognizability [7] (US1, see **Table 3**), *MakeItRational* and *SuperDecisions* provide the best information about the supported functions on their websites as well as free trials equipped with the full set of functions, even if *MakeItRational* is slightly more detailed, whereas *SuperDecisions* requires registration to download the trial version. *SelectPro* offers a 30-day, fully functional demo version without registration, but does not inform about the functions, while *easy-mind's* provided information about the functions is not structured helpfully and partly outdated. In addition, *easy-mind's* trial is hardly limited in its use of functions. *Qualica Decision Suite*, finally, allows downloading a 30-day trial with all features, but it is not at all evident which functions the software offers or whether it supports AHP at all.



**Table 3.** Judgements of the alternatives II (US).

**US1: Appropriateness recognizability** (C.R. = 0.04383)

| Usability | easy-mind | MakeItRational | Qualica D S | SelectPro | SuperDecisions |
|---|---|---|---|---|---|
| **easy-mind** | 1 | 1/4 | 7 | 1/2 | 1/3 |
| **MakeItRational** | 4 | 1 | 9 | 3 | 2 |
| **Qualica D S** | 1/7 | 1/9 | 1 | 1/8 | 1/8 |
| **SelectPro** | 2 | 1/3 | 8 | 1 | 1/2 |
| **SuperDecisions** | 3 | 1/2 | 8 | 2 | 1 |
| **Local priority** | 0.11351 | 0.41896 | 0.02805 | 0.17265 | 0.26683 |

**US2: Learnability** (C.R. = 0.01875)

| | easy-mind | MakeItRational | Qualica D S | SelectPro | SuperDecisions |
|---|---|---|---|---|---|
| **easy-mind** | 1 | 1/5 | 3 | 1/3 | 3 |
| **MakeItRational** | 5 | 1 | 9 | 3 | 9 |
| **Qualica D S** | 1/3 | 1/9 | 1 | 1/5 | 1 |
| **SelectPro** | 3 | 1/3 | 5 | 1 | 5 |
| **SuperDecisions** | 1/3 | 1/9 | 1 | 1/5 | 1 |
| **Local priority** | 0.11737 | 0.53822 | 0.04817 | 0.24808 | 0.04817 |

**US3: Operability** (C.R. = 0.03498)

| | easy-mind | MakeItRational | Qualica D S | SelectPro | SuperDecisions |
|---|---|---|---|---|---|
| **easy-mind** | 1 | 1/4 | 4 | 1/4 | 4 |
| **MakeItRational** | 4 | 1 | 6 | 1 | 6 |
| **Qualica D S** | 1/4 | 1/6 | 1 | 1/6 | 1 |
| **SelectPro** | 4 | 1 | 6 | 1 | 6 |
| **SuperDecisions** | 1/4 | 1/6 | 1 | 1/6 | 1 |
| **Local priority** | 0.14386 | 0.37688 | 0.05119 | 0.37688 | 0.05119 |

**US4: User error protection** (C.R. = 0.00385)

| | easy-mind | MakeItRational | Qualica D S | SelectPro | SuperDecisions |
|---|---|---|---|---|---|
| **easy-mind** | 1 | 1/3 | 2 | 1/2 | 1 |
| **MakeItRational** | 3 | 1 | 5 | 2 | 3 |
| **Qualica D S** | 1/2 | 1/5 | 1 | 1/3 | 1/2 |
| **SelectPro** | 2 | 1/2 | 3 | 1 | 2 |
| **SuperDecisions** | 1 | 1/3 | 2 | 1/2 | 1 |
| **Local priority** | 0.13500 | 0.41428 | 0.07427 | 0.24145 | 0.13500 |

*(The pairwise matrix for US5: User interface aesthetics is not legibly recoverable in this reproduction.)*

With regard to the criterion learnability (US2, see **Table 3**), *MakeItRational, SelectPro* and *easy-mind* are much more intuitive to handle and more advanced in providing assistants, helping hints and tools, online guides and tutorials as well as examples. The other two software alternatives were more difficult to understand with respect to convenient handling and had a smaller number of helpful tools.

Operability (US3, see **Table 3**) was more complex in *Qualica Decision Suite* and *SuperDecisions*, and a longer initiation period was needed to operate these programs. The commands are sometimes hard to find, and the next operating step is mostly not obvious. The other three programs are more intuitive in handling, as they operate using step-by-step methods. Especially operating *MakeItRational* and *SelectPro* is easily possible after a very short initiation period.

User error protection (US4, see **Table 3**) is performed well in almost all DSS alternatives; the differences are slight. The best hints, handling and protection from errors [7] are implemented in *MakeItRational* and *SelectPro*. Within *MakeItRational*, we did not manage to produce any errors. Owing to the non-step-by-step operating structure of *SuperDecisions*, errors may occur within the working process. *Easy-mind* has to be saved manually by the user after each operating step, which produces errors if forgotten.

Regarding the user interface aesthetics (US5, see **Table 3**), the operation interfaces of *SelectPro, MakeItRational* and *Qualica Decision Suite* are modern, appealing and clearly arranged. In addition, they provide a large set of optical adaptation possibilities according to the needs of the user, for example, the size of the windows, whereby *SelectPro* is notably more professional compared to the other two software alternatives. Although symbols and view are generally clear, the menus and total interface of *easy-mind* are a little amateurish, and it is not possible to make any adaptations. Finally, the menus and structure of *SuperDecisions* are a little too complex, and adaptation possibilities are missing, too.

Maturity (RE1, see **Table 4**) is good in *MakeItRational, Qualica Decision Suite* and *SuperDecisions*. In *SelectPro*, some errors occurred when using the export functions, while *easy-mind* often produced errors in criteria and alternative management as well as browser errors due to its nature as a web-based software product.


**Table 4.** Judgements of the alternatives III (RE).

**RE1: Maturity** (C.R. = 0.00443)

| Reliability | easy-mind | MakeItRational | Qualica D S | SelectPro | SuperDecisions |
|---|---|---|---|---|---|
| **easy-mind** | 1 | 1/6 | 1/6 | 1/3 | 1/6 |
| **MakeItRational** | 6 | 1 | 1 | 3 | 1 |
| **Qualica D S** | 6 | 1 | 1 | 3 | 1 |
| **SelectPro** | 3 | 1/3 | 1/3 | 1 | 1/3 |
| **SuperDecisions** | 6 | 1 | 1 | 3 | 1 |
| **Local priority** | 0.04393 | 0.28420 | 0.28420 | 0.10348 | 0.28420 |

**RE2: Fault tolerance** (C.R. = 0.00296)

| | easy-mind | MakeItRational | Qualica D S | SelectPro | SuperDecisions |
|---|---|---|---|---|---|
| **easy-mind** | 1 | 1/4 | 1/4 | 1/3 | 1/4 |
| **MakeItRational** | 4 | 1 | 3 | 3 | 2 |
| **Qualica D S** | 4 | 1/3 | 1 | 1 | 1/2 |
| **SelectPro** | 3 | 1/3 | 1 | 1 | 1/2 |
| **SuperDecisions** | 4 | 1 | 1 | 2 | 1 |
| **Local priority** | 0.06137 | 0.26469 | 0.26469 | 0.14457 | 0.26469 |

**RE3: Recoverability** (C.R. = 0.00000)

| | easy-mind | MakeItRational | Qualica D S | SelectPro | SuperDecisions |
|---|---|---|---|---|---|
| **easy-mind** | 1 | 4 | 4 | 4 | 4 |
| **MakeItRational** | 1/4 | 1 | 1 | 1 | 1 |
| **Qualica D S** | 1/4 | 1 | 1 | 1 | 1 |
| **SelectPro** | 1/4 | 1 | 1 | 1 | 1 |
| **SuperDecisions** | 1/4 | 1 | 1 | 1 | 1 |
| **Local priority** | 0.50000 | 0.12500 | 0.12500 | 0.12500 | 0.12500 |

Regarding *fault tolerance* (RE2, see **Table 4**), all programs except *easy-mind* were robust and operated in an almost completely stable manner. Hints on errors and error handling were sometimes missing in *SelectPro*, whereas *easy-mind* was not able to catch most errors, which led to crashes of the software.

The recoverability (RE3, see **Table 4**) of data on error is solved best in *easy-mind*, because the user is forced to save each operating step. In all the other software alternatives, data did not get lost if manually saved beforehand by the user.

The interoperability (CO1, see **Table 5**), especially the export of data, is very strong in *Qualica Decision Suite*, which supports the most important file types. *MakeItRational* and *SelectPro* provide satisfying import and export possibilities (e.g. jpg, doc and xls), although *MakeItRational* is a little more advanced, supporting pdf, html and chart images. While *SuperDecisions* is only able to handle MS Excel-importable text files for the super, limit and cluster matrices, *easy-mind* provides no import or export functions at all.


**Table 5.** Judgements of the alternatives IV (CO).


All alternatives run on Windows, which was the testing environment, but installability (CO2, see **Table 5**) on other systems is not guaranteed for *SelectPro* and *Qualica Decision Suite* by the developers, whereas *SuperDecisions* runs on different versions of Windows, Mac, Ubuntu and Linux. Thereby, *easy-mind* has a great advantage due to its nature as an independent web-based product, as has *MakeItRational*, which can run as a desktop version or web-based in a web browser with MS Silverlight.

Group decision-making (AF1, see **Table 6**) is implemented in all DSS products except *SuperDecisions*, but *Qualica Decision Suite* provides only mail questionnaires which have to be inserted by the moderator. The remaining three software alternatives support remote group decision making, whereby the number of users is limited only in *easy-mind*. *SelectPro* is overall the most professional in rating, calculating the mean and comparing the single user votes.


**Table 6.** Judgements of the alternatives V (AF).

**AF1: Group decision making** (C.R. = 0.04604)

| Advanced functions | easy-mind | MakeItRational | Qualica D S | SelectPro | SuperDecisions |
|---|---|---|---|---|---|
| **easy-mind** | 1 | 1/2 | 4 | 1/3 | 6 |
| **MakeItRational** | 2 | 1 | 5 | 1/2 | 7 |
| **Qualica D S** | 1/4 | 1/5 | 1 | 1/6 | 4 |
| **SelectPro** | 3 | 2 | 6 | 1 | 9 |
| **SuperDecisions** | 1/6 | 1/7 | 1/4 | 1/9 | 1 |
| **Local priority** | 0.18244 | 0.27821 | 0.07216 | 0.43475 | 0.03245 |

**AF2: Transparency** (C.R. = 0.01621)

| | easy-mind | MakeItRational | Qualica D S | SelectPro | SuperDecisions |
|---|---|---|---|---|---|
| **easy-mind** | 1 | 1/4 | 2 | 1/6 | 1 |
| **MakeItRational** | 4 | 1 | 5 | 1/2 | 5 |
| **Qualica D S** | 1/2 | 1/5 | 1 | 1/7 | 1/2 |
| **SelectPro** | 6 | 2 | 7 | 1 | 6 |
| **SuperDecisions** | 1 | 1/5 | 2 | 1/6 | 1 |
| **Local priority** | 0.08358 | 0.30371 | 0.05191 | 0.48022 | 0.08058 |

**AF3: BOCR** (C.R. = 0.00937)

| | easy-mind | MakeItRational | Qualica D S | SelectPro | SuperDecisions |
|---|---|---|---|---|---|
| **easy-mind** | 1 | 3 | 3 | 1 | 1/5 |
| **MakeItRational** | 1/3 | 1 | 1 | 1/3 | 1/9 |
| **Qualica D S** | 1/3 | 1 | 1 | 1/3 | 1/9 |
| **SelectPro** | 1 | 3 | 3 | 1 | 1/5 |
| **SuperDecisions** | 5 | 9 | 9 | 5 | 1 |
| **Local priority** | 0.14578 | 0.05389 | 0.05389 | 0.14578 | 0.60066 |

**AF4: AHP advancements** (C.R. = 0.00000)

| | easy-mind | MakeItRational | Qualica D S | SelectPro | SuperDecisions |
|---|---|---|---|---|---|
| **easy-mind** | 1 | 1 | 1 | 1 | 1/9 |
| **MakeItRational** | 1 | 1 | 1 | 1 | 1/9 |
| **Qualica D S** | 1 | 1 | 1 | 1 | 1/9 |
| **SelectPro** | 1 | 1 | 1 | 1 | 1/9 |
| **SuperDecisions** | 9 | 9 | 9 | 9 | 1 |
| **Local priority** | 0.07692 | 0.07692 | 0.07692 | 0.07692 | 0.69231 |

Transparency (AF2, see **Table 6**) is strongest in *SelectPro*, which shows at all times and on every level the current results for each user, alternative, criterion, weight and priority, as comparable data as well as graphically. Each alternative and criterion can be deselected at any time with automatically updated results. For the criteria, this is possible in *MakeItRational* too, which provides slightly less transparency to the user. *SuperDecisions* and *easy-mind* give at least a good overview of the partial results and values of the criteria and alternatives on each level, whereas *Qualica Decision Suite* shows its transparency only regarding the criteria.

Only *SuperDecisions* has implemented a real and well-working (pre-structured) Benefits, Opportunities, Costs, Risks (BOCR) modelling (AF3, see **Table 6**). *SelectPro* supports only cost-score ratios, while *easy-mind* handles only direct cardinal entries for BOCR. *MakeItRational* and *Qualica Decision Suite* have no implemented BOCR support.

None of the software alternatives except *SuperDecisions* supports any AHP advancements (AF4, see **Table 6**). In particular, *SuperDecisions* is the only software product which is able to calculate the results by the ANP.

Regarding the costs (initial investment), relevant on account of the financial budget restrictions of the Management Science Department of a medium-sized university, *SuperDecisions* and *easy-mind* were the preferred DSS solutions. Users who see themselves as researchers or educators can receive both products for free. As there was no ordinal assessment, the local priorities were derived by direct entry of the cardinal data (see **Table 7**). Apart from the different scaling levels, the criterion initial investment is directed negatively, so the lowest priorities are assigned to the preferred DSS. This reversed ranking is transformed in the subsequent synthesis of the entire model.


**Table 7.** Judgements of the alternatives VI (Costs).

Owing to the qualitative expert judgements, all priorities are now used to construct $\hat{S}_U$ (see **Table 8**). Due to the (inter-)dependencies determined by DEMATEL, several cluster comparisons had to be made.

**Table 9** shows the resulting cluster matrix $\hat{C}$ of the evaluation.

#### *3.4.3. Final results*


For deriving the final priorities, as a first step, the weighted supermatrix $\hat{S}_{W}$ is generated:

$$\hat{S}_{W} = \hat{S}_{U} \star \hat{C}. \tag{7}$$

Thus, $\hat{S}_{W}$ can be raised to powers $\hat{S}_{W}^{\,k}$ ($k = 1, 2, \dots$) until the column-stochastic matrix converges to the limit matrix $\hat{S}_{L}$.
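In the ANP, $\hat{C}$ weights whole cluster blocks of $\hat{S}_U$. The sketch below collapses this block-wise weighting to an element-wise product with an already expanded weight matrix; the small matrices are invented toy data, not our evaluation's values.

```python
import numpy as np

# Toy unweighted supermatrix (columns are local priority vectors) and a
# cluster-weight matrix already expanded to element level (assumed data).
S_u = np.array([[0.0, 0.5, 0.3],
                [0.6, 0.0, 0.7],
                [0.4, 0.5, 0.0]])
C_expanded = np.array([[0.0, 0.4, 0.4],
                       [0.6, 0.0, 0.6],
                       [0.4, 0.6, 0.0]])

S_w = S_u * C_expanded                  # Eq. (7): block-wise weighting, collapsed
S_w /= S_w.sum(axis=0, keepdims=True)   # keep the matrix column-stochastic

S_l = np.linalg.matrix_power(S_w, 64)   # S_w^k for large k: the limit matrix
print(np.round(S_l, 4))                 # identical columns = global priorities
```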


**Table 8.** Unweighted supermatrix *Ŝ<sub>U</sub>* subnet benefits (alternatives block; the criterion rows are not legibly recoverable in this reproduction).

| | FS1 | FS2 | US1 | US2 | US3 | US4 | US5 | RE1 | RE2 | RE3 | CO1 | CO2 | AF1 | AF2 | AF3 | AF4 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **easy-mind** | 0.1187 | 0.0945 | 0.1135 | 0.1174 | 0.1439 | 0.1350 | 0.0425 | 0.0439 | 0.0614 | 0.5000 | 0.0325 | 0.3133 | 0.1824 | 0.0836 | 0.1458 | 0.0769 |
| **MakeItRational** | 0.1620 | 0.4226 | 0.4190 | 0.5382 | 0.3769 | 0.4143 | 0.2257 | 0.2842 | 0.2647 | 0.1250 | 0.2681 | 0.3133 | 0.2782 | 0.3037 | 0.0539 | 0.0769 |
| **Qualica D S** | 0.0277 | 0.0274 | 0.0280 | 0.0482 | 0.0512 | 0.0743 | 0.2257 | 0.2842 | 0.2647 | 0.1250 | 0.5129 | 0.0986 | 0.0722 | 0.0519 | 0.0539 | 0.0769 |
| **SelectPro** | 0.2962 | 0.2735 | 0.1726 | 0.2481 | 0.3769 | 0.2414 | 0.4169 | 0.1035 | 0.1446 | 0.1250 | 0.1374 | 0.0986 | 0.4347 | 0.4802 | 0.1458 | 0.0769 |
| **SuperDecisions** | 0.3954 | 0.1820 | 0.2668 | 0.0482 | 0.0512 | 0.1350 | 0.0892 | 0.2842 | 0.2647 | 0.1250 | 0.0491 | 0.1763 | 0.0325 | 0.0806 | 0.6007 | 0.6923 |


**Table 9.** Cluster matrix (*Ĉ*) subnet benefits. (The numerical entries of *Ĉ* are not legibly recoverable in this reproduction.)


The final synthesized ranking (additive probabilistic variant) of the software products is shown in **Figure 4**. Thereby, the overall control criterion weighting was set to 0.8 for the subnet benefits and to 0.2 for the subnet costs (preferential value 4 on the standard scale).

**Figure 4.** Synthesized evaluation results (global priorities of the software products).
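For illustration, a minimal sketch of such an additive synthesis with the 0.8/0.2 control weights. The priority numbers are placeholders chosen to mirror the final ordering, and reversing a negatively directed criterion by complementing and re-normalizing is just one common convention, not necessarily the study's exact procedure.

```python
# Placeholder subnet priorities for the five DSS (invented, not the study's).
benefits = {"MakeItRational": 0.30, "SuperDecisions": 0.26, "easy-mind": 0.18,
            "SelectPro": 0.16, "Qualica Decision Suite": 0.10}
costs    = {"MakeItRational": 0.25, "SuperDecisions": 0.05, "easy-mind": 0.05,
            "SelectPro": 0.30, "Qualica Decision Suite": 0.35}

# Costs are directed negatively (low priority = cheap), so the ranking is
# reversed before the weighted additive aggregation.
reversed_costs = {k: 1 - v for k, v in costs.items()}
total = sum(reversed_costs.values())
reversed_costs = {k: v / total for k, v in reversed_costs.items()}

global_prio = {k: 0.8 * benefits[k] + 0.2 * reversed_costs[k] for k in benefits}
for name, p in sorted(global_prio.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {p:.4f}")
```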

Owing to our subjective judgements, *MakeItRational* was found to be the preferred software alternative for the purposes of an academic Management Science Department, followed by *SuperDecisions, easy-mind* and *SelectPro*. The results show that the distances between the products' levels of performance are small. Regarding the other quality criteria, the differences between these programs are not extreme, but noticeable. Therefore, different rankings could result if members of other academic departments or of other types of organizations with deviating targets, requirements, preferences and sizes had evaluated the alternative software solutions. So, there may be contexts in which another software product than *MakeItRational* would fit the needs of the users better.

*MakeItRational, easy-mind* and *SelectPro*, e.g., are convincing due to their very intuitive handling and step-by-step operating methods. The commands are obvious to find and easy to understand, mostly supported by helping functions or assistants. In general, the initiation period for operating these programs is very short. Among all software products, *MakeItRational* is the most intuitive and the least complex. Besides, it provides more visualization and export possibilities and has the best error protection.

But especially when BOCR modelling or the ANP is needed within the decision process, *SuperDecisions* is the only alternative in which these functions are implemented. Additionally, it offers more possibilities and functions than *MakeItRational* that go beyond the pure AHP application. However, this comes at the cost of learnability and operability. Furthermore, this product provides no group decision-making support, which is handled best and in most detail by *SelectPro*, which is in this respect a good alternative to *MakeItRational*. It is the most professional in rating, calculating the mean and comparing the single votes. Besides, it scores by its transparency, showing current results at all times and on every level, as comparable data as well as graphically.
