4. Multi attribute decision-making methods

Engineering or management decisions are generally made from available data and information that are mostly vague, imprecise, and uncertain by nature [26]. The decision-making process in bridge remediation is one of these ill-structured situations, which usually needs a rigorous approach that applies explicit subject-domain knowledge to ill-structured (adaptive) problems in order to reformulate them as structured problems. Multi-attribute decision-making (MADM) is an efficient tool for dealing with such uncertainties.

A standard feature of multi-attribute decision-making methodology is the decision matrix with m criteria and n alternatives, as illustrated in Figure 4. In the matrix, C1,...,Cm and A1,...,An indicate the criteria and alternatives respectively: each row belongs to a criterion and each column describes the performance of an alternative. The score aij describes the performance of alternative Aj against criterion Ci. It is conventionally assumed that a higher score value means a better performance [27].

As shown in Figure 4, weights W1,...,Wm are assigned to the criteria. Weight Wi reflects the relative importance of criterion Ci to the decision, and is assumed to be positive. The weights of the criteria are typically defined on a subjective basis. The values X1,...,Xn related to the alternatives in the decision matrix are used in the Multi-Attribute Utility Theory (MAUT) methods.


Figure 4. The decision matrix.

Generally, a higher ranking value represents a better performance of the alternative, so the item with the highest ranking value is the best action item [27].
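The decision-matrix convention and the use of weights can be illustrated with a minimal weighted-sum sketch; all scores and weights below are invented for illustration only:

```python
# Decision matrix: rows = criteria C1..Cm, columns = alternatives A1..An.
# Scores and weights are illustrative placeholders.
scores = [
    [7, 4, 9],   # C1 scores for A1, A2, A3
    [5, 8, 6],   # C2
    [9, 3, 4],   # C3
]
weights = [0.5, 0.3, 0.2]  # relative importance of C1..C3 (positive)

# Weighted-sum ranking value X_j for each alternative A_j
n = len(scores[0])
X = [sum(w * row[j] for w, row in zip(weights, scores)) for j in range(n)]
best = max(range(n), key=lambda j: X[j])
print(X, f"best alternative: A{best + 1}")
```

The alternative with the highest weighted sum is selected, following the convention that higher ranking values mean better performance.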

In addition to some monetary-based and elementary methods, the two main families of multi-attribute decision-making methods are those founded on MAUT and the outranking methods.

A semi-structured decision is made when some, but not all, of the phases of decision-making are structured. While some standard solution procedures may be applicable, human judgment is also called upon to develop decisions which tend to be adaptive in nature [1].

When none of the phases of decision-making are structured, the resulting decisions are classified as unstructured. The lack of a clear decision criterion, the difficulty of identifying a finite set of alternatives, and high levels of uncertainty concerning the consequences of the known alternatives at most of the decision levels are all symptoms of this unstructuredness [25].

Semi-structured and unstructured decisions are made when problems are ill-defined (ill-structured). Srinivasan et al. note that most real-world problems fall towards the unstructured end of this spectrum [20]. Table 1 demonstrates the characteristics of structured and unstructured decisions.

### 4.1. Elementary methods of MADM

These elementary approaches are characterized by their simplicity and their independence from computational support. They are suitable for problems with a single decision maker and limited alternatives and criteria, conditions which rarely occur in engineering decision-making [28]. The Maximin and Maximax methods, Pros and Cons analysis, the Conjunctive and Disjunctive methods and the Lexicographic method are all in this category [29].

#### 4.1.1. Maximin and Maximax methods

The Maximin method's strategy is to avoid the worst possible performance, maximizing the minimal-performing criterion. The alternative whose weakest criterion has the highest score is preferred. In effect, a weight of one is given to the criterion that is least well achieved by each choice and a weight of zero to all other criteria; the alternative with the maximum minimum score is the optimum choice. In contrast to the Maximin method, the Maximax method selects an alternative by its best attribute rather than its worst. This method is particularly useful when the alternatives can be specialized in use based upon one attribute and the decision maker has no prior requirement as to which attribute this is [30].
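The two rules can be sketched as follows, using an invented score matrix (rows are criteria, columns are alternatives):

```python
# Maximin vs. Maximax selection on a criteria-by-alternatives score matrix.
# Scores are illustrative; rows = criteria, columns = alternatives.
scores = [
    [6, 9, 5],
    [7, 2, 8],
    [5, 6, 4],
]

def columns(matrix):
    return list(zip(*matrix))  # per-alternative score tuples

def maximin(matrix):
    # Pick the alternative whose worst criterion score is highest.
    cols = columns(matrix)
    return max(range(len(cols)), key=lambda j: min(cols[j]))

def maximax(matrix):
    # Pick the alternative whose best criterion score is highest.
    cols = columns(matrix)
    return max(range(len(cols)), key=lambda j: max(cols[j]))

print(maximin(scores), maximax(scores))
```

Here Maximin prefers the first alternative (its worst score, 5, is the highest minimum), while Maximax prefers the second (it holds the single best score, 9).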

#### 4.1.2. Pros and cons analysis

Pros and Cons analysis is a qualitative comparison method in which the positive and negative aspects of each alternative are assessed and compared. It is easy to implement since no mathematical skill is required [29].

#### 4.1.3. Conjunctive and disjunctive methods

The conjunctive and disjunctive methods are non-compensatory screening methods. They do not need criteria to be estimated in commensurate units. These methods require satisfactory rather than best performance in each attribute, i.e., if an action item passes the screening, it is adequate [31].

In the conjunctive method, an alternative must meet a minimal threshold for all attributes, while in the disjunctive method the alternative should exceed the given threshold for at least one attribute. Any option that does not meet these rules is deleted from further consideration [28].
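The two screening rules can be sketched as below; the thresholds and scores are illustrative placeholders:

```python
# Conjunctive vs. disjunctive screening (non-compensatory).
# Thresholds and scores are illustrative.
thresholds = [5, 4, 6]          # minimal acceptable level per attribute
alternatives = {
    "A1": [6, 5, 7],            # passes every threshold
    "A2": [9, 3, 2],            # exceeds only the first threshold
    "A3": [1, 2, 3],            # fails all thresholds
}

def conjunctive(scores, thresholds):
    # Keep an alternative only if it meets the threshold on ALL attributes.
    return all(s >= t for s, t in zip(scores, thresholds))

def disjunctive(scores, thresholds):
    # Keep an alternative if it meets the threshold on AT LEAST ONE attribute.
    return any(s >= t for s, t in zip(scores, thresholds))

conj = [a for a, s in alternatives.items() if conjunctive(s, thresholds)]
disj = [a for a, s in alternatives.items() if disjunctive(s, thresholds)]
print(conj, disj)
```

Note that the screening only marks options as adequate or inadequate; it does not rank the survivors.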

#### 4.1.4. Decision tree analysis

Decision trees provide a useful schematic representation of decision and outcome events, provided the number of courses of action, ai, and the number of possible outcomes, Oij, are not large. Decision trees are most useful in simple situations where chance events are dependent on the courses of action considered, making the chance events (states of nature) synonymous with outcomes [25].

Square nodes correspond to decision events. Possible courses of action are represented by action lines which link decision events and outcome (chance) events. Circular nodes differentiate the outcome events from the decision events in order to underline that the decision-maker does not have control when chance or Nature determines an outcome [1].

The outcomes for each alternative originate from the chance nodes and terminate in a partitioned payoff/expected-value node. The expected value for each course of action is obtained by summing the expected values of each branch associated with the action [25].

A decision tree representation of a problem is shown below as an example. Three strategies (courses of action) are investigated (See Figure 5):

a1: replace the distressed bridge section (it would soon be unsafe)

a2: rehabilitate the bridge (repair costs will not be prohibitive)

a3: do nothing (the symptoms are more superficial than structural)

The estimated costs of replacement and rehabilitation are \$6.3 M and \$1.1 M respectively. If the road section is replaced, it is assumed that no further capital costs will be incurred. If the road is rehabilitated and repairs are not satisfactory, an additional \$6.3 M replacement cost will result. If no action is taken and the road consequently requires major repairs or becomes totally unserviceable, respective costs of \$6.3 M and \$18 M will apply (Lemass [1]).

Figure 5. A decision tree for selecting the best remediation strategy of a bridge.


In this example, states of nature are the same as possible outcomes. The outcomes and associated negative payoffs (costs in millions of dollars) can be considered as follows:

The expected value (cost) of action a2 is the lowest, based on the probabilities (likelihoods of occurrence) pij assigned to each outcome, and this course of action can therefore be followed [9].
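The expected-value calculation for this example can be sketched as below. The costs come from the text, but the outcome probabilities are hypothetical placeholders (the chapter assigns its own pij in Figure 5), chosen only to illustrate the arithmetic:

```python
# Expected-cost evaluation of the bridge example's decision tree.
# Costs ($M) come from the text; the outcome probabilities are
# HYPOTHETICAL placeholders used only to illustrate the calculation.
actions = {
    "a1_replace": [(1.0, 6.3)],        # certain replacement cost
    "a2_rehabilitate": [
        (0.8, 1.1),                    # repairs satisfactory
        (0.2, 1.1 + 6.3),              # repairs fail -> replacement as well
    ],
    "a3_do_nothing": [
        (0.6, 0.0),                    # no significant deterioration
        (0.3, 6.3),                    # major repairs needed
        (0.1, 18.0),                   # becomes totally unserviceable
    ],
}

def expected_cost(branches):
    # Sum probability-weighted costs over the branches of a chance node.
    return sum(p * cost for p, cost in branches)

costs = {a: expected_cost(b) for a, b in actions.items()}
best = min(costs, key=costs.get)
print(costs, best)
```

With these assumed probabilities the rehabilitation option a2 has the lowest expected cost, matching the conclusion in the text.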

#### 4.1.5. Lexicographic method


In lexicographic analysis, a sequential elimination process is continued until either a single solution is found or all criteria have been considered. In this method, criteria are first rank-ordered in terms of importance. The alternative with the best performance score on the most important criterion is selected. If there are ties on this attribute, the performance of the tied options on the next most important criterion is compared, and so on, until a unique alternative is chosen [31].
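The elimination loop can be sketched as follows (the criteria below are assumed to be already ordered by importance; scores are illustrative):

```python
# Lexicographic choice: compare on the most important criterion first,
# break ties on the next, and so on. Data are illustrative; each score
# list is ordered from most to least important criterion.
scores = {
    "A1": [9, 4, 7],
    "A2": [9, 6, 1],
    "A3": [8, 9, 9],
}

def lexicographic(scores):
    survivors = list(scores)
    n_criteria = len(next(iter(scores.values())))
    for c in range(n_criteria):
        best = max(scores[a][c] for a in survivors)
        survivors = [a for a in survivors if scores[a][c] == best]
        if len(survivors) == 1:        # a unique alternative is found
            break
    return survivors

print(lexicographic(scores))
```

Here A1 and A2 tie on the first criterion, and the tie is broken in A2's favour on the second; A3's strong lower-priority scores never compensate, which is the non-compensatory character of the method.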

#### 4.1.6. Cost-benefit analysis (CBA) and cost-effectiveness analysis (CEA)

The concept of cost–benefit analysis (CBA) originated in the United States in the 1930s, where it was used to find solutions to problems of water provision. This method is used to estimate all the costs and benefits associated with a particular project, usually expressed in money terms, in order to weigh up whether the project will bring a net benefit to the public and to compare the possible options for limited resources. It is one of the most comprehensive, and at the same time most difficult, techniques for decision-making [32].

According to Kuik et al., the application of CBA in an integrated assessment raises the following concerns [33]:


### 4.2. Multi attribute utility theory (MAUT)

MAUT is based upon the use of utility functions. Utility functions are employed to quantify the preference of the decision-maker by allocating a numerical index to different degrees of satisfaction as the attribute under consideration takes values between defined most- and least-preferred limits [34]. They are a convenient tool for representing how much an attribute (or a measure) satisfies the decision-maker's objectives, transforming the raw performance values of the alternatives against diverse criteria, both factual (quantitative) and judgmental (qualitative), to a common dimensionless scale [35]. They represent a means to translate attribute units into utility units. Utility functions can be specified in terms of a graph, table or mathematical expression. Mathematical expressions of utility functions include straight-line, logarithmic, or exponential functions [34].

The utility values are estimated by normalizing the output of the simulation tests. Normalization of performance measures is conducted using the minimum and maximum limits obtained from the simulation. They are commonly checked against the outputs and replaced if values fall beyond the limits. The utility functions can be monotonic, so that the least desirable scenario corresponds to the lowest utility [U(xi) = 0] while the most desirable scenario corresponds to the highest utility [U(xi) = 1.0]; the interval [0, 100] can also be used for this purpose [34].
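The normalization described above can be sketched as a simple min-max utility function (the raw values below stand in for illustrative simulation outputs):

```python
# Min-max normalization of raw performance values to a [0, 1] utility
# scale, as described in the text. Raw values are illustrative.
raw = [42.0, 55.0, 61.0, 48.0]          # performance measure per scenario
lo, hi = min(raw), max(raw)

def utility(x, lo, hi, increasing=True):
    # Clamp values beyond the observed limits, then scale linearly.
    x = max(lo, min(hi, x))
    u = (x - lo) / (hi - lo)
    return u if increasing else 1.0 - u  # flip for "less is better" measures

utilities = [utility(x, lo, hi) for x in raw]
print(utilities)
```

The least desirable value maps to U = 0 and the most desirable to U = 1; multiplying by 100 gives the alternative [0, 100] scale mentioned above.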

#### 4.2.1. Simple multi-attribute rating technique (SMART)

Simple Multi Attribute Rating Technique (SMART) is a method used to determine the weights of the attributes. This method was initially developed by Edwards [50] and is based on direct numerical ratings that are aggregated additively. There are many derivatives of SMART, including non-additive methods. In a basic format of SMART, there is a rank-ordering of action items for each criterion, setting the worst to zero and the best to 100 and interpolating in between [27]. By combining the performance values with the associated weights for all criteria, a utility value for each option is estimated [36].
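A sketch of this basic SMART procedure, with invented data and a mixed benefit/cost criterion set:

```python
# Basic SMART: rescale each criterion to 0-100 (worst -> 0, best -> 100),
# then aggregate additively with the criterion weights. Data illustrative.
scores = {                       # raw performance, keyed by alternative
    "A1": [200, 3.0],
    "A2": [350, 1.5],
    "A3": [500, 2.0],
}
weights = [0.6, 0.4]             # normalized criterion weights
higher_is_better = [True, False] # second criterion is a cost-type measure

def smart_utilities(scores, weights, higher_is_better):
    names = list(scores)
    utils = {}
    for a in names:
        total = 0.0
        for i, w in enumerate(weights):
            vals = [scores[x][i] for x in names]
            lo, hi = min(vals), max(vals)
            # Interpolate between worst (0) and best (100) on criterion i.
            u = (scores[a][i] - lo) / (hi - lo) * 100.0
            if not higher_is_better[i]:
                u = 100.0 - u
            total += w * u
        utils[a] = total
    return utils

utils = smart_utilities(scores, weights, higher_is_better)
print(utils, max(utils, key=utils.get))
```

Because the 0-100 rescaling is done per criterion against the current worst and best values, the weighted sums are directly comparable across alternatives.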

SMART is independent of the action items/alternatives. The advantage of this approach is that the assessments are not relative; hence changing the number of options will not change the final outcomes. If new alternatives are likely to be added, and the action items are amenable to a rating model, then SMART can be the better option [37].

One of the limitations of this technique is that it disregards the interrelationships between parameters. However, SMART is a valuable technique since it is uncomplicated, easy and quick, which is quite important for decision makers. In SMART, changing the number of alternatives will not change the decision scores of the original alternatives, which is useful when new alternatives are added [37]. The same author also argued that using SMART for performance measures can be a better alternative than other methods.

#### 4.2.2. Analytical hierarchy process (AHP)

AHP is a multi-attribute decision-making technique which belongs to the class of methods known as "additive weighting methods" [28]. The AHP was proposed by Saaty and uses an objective function to aggregate the various facets of a decision, where the main goal is to select the alternative with the maximum value of the objective function [38]. The AHP is based on four clearly defined axioms (Saaty [39]). Like MAU/VT and SMART, the AHP is classified as a compensatory technique, where attributes/criteria with low scores are compensated by higher scores on other attributes/criteria; but in contrast to the utilitarian models, the AHP employs pairwise comparisons of criteria rather than value or utility functions, with all criteria compared and the end results aggregated into a decision-making matrix [40].


The AHP process includes three phases: decomposition, comparative judgments, and synthesis of priorities. Through the AHP process, problems are decomposed into a hierarchical structure, and both quantitative and qualitative information can be used to derive ratio scales between the decision elements at each level using pairwise comparisons. The top level of the hierarchy corresponds to the overall objective and the lower levels to criteria, sub-criteria, and alternatives. Users are asked to set up a comparison matrix (of comparative judgments) by comparing pairs of criteria or sub-criteria. A scale ranging from 1 (indifference) to 9 (extreme preference) is used to express the user's priorities. Each matrix is then solved by an eigenvector technique to derive the priorities [41].

The comparisons are normally collected in a comparison matrix A, which must be transitive, such that if i > j and j > k then i > k, where i, j, and k are action items, and reciprocal, aij = 1/aji. Preferences are then calculated from the comparison matrix by normalising it to develop the priority vector W, solving A·W = λmax·W, where A is the comparison matrix, W is the eigenvector and λmax is the maximal eigenvalue of matrix A [42].

Through the AHP process, decision-makers' inconsistency can be quantified to find out whether their judgments break transitivity, and to what extent. The consistency index (CI) is defined as CI = (λmax − n)/(n − 1), where λmax is as above and n is the dimension of the matrix [43]. From it, the consistency ratio CR = CI/RI is calculated, where RI is the random index; a CR value of up to 0.10 is considered acceptable. Table 2 shows the average values of RI.
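The eigenvector solution and consistency check can be sketched in pure Python with power iteration; the comparison matrix below is an illustrative example, and the RI values are Saaty's standard random indices:

```python
# AHP priority vector via power iteration, with consistency check.
# The pairwise comparison matrix A is an illustrative example; it is
# reciprocal (a_ij = 1/a_ji) with a unit diagonal.
A = [
    [1.0,   3.0,   5.0],
    [1 / 3, 1.0,   3.0],
    [1 / 5, 1 / 3, 1.0],
]
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}  # Saaty's random indices

def ahp_priorities(A, iters=100):
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):                 # power iteration: w <- A w
        aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(aw)
        w = [x / s for x in aw]
    # Estimate the principal eigenvalue: lambda_max = mean((A w)_i / w_i)
    aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / w[i] for i in range(n)) / n
    return w, lam

w, lam = ahp_priorities(A)
n = len(A)
CI = (lam - n) / (n - 1)                   # consistency index
CR = CI / RI[n]                            # consistency ratio
print([round(x, 3) for x in w], round(CR, 3))
```

For this matrix the CR comes out well under the 0.10 threshold, so the judgments would be accepted as consistent.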

The advantages of the AHP method are that it provides a systematic approach (through a hierarchy) and that it offers objectivity and reliability in estimating weighting factors for criteria [45]. It also provides a well-tested method which allows analysts to incorporate multiple, conflicting, non-monetary attributes into their decision-making.

On the other hand, the disadvantages are that the calculation of a pairwise comparison matrix for each attribute is quite complicated, and as the number of criteria and/or alternatives increases, the complexity of the calculations grows considerably. Moreover, if a new alternative is added after an evaluation is finished, all the calculations have to be restarted [46].

The limitations of AHP are of a more theoretical nature, and have been the subject of some debate in the technical literature. Many analysts have pointed out that the attribute weighting questions must be answered with respect to the average performance levels of the alternatives. Others have noted the possibility of rank reversal among the remaining alternatives after one is deleted from consideration. Finally, some theorists go so far as to state that, as currently practiced, "the rankings of (AHP) are arbitrary." Defenders of AHP, such as Saaty himself, have answered that rank reversal is not a fault, because real-world decision-making shows this characteristic as well [47].

Table 2. Random inconsistency index, adapted from Ishizaka [44].

### 4.3. Outranking methods

The most important outranking methods assume data availability roughly similar to what is required for the MAUT methods. Fundamental problems with most MAUT and MAUT-related methods are handling uncertain or fuzzy information and dealing with information stated on other than ratio or interval scales: in some situations, descriptive expressions are encountered instead of quantitative measures [48]. The outranking methods are one alternative for approaching complex choice problems with multiple criteria and multiple participants. Outranking expresses the degree of dominance of one alternative over another and permits the use of incomplete value information and, for example, judgments on an ordinal measurement scale. These methods provide a (partial) preference ranking of the alternatives, not a cardinal measure of the preference relation [48]. Here the two most famous families of outranking methods, the ELECTRE and the PROMETHEE methods, are briefly explained.

#### 4.3.1. The ELECTRE methods

The ELECTRE method is part of the MCDA (multi-criteria decision aid) family. The main aim of the ELECTRE method is to choose alternatives that satisfy two conditions: high concordance, i.e. preference for the alternative on most of the evaluations against its competitor, and acceptable discordance, i.e. no criterion on which the opposing evidence is too strong. The starting point is the data of the decision matrix, assuming the sum of the weights equals 1 [49]. As shown in Eq. (1), for an ordered pair of alternatives (Aj, Ak), the concordance index Cjk is the sum of the weights of those attributes on which the performance of Aj is at least as high as that of Ak.

$$C_{jk} = \sum_{a_{ij} \ge a_{ik}} w_i \qquad j, k = 1, \ldots, n, \quad j \neq k \tag{1}$$

The concordance index lies between 0 and 1.

The calculation of the discordance index djk is more complex. If Aj performs better than Ak on all criteria, the discordance index will be zero. Otherwise, as per Eq. (2):

$$d_{jk} = \max_{i} \frac{a_{ik} - a_{ij}}{\max a_{ij} - \min a_{ij}} \qquad j, k = 1, \ldots, n, \quad j \neq k \tag{2}$$

Therefore, for each attribute where Ak outperforms Aj, the ratio is computed between the difference in performance between Ak and Aj and the maximum difference in score on the attribute/criterion concerned between the alternatives. The maximum of these ratios (which must lie between 0 and 1) is the discordance index [27].

This method determines a partial ranking of the alternatives: the set of all options that outrank at least one other alternative and are themselves not outranked constitutes the preferred set.
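Eqs. (1) and (2) can be sketched for one ordered pair of alternatives; all weights and scores below are illustrative:

```python
# Concordance and discordance indices (Eqs. (1) and (2)) for ordered
# pairs of alternatives. The decision matrix and weights are illustrative.
weights = [0.4, 0.35, 0.25]          # criterion weights, summing to 1
scores = [                           # rows = criteria, columns = A1, A2, A3
    [8.0, 6.0, 5.0],
    [5.0, 8.0, 9.0],
    [7.0, 7.0, 9.0],
]
# Score range per criterion over all alternatives (Eq. (2) denominator)
ranges = [max(r) - min(r) for r in scores]

def concordance(j, k):
    # Sum of weights where A_j scores at least as high as A_k (Eq. (1)).
    return sum(w for w, r in zip(weights, scores) if r[j] >= r[k])

def discordance(j, k):
    # Max over criteria of (a_ik - a_ij) / range, where A_k beats A_j (Eq. (2)).
    worst = 0.0
    for r, rng in zip(scores, ranges):
        if r[k] > r[j] and rng > 0:
            worst = max(worst, (r[k] - r[j]) / rng)
    return worst

c, d = concordance(0, 1), discordance(0, 1)
print(c, d)
```

For the pair (A1, A2), A1 matches or beats A2 on the first and third criteria (concordance 0.65), while A2's advantage on the second criterion yields a discordance of 0.75.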

#### 4.3.2. The PROMETHEE methods

This method was introduced by Brans and Vincke [47], Brans et al. [17], and Edwards [50]. The scores in the decision table need not be normalized or transformed into a dimensionless scale; a higher score value indicates a better performance. It is also assumed that a preference function is associated with each attribute. For this purpose, a preference function PFi(Aj, Ak) is defined, expressing the degree of preference of option Aj over Ak for criterion Ci:

$$0 \le \mathrm{PF}_i(A_j, A_k) \le 1$$



In most realistic cases, PFi is a function of the deviation d = aij − aik, i.e., PFi(Aj, Ak) = PFi(aij − aik), where PFi is a non-decreasing function with PFi(d) = 0 for d ≤ 0 and 0 ≤ PFi(d) ≤ 1 for d > 0. The main benefit of these preference functions is their simplicity, since there are no more than two parameters in each case.

As shown in Eq. (3), multi criteria preference index π (Aj, Ak) of Aj over Ak can then be calculated considering all the attributes:

$$\pi(A_j, A_k) = \sum_{i=1}^{m} w_i \, \mathrm{PF}_i(A_j, A_k) \tag{3}$$

The value of this index lies between 0 and 1 and characterises the global intensity of preference between pairs of alternatives [27].

For ranking the alternatives, the following outranking flows (Eqs. (4) and (5)) are defined:

Positive outranking flow:

$$\varphi^{+}(A_j) = \frac{1}{n-1} \sum_{k=1}^{n} \pi(A_j, A_k) \tag{4}$$

Negative outranking flow:

$$\varphi^{-}(A_j) = \frac{1}{n-1} \sum_{k=1}^{n} \pi(A_k, A_j) \tag{5}$$

The positive outranking flow expresses how much each alternative outranks the others: the higher φ+(Aj), the better the alternative. φ+(Aj) depicts the strength of Aj, its outranking character.

The negative outranking flow expresses how much each alternative is outranked by the others: the smaller φ−(Aj), the better the alternative. φ−(Aj) depicts the weakness of Aj, its outranked character (ibid).
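A compact sketch of the PROMETHEE calculations, using a simple linear preference function (one common choice; the thresholds, weights and scores here are illustrative). The net flow φ+ − φ− used at the end is the PROMETHEE II convention for a complete ranking:

```python
# PROMETHEE sketch: a linear preference function per criterion, the
# aggregated index pi (Eq. (3)), and the outranking flows (Eqs. (4)-(5)).
# All scores, weights and thresholds are illustrative.
weights = [0.5, 0.5]
saturation = [4.0, 2.0]              # deviation at which preference reaches 1
scores = {                           # alternative -> score per criterion
    "A1": [8.0, 5.0],
    "A2": [6.0, 7.0],
    "A3": [3.0, 6.0],
}

def pref(d, p):
    # Linear, non-decreasing preference function: 0 for d <= 0, 1 beyond p.
    if d <= 0:
        return 0.0
    return min(d / p, 1.0)

def pi(a, b):
    # Multi-criteria preference index of a over b (Eq. (3)).
    return sum(w * pref(sa - sb, p)
               for w, p, sa, sb in zip(weights, saturation,
                                       scores[a], scores[b]))

names = list(scores)
n = len(names)
phi_plus = {a: sum(pi(a, b) for b in names if b != a) / (n - 1) for a in names}
phi_minus = {a: sum(pi(b, a) for b in names if b != a) / (n - 1) for a in names}
net = {a: phi_plus[a] - phi_minus[a] for a in names}   # PROMETHEE II net flow
print(phi_plus, phi_minus, max(net, key=net.get))
```

A2 has both the highest positive flow and the lowest negative flow here, so it wins under either criterion.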

#### 4.3.3. TOPSIS methods

The Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), first proposed by Hwang and Yoon (1981), is one of the most widely used multi-criteria decision-making techniques [45]. The basic concept of TOPSIS is that the selected option should have the shortest distance from the positive-ideal solution and the farthest distance from the negative-ideal solution in a geometrical sense. Within the process, a "similarity index" to the positive-ideal option is defined by combining the proximity to the positive-ideal option and the remoteness from the negative-ideal option; the method then selects the solution with the maximum similarity to the positive-ideal solution. The default assumption is that the larger the outcome, the greater the preference for benefit attributes and the lower the preference for cost attributes [51]. The idea of TOPSIS can be expressed in a series of steps:

Step 1: Identify performance data for n alternatives over m attributes. Raw measurements xij are normalized into measures rij as follows (Eq. (6)):

$$r_{ij} = \frac{x_{ij}}{\sqrt{\sum_{j=1}^{n} x_{ij}^{2}}} \qquad i = 1, \ldots, m, \; j = 1, \ldots, n \tag{6}$$

Step 2: Estimate weighted normalized ratings as per Eq. (7):

$$v_{ij} = w_i \, r_{ij} \tag{7}$$

wi is the weight of the ith attribute. The basis for the weights is usually an ad hoc reflection of relative importance. If normalization was accomplished in Step 1, scale is not an issue.

Step 3: Obtain the positive-ideal alternative (extreme performance on each criterion) A+.

Step 4: Find the negative-ideal alternative (reverse extreme performance on each criterion) $A^-$.

Step 5: Create a distance measure from each alternative to both the positive-ideal ($S_i^+$) and the negative-ideal ($S_i^-$) solutions.

Step 6: For each option/alternative, compute a ratio $C_i^+$ equal to the distance to the negative-ideal divided by the sum of the distance to the positive-ideal and the distance to the negative-ideal (Eq. (8)):

$$C_i^+ = \frac{S_i^-}{S_i^- + S_i^+} \tag{8}$$

Step 7: Rank all the options in descending order of the ratio computed in Step 6.
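The seven steps above can be sketched as a small NumPy routine. The bridge-remediation data, weights, and function name below are illustrative assumptions, not values from the source:

```python
import numpy as np

def topsis(x, weights, benefit):
    """Rank alternatives with TOPSIS.

    x       : (n_alternatives, n_attributes) raw performance matrix
    weights : attribute weights
    benefit : boolean mask, True where larger values are better
    """
    x = np.asarray(x, dtype=float)
    # Step 1: vector-normalize each attribute column (Eq. 6)
    r = x / np.sqrt((x ** 2).sum(axis=0))
    # Step 2: apply attribute weights (Eq. 7)
    v = r * np.asarray(weights)
    # Steps 3-4: positive- and negative-ideal alternatives
    a_pos = np.where(benefit, v.max(axis=0), v.min(axis=0))
    a_neg = np.where(benefit, v.min(axis=0), v.max(axis=0))
    # Step 5: Euclidean distances to both ideals
    s_pos = np.sqrt(((v - a_pos) ** 2).sum(axis=1))
    s_neg = np.sqrt(((v - a_neg) ** 2).sum(axis=1))
    # Step 6: similarity index (Eq. 8)
    c = s_neg / (s_neg + s_pos)
    # Step 7: rank by descending similarity
    return c, np.argsort(-c)

# Example: three hypothetical remediation options scored on cost
# (lower is better) and condition improvement (higher is better).
scores, order = topsis(
    [[250, 7], [180, 5], [300, 9]],
    weights=[0.4, 0.6],
    benefit=[False, True],
)
```

With these illustrative numbers, the third option has the highest similarity index and ranks first.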

#### 4.4. Sensitivity analysis

Sensitivity analysis is the method used to find whether a particular utility or probability is essential in determining the preferred alternative. There are always uncertainties in the weights of the criteria and in the scores of the alternatives against the subjective (judgmental) criteria [52]. An important question, therefore, is how sensitive the final ranking, or the ranking values of the alternatives, is to changes in some input parameters of the decision model [27].
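A minimal way to probe this sensitivity is to perturb the criteria weights at random and count how often the preferred alternative survives. The decision matrix, weights, and perturbation range below are hypothetical, and a simple weighted-sum ranking is used for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative normalized decision matrix (rows: alternatives,
# columns: criteria) and baseline criteria weights.
ratings = np.array([[0.8, 0.4, 0.6],
                    [0.6, 0.7, 0.5],
                    [0.5, 0.6, 0.9]])
weights = np.array([0.5, 0.3, 0.2])

def best_alternative(w):
    # Index of the alternative with the highest weighted-sum value.
    return (ratings * w).sum(axis=1).argmax()

baseline = best_alternative(weights)

# Perturb each weight by up to +/-20%, renormalize, and repeat.
trials, stable = 1000, 0
for _ in range(trials):
    w = weights * (1 + rng.uniform(-0.2, 0.2, size=3))
    w /= w.sum()
    if best_alternative(w) == baseline:
        stable += 1

robustness = stable / trials  # share of perturbations preserving the winner
```

A robustness value near 1 suggests the preferred alternative is insensitive to the assumed weight uncertainty; a low value flags a decision that hinges on the subjective weights.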

#### 4.5. Summary


This chapter has covered the definition of decision support systems, their ideal characteristics, and their background history. Different decision analysis methods, including elementary methods, multi-attribute utility theory, and outranking methods, have also been introduced and compared.

## Author details

Maria Rashidi\*, Maryam Ghodrat, Bijan Samali and Masoud Mohammadi

\*Address all correspondence to: m.rashidi@westernsydney.edu.au

Centre for Infrastructure Engineering, Western Sydney University, Sydney, New South Wales, Australia

### References


[9] Rashidi M, Ghodrat M, Samali B, Kendall B, Zhang C. Remedial modelling of steel bridges through application of analytical hierarchy process (AHP). Applied Sciences. 2017;7(2):1-20

[10] Power D. Decision Support Systems: Concepts and Resources for Managers. London: Greenwood Publishing Group; 2002

[11] Bonczek R, Holsapple C, Whinston A. The evolving roles of models in decision support systems. Decision Sciences. 11(2):337-356

[12] Rashidi M, Samali B, Sharafi P. A new model for bridge management: Part A: Condition assessment and priority ranking of bridges. Australian Journal of Civil Engineering. 2016a;14(1):35-45

[13] Rashidi M, Samali B, Sharafi P. A new model for bridge management: Part B: Decision support system for remediation planning. Australian Journal of Civil Engineering. 2016b;14(1):46-53

[14] Rashidi M, Gibson P. A methodology for bridge condition evaluation. Journal of Civil Engineering and Architecture. 2012;6(9)

[15] Mora M, Forgionne G, Gupta J. Decision Making Support Systems: Achievements and Challenges for the New Decade. Harrisburg, PA; 2003

[16] Costello T, Zalkind SS. Psychology in Administration. New Jersey: Prentice Hall; 1963

[17] Brans J, Vincke P, Mareschal B. How to select and how to rank projects: The Promethee method. European Journal of Operational Research. 1986;24(2):228-238

[18] Churchman C. Challenges to Reason. New York: McGraw-Hill; 1968

[19] Simon H. The New Science of Management Decisions. New Jersey: Prentice-Hall; 1977

[20] Srinivasan A, Sundaram D, Davis J. Implementing Decision Support Systems: Methods, Techniques and Tools. McGraw-Hill; 2000

[21] Ghodrat M, Rashidi M, Samali B. Life cycle assessments of incineration treatment for sharp medical waste. In: Energy Technology: Carbon Dioxide Management and Other Technologies. Switzerland: Springer; 2017

[22] Bartol K, Tein M, Matthews G, Sharma B. Management: A Pacific Rim Focus. Australia: McGraw Hill; 2007

[23] Rashidi M, Kempton S, Samali B. Analysis of bridge abutment movement through a case study. In: Mechanics of Structures and Materials: Advancements and Challenges. London; 2017

[24] Rashidi M, Lemass B. A decision support methodology for remediation planning of concrete bridges. Journal of Construction Engineering and Project Management (JCEPM). 2011;1(2):1-10

[25] Lemass B, Carmichael D. Front-End Project Management. Sydney: Pearson Prentice Hall; 2008

[41] Cheng S, Chen MY, Chang HY, Chou T. Semantic-based facial expression recognition using analytical hierarchy process. Expert Systems with Applications. 3(1):86-95

[42] Saaty T. Decision making - the analytical hierarchy and network processes (AHP/ANP). Journal of Systems Science and Systems Engineering. 2004;13(1):1-35

[43] Kim S, Song O. A MAUT approach for selecting a dismantling scenario for the thermal column in KRR-1. Annals of Nuclear Energy. 2009;36(2):145-150

[44] Ishizaka A. Development of an Intelligent Tutoring System for AHP (Analytical Hierarchy Process). University of Basel, Department of Business and Economics; 2004

[45] Kangas A, Kangas J, Pykalainen J. Outranking methods as tools in strategic natural resources planning. Silva Fennica. 2001. pp. 215-227

[46] Chih Huang W, Hua Chen C. Using the ELECTRE II Method to Apply and Analyse the Differentiation Theory. The Eastern Asia Society for Transportation Studies; 2005

[47] Brans J, Vincke P. A preference ranking organisation method: (the PROMETHEE method for multiple criteria decision making). Management Science. 1985;31(6):647-656

[48] Kilic H. Supplier selection application based on a fuzzy multiple criteria decision making methodology. Online Academic Journal of Information Technology. 2012

[49] Elbehairy H, Hegazy T, Elbeltagi E, Souki K. Comparison of two evolutionary algorithms for optimisation of bridge deck repairs. Computer-Aided Civil and Infrastructure Engineering. 2006;21:561-572

[50] Edwards W. The engineering economic summer symposium series. Social Utilities. 1971;6:119-129

[51] Rashidi M, Ghodrat M, Samali B, Kendall B, Zhang C. Remedial modelling of steel bridges through application of analytical hierarchy process (AHP). Applied Sciences. 2017;7(2)

[52] Rashidi M, Samali B, Azad A, Hatamian H. Asset management of steel bridges. In: Mechanics of Structures and Materials: Advancements and Challenges. Perth, WA: CRC Press; 2016

#### **Developing the EBS Management Model to Assist SMEs to Evaluate E-Commerce Success**

Mingxuan Wu, Ergun Gide and Rod Jewell

DOI: 10.5772/intechopen.77149

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/intechopen.77149

#### **Abstract**

In the literature, there is no strong consensus or well-known theoretical research framework for defining and evaluating e-commerce success among small to medium enterprises (SMEs). Exploring more effective methods to describe and evaluate e-commerce success therefore becomes a challenging task. This research seeks to help fill the gap by proposing a new model to evaluate e-commerce success from a business perspective. This measure has been termed e-commerce business satisfaction (EBS). A total of 2401 surveys were successfully sent to SMEs, with a usable response rate of 7.54%. Principal component analysis with the varimax rotation method was then adopted within the factor analysis. Using the 15 critical success factors (CSFs) obtained from previous research as a foundation, an EBS management model was developed to assist SME business managers in effectively adopting e-commerce systems or evaluating e-commerce success. The model was categorised into five components: *Marketing, Management Support and Customer Acceptance, Website Effectiveness and Cost, Managing Change* and *Knowledge and Skills*. Further research is needed to determine the weighting of each CSF so that a yardstick measurement method might be developed to assist SMEs in adopting e-commerce successfully.

**Keywords:** E-commerce satisfaction, E-commerce success, evaluation, management model, SMEs
