**6. Generating a threshold for compatibility index** *G*

To answer the initial question (*when close really means close*), it is first necessary to have a reliable index of compatibility. However, that is not sufficient; a second condition is also needed: a limit or threshold for the index.

For practical purposes, it is necessary to have a lower bound (minimum threshold) indicating when two priority vectors are compatible, or close to compatible, in order to define precisely when close really means close.


**Figure 6** summarizes the existing compatibility indices, adding the option of the Euclidean norm (the classic distance calculation based on Norm2), normalized by its maximum possible value in order to present the results in a percentage format.

**Figure 6.** Definition of different formulas for compatibility assessment.

Six formulae were tested on 2D vectors, for seven cases over two different trends (a parallel trend and a perpendicular trend). The results are shown in **Figure 7**.

**Figure 7.** Sensitivity analysis for different compatibility indices.

| Field of application | What is measured |
| --- | --- |
| On medicine | The degree of matching (proximity) between patient and disease diagnosis profiles |
| On group decision making (conflict resolution) | How close two (or more) different value systems are (where they differ and by how much) |
| On agricultural production and supplier selection | The proximity between the cultivated plants and a healthy plant (based on its micro & macro nutrients), and selection of the best nutrient seller |
| On company social responsibility (CSR) | How close the different views among the different stakeholders are (economic, environmental, and social views) |
| On buyer-seller profiles | The degree of matching between house buyers and sales projects |
| On shift-work prioritization | How close the different views among the different stakeholders are (workers' view, company view, community view) |
| On quality tests | Which MCDM decision method builds a better metric |

*Notice: all the examples mentioned above come from real cases of application*.

**Table 4.** Possible fields of application for index *G*.

258 Applications and Theory of Analytic Hierarchy Process - Decision Making for Strategic Decisions

We have four different ways to define a minimum threshold for compatibility [2]:

**(1)** Considering that compatibility ranges between 0 and 100% (0 < cos *α* < 1), with 100% representing total compatibility (parallel vectors), it is reasonable to define a tolerance of 10% (1/10th of 100%) as the maximum threshold of incompatibility for considering two vectors compatible (which means a minimum of 90% compatibility). This choice is based on the idea of one order of magnitude as an admissible perturbation of a measurement. This lower bound is also based on the accepted 10% used in AHP for the consistency index. In the comparison matrix of AHP, the 10% limit of tolerated inconsistency comes from the consistency ratio (CR), obtained by comparing the consistency index to a random index (CR = CI/RI), which in general has to be less than or equal to 10% (except for the 3 × 3 and 4 × 4 matrix cases). This says that the farther CI is from RI (the random index response), the better CR is. It is interesting to recall that CR is built as a comparison against the statistical analysis of RI (this idea will be reviewed in the last case analysis).
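As a minimal sketch of this consistency check, assuming the standard AHP definitions CI = (λmax − n)/(n − 1) and CR = CI/RI with Saaty's published random-index values (the 3 × 3 matrix below is purely illustrative):

```python
import numpy as np

# Saaty's random index (RI) by matrix order (standard published values)
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(A):
    """Return (CI, CR) for a positive reciprocal comparison matrix A."""
    n = A.shape[0]
    lam_max = max(np.linalg.eigvals(A).real)   # principal eigenvalue
    ci = (lam_max - n) / (n - 1)
    cr = ci / RI[n] if RI[n] > 0 else 0.0
    return ci, cr

# A perfectly consistent matrix (a_ij = w_i / w_j for w = 0.6, 0.3, 0.1):
# lambda_max = n, so CI and CR should be ~0, well under the 10% tolerance.
A = np.array([[1.0, 2.0, 6.0],
              [1/2, 1.0, 3.0],
              [1/6, 1/3, 1.0]])
ci, cr = consistency_ratio(A)
print(f"CI = {ci:.4f}, CR = {cr:.4f}")
```

A judgment matrix with CR above 0.10 would signal that the pairwise judgments should be revised before the priorities are used.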

**(2)** The compatibility index is related to a topological analysis, since compatibility is about measuring closeness in weighted environments (weighted spaces). **Figure 8** presents a sequence of two-, three-, four-, and five-dimensional vectors. The first (initial) vector represents an isotropic flat-space situation, that is, equal values (1/*n*) in each coordinate (no privileged direction in the space); the second is obtained by perturbing (adding or subtracting) 10% on each coordinate, creating "small crisps" or slightly privileged directions. The incompatibility index is then calculated with the five different formulas (the reason to use all the formulae is that we are working on a near-flat space [no singularities], where every formula works relatively well).

Looking at the outputs for incompatibility, a good response (equal to or less than 10%) is observed for every formula, with *G* at ca. 10% and Norm1 at ca. 5% as upper bounds in every case.
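A sketch of this flat-space test, assuming the usual definition of Garuti's index, G = Σᵢ [min(aᵢ, bᵢ)/max(aᵢ, bᵢ)] · [(aᵢ + bᵢ)/2], and the Norm1 distance normalized by its maximum possible value (2, for vectors that each sum to 1):

```python
import numpy as np

def compat_G(a, b):
    """Compatibility index G (assumed definition):
    sum_i min(a_i, b_i)/max(a_i, b_i) * (a_i + b_i)/2."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sum(np.minimum(a, b) / np.maximum(a, b) * (a + b) / 2))

def norm1_incompat(a, b):
    """Norm1 distance normalized by its maximum value (2) for priority vectors."""
    return float(np.sum(np.abs(np.asarray(a) - np.asarray(b))) / 2)

for n in (2, 3, 4, 5):
    a = np.full(n, 1.0 / n)                     # isotropic flat vector (1/n, ..., 1/n)
    signs = np.where(np.arange(n) % 2 == 0, 1.0, -1.0)
    b = a * (1 + 0.10 * signs)                  # perturb each coordinate by +/-10%
    b /= b.sum()                                # renormalize to a priority vector
    print(n, f"G incompat: {100 * (1 - compat_G(a, b)):.2f}%",
          f"Norm1 incompat: {100 * norm1_incompat(a, b):.2f}%")
```

For even *n* this alternating perturbation yields roughly 9.5% incompatibility for *G* and 5% for Norm1, consistent with the "equal or less than 10%" observation above.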

**(3)** In **Figure 9**, a simple test was run on an Excel spreadsheet using the well-known area example of AHP, where the result (the relative importance of the areas of the figures) can be calculated precisely with the usual geometric formulas; the values are then normalized to obtain the exact priorities as a function of the sizes of the areas. In this way, it is possible to have a reference point for the element values (the correct coordinates for the actual area vector).

The next step is to perturb the actual area values by ±10%, producing a new vector of areas. Finally, the *G* function is applied to these two vectors (actual and perturbed) to measure their compatibility, obtaining a compatibility index of 91.92% (or 8.08% incompatibility). This result is very close to the standard error deviation calculated as Σ|perturbed − actual|/actual = 10%, which shows that 90% might represent a good threshold, considering that the difference between the two outputs is related to the significant fact that these numbers are not just numbers but weights.
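A sketch of this check follows. The area priorities below are illustrative placeholders (not the exact values of the AHP area example), and *G* is assumed to be Σ min/max · mean, as above; the point is to contrast the weighted *G* incompatibility with the plain relative-error measure:

```python
import numpy as np

def compat_G(a, b):
    """Compatibility index G (assumed definition)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sum(np.minimum(a, b) / np.maximum(a, b) * (a + b) / 2))

# Hypothetical normalized area priorities for five figures (placeholders)
actual = np.array([0.47, 0.05, 0.24, 0.15, 0.09])

# Move each area by exactly +/-10%, then renormalize to a priority vector
signs = np.array([1.0, -1.0, 1.0, -1.0, 1.0])
raw = actual * (1 + 0.10 * signs)      # mean relative error is exactly 10% here
perturbed = raw / raw.sum()

g = compat_G(actual, perturbed)
err = np.mean(np.abs(raw - actual) / actual)
print(f"G = {100 * g:.2f}%  (incompatibility {100 * (1 - g):.2f}%)")
print(f"mean |perturbed - actual| / actual = {100 * err:.2f}%")
```

With these placeholder values the compatibility lands in the same neighborhood (around 90%) as the 91.92% reported above; the exact figure depends on the actual areas and on the sign pattern of the perturbation, and the gap between the two measures reflects exactly the point made in the text: *G* treats the entries as weights, not just numbers.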

**(4)** The last way to analyze the correctness of the 90% threshold consists of working with a random function: filling the area vector with random values and calculating *G* for every case. The goal is to generate an average *G* for the case of fully random area values ("*full random*" means without any previous order among the areas, such as figure A being clearly bigger than figure B, and so on), and then producing random values again, but this time keeping the correct order among the figures (imitating the behavior of a rational DM), and once again generating an average *G* for this case; both results are then compared against the actual values.

Measuring in Weighted Environments: Moving from Metric to Order Topology (Knowing When Close Really Means Close) http://dx.doi.org/10.5772/63670 261


**Figure 8.** Defining a possible threshold of 10% for *G* function.


The average *G* over 15 experiments in the first case (no order kept) was around 50% compatibility, and 78% in the second case (keeping the order among the five figures). Both results show that a limit of 90% might be a good threshold: in the first case, the ratio between the threshold and the full-random *G* is almost 2, at 1.8 (0.90 over 0.50), keeping the 0.90 compatibility threshold far from random responses.


**Figure 9.** Possible threshold of 10% for *G* function.

In the second case (threshold over sorted figures), the ratio is much closer (as expected), with a value of 1.16 (0.90 over 0.78). This says that order may help to improve compatibility but is not enough: one also needs to consider the weights (not just the preference but the intensity of the preference), which are related to the values of the elements of the vector, as well as the angles between the two vectors point to point (viewed geometrically as profiles).

Of course, this test should be carried out over a large number of experiments to obtain a more reliable response. A second test, conducted with 225 experiments (15 people making 15 experiments each), showed more or less the same results for the average *G* value in both cases, with and without order (≈0.78 and ≈0.50).
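This simulation can be sketched as follows. The "actual" priority vector is an illustrative placeholder, and *G* is assumed as above; unordered trials draw fully random priorities, while ordered trials sort the random values so they respect the actual ranking (the rational-DM imitation):

```python
import numpy as np

def compat_G(a, b):
    """Compatibility index G (assumed definition)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sum(np.minimum(a, b) / np.maximum(a, b) * (a + b) / 2))

rng = np.random.default_rng(42)
actual = np.array([0.40, 0.25, 0.15, 0.12, 0.08])   # hypothetical, sorted descending

def trial(keep_order):
    v = rng.random(5)
    v /= v.sum()                       # random priority vector
    if keep_order:
        v = np.sort(v)[::-1]           # impose the same (descending) order as actual
    return compat_G(actual, v)

n_trials = 1000
g_random = np.mean([trial(False) for _ in range(n_trials)])
g_ordered = np.mean([trial(True) for _ in range(n_trials)])
print(f"average G, full random: {g_random:.2f}")
print(f"average G, order kept:  {g_ordered:.2f}")
```

As in the experiments reported above, keeping the order raises the average *G* noticeably, yet it still stays well below the 0.90 threshold: order alone is not enough for compatibility.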

Next, **Table 3** provides the meaning of the ranges of compatibility in terms of index *G*, with a description of each range.


**Table 3.** Ranges of compatibility and their meaning.

Finally, another interesting way to illustrate 90% as a good threshold for compatibility is the pattern recognition issue. Compatibility is a way to measure whether a set of data (a vector of priorities or a profile of behavior) corresponds to a recognized pattern or not. For instance, in the medical pattern recognition application, the diagnosis profile (the pattern) is built from the intensity values of the signs and symptoms that correctly describe the disease, and then compared with the signs and symptoms gathered from the patient; when these two profiles matched by about 90% or more, the physician was confident to say that the patient had the disease.

When the matching between the profiles was 85–90%, the physicians in general agreed with the diagnosis offered by the software; but when the *G* value was below 85% (between 79 and 84%), the doctor sometimes found it difficult to recognize whether the new signs and symptoms (the new patient's profile) corresponded to the disease initially presented (nonconclusive information). Finally, when the matching value (the *G* index) was below 75%, the physician was no longer able to clearly recognize in the patient's profile the disease initially offered.

Notice: the new profiles were built artificially, changing some values of signs and symptoms in an imaginary patient profile in order to achieve matching values of 90%, 85%, 80%, and so on; the intention was to evaluate when an experienced doctor changes his perception (mostly based on his pattern recognition ability).

Thus, two vectors may be considered compatible (similar or matching patterns) with great certainty or confidence when *G* is greater than or equal to 90%. Values between 85% and 90% in general also have a good chance of being correct (a good level of certainty or approximation).
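The interpretation bands described above can be summarized in a small helper (the band labels are paraphrases of the text, not official terminology):

```python
def interpret_compatibility(g):
    """Map a G value (0..1) to the interpretation bands described in the text."""
    if g >= 0.90:
        return "compatible (high confidence)"
    if g >= 0.85:
        return "likely compatible (good approximation)"
    if g >= 0.75:
        return "nonconclusive"
    return "not compatible"

for g in (0.92, 0.87, 0.80, 0.70):
    print(f"G = {g:.2f}: {interpret_compatibility(g)}")
```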
