**6.3.7 Knowledge validation and legitimacy**

Specific cases are used to test whether the quality of the KB is acceptable, verifying that the source of the knowledge is accurate.

#### **6.3.7.1 Validation and contrasting of results with the experts**

A workshop is held with the experts to present the results obtained with the KB, so that they can analyze them and give their opinions. The main criterion to be used in the tests is the opinion of the domain experts, since they can tell whether the results are satisfactory or not. Potential users can also serve as judges, applying criteria regarding usability, the interface, and the clarity of the explanations.

#### **6.3.7.2 Refinement**

Based on the experts' opinions, the KB is analyzed and, if the knowledge engineer finds it necessary, redesigned or restated.

#### **6.3.7.3 Assessment**

There is a close relationship between evaluation and refinement of the ES: evaluation may reveal cases not handled by the rules of the system, and as a result new rules are added or old ones modified. Such changes can have unexpected negative effects on other parts of the system.
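One common way to guard against the unexpected side effects mentioned above is to re-run a suite of previously validated cases after every rule change. The sketch below illustrates the idea; the rule format, the `evaluate` inference step, and the sample cases are all hypothetical stand-ins, not part of any particular ES shell.

```python
# Minimal regression-check sketch for KB changes (hypothetical rule
# format and cases). Validated cases with known-good conclusions are
# re-run after each modification, so a change that fixes one case
# cannot silently break others.

validated_cases = [
    ({"fever": True, "cough": True}, "flu"),
    ({"fever": True, "rash": True}, "measles"),
]

def evaluate(kb, facts):
    # Stand-in inference step: fire the first rule whose conditions hold.
    for conditions, conclusion in kb:
        if all(facts.get(c) for c in conditions):
            return conclusion
    return None

def regression_check(kb):
    # Return every validated case whose conclusion the modified KB
    # no longer reproduces.
    return [(facts, expected) for facts, expected in validated_cases
            if evaluate(kb, facts) != expected]

kb = [(("fever", "cough"), "flu"), (("fever", "rash"), "measles")]
print(regression_check(kb))  # → [] (all validated cases still pass)
```

An empty result means the modified KB still reproduces every validated conclusion; any entries in the list point the knowledge engineer directly at the cases a change has broken.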

#### *6.3.7.3.1 Verification*

The purpose of this evaluation is to check whether the system implements its specifications and whether it has good logical consistency (i.e., the problem of verifying that the method is correct). For this purpose the assessment is carried out taking actual performance levels into account. The prototype and subsequent system upgrades should be tested for performance both in the laboratory and in the field. The initial evaluation is conducted in a simulated environment, where the system is exposed to test problems (e.g., case histories or problems suggested by users).
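The logical-consistency part of verification can be partly automated by scanning the rule base for structural defects. The sketch below flags two classic ones, redundant and contradictory rule pairs; the flat `(conditions, conclusion)` rule format and the sample rules are hypothetical, chosen only to make the check concrete.

```python
# Minimal KB verification sketch (hypothetical rule format): each rule
# is (set_of_conditions, conclusion). We flag two common consistency
# defects: redundant rules (identical condition/conclusion pairs) and
# contradictory rules (identical conditions, negated conclusions).

rules = [
    ({"fever", "cough"}, "flu"),
    ({"fever", "cough"}, "flu"),         # redundant duplicate
    ({"fever", "rash"}, "measles"),
    ({"fever", "rash"}, "not measles"),  # contradicts the rule above
]

def verify(rules):
    problems = []
    for i, (cond_i, concl_i) in enumerate(rules):
        for j in range(i + 1, len(rules)):
            cond_j, concl_j = rules[j]
            if cond_i != cond_j:
                continue
            if concl_i == concl_j:
                problems.append(("redundant", i, j))
            elif concl_j == f"not {concl_i}" or concl_i == f"not {concl_j}":
                problems.append(("contradictory", i, j))
    return problems

print(verify(rules))
# → [('redundant', 0, 1), ('contradictory', 2, 3)]
```

Real verification tools also look for unreachable rules, circular chains, and unused facts, but the pairwise scan above captures the basic idea of checking the KB against itself rather than against external cases.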

It also examines whether the system can be used efficiently and whether it is cost-effective. The assessment should consider the quality of the messages presented, i.e., the design and programming of the explanatory power of the KB, to determine to what extent the proposed objectives are met.

#### *6.3.7.3.2 Validation*

Validation should test the system and its performance in application, comparing it with the expert, to establish that the system built to solve the problem is correct and performs at an acceptable level of accuracy. Response time and the quality of the conclusions may be a good initial basis for evaluating an ES.

For this purpose the modified Turing test is used: potential users or administrators are presented with two solutions to the same problem, one produced by the expert and the other by the IMIS (Intelligent Management Information System). Without knowing the source of each, they are asked to compare the solutions. The results of the comparison determine how valid the results of the IMIS are. When using this approach, potential disagreement among the evaluators' criteria should be taken into account, since the problems can be complex enough that disagreements arise over their interpretation and solution. Other requirements to be considered in validating the final system are:
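The blind-comparison protocol described above can be sketched in a few lines. Everything here is a hypothetical illustration: the case data, the `judge` callback, and the scoring are placeholders for real evaluators and real expert/IMIS outputs.

```python
import random

# Minimal sketch of the modified Turing test: for each problem the
# evaluator sees the expert's solution and the IMIS solution in random
# order, unlabeled, and picks the one they consider better. All names
# and data below are hypothetical.

cases = [
    {"problem": "case 1", "expert": "solution E1", "imis": "solution S1"},
    {"problem": "case 2", "expert": "solution E2", "imis": "solution S2"},
]

def blind_trial(case, judge, rng):
    pair = [("expert", case["expert"]), ("imis", case["imis"])]
    rng.shuffle(pair)  # hide which source produced which solution
    choice = judge(case["problem"], pair[0][1], pair[1][1])  # returns 0 or 1
    return pair[choice][0]  # which source the judge actually preferred

def run_test(cases, judge, seed=0):
    rng = random.Random(seed)  # fixed seed for a reproducible ordering
    picks = [blind_trial(c, judge, rng) for c in cases]
    # The fraction of trials in which the IMIS output is preferred is a
    # rough validity score for the system.
    return picks.count("imis") / len(picks)

# Example: a (useless) judge that always prefers the first solution shown.
score = run_test(cases, judge=lambda prob, a, b: 0)
print(f"IMIS preferred in {score:.0%} of trials")
```

In practice each case would be judged by several evaluators, precisely because of the disagreement problem noted above; aggregating their picks (e.g., by majority) gives a more robust score than any single judge.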

