Each RTU handles data conversion into a form that can be communicated back to the master, interpretation and output of the commands received from the master, local filtering, calculation, and processes that allow specific functions to be performed locally. The supervision level below the RTU includes all network devices at the substation and feeder levels, such as circuit breakers, reclosers, and autosectionalizers, the local automation distributed at these devices, and the communications infrastructure [36].

OntoEnter can monitor, in real time, the network's main parameters, making use of the information supplied by the SCADA system, located in the main company building, and by the RTUs installed at the different stations. From the information provided, the operator can take action to solve any errors that may arise or send a technician to repair the station equipment. OntoEnter allows the operator to search for information, alarms, or digital and analog measurement parameters registered in each IA or RTU. The system is able to select the IA that is best suited to satisfy the client's requirements, without the client being aware of the details of that agent. In addition, each IA is able to communicate and negotiate with the other IAs. Collaborative IAs are useful, especially when a task involves several systems in the network.

**6. Evaluation and proofs**

When we perform a search on a search engine, we are looking for the most relevant material while minimizing the junk that is retrieved. This is the basic objective of any search engine, and retrieving important information while avoiding junk is difficult, if not impossible, to accomplish fully. We carried out experiments to evaluate the effectiveness of the runtime ontology assignment. The main objective was to verify whether the agent-assisted query formulation mechanism provides a suitable tool to increase the number of significant documents extracted from the DIRs to be stored in the CBR. For our experiments, we recruited about 50 users with different profiles. To set a common context, users were asked to at least start their essay before issuing any query to the system. They were also asked to look through all the results returned by OntoEnter before clicking on any result [37].

We compared the top 10 search results for each keyword phrase per search engine. Our application recorded the results that users clicked on, which we used as a form of implicit relevance feedback in our analysis. We must bear in mind that the relevance of retrieved documents is subjective; that is, different people can assign different relevance values to the same document. In our study, we agreed on four values to rate the quality of retrieved documents: excellent, good, acceptable, and poor, as can be seen in **Table 1**. After the data were collected, we had a record of queries with an average of 5 queries per user. Some of these queries had to be discarded, either because multiple results were clicked, no results were clicked, or no information was available for that particular query.


|                | Excellent | Good  | Acceptable | Poor  |
|----------------|-----------|-------|------------|-------|
| OntoEnter      | 7.5%      | 42.3% | 35.1%      | 14.4% |
| Traditional SE | 1.4%      | 25.7% | 31.5%      | 21.3% |

**Table 1.** Analysis of retrieved document relevance for select queries.

In each experiment, we report the average rank of the user-clicked result for our baseline system, a traditional search engine, and for our search engine, OntoEnter.
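For instance, the average click rank can be computed directly from the click log. A minimal sketch in Python; the query identifiers and ranks are illustrative, not data from the experiment:

```python
# Rank (1 = top result) of the result each user clicked, per retained query.
# These values are made up for illustration only.
click_ranks = {"q1": 3, "q2": 1, "q3": 5, "q4": 2}

average_rank = sum(click_ranks.values()) / len(click_ranks)
print(f"average click rank: {average_rank:.2f}")  # 2.75
```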

Thus, we can define two set-based measures, precision and recall:

$$\text{precision} = \frac{\left|\{\text{relevant documents}\} \cap \{\text{retrieved documents}\}\right|}{\left|\{\text{retrieved documents}\}\right|}$$

$$\text{recall} = \frac{\left|\{\text{relevant documents}\} \cap \{\text{retrieved documents}\}\right|}{\left|\{\text{relevant documents}\}\right|}$$

It is possible to measure how well a search performed with respect to these two parameters. For each set of retrieved documents, precision and recall values can be plotted to give a precision-recall curve. We need these measures if we are to evaluate the ranked retrieval results of the search engines; note that both are computed over unordered sets of documents. The remaining queries were analyzed and evaluated in this way (**Table 2**).
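As a minimal sketch of how these set-based measures can be computed for a single query (the document identifiers are hypothetical):

```python
def precision_recall(relevant: set[str], retrieved: set[str]) -> tuple[float, float]:
    """Set-based precision and recall for a single query."""
    hits = relevant & retrieved  # relevant documents that were actually retrieved
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

relevant = {"d1", "d3", "d5", "d8"}    # documents the user judged relevant
retrieved = {"d1", "d2", "d3", "d4"}   # documents the engine returned
p, r = precision_recall(relevant, retrieved)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.50, recall=0.50
```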

It is easy to compare several systems in the precision-recall graph. Curves near the ideal precision-recall curve indicate a better performance level than those closer to the baseline; in other words, a curve that lies above another indicates better performance (**Figure 8**).

Precision and recall are inversely related: as precision increases, recall tends to fall, and vice versa. When a relevant document is not retrieved at all, its precision value in the above equation is taken to be 0. The search engine needs to strike a balance between the two, and this is where precision-recall curves come into practice, both for achieving that balance and for comparing performance.
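Such a curve can be traced by evaluating both measures at each cutoff of the ranked result list. A sketch under the same hypothetical data as above:

```python
def pr_curve(ranked: list[str], relevant: set[str]) -> list[tuple[float, float]]:
    """(precision@k, recall@k) at every cutoff k of a ranked result list."""
    points, hits = [], 0
    for k, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
        points.append((round(hits / k, 2), round(hits / len(relevant), 2)))
    return points

# As recall grows along the ranked list, precision typically drops: the trade-off.
print(pr_curve(["d1", "d2", "d3", "d4"], {"d1", "d3", "d5", "d8"}))
# [(1.0, 0.25), (0.5, 0.25), (0.67, 0.5), (0.5, 0.5)]
```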


**Table 2.** Precision and recall values.


**Figure 8.** Performance of OntoEnter and the traditional search engine (TSE).

This trade-off between precision and recall can be observed using the precision-recall curve, and an appropriate balance between the two can be obtained. The precision-recall curves for the two algorithms are shown in **Figure 8**. Depending on whether high precision is required at the cost of recall, or high recall at the cost of lower precision, an appropriate algorithm can be chosen. In our case, we chose the system offering high precision, with some false positives allowed. The two precision-recall curves represent the performance levels of the two search engines: OntoEnter clearly outperforms the TSE in this domain example. Our system performs satisfactorily, with about a 95.2% rate of success in real cases.

Another important aspect of the design and implementation of OntoEnter is the speed with which the system provides its answers. During experimentation, heuristics and measures that are commonly adopted in information retrieval were used, and a statistical analysis was performed to determine the significance of the values in the results. While users were performing these searches, an application ran in the background on the server and captured the content of the written queries and the search results. We can establish that, in our domain, OntoEnter improves on the answer time and the averages of the traditional search engine. **Figure 9** shows a graph of these parameters, collected as part of the experiment.

We can establish that the system's speed improves on the procedure time and the averages of the traditional search engine: OntoEnter's results are 15.1% better in procedure time and 19.5% better in running time per search (in seconds) than those of traditional search engines.
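As an illustration of how such an answer-time comparison can be measured (the search callables and queries below are placeholders, not OntoEnter's actual API):

```python
import time

def average_answer_time(search_fn, queries) -> float:
    """Mean wall-clock seconds per query for a search callable."""
    start = time.perf_counter()
    for q in queries:
        search_fn(q)
    return (time.perf_counter() - start) / len(queries)

# Placeholder engines standing in for OntoEnter and a traditional search engine.
queries = ["feeder 12 alarms", "transformer T3 analog parameters"]
t_onto = average_answer_time(lambda q: time.sleep(0.01), queries)
t_tse = average_answer_time(lambda q: time.sleep(0.02), queries)

# Relative improvement of the kind reported above (e.g., 19.5% on running time).
improvement = 100 * (t_tse - t_onto) / t_tse
print(f"{improvement:.1f}% faster on average")
```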


**Figure 9.** OntoEnter search analysis report.

