**4. Results**

The performance of eight algorithms was studied, as indicated in **Figure 7** below. Of these eight, the weaker performers would be discarded in favor of the best-performing algorithm.


#### **Table 3.**

*Question to answer before invoking a particular algorithm.*

*Autonomous Update of a Dataset for Anomaly Detection Services in Elderly Care Smart House DOI: http://dx.doi.org/10.5772/intechopen.103953*

#### **Figure 6.**

*Flow chart of creation of custom datasets.*


#### **Figure 7.**

*Comparison of algorithm performance per dataset.*

During the experiment, an algorithm ranking was established, and the lower-ranked, less effective algorithms were eliminated. Reducing the number of options increases efficiency because unfavorable options are not computed; evaluating them would waste resources. Selecting the best option thus avoids spending computing power on irrelevant alternatives, and the less promising algorithms are removed as soon as they are identified. **Figure 7** summarizes the accuracies of the investigated methods.

In **Figure 7**, KNN is the most accurate algorithm for the Sisfall dataset, while XGBoost is the most accurate for both the MobiAct and Ucihar datasets. The three best-performing accuracy graphs are presented below: the first two are for fall classification and the third is for human activity classification.
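The ranking step described above can be sketched as follows. This is a minimal illustration, not the chapter's actual pipeline: synthetic data stands in for the Sisfall/MobiAct/Ucihar feature matrices, and scikit-learn's `GradientBoostingClassifier` stands in for XGBoost so the example has no external dependency (the experiment itself uses the `xgboost` package).

```python
# Sketch: rank candidate classifiers by train/test accuracy and keep
# only the best one, discarding the lower-ranked options.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

# Synthetic stand-in for a sensor-feature dataset.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

candidates = {
    "KNN": KNeighborsClassifier(),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(random_state=0),
    "GradBoost (XGBoost stand-in)": GradientBoostingClassifier(random_state=0),
}

ranking = {}
for name, clf in candidates.items():
    clf.fit(X_tr, y_tr)
    # Record (training accuracy, testing accuracy) per algorithm.
    ranking[name] = (clf.score(X_tr, y_tr), clf.score(X_te, y_te))

# Select by test accuracy; the rest are eliminated from further use.
best = max(ranking, key=lambda name: ranking[name][1])
for name, (tr, te) in sorted(ranking.items(), key=lambda kv: -kv[1][1]):
    print(f"{name:30s} train={tr:.4f} test={te:.4f}")
print("selected:", best)
```

In a real run, each candidate would be evaluated once per dataset, and only the winner is carried forward, so the discarded algorithms consume no further computation.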

The detection accuracy for the Sisfall dataset is 98.08% for the training session and 97.92% for the testing session, as shown in **Figure 8** below. This is the best accuracy among the classifiers evaluated on this dataset.

The detection accuracy for the MobiAct dataset is 99.23% for the training session and 98.84% for the testing session, as indicated in **Figure 9**. This is the best accuracy among the classifiers evaluated on this dataset.

The detection accuracy for the Ucihar dataset is 96.85% for the training session and 95.21% for the testing session, as shown in **Figure 10** below. At 96.85%, Ucihar yields the lowest of the three XGBoost accuracies.

The Sisfall test dataset has 14,783 ADL cases and 3775 fall cases; **Figure 11** shows the confusion matrix for the Sisfall test dataset during the training session.

#### **Figure 8.**

*Results for accuracy simulation using the Sisfall dataset.*

**Figure 9.** *Results for accuracy simulation using MobiAct dataset.*

The MobiAct test dataset has 16,562 ADL cases and 3984 fall cases; **Figure 12** shows the confusion matrix for the MobiAct test dataset.

The Ucihar test dataset has 1963 ADL cases and 464 Laying cases; **Figure 13** shows the confusion matrix for the Ucihar test dataset.
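A confusion matrix like those in Figures 11–13 can be produced directly from a classifier's predictions. The counts below are illustrative stand-ins, not the chapter's results; class 0 represents ADL cases and class 1 represents fall cases.

```python
# Sketch: building a 2x2 confusion matrix for a binary ADL/fall classifier.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0] * 8 + [1] * 4)            # 8 ADL cases, 4 fall cases
y_pred = np.array([0] * 7 + [1] + [1] * 3 + [0])  # one error in each class

cm = confusion_matrix(y_true, y_pred, labels=[0, 1])
tn, fp, fn, tp = cm.ravel()  # true neg, false pos, false neg, true pos
print(cm)  # rows: true class, columns: predicted class
```

With these toy labels the matrix is `[[7, 1], [1, 3]]`: seven ADL cases and three falls are classified correctly, with one error in each direction.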

From the confusion matrix, we can extract the accuracy, sensitivity, and specificity of our classifier. The higher each of these indicators, the better the performance of the classifier. However, accuracy must be considered in conjunction with specificity and sensitivity: a classifier must have both a high sensitivity and a high specificity to be rated as having superior performance. As seen in **Table 4** below, both sensitivity and specificity are above 78%, which qualifies as high performance. This shows that the option selected from the eight models performs quite well and can be used to develop the proposed system. As indicated in **Figure 9**, the MobiAct dataset accuracy, recorded at 99.23% for training and 98.84% for testing, is the best accuracy of all our possible classifiers.

**Figure 10.**

*Results for the accuracy simulation using the Ucihar dataset.*

**Figure 11.** *Confusion matrix for XGBoost classifier on Sisfall dataset.*

**Table 4** below indicates the performance of the most effective classifier (XGBoost) in the experiment.
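The indicators extracted from the confusion matrix follow the standard definitions. The sketch below uses illustrative counts, not the chapter's Table 4 values:

```python
# Sketch: deriving accuracy, sensitivity, and specificity from the four
# cells of a 2x2 confusion matrix.
def classifier_indicators(tn, fp, fn, tp):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate (recall)
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

# Illustrative counts: 1000 ADL cases, 200 fall cases.
acc, sens, spec = classifier_indicators(tn=950, fp=50, fn=20, tp=180)
print(f"accuracy={acc:.3f} sensitivity={sens:.3f} specificity={spec:.3f}")
```

High accuracy alone can hide a weak class: with very imbalanced ADL/fall counts, a classifier that predicts "ADL" for everything scores high accuracy but zero sensitivity, which is why all three indicators are reported together.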

The high accuracy indicates that the models were efficient and can be used for detection. However, when executing these algorithms, both speed and accuracy are factors in optimization. Given more time, an algorithm can perform better, but this must be weighed against timeliness when an anomaly is reported: if a high-accuracy decision takes too long to compute, the damage may already be done by the time the decision is made. **Figure 14** below indicates the time each algorithm took to complete a single task.
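The per-algorithm timing behind a figure like Figure 14 can be measured as follows. This is a minimal sketch with synthetic data and two stand-in models, not the chapter's measurement harness:

```python
# Sketch: timing a single fit+predict task per candidate algorithm.
import time
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

timings = {}
for name, clf in [("KNN", KNeighborsClassifier()),
                  ("DecisionTree", DecisionTreeClassifier(random_state=0))]:
    start = time.perf_counter()   # monotonic high-resolution clock
    clf.fit(X, y)
    clf.predict(X)
    timings[name] = time.perf_counter() - start

for name, seconds in timings.items():
    print(f"{name}: {seconds * 1000:.1f} ms")
```

In practice one would average several repetitions and time training and prediction separately, since for anomaly detection it is the prediction latency that determines how quickly a reported anomaly can be acted on.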

Moreover, these eight machine learning methods were compared with deep learning. Deep learning chains multiple models together to enhance performance; however, its computation costs were higher than those of the machine learning methods.
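A small multilayer network can serve as a sketch of the deep learning baseline. The architecture and hyperparameters below are assumptions for illustration only (the chapter does not specify them here), and scikit-learn's `MLPClassifier` stands in for a deep learning framework such as Keras:

```python
# Sketch: a small neural network evaluated with a train/validation split,
# for comparison against the classical machine learning candidates.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Two hidden layers; sizes are illustrative, not the chapter's model.
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
mlp.fit(X_tr, y_tr)
print("train acc:", mlp.score(X_tr, y_tr), "val acc:", mlp.score(X_te, y_te))
```

Training such a network iterates over the data many times, which is where the higher computation cost relative to the single-pass classical methods comes from.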

**Figure 12.** *Confusion matrix for XGBoost classifiers on MobiAct dataset.*

**Figure 13.** *Confusion matrix for XGBoost classifier for Ucihar dataset.*

The deep learning accuracy on the Sisfall selection dataset is recorded at 96.84% for the training session and 93.55% for the validation session, as indicated in **Figure 15** below.

The deep learning accuracy on the MobiAct selection dataset is recorded at 96.97% for the training session and 100.0% for the validation session, as indicated in **Figure 16** below.



#### **Table 4.**

*Indicators of classifier efficiency.*

**Figure 14.** *Recorded time consumed per tested algorithm.*

**Figure 15.** *Deep learning accuracy on the Sisfall dataset.*

**Figure 16.** *Deep learning accuracy on the MobiAct dataset.*

**Figure 17.** *Deep learning accuracy on the Ucihar dataset.*

The deep learning accuracy on the Ucihar selection dataset is recorded at 98.96% for the training session and 100.0% for the validation session, as indicated in **Figure 17**.
