**2. Related research**

Several researchers have addressed this problem. The works most relevant to this article's research aims are summarized below.

In article [3], a solution is proposed in which a group of agents works together to sense, communicate, and interpret sensor data. There are seven types of agents: communication, sensor, refining, reconstruction, interpretation, prevention, and cognitive agents. The agents are separated into two groups; each agent processes the sensor data, and the results are then aggregated to form a concrete decision. The first group of agents performed prediction activities in the smart house, with accuracies of 72.00% for machine-learning agents, 88.00% for expert-knowledge agents, and 91.33% for meta-prediction agents. The second group consisted of prevention agents, which achieved 100% accuracy; however, this result came from a simulation rather than a real-life experiment.

The researchers in article [9] present a monitoring system for senior citizens: if an anomaly is identified, the system sends an alarm to a caregiver. Monitored activities include waking up in the morning, preparing food, having breakfast, reading, working on a computer, having lunch, and napping. An example of an anomaly is the system detecting that the resident woke up and started walking around the house at 2 pm; because this is an unusual time for walking around the house and not part of her normal schedule, an alarm is raised. A mock apartment was designed for the experiment.

The study in article [10] presents a graphical comparison in which a heuristic technique is used to detect falls. A test suite based on the publicly available GBAD (graph-based anomaly detection) tool was defined. The method selects the best subgraph, or pattern, and compares it with the sensor data; each subgraph is compared to the full graph using a formula, and an abnormality can be identified from the graphs in the suite.

#### **2.1 Datasets with classification labels**

In article [11], a dataset (SisFall) and a fall detection algorithm are presented. The algorithm has five stages, in this order: acquire sensor data, pre-process the data, extract features, detect a fall, and finally call for help when a fall is detected. Four classification algorithms were used: decision tree (DT), logistic regression (LR), k-nearest neighbor (KNN), and support vector machine (SVM). The SisFall dataset contains 15 types of falls and 19 types of ADLs, performed by 23 young people aged 19 to 30 years and 15 elderly people aged 60 to 75 years. The sampling frequency was 200 Hz. The sensors used were two accelerometers and one gyroscope. Accuracy was derived from the relationship between true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN), as defined in Eq. (1) below.
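The five-stage pipeline described above can be sketched as a simple chain of functions. This is an illustrative skeleton only: the function names and the threshold-style detector are assumptions for demonstration, not the DT/LR/KNN/SVM classifiers trained in [11].

```python
# Illustrative five-stage fall-detection pipeline:
# sensor data -> pre-process -> feature extraction -> fall detection -> call for help.
import statistics

def preprocess(samples):
    # Remove an assumed constant offset (simple detrending).
    mean = statistics.fmean(samples)
    return [s - mean for s in samples]

def extract_features(samples):
    # Toy features: peak magnitude and spread of the signal.
    return {"peak": max(abs(s) for s in samples),
            "stdev": statistics.pstdev(samples)}

def detect_fall(features, peak_threshold=2.5):
    # Placeholder threshold rule; [11] trains classifiers instead.
    return features["peak"] > peak_threshold

def call_for_help():
    print("ALERT: fall detected, notifying caregiver")

def run_pipeline(raw_samples):
    features = extract_features(preprocess(raw_samples))
    if detect_fall(features):
        call_for_help()
        return True
    return False
```

A sharp spike in the signal (e.g. a sudden acceleration burst) trips the detector, while a flat signal passes through silently.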

$$Accuracy = \frac{TP + TN}{TP + FN + TN + FP} \times 100\tag{1}$$

The accuracies recorded were 99.02% for DT, 99.38% for LR, 99.91% for KNN, and 99.98% for SVM. The SVM classifier was the most accurate; it outperformed not only the other classifiers in this experiment but also selected previous benchmark works.
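As a check on Eq. (1), accuracy can be computed directly from the four confusion-matrix counts. The counts below are made-up example values, not figures from [11]:

```python
def accuracy(tp, tn, fp, fn):
    """Accuracy as a percentage, per Eq. (1)."""
    return (tp + tn) / (tp + fn + tn + fp) * 100

# Made-up example counts (not from [11]):
print(round(accuracy(tp=95, tn=90, fp=5, fn=10), 2))  # -> 92.5
```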

In article [12], a dataset (MobiFall) is presented. An experiment was conducted to standardize a dataset that can be used to determine whether sensor data contains a fall; such datasets are used in machine learning to benchmark methods and identify specified activities. The authors compared two fall detection systems, one threshold-based and one machine-learning-based; the machine-learning system had the higher accuracy. Four kinds of falls were studied: forward-lying, front-knees-lying, sideward-lying, and back-sitting-chair. Apart from falls, nine ADLs were also studied: standing, walking, jogging, jumping, stairs up, stairs down, sitting in a chair, car step-in, and car step-out. The sensor data came from three types of sensors: the accelerometer, the gyroscope, and orientation signals. The size of displacement, defined by the slope (SL), was measured using the formula given in Eq. (2) below.

$$\text{SL} = \sqrt{\left(\max_{x} - \min_{x}\right)^2 + \left(\max_{y} - \min_{y}\right)^2 + \left(\max_{z} - \min_{z}\right)^2} \tag{2}$$

where the subscripts denote the X-axis, Y-axis, and Z-axis displacements, respectively.
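Eq. (2) amounts to the Euclidean length of the per-axis displacement ranges. A minimal sketch, where the axis samples are hypothetical values:

```python
import math

def slope_sl(xs, ys, zs):
    """Size of displacement SL per Eq. (2): Euclidean length of the
    (max - min) range along each of the three axes."""
    return math.sqrt((max(xs) - min(xs)) ** 2
                     + (max(ys) - min(ys)) ** 2
                     + (max(zs) - min(zs)) ** 2)

# Hypothetical displacement samples per axis:
print(slope_sl([0.0, 3.0], [0.0, 4.0], [0.0, 0.0]))  # -> 5.0
```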

Fall detection accuracy was 98% and fall classification accuracy 68% using 10-fold cross-validation. With an alternative protocol, where two-thirds of the data are used for training and one-third for testing, fall detection accuracy was 98.74% and fall classification accuracy 68%. The dataset, called MobiFall, is publicly available from the Hellenic Mediterranean University (HMU) in Crete, Greece.
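The two evaluation protocols mentioned (10-fold cross-validation versus a fixed two-thirds/one-third split) can be sketched with plain index bookkeeping. This is a generic illustration of the protocols, not the authors' code:

```python
def kfold_indices(n, k=10):
    """Yield (train, test) index lists for k-fold cross-validation."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i, test in enumerate(folds):
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test

def holdout_indices(n, train_fraction=2 / 3):
    """Fixed split: first two-thirds for training, the rest for testing."""
    cut = int(n * train_fraction)
    return list(range(cut)), list(range(cut, n))

train, test = holdout_indices(30)
print(len(train), len(test))  # -> 20 10
```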

In article [13], a dataset (Ucihar) and six classifiers are used to detect falls, that is, to distinguish falls from ADLs. The six classifiers are k-nearest neighbor (k-NN), the least squares method (LSM), support vector machines (SVM), Bayesian decision making (BDM), artificial neural networks (ANNs), and dynamic time warping (DTW). Fourteen people performed the experiment for data acquisition; the trial included 20 falls and 16 ADLs. The formula used to determine the total acceleration is given in Eq. (3) below.

$$A_T = \sqrt{A_x^2 + A_y^2 + A_z^2} \tag{3}$$

where *Ax* is the acceleration along the x-axis, *Ay* along the y-axis, and *Az* along the z-axis.
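Eq. (3) is the Euclidean magnitude of the three acceleration components. A small sketch over a stream of hypothetical accelerometer readings:

```python
import math

def total_acceleration(ax, ay, az):
    """Total acceleration A_T per Eq. (3)."""
    return math.sqrt(ax ** 2 + ay ** 2 + az ** 2)

# Hypothetical stream of (Ax, Ay, Az) readings in units of g; a spike in
# A_T well above 1 g is the kind of event a fall detector looks for.
readings = [(0.0, 0.0, 1.0), (0.1, 0.2, 1.0), (2.0, 1.5, 2.1)]
peaks = [round(total_acceleration(*r), 3) for r in readings]
print(peaks)  # the resting reading (0, 0, 1) gives exactly 1.0 g
```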

A database was created containing fall activities and ADLs. All six algorithms performed at around 95% accuracy, with k-NN and LSM being the most accurate; the researchers suggest using these two algorithms for live data-stream detection. The dataset is accessible at the University of California, Irvine (UCI) Machine Learning Repository.

#### **2.2 Types and placement of sensors**

#### *Autonomous Update of a Dataset for Anomaly Detection Services in Elderly Care Smart House DOI: http://dx.doi.org/10.5772/intechopen.103953*

For the recording and collection of data, specific sensors are acquired. Several types of sensors exist. Wearable sensors can be placed on the body of interest, whereas environmental sensors are not embedded in the body of interest. Data collection should be done at a regular frequency. We shall assume that the sensors, wherever they are placed, collect the same type of data at the same frequency, on the premise that if the location of a sensor changes, the frequency and quality of the data might change as well.

In article [14], two types of sensors are defined: vision-based and sensor-based. Vision-based systems use cameras of diverse types; however, these sensors are not well accepted by the intended beneficiaries of the system. The other type, sensor-based, includes wearables, ambient sensors, and sensors attached to objects. This is the most used type, as it is considered less intrusive and is more acceptable to potential beneficiaries; an accelerometer and a gyroscope are two examples of wearable sensors. In article [15], three sensory systems are defined: wearables, vision, and ambient sensors, the last being a combination of visual sensors with sound and location sensors. Visual sensors yield richer data with the addition of location and sound sensors, and the human voice can also be used as input. A representation of the three types of sensors is shown below in **Figure 1**.

In article [16], a system of sensors is proposed in which environmental and wearable sensors are combined to identify the location and the motion of the subject person. Each sensor type alone achieved a significant level of activity classification accuracy: using only an accelerometer, the accuracy was 54.19%, while combining the accelerometer with environmental sensors raised it to 97.42%.

In research [17], the authors argue that the wearable approach is intrusive, as it requires actively dressing with the sensors, and that it may monitor only specific rather than extensive categories of activities; it may also require wearing the sensors continuously throughout the day to enable data sensing. The researchers therefore argue that environmental sensors are much better than wearable sensors, and they present experiments using a vision-based sensor. The activities involved include sitting, standing, walking, sleeping, getting assistance, using the bedside commode, and background activity. The camera produces two kinds of data frames, a thermal frame and a depth frame, and each frame is adjusted to its best resolution for better results. Although the authors acknowledge the intrusive and costly nature of vision sensors, they maintain that more can be done with visual sensors for detecting activities in the houses of seniors.

In article [18], voice is used as input to a smart house. The smart house must interpret the voice according to its training and invoke devices to perform a particular action; the purpose of the article was to provide voice input over a secure connection to a smart house with an IoT network. In article [19], the author notes that training is a difficult part of a detection system, because the training dataset can become obsolete when the individual it was trained on changes their usual pattern. This is mentioned for cases where a senior develops diabetes, causing concept drift; advancing age also leads to body deterioration, which results in changes in the gait characteristics of seniors [20]. A summary of sensors and classifiers from related works is shown in **Table 1**.

**Figure 1.**
*Types of sensors in the system design.*

### **Table 1.**
*Listing of sensors and classifier methods in selected previous research.*
