**A Self Organizing Map Based Motion Classifier with an Extension to Fall Detection Problem and Its Implementation on a Smartphone**

Wattanapong Kurdthongmee

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/51002

## **1. Introduction**


Developments and Applications of Self-Organizing Maps

Automatic classification of human motions is useful in many application areas. The characteristics of motion types and patterns can serve as an indicator of one's mobility level, latent chronic diseases and aging process [1]. The motion types can further be used to decide whether a person is at risk; i.e. a motion type, or the transition between motion types, may be risky and likely to cause a fall. Alternatively, the motion types may indicate a need for closer observation; e.g. jogging carries a higher risk and requires closer attention than a normal walk, especially for elderly people. Such an indication can support the monitoring features of a video surveillance system for elderly people. Given these application areas, combined with the need to automate the monitoring of elderly people in an 'aging society', the demand for automatic motion classification has increased. To observe and relate measurements to motion types, either an acceleration sensor or a video system is widely accepted as useful.

In this chapter, we present the application of the self-organizing map (SOM) to automatic classification of basic human motions, i.e. activities of daily living (ADL). To be specific, SOM is employed to solve the human motion classification problem. In our proposed approach, SOM is trained with motion parameters captured from a specific wearer and used to perform adaptive motion type clustering and classification. This results in a codebook with different clusters of motion types whose parameters are similar. Instead of using the codebook alone, an algorithm is proposed to distinguish between different basic motion types whose motion parameters are similar and are mapped to the same cluster, as clearly reflected by the codebook. In addition, the frequencies of occurrence of the different motion types resulting from SOM classifications are used to decide on the most probable motion type. The use of SOM makes the resulting application adaptable to wearers of different ages and motion conditions. Building on the successful classification of different motion types, we propose extending the algorithm to perform fall detection. In this case, SOM is trained and labeled with a majority of data from fall curves. As the characteristics of the fall curves are very similar to those of motion types with rapid acceleration transitions, the classified motion type is employed to support the differentiation between falls and other rapid-transition motion types. In summary, the contributions of our research are twofold. Firstly, a motion type classification algorithm employing SOM is proposed and validated for its correctness on continuous waveforms of mixed motion types. Secondly, the classified motion types are successfully used to distinguish between falls and other motion types.

© 2012 Kurdthongmee; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
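The frequency-of-occurrence decision mentioned above can be sketched as a simple majority vote over a window of per-sample SOM labels. This is a hedged illustration, not the chapter's exact implementation; the function name, window contents and label strings are assumptions.

```python
from collections import Counter

# Illustrative majority-vote decision: each raw SOM classification in a sliding
# window casts one vote, and the most frequent label is reported as the most
# probable motion type. Window length and label values are assumed for the demo.

def most_probable_motion(window_labels):
    """Return the motion-type label that occurs most often in the window."""
    counts = Counter(window_labels)
    return counts.most_common(1)[0][0]

# e.g. a noisy run of per-sample SOM winners, mostly 'walk'
print(most_probable_motion(["walk", "walk", "jog", "walk", "fall", "walk"]))  # walk
```

Voting over a window smooths out isolated misclassifications of single acceleration samples, which is the point of using occurrence frequency rather than a single winner.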


The rest of this chapter is organized as follows. Section 2 presents related work, focused on sensor-attachment-based motion classification and fall detection algorithms. The background of SOM, data capturing, and the SOM training and labeling stages are detailed in Section 3. The proposed algorithms for motion type classification and fall detection are then explained in Section 4. In Section 5, the validation results from the implementation of the proposed algorithms are given and discussed, together with the extension of the algorithm to the fall detection problem. The implementation of the algorithm on the platform of choice for practical use is presented in Section 6. Finally, the chapter is concluded in Section 7.

## **2. Related Work**

In this section, a survey of previously proposed motion classification and fall detection algorithms is given. In general, previously proposed motion classification algorithms fall into two categories: acceleration sensor based detection and video processing based detection. Motion classification using acceleration sensors has been widely studied and has resulted in two different classification schemes: threshold-based and statistical classification [1]. Threshold-based motion classification takes advantage of known knowledge about the movements to be classified. It uses a hierarchical algorithm structure, similar to a decision tree, to discriminate between activity states. A set of empirically derived thresholds is required for each classification subclass. A systematic approach for motion classification based on a hierarchical decision tree is presented in [2]. A generic classification framework presented in [3] consists of a hierarchical binary tree for classifying postural transitions, falling, walking, and other movements using signals from a wearable triaxial acceleration sensor. This modular framework also allows modifying individual classification algorithms for particular purposes.

Tilt sensing is a basic function provided by acceleration sensors, which respond to gravity or constant acceleration. Therefore, human postures such as upright and lying can be distinguished according to the magnitude of the acceleration signals along the sensitive axes of only one acceleration sensor worn at the waist or torso [4, 6]. However, the single-sensor approach has difficulty distinguishing between standing and sitting, as both are upright postures, although a simplified scheme with a tilt threshold to distinguish standing and sitting has been proposed [4]. Standing and sitting postures can be distinguished by observing the different orientations of body segments to which multiple acceleration sensors are attached. For example, two acceleration sensors can be attached to the torso and thigh to distinguish standing and sitting postures from static activities [7, 8, 9]. Trunk tilt variation due to sit-stand postural transitions can be measured by integrating the signal from a gyroscope attached to the chest of the subject [5]. Sit-stand postural transitions can also be identified from the patterns of vertical acceleration measured by an acceleration sensor worn at the waist [6].
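The threshold-based tilt sensing surveyed above can be sketched as follows. This is a hedged illustration of the general idea, not any cited paper's method: the axis convention (sensor *y*-axis along the trunk) and the 60-degree lying threshold are assumptions.

```python
import math

# Illustrative tilt sensing with a single body-worn accelerometer: at rest the
# signal is dominated by gravity, so the trunk's inclination from vertical can
# be estimated from the axis components. Axis convention and threshold are
# assumptions for this sketch, not values from the surveyed papers.

def tilt_deg(ax, ay, az):
    """Angle (degrees) between the sensor's y-axis (assumed along the trunk) and gravity."""
    g = math.sqrt(ax * ax + ay * ay + az * az)
    return math.degrees(math.acos(ay / g))

def posture(ax, ay, az, lying_threshold_deg=60.0):
    """Crude upright/lying decision from a single static reading."""
    return "lying" if tilt_deg(ax, ay, az) > lying_threshold_deg else "upright"

print(posture(0.0, 9.81, 0.0))  # upright: gravity along the trunk axis
print(posture(9.81, 0.0, 0.0))  # lying: gravity perpendicular to the trunk
```

As the paragraph notes, such a scheme cannot separate standing from sitting, since both yield a near-zero tilt angle; that limitation motivates multi-sensor or gyroscope-based approaches.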


Acceleration signals can also be used to detect walking in ambulatory movement. Walking can be identified by frequency-domain analysis [4, 10]. It is characterized by a variance of over 0.02 g in vertical acceleration and a frequency peak within 1–3 Hz in the signal spectrum [10]. The discrete wavelet transform has been used to distinguish walking on level ground from walking on a stairway [11].

Motion classification using statistical schemes utilizes a supervised machine learning procedure, which associates an observation (or features) of movement with possible movement states in terms of the probability of the observation. Such schemes include, for example, k-nearest neighbor (kNN) classification [8, 12], support vector machines (SVM) [13, 14], the Naive Bayes classifier [15, 16], the Gaussian mixture model (GMM) [17] and the hidden Markov model (HMM) [18, 19]. The Naive Bayes classifier determines activities according to the probabilities of the signal patterns of the activities. In the GMM approach, the likelihood function is not a single Gaussian distribution; the weights and parameters describing the probabilities of activities are obtained by the expectation-maximization algorithm. Transitions between activities can be described as a Markov chain that represents the likelihood (probability) of transitions between possible activities (states). The HMM is applied to determine unknown states at any time from observable activity features (extracted from accelerometry data) corresponding to the states. After the HMM is trained on example data, it can be used to determine possible activity state transitions.

From our point of view, the previously proposed approaches have several drawbacks. Differences in the collected data among different persons, or even within the same person at different times and under variable sensor sampling rates, which are very common, are not taken into account. The previously proposed motion classifiers are almost all pre-programmed systems whose decision thresholds are derived from limited samples. This is in contrast to our proposed approach, which relies on SOM to make the classifier adaptable to a particular wearer. The movement nature of the wearer is taken into consideration and used for training and labeling and, in turn, for fall detection and alerting of the wearer. The details of our proposed algorithm are given in the next sections.

## **3. Background, Data Capturing and SOM Training and Labeling Stages**

In this section, we briefly describe the background of SOM. The procedures for capturing data, the SOM training and labeling stages, and the proposed motion classification algo‐ rithm are then detailed.

#### **3.1. A Brief Introduction to SOM**

In general, SOM is one of the most prominent artificial neural network models adhering to the unsupervised learning paradigm [20]. It has been employed to solve problems in a wide variety of application domains; its applications in the engineering domain are elaborately surveyed in [21]. Generally speaking, the SOM model consists of a number of neural processing elements (PEs), collectively called the 'codebook'. Each PE *i*, called a 'codeword', is assigned an *n*-dimensional weight vector *mi*, where *n* is the dimension of the input data. During the training stage, iteration *t* starts with the selection of one input datum *p(t)*. *p(t)* is presented to the SOM, and each codeword determines its activation by means of the distance between *p(t)* and its own weight vector. The codeword with the lowest activation is referred to as the winner, *mc*, or the best matching unit (BMU) at learning iteration *t*, i.e.:

$$m\_c(t) = \min\_i \left\| p(t) - m\_i(t) \right\| \tag{1}$$


The Euclidean distance (ED) is one of the most popular ways to measure the distance between *p(t)* and a codeword's weight vector *mi(t)*. It is defined by the following equation:

$$d\left(p(t), m\_i(t)\right) = \sqrt{\left(p(t)\_1 - m\_i(t)\_1\right)^2 + \left(p(t)\_2 - m\_i(t)\_2\right)^2 + \dots + \left(p(t)\_n - m\_i(t)\_n\right)^2} \tag{2}$$

Finally, the weight vector of the winner codeword as well as the weight vectors of selected codewords in the vicinity of the winner are adapted. This adaptation is implemented as a gradual reduction of the component-wise difference between the input data and weight vec‐ tor of the codeword, i.e.:

$$m\_i(t + 1) = m\_i(t) + \alpha(t) \, h\_{ci}(t) \, \left[ p(t) - m\_i(t) \right] \tag{3}$$

Geometrically speaking, the weight vectors of the adapted codewords are moved a bit towards the input data. The amount of movement is guided by a learning rate, *α*, which decreases with time. The number of codewords affected by this adaptation is determined by a neighborhood function, *hci*, which also decreases with time. This movement decreases the distance between these codewords and the input data, so their weight vectors become more similar to it, and the respective codeword is more likely to be the winner at future presentations of this input data. Adapting not only the winner but also a number of codewords in its neighborhood leads to a spatial clustering of similar input patterns in neighboring parts of the SOM. Thus, similarities between input data presented in the *n*-dimensional input space are mirrored within the two-dimensional output space, the SOM map. The training stage terminates after the final SOM map is labelled with some known conditions.

The classification stage is very similar to the learning stage, with some exceptions. There is no need to adapt the winner codeword and its neighbours on the SOM map with respect to the input data. Instead, the label of the winner codeword corresponding to the input data is returned and used for further interpretation; e.g. the input data is mapped to the codeword carrying the jogging motion type label.
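The training, labeling and classification stages described by Eqs. 1–3 can be sketched as follows. This is a minimal illustration, not the chapter's Matlab implementation: the 1-D grid, the decay schedules for *α(t)* and the neighborhood width, and labeling each codeword by its nearest training sample are all assumptions made for the sketch.

```python
import numpy as np

# Minimal SOM sketch following Eqs. 1-3: Euclidean-distance BMU search
# (Eqs. 1-2) and winner/neighbourhood adaptation (Eq. 3). Grid shape and
# decay schedules are illustrative assumptions.

rng = np.random.default_rng(0)

class SOM:
    def __init__(self, n_codewords, dim):
        self.weights = rng.random((n_codewords, dim))  # codebook, one m_i per row
        self.labels = np.full(n_codewords, -1)         # motion-type label per codeword
        self.grid = np.arange(n_codewords)             # 1-D output-space positions

    def bmu(self, p):
        # Eqs. 1-2: winner = codeword with smallest Euclidean distance to p(t)
        return int(np.argmin(np.linalg.norm(self.weights - p, axis=1)))

    def train(self, data, epochs=20):
        t_max = epochs * len(data)
        t = 0
        for _ in range(epochs):
            for p in data:
                alpha = 0.5 * (1.0 - t / t_max)                 # learning rate, decays with t
                sigma = max(1.0, len(self.grid) / 2 * (1.0 - t / t_max))
                c = self.bmu(p)
                # Gaussian neighbourhood h_ci, shrinking over time
                h = np.exp(-((self.grid - c) ** 2) / (2 * sigma ** 2))
                # Eq. 3: m_i(t+1) = m_i(t) + alpha(t) * h_ci(t) * [p(t) - m_i(t)]
                self.weights += alpha * h[:, None] * (p - self.weights)
                t += 1

    def label_map(self, data, labels):
        # labeling stage (assumed scheme): each codeword takes the label of
        # its nearest training sample
        for i, w in enumerate(self.weights):
            j = int(np.argmin(np.linalg.norm(data - w, axis=1)))
            self.labels[i] = labels[j]

    def classify(self, p):
        # classification stage: no adaptation, just return the winner's label
        return int(self.labels[self.bmu(p)])
```

Note that `classify` performs only the BMU search and label lookup, mirroring the text: the adaptation step of Eq. 3 is skipped once training is complete.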

#### **3.2. The Motion Data Capturing and Preparation Procedures**

Our proposed algorithm relies on motion data captured from different basic daily activity types for training the SOM. To obtain such data efficiently and economically, an application was developed to run on a smartphone instead of relying on an embedded data acquisition unit with a built-in acceleration sensor. A smartphone was targeted because it provides all the necessary hardware for our purposes. A smartphone with the Android operating system was selected as it is an open platform device [22]. The platform provides support for interfacing with a built-in triaxial acceleration sensor via Application Program Interfaces (APIs). The developed application performs the following roles during the data capturing stage:

**•** Capture motion data from the triaxial acceleration sensor at as fixed a sampling period as possible,

**•** Wirelessly transmit the captured motion data to the computer server via Bluetooth.


**Figure 1.** (a) The main screen of the application running on the mobile phone and (b) the screen captured of a per‐ sonal computer application.

The Mobile Sensor Actuator Link (MSAL) API was employed to implement the applications, running both on a smartphone and on a personal computer, with the required functionalities. The API is open source and freely available from Ericsson Laboratory [23]. Figure 1 shows the main screen of the application running on the smartphone and a screen capture of the personal computer application. Note that the waveforms shown on the application screen in this case are the three parameters along the *x, y* and *z*-axes from the acceleration sensor and the gyroscope. Only the parameters from the acceleration sensor are used in this experiment.



In the course of our experimentations, subjects were requested to perform the following motion types, which cover their basic daily activities, while the smartphone with the previously detailed application installed was attached to their waist: {jogging (0), normal walk (1), walk upstairs and downstairs (1), stand-sit-stand (2), fall on the floor with support of a bed (3), different types of fall on the arm chair (4), fast walk (5)}. The numbers in parentheses are the annotations of the motion types on the codebook resulting from the SOM labeling stage (to be described in Section 3.3). The first four and the last motion types were continuously captured for one minute, and the fifth and sixth ones were repeated 10 times each. The captured data for each motion type was saved by the server-side application to a separate file. The captured motion data files for the first four and the last motion types required less preprocessing. This comes from the fact that these motion types could be performed almost continuously by the subjects; as a result, the motion waveforms representing them have a nearly continuous form. These data files only needed to be converted from the raw data (separate *ax, ay,* and *az*) to the vector summed acceleration *a* form by applying Eq. 4, which makes the data independent of the attachment orientation of the smartphone during the data collection stage. Then, the time stamp *t* was attached to the vector summed acceleration data point by point to form the *(ai, ti)*-array. Figure 2 shows the motion waveforms in the vector summed acceleration form for 30 second periods of all captured motion types.

**Figure 2.** The motion waveforms in the vector summed acceleration form for 30 second periods of (a) jogging, (b) normal walk, (c) sit-stand-sit, (d) fall, (e) fall on the arm chair and (f) fast walk.

In contrast to the first group of captured data described earlier, the captured data files for the fifth and sixth motion types, which are the fall on the floor with support of bed and dif‐ ferent types of fall on the arm chair, were required to preprocess. This was done in order to extract only motion waveforms, whose forms are *(ai , ti )*, which were relevant to the falls.

$$a = \sqrt{a\_x^2 + a\_y^2 + a\_z^2} \tag{4}$$

Followings are our findings from several experimentations during the training stage:


$$\left(a\_{i\prime}, a\_{i+1\prime}, a\_{i+2\prime}, \frac{(a\_{i+1} - a\_i)}{(t\_{i+1} - t\_i)}, (a\_{i+1} - a\_i)\right) \tag{5}$$

All of the preprocessed data files were converted, from the vector summed acceleration and time stamp format, to be in the form defined by Eq. 5 on a segment by segment basis. At this point, the procedures taken to create the training data set ready for the training stage can be summarized as follows:
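The conversions of Eq. 4 and Eq. 5 can be sketched as follows. This is an illustrative Python reconstruction (the chapter's scripts are in Matlab), and the helper names are our own:

```python
import math

def vector_sum(samples):
    """Eq. 4: combine raw (ax, ay, az) samples into the orientation-independent
    vector summed acceleration a."""
    return [math.sqrt(ax ** 2 + ay ** 2 + az ** 2) for ax, ay, az in samples]

def to_segments(a, t):
    """Eq. 5: represent each segment by its origin a[i], the two following
    endpoints a[i+1] and a[i+2], and the slope and length of the segment."""
    segments = []
    for i in range(len(a) - 2):
        slope = (a[i + 1] - a[i]) / (t[i + 1] - t[i])
        length = a[i + 1] - a[i]
        segments.append((a[i], a[i + 1], a[i + 2], slope, length))
    return segments
```

Because the raw axes are collapsed into a magnitude before segmentation, the feature vectors do not depend on how the smartphone is oriented at the waist.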


personal computer application. It is noted that the waveforms shown on the application screen in this case are the three parameters along the *x, y* and *z*-axes from the acceleration sensor and the gyroscope. Only the parameters from the acceleration sensor are used in this experiment.

#### **3.3. The SOM Training and Labeling Stages**

In order to train and label the SOM with the already prepared training data, the algorithms are implemented as Matlab scripts. We keep the implementations as simple as possible, as the algorithms will finally be exploited on the smartphone platform. Our training algorithm behaves by:

**•** visiting each member of the training data, *T*, of size *k*, once,

**•** using a simple learning function with a linearly decayed learning rate. The learning rate is allowed to change 256 times with a linear decay rate of 1/256; this is equivalent to changing the learning rate whenever *k*/256 members of the training data have been visited,

**•** using a simple neighborhood function of the form:

$$\beta(r,\ c) = \alpha - \left(\sqrt{(r - r\_{BMU})^2 + (c - c\_{BMU})^2}\right) \frac{\alpha}{2} \tag{6}$$

where *β(r, c)* is the learning coefficient of the codeword at *(r, c)* and *α* is the learning coefficient of the best matching unit (BMU) codeword, whose index is at *(rBMU, cBMU)*. Only a positive value of *β(r, c)* is used to update the weight of a codeword.

For the labeling stage of the algorithm, the training data *T* is once again presented to the codebook resulting from the training stage. The BMU, whose index is at *(rBMU, cBMU)*, is searched for a given input segment *Sk*, a member of *T*, and the matching frequency of the codeword at *(rBMU, cBMU)* is then incremented. When every *Sk* in *T* has been visited, the algorithm labels each codeword with the motion type whose frequency is the highest. It is noted that the matching frequency of a codeword is later used to decide the motion type of an unknown motion type segment: if an unknown motion type segment is found to match the codeword at *(rm, cm)*, its motion type is the motion type of the codeword at *(rm, cm)*.

Figure 3 illustrates a codebook, along with the projection maps on the training data planes, for a configuration of *5×7* codewords. It can be clearly observed that the codewords are clustered very well. The separations between the clusters of slow transition motion types (normal walk (1), walk upstairs and downstairs (1), stand-sit-stand (2)) and fast transition motion types (jogging (0), fall on the floor with support of bed (3), different types of fall on the arm chair (4), fast walk (5)) are clearly noticeable. It does, however, show some imperfect clusters with mixed fast transition motion types; i.e. the cluster at the bottom left of the map consists of these motion types: jogging (0), fall on the floor with support of bed (3), different types of fall on the arm chair (4) and fast walk (5). This can be interpreted to mean that the fast transition motion types have some common motion parameters, which is confirmed by a section of comparative acceleration waveforms of the fast and slow transition motion types in Figure 4. At this point, it can be concluded that the codebook cannot be employed alone for the problem of motion classification. In the next section, we detail our proposed algorithm that makes use of the queried results from the SOM to improve the correctness of motion classification and to distinguish between each motion type in the fast transition group. The algorithm is also further designed to support the detection of falls.

## **4. Our Proposed Algorithms**
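Under the training and labeling rules described in Section 3.3 (one pass over the training data, a learning rate decayed linearly in 256 steps, the neighborhood function of Eq. 6, and labeling by matching frequency), the two stages can be sketched as below. This is our minimal Python reconstruction, not the authors' Matlab scripts; the grid size, random initialization and data layout are assumptions:

```python
import math
import random

def best_matching_unit(codebook, s):
    """Return the (row, col) of the codeword closest to segment s."""
    best, best_dist = (0, 0), float("inf")
    for r, row in enumerate(codebook):
        for c, w in enumerate(row):
            dist = sum((wi - si) ** 2 for wi, si in zip(w, s))
            if dist < best_dist:
                best, best_dist = (r, c), dist
    return best

def train_som(rows, cols, data, alpha0=0.5, dim=5):
    random.seed(0)  # assumed: random initial codewords
    codebook = [[[random.random() for _ in range(dim)]
                 for _ in range(cols)] for _ in range(rows)]
    k = len(data)
    for i, s in enumerate(data):
        # learning rate changes 256 times, decaying linearly (rate 1/256)
        alpha = alpha0 * (1 - (i * 256 // max(k, 1)) / 256)
        rb, cb = best_matching_unit(codebook, s)
        for r in range(rows):
            for c in range(cols):
                # Eq. 6; only a positive coefficient updates the codeword
                beta = alpha - math.hypot(r - rb, c - cb) * alpha / 2
                if beta > 0:
                    w = codebook[r][c]
                    for d in range(dim):
                        w[d] += beta * (s[d] - w[d])
    return codebook

def label_som(codebook, data, labels):
    """Label each codeword with the motion type that matched it most often."""
    freq = {}
    for s, m in zip(data, labels):
        rc = best_matching_unit(codebook, s)
        freq.setdefault(rc, {}).setdefault(m, 0)
        freq[rc][m] += 1
    return {rc: max(f, key=f.get) for rc, f in freq.items()}
```

Querying an unknown segment then reduces to finding its BMU and reading that codeword's label, which is exactly how Algorithm 1 obtains *segmentMotionType*.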

Our algorithm is proposed to compensate for the imperfect clustering of the codebook mentioned earlier. It is illustrated in Algorithm 1. There are two inputs to the algorithm: the codebook *C* and an unknown motion type segment *St*, i.e. the segment to be queried for its motion type, with the form defined by Eq. 5. The codebook *C* is the one obtained from the training and labeling stages and is assigned to the algorithm only once. It can, however, be changed to be specific to the motion conditions of users in practical usage. For a motion segment *St*, the algorithm finds the motion type, *segmentMotionType*, by querying the codebook. The set of query results is: *{typeJogging, typeWalk, typeSitStandSit, typeFall, typeFallOnArmChair, typeFastWalk}*. Instead of using the query result right away, which is very prone to error due to the imperfect clustering, the algorithm only registers the current segment motion type. This is performed by incrementing the counter at the corresponding index of the motion type in the *motionFrequency*-array. The algorithm keeps performing in this way for the new incoming segments, without making a decision, for a period of *samplePeriod*. At the end of the period, the motion type is concluded to be the one whose occurrence frequency, or counter value, within the *motionFrequency*-array is the highest. With respect to the algorithm, these processes are performed within the first if-condition. Upon finishing the decision making of the motion type, the algorithm clears the *motionFrequency*-array to ready it for the next period.
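The frequency-count decision in the first if-condition can be sketched as below; the names *motionFrequency* and *samplePeriod* follow the text, while the class structure and the constant are our own:

```python
NUM_TYPES = 6  # jogging (0) .. fast walk (5), as annotated in Section 3.2

class MajorityClassifier:
    """Register per-segment codebook query results and decide only once per
    samplePeriod, taking the most frequent type as the period's motion type."""

    def __init__(self, sample_period):
        self.sample_period = sample_period
        self.motion_frequency = [0] * NUM_TYPES
        self.count = 0

    def register(self, segment_motion_type):
        # only register the (possibly noisy) query result, do not decide yet
        self.motion_frequency[segment_motion_type] += 1
        self.count += 1
        if self.count >= self.sample_period:
            # conclude the type whose occurrence frequency is the highest
            decided = max(range(NUM_TYPES),
                          key=self.motion_frequency.__getitem__)
            self.motion_frequency = [0] * NUM_TYPES  # clear for next period
            self.count = 0
            return decided
        return None
```

A single mislabeled segment therefore cannot flip the reported motion type; it is outvoted by the other segments registered in the same period.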

The algorithm described so far can be used to decide between the motion types in the fast and slow transition groups. It, however, fails to distinguish between a jogging and a fall. The failure comes from the fact that these two motion types have fairly similar segment characteristics. This is confirmed by the codebooks (see Figure 3, regions A, B and C), which show that the codewords of these two motion types tend to be close neighbors of each other. The algorithm is, therefore, incorporated with additional features to differentiate between these two motion types. Let's observe the second if-condition in Algorithm 1. First of all, the algorithm checks whether or not the segment under consideration is a falling edge, whose slope is negative. If it is, the algorithm further uses the codebook query result, *segmentMotionType*, once again to check whether or not the segment is of the fall type. Both testing results are used in combination with the current determination of the motion type obtained from the first section of the algorithm. If the determined motion type is jogging, it is likely that the segment under consideration is also part of jogging. Otherwise, the segment of the fall type is detected alone, and a true fall is likely to have occurred under the additional condition that the endpoint of the segment, *at*, is less than a threshold.
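The chain of checks in the second if-condition can be sketched as follows: a fall is only reported for a negative-slope segment whose codebook label is the fall type, when the current period decision is not jogging and the segment endpoint is below a threshold. The threshold value here is a placeholder, not one taken from the chapter:

```python
TYPE_JOGGING, TYPE_FALL = 0, 3  # annotations from Section 3.2

def is_fall(segment, segment_motion_type, current_motion_type,
            endpoint_threshold=0.6):
    """segment = (a_i, a_i1, a_i2, slope, length), per Eq. 5.
    endpoint_threshold is an assumed placeholder value."""
    _, a_endpoint, _, slope, _ = segment
    if slope >= 0:                        # only falling edges qualify
        return False
    if segment_motion_type != TYPE_FALL:  # the codebook must also say "fall"
        return False
    if current_motion_type == TYPE_JOGGING:
        return False                      # likely just a jogging segment
    return a_endpoint < endpoint_threshold
```

The jogging veto is what suppresses the false alarms caused by the mixed jogging/fall cluster visible at the bottom left of the codebook in Figure 3.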

In the next section, the experimental results after applying the proposed algorithm to the continuously captured motion data are presented.

**Figure 3.** The codebook for a configuration of *5×7* codewords: (a) the original map, (b) - (f) the map projections on the origin of a segment (*ai*), the endpoint of a segment (*ai+1*), the endpoint of the adjacent segment (*ai+2*), the slope of a segment, and the length of a segment, respectively. It is noted that the parameter value of a codeword is represented by a color whose value is shown on the right hand side bar.

**Figure 4.** A section of comparative acceleration waveforms of the fast and slow transition motion types.

## **5. Experimental Results and Discussions**

During the course of our experimentations, the proposed algorithm was coded in Matlab script and its correctness was verified with two sets of real, continuously captured motion data. The verification was performed on a personal computer. We avoided verifying the algorithm on separate motion type waveforms and reporting the false positive (FP) and false negative (FN) results. It is true that this approach is commonly used in previous publications. From our point of view, however, it makes more sense to perform verifications on continuous motion data, as this ensures that the proposed algorithm can be practically used. In addition, the weakness of almost all proposed algorithms in this application domain is that they do not point out clearly how to distinguish between a fall and jogging or fast walking. They only focus on finding a representative (average and standard deviation) threshold slope of falls, from different people with different types of fall, in order to make a correct decision about a fall. Our verification approach, however, has a weakness in that it lacks standard motion data against which to benchmark the proposed algorithm. This left us no choice apart from employing our captured motion data. The following are the details of our motion data sets.

**Algorithm 1:** A fall detection algorithm employing SOM and motion segments transition: Rising edge.

**•** Set 1: 78 seconds of normal walk, 41 seconds of fast walk, 37.5 seconds of jogging and 12 repeated falls, with 10 seconds between each consecutive pair of falls.


**•** Set 2: Normal walk, fast walk, jogging, normal walk, fast walk, jogging, normal walk, fast walk, pause, fast walk, jogging and 10 repeated falls.

The experimental results with respect to these two sets of motion data are illustrated in Figure 5. In the figure, the green waveforms are the captured motion data in the vector summed acceleration form. The red waveforms are the motion types determined by the algorithm. The blue peaks are the positions of falls detected by the algorithm. The red waveforms and the blue peaks are automatically inserted by the algorithm during operation. The numbers labelled on the bottom left of the figures represent the motion type scale; i.e. the first motion type detected by the algorithm in Figure 5(a) is "1", which is a normal walk. It is noted that we avoid showing the motion type whose number is "0" (jogging), as it overlaps with the x-axis.

**Figure 5.** Experimental results after applying the algorithm to two different data sets: the green waveforms are the captured motion data, the red waveforms are the motion types and the blue peaks are the detected falls.

With respect to Figure 5, it can be seen that the algorithm, in conjunction with the configuration of the codebook detailed in Section 3.3, is capable of detecting the changes in motion type. It is capable of detecting all fall curves in both motion data sets. It, however, cannot correctly handle the case of jogging; that is to say, the algorithm reports the fast walk motion type instead of jogging. This is quite obvious in the middle of Figure 5(a), where the jogging motion type is captured. An interesting point about the detection results is found after magnifying the detected parts of the fall curves, as illustrated in Figure 6. Clearly, it can be seen that the algorithm is able to decide that a fall occurs almost one segment ahead of the first impact. This is equivalent to approximately 133 ms with respect to the data sampling period used during the data capturing stage. This is very useful from the point of view that a system could receive a pre-impact alert signal and act to prevent a severe impact. This is an open question for our future research project.


**Figure 6.** Magnifying the detected fall curves to show that the algorithm can make decision before the first impact.

After verifying the proposed algorithm with different configurations of the SOM map, we found a configuration that gave rise to an acceptable correctness in terms of motion classification results. This configuration has *8×8* codewords, and the data preparation procedure differs from the previously described one in that it used only one, instead of three, data sets from the motion type of fall on the floor with support of bed. Figure 7 shows the classification results with respect to both motion data sets. The similarity between the verification results for both data sets is that the algorithm can no longer detect falls. From Figure 7(a), the algorithm is capable of detecting even a short period of normal walk with unusual waveforms, which is the first peak of the fast walk motion type. The fluctuations between normal and fast walks just a moment before jogging are also detected and correctly reported. In addition, the algorithm gives rise to a correct report during the jogging motion type. The results in Figure 7(b) further confirm the correctness. The first two joggings are successfully detected. All motions ahead of the first jogging, between the first and second joggings, and between the second and third joggings are correctly recognized and reported.

At this point, we try to answer the question of why the two configurations of the SOM map lead to different detection results. It can be said that the first configuration of training data is designed with the aim of making the abrupt changes of a segment (fall segments, to be specific) detectable. In addition, the resulting SOM map must be capable of providing the current motion type to the algorithm, because the motion type of either jogging or fast walk is used to reject a misjudgment of the fall type. This is in contrast to the second configuration, which is designed to distinguish among the continuous types of motion. In the first configuration, we provide the SOM with a majority of data from multiple sets of fall curves. This results in a SOM map in which the majority of codewords are labelled with the fall type. In the second configuration, on the other hand, the data set from the fall motion type is in the minority, which makes the resulting map have only a small number of codewords labelled with the fall type while increasing the number of codewords labelled with the other motion types. When it comes to querying a segment, the possibility of matching the segment to a codeword of the fall type is therefore low.

Another explanation is that when the data from the motion type of fall on the floor with support of bed is in the minority, the number of fall curves within the data is also reduced. This comes from the fact that the data from this motion type does not consist of fall curves only; it also contains some periods of standing, preceding all fall curves, and sit to stand transitions. The sit to stand transition could have segments similar to walking. This lowers the number of codewords of the fall type and increases the chance of matching a queried segment to another motion type.

> **Figure 7.** Experimental results after applying the algorithm to two different data: : the green waveforms are the cap‐ tured motion data, the red waveforms are the motion types. In this case, the codebook is prepared to make the algo‐

> A Self Organizing Map Based Motion Classifier with an Extension to Fall Detection Problem and Its Implementation on

a Smartphone

175

http://dx.doi.org/10.5772/51002


The second configuration is designed to distinguish among the continuous types of motion. In the first configuration, we provide the SOM with a majority of data from multiple sets of fall curves. This results in a SOM map in which the majority of codewords are labelled with the fall type. In the second configuration, by contrast, the data set from the fall motion type is in the minority, so the resulting map has only a small number of codewords labelled with the fall type, while the number of codewords labelled with the other motion types increases. When it comes to querying a segment, the possibility of matching a segment to a codeword with the fall type is therefore low.

Another explanation is that when the data from the motion type of falling on the floor with the support of a bed is in the minority, the number of fall curves within the data is also reduced. This comes from the fact that the data from this motion type does not consist of fall curves only; it also contains some periods of standing, preceding all fall curves, and a sit-to-stand transition. The sit-to-stand transition can have segments similar to walking. This lowers the number of codewords with the fall type and increases the chance of matching a queried segment to another motion type.

**Figure 7.** Experimental results after applying the algorithm to two different data sets: the green waveforms are the captured motion data, the red waveforms are the motion types. In this case, the codebook is prepared to make the algorithm correctly classify motion types.

## **6. The Implementation for Practical Usability**

With the positive verification results of the proposed algorithm detailed in the previous section, we then moved ahead to implement the algorithm on a suitable platform. The aim of this implementation is to make the algorithm applicable in real life. We avoided employing a self-developed embedded system as the platform of choice, because it is fairly difficult to design an embedded system that is unobtrusive, consumes little power, has a small size and still offers high processing power. We found that the smartphone platform used during the data collection stage of our research is the most appropriate platform, as detailed earlier. The algorithm is therefore integrated with some previously developed parts, namely the acceleration sensor interfacing and the background-mode sensor sampling. In the prototype, the algorithm is partitioned into two parts: the background-mode acceleration sensor query process, and the main user interface application with features to support the training and labeling stages and system settings. The background process is programmed to execute at a fixed interval of 100 ms. In reality, this is only the best-case sampling period, as the operating system may be busy servicing other processes. This does not matter, however, since the algorithm is designed to tolerate this limitation. Once the acceleration parameters have been sampled, the process dispatches them to the main user interface application via inter-process communication. This is done even when the main user interface application is forced to operate in the background mode, so it is ensured that both fall detection and motion classification can still be performed.
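The partitioning described above — a background sampler that queries the sensor at a nominal fixed interval and dispatches each reading to a separate consumer — can be sketched in a platform-neutral way. The following Python sketch is only illustrative: the queue stands in for Android inter-process communication, `read_sensor` is a hypothetical stand-in for the actual accelerometer interface, and only the 100 ms nominal period and the timestamping (which lets the consumer tolerate sampling jitter) come from the text.

```python
import math
import queue
import threading
import time

def vector_sum(ax, ay, az):
    """Vector-summed acceleration: one magnitude from the three axes."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def sampler(read_sensor, out_queue, stop_event, period_s=0.1):
    """Background-mode sampler: queries the sensor roughly every `period_s`
    seconds (100 ms in the prototype, best case) and dispatches each reading,
    tagged with its timestamp, to the consumer.  Jitter from a busy OS is
    tolerated because the consumer works from timestamps rather than from an
    assumed fixed sampling rate."""
    while not stop_event.is_set():
        ax, ay, az = read_sensor()
        out_queue.put((time.monotonic(), vector_sum(ax, ay, az)))
        time.sleep(period_s)
```

A consumer thread would then drain the queue and run segmentation and SOM querying on the magnitude stream, independently of how regularly the producer manages to run.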

Figure 8 shows the screen captures of the main user interface application. In the normal operation mode, the screen of Figure 8(a) is displayed as the main user interface. The waveform shown on the screen is the real-time vector-summed acceleration waveform. The button on the lower left lets a user change the parameter settings (see Figure 8(b)): a phone number to alert in case of a fall, and some parameters related to the decision making of the algorithm. Figure 8(c) is the entry point to the training and labeling stages of the algorithm. It is activated by pressing the "Train Mode" button of the main user interface screen (Figure 8(a)).

In the training mode screen, the "Start Capturing" button makes the application begin capturing data for the training and labeling stages. The application captures all motion types in a fixed order, starting with jogging and ending with fast walking, with the capturing duration adjustable in the parameter setup screen. The application uses internal data for all fall motion types, so a user is never required to perform these risky motions. When all motion types have been captured, the application uses its own SOM implementation functions to perform the training and labeling stages. The resulting codebook, along with the labels, can be visualized with a built-in feature of the application. Figure 8(d)-(f) shows screen captures of a sample codebook, which is displayed upon pressing the "Show Map" button. The codebook is useful because it gives an operator feedback for judging whether the training and labeling stages have produced a codebook of acceptable quality. From our testing, we have found that the self-training and labeling stages can be skipped and the default codebook used in most cases. Testing with a larger group of users is now under investigation.
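The training, labeling, and querying stages referred to above follow the standard SOM recipe. The sketch below is a generic illustration, not the chapter's actual implementation: the map size, the learning-rate and neighborhood schedules, and the majority-vote labeling rule are all illustrative assumptions.

```python
import math
import random

def bmu(codebook, x):
    """Index of the codeword closest (squared Euclidean) to segment x."""
    return min(range(len(codebook)),
               key=lambda i: sum((codebook[i][d] - x[d]) ** 2 for d in range(len(x))))

def train_som(data, map_size=4, epochs=30, lr0=0.5, seed=0):
    """Training stage: each codeword of a small 1-D SOM is pulled toward the
    samples for which it is the best matching unit (BMU); its neighbours are
    pulled with a strength that decays with grid distance and with time."""
    rng = random.Random(seed)
    dim = len(data[0])
    codebook = [[rng.random() for _ in range(dim)] for _ in range(map_size)]
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)                     # decaying learning rate
        radius = max(1.0, map_size / 2 * (1.0 - epoch / epochs))
        for x in data:
            b = bmu(codebook, x)
            for i, w in enumerate(codebook):
                h = math.exp(-((i - b) ** 2) / (2 * radius ** 2))
                for d in range(dim):
                    w[d] += lr * h * (x[d] - w[d])
    return codebook

def label_codebook(codebook, data, labels):
    """Labeling stage: each codeword takes the majority label of the
    training segments that map to it (None if no segment maps there)."""
    votes = [{} for _ in codebook]
    for x, y in zip(data, labels):
        v = votes[bmu(codebook, x)]
        v[y] = v.get(y, 0) + 1
    return [max(v, key=v.get) if v else None for v in votes]

def classify(codebook, code_labels, x):
    """Query stage: the motion type of a segment is the label of its BMU."""
    return code_labels[bmu(codebook, x)]
```

With toy two-dimensional segments (low-magnitude ones labeled "walk", high-magnitude ones "fall"), training followed by labeling yields a codebook whose BMU lookup classifies new segments, which is what the application's real-time query performs on each incoming motion segment.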

**Figure 8.** Screen captures of the main user interface application on an Android smartphone.

## **7. Conclusions**

In this chapter, a novel motion classification algorithm in the class of body-attached sensors is proposed. The algorithm employs a single triaxial acceleration sensor and can tolerate a variable sampling rate. In contrast to previously proposed algorithms in the same category, which rely on threshold techniques, our algorithm employs the self-organizing map neural network to perform adaptive motion type clustering and classification. The motion data from a wearer is used to train the SOM in order to cluster motion parameters in relation to the motion types of the wearer. Later, the motion type is obtained by querying the trained SOM with a motion segment on a real-time basis. With the successful classification of different motion types, we extend the algorithm to perform fall detection. In this case, the SOM is trained and labeled with a majority of data from fall curves. As the characteristics of the fall curves are very similar to those of other motion types with rapid transitions of acceleration, the motion type is employed to support the differentiation between falls and other rapid-transition motion types. In summary, the contributions of our research are twofold. First, a motion type classification algorithm employing the SOM is proposed and validated for its correctness on continuous waveforms of mixed motion types. Second, the motion types are successfully used to differentiate between falls and other motion types. The conducted experiments indicate that the algorithm achieves almost 100% fall detection correctness with almost zero false alarms for activities of daily living (ADLs). Above all, the algorithm can decide that a fall is occurring one segment ahead of the first impact, which is equivalent to approximately 133 ms. This is very useful from the point of view that a system could receive a pre-impact alert signal and react immediately to prevent a severe impact. To make it applicable, the algorithm has been successfully implemented on a smartphone platform based on the Android operating system.

## **Acknowledgements**

This work was supported by the National Research Council of Thailand under the project: Design and development of an elderly motion pattern classification and pre-fall alert system. The author would like to thank all anonymous reviewers for their comments on previous versions of this paper.

## **Author details**

Wattanapong Kurdthongmee\*

Address all correspondence to: kwattana@wu.ac.th

Walailak University, Thailand

## **References**

[1] Yang, C. C., & Hsu, Y. L. (2010). A Review of Accelerometry-Based Wearable Motion Detectors for Physical Activity Monitoring. *Sensors*, 10(8), 7772-7788.

[2] Kiani, K., Snijders, C. J., & Gelsema, E. S. (1997). Computerized Analysis of Daily Life Motor Activity for Ambulatory Monitoring. *Technol. Health. Care*, 5, 307-318.

[3] Mathie, M. J., Celler, B. G., Lovell, N. H., & Coster, A. C. F. (2004). Classification of Basic Daily Movements Using a Triaxial Accelerometer. *Med. Biol. Eng. Comput.*, 42, 679-687.

[4] Karantonis, D. M., Narayanan, M. R., Mathie, M., Lovell, N. H., & Celler, B. G. (2006). Implementation of a Real-Time Human Movement Classifier Using a Triaxial Accelerometer for Ambulatory Monitoring. *IEEE Trans. Inf. Technol. Biomed.*, 10, 156-167.

[5] Najafi, B., Aminian, K., Loew, F., Blanc, Y., & Robert, P. A. (2002). Measurement of Stand-Sit and Sit-Stand Transitions Using a Miniature Gyroscope and Its Application in Fall Risk Evaluation in the Elderly. *IEEE Trans. on Biomedical Engineering*, 49(8), 843-851.

[6] Yang, C. C., & Hsu, Y. L. (2009). Development of a Wearable Motion Detector for Telemonitoring and Real-Time Identification of Physical Activity. *Telemed. J. E. Health*, 15, 62-72.

[7] Veltink, P. H., Bussmann, B. J., de Vries, W., Martens, W. L., & van Lummel, R. C. (1996). Detection of Static and Dynamic Activities Using Uniaxial Accelerometers. *IEEE Trans. Rehabil. Eng.*, 4, 375-385.

[8] Foerster, F., Smeja, M., & Fahrenberg, J. (1999). Detection of Posture and Motion by Accelerometry: A Validation Study in Ambulatory Monitoring. *Comput. Human. Behav.*, 15, 571-583.

[9] Lyons, G. M., Culhane, K. M., Hilton, D., Grace, P. A., & Lyons, D. (2005). A Description of an Accelerometer-Based Mobility Monitoring Technique. *Med. Eng. Phys.*, 27, 497-504.

[10] Ohtaki, Y., Susumago, M., Suzuki, A., Sagawa, K., Nagatomi, R., & Inooka, H. (2005). Automatic Classification of Ambulatory Movements and Evaluation of Energy Consumptions Utilizing Accelerometers and a Barometer. *Microsyst. Technol.*, 11, 1034-1040.

[11] Sekine, M., Tamura, T., Togawa, T., & Fukui, Y. (2000). Classification of Waist-Acceleration Signals in a Continuous Walking Record. *Med. Eng. Phys.*, 22, 285-291.

[12] Bussmann, H. B., Reuvekamp, P. J., Veltink, P. H., Martens, W. L., & Stam, H. J. (1998). Validity and Reliability of Measurements Obtained with an "Activity Monitor" in People with and without a Transtibial Amputation. *Phys. Ther.*, 78, 989-998.

[13] Lau, H. Y., Tong, K. Y., & Zhu, H. (2009). Support Vector Machine for Classification of Walking Conditions of Persons after Stroke with Dropped Foot. *Hum. Mov. Sci.*, 28, 504-514.

[14] Zhang, T., Wang, J., Xu, L., & Liu, P. (2006). Fall Detection by Wearable Sensor and One-Class SVM Algorithm. *Intel. Comput. Signal Process. Pattern Recognit.*, 345, 858-863.

[15] Huynh, T., & Schiele, B. (2006). Towards Less Supervision in Activity Recognition from Wearable Sensors. *Proceedings of the 10th IEEE International Symposium on Wearable Computers, Montreux, Switzerland*, 11-14 October 2006, 3-10.

[16] Long, X., Yin, B., & Aarts, R. M. (2009). Single-Accelerometer-Based Daily Physical Activity Classification. *Proceedings of the 31st Annual International Conference of the IEEE EMBS, Minneapolis, MN, USA*, 2-6 September 2009, 6107-6110.

[17] Allen, F. R., Ambikairajah, E., Lovell, N. H., & Celler, B. G. (2006). Classification of a Known Sequence of Motions and Postures from Accelerometry Data Using Adapted Gaussian Mixture Models. *Physiol. Meas.*, 27, 935-951.

[18] Mannini, A., & Sabatini, A. M. (2010). Machine Learning Methods for Classifying Human Physical Activity from On-Body Accelerometers. *Sensors*, 10, 1154-1175.

[19] Pober, D. M., Staudenmayer, J., Raphael, C., & Freedson, P. S. (2006). Development of Novel Techniques to Classify Physical Activity Mode Using Accelerometers. *Med. Sci. Sports Exerc.*, 38, 1626-1634.

[20] Kohonen, T. (1990). The Self-Organizing Map. *Proceedings of the IEEE*, 78(9), 1464-1480.

[21] Kohonen, T., Oja, E., Simula, O., Visa, A., & Kangas, J. (1996). Engineering Applications of the Self-Organizing Map. *Proceedings of the IEEE*, 84(10), 1358-1384.

[22] Android Developer. (2011). Android Developer Guide. Available from: http://developer.android.com/index.html, 21-Dec-2011.

[23] Ericsson Labs. (2011). Mobile Sensor Actuator Link. Available from: https://labs.ericsson.com/developer-community/blog/mobile-sensor-actuator-link, 21-Dec-2011.


**Chapter 9**

**Using Self-Organizing Maps to Visualize, Filter and Cluster Multidimensional Bio-Omics Data**

Ji Zhang and Hai Fang

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/51702

> © 2012 Zhang and Fang; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## **1. Introduction**

In the face of the ever-growing volume of biological data at the genome scale (denoted as omics data) [1,2], investigators in virtually every area of biological research are shifting their attention to the massive information extracted from omics data. The term 'omics' refers to a complete set of biomolecules, such as DNAs, RNAs, proteins and other molecular entities. Omics data are produced by high-throughput technologies. At first, these technologies were known as cDNA microarrays [3] and oligonucleotide chips [4]. They then diversified into ChIP-on-chip [5] and ChIP-sequencing [6,7], two-dimensional gel electrophoresis and mass spectrometry [8], and high-throughput two-hybrid screening [9]. Recently, they have been highlighted by next-generation sequencing technologies such as DNA-seq [10] and RNA-seq [11]. Because of these technological advances, biological information can be quantified in parallel and on a genome scale, but at a much-reduced cost. Omics data cover nearly every aspect of biological information and thus allow studies to be carried out from a genome-wide perspective. To name but a few examples, they can be used (i) to catalog the whole genome of a living organism (genomics), (ii) to monitor gene expression at the RNA level (transcriptomics) or at the protein level (proteomics), (iii) to study protein-protein interactions (interactomics) and transcription factor-DNA binding patterns (regulomics), and (iv) to characterize DNA or histone modifications exerted on the chromosomes (epigenomics). These multi-layer omics data not only constitute a global overview of molecular constituents, but also provide an opportunity for studying biological mechanisms. In contrast to conventional reductionism focusing on individual biomolecules, omics approaches allow the study of emergent behaviors of biological systems. This conceptual advance has led to the advent of systems biology [12], an interdisciplinary research field with the ultimate goal of *in silico* modeling of biological systems.
