CNN Approaches for Time Series Classification DOI: http://dx.doi.org/10.5772/intechopen.81170

| Approaches | # parameters updated for one pass (1 batch) | iterations (epochs) | Implementation on Android device |
|---|---|---|---|
| CNN few data | 1.2e+06 (time-domain), 7.1e+05 (frequency-domain) | 20–35 (time-domain), 55–85 (frequency-domain) | No (too much memory consumption) |
| TL full fine-tuning | 1.2e+06 (time-domain), 7.1e+05 (frequency-domain) | 5–15 | No (too much memory consumption) |
| TL limited fine-tuning | 1000 (500\*2) | 5–15 | No (hard to run back-propagation on mobile devices) |
| TL SVM (similar domains and across domains) | 500 | 1 | Yes (easy to train SVM on mobile devices) |

Table 4.
Properties and resources used for the different techniques implemented in the experiment.

• "TL full fine-tuning" has a slightly higher performance than "TL limited fine-tuning" by 2.71 and 4.14% in time- and frequency-domain respectively, suggesting that fine-tuning the weights of layers 1, …, L − 1 (where L is the number of CNN layers) is unnecessary, since it does not improve SMM recognition significantly. This confirms the earlier assumption that atypical subjects have similar low- and mid-level features but different high-level features.

• "TL SVM similar domains" and "TL SVM across domains" architectures perform better than "CNN few data" by 7.78 and 5.57% respectively in time-domain, and by 12.27 and 0.24% respectively in frequency-domain. Therefore, both architectures capture more general features than "CNN few data". In terms of resources, one advantage of the two architectures over "CNN few data" is that the former converge much faster than the latter: they require 5–15 epochs (in both time and frequency domains) for full convergence, while "CNN few data" needs 20–35 epochs and 55–85 epochs in time- and frequency-domain respectively, as shown in Table 4. Another advantage resides in the number of parameters to be learned: 500 for the former (in both domains) versus 1.2e+06 and 7.1e+05 for "CNN few data" in time- and frequency-domain respectively (Table 4). Hence, as opposed to the "TL full fine-tuning" and "TL limited fine-tuning" frameworks, "TL SVM" can be regarded as a global, fast, and light-weight framework for SMM recognition across subjects.

• "TL SVM across domains" yields a lower performance than "TL SVM similar domains" by 2.21% and 12.02% in time- and frequency-domain respectively. This implies the superiority, for the SMM recognition task, of low- and mid-level features learned from SMMs over those learned from basic human movements; the latter features are, however, more global.
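The distinction between "TL full fine-tuning" and "TL limited fine-tuning" comes down to which layers receive gradient updates. A minimal NumPy sketch of that difference (layer names, shapes, and the learning rate are illustrative only, not the chapter's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer shapes for a small 1-D CNN (illustrative only;
# the chapter's networks have ~1.2e+06 / 7.1e+05 parameters).
layers = {
    "conv1": rng.normal(size=(4, 1, 9)),   # low-level features
    "conv2": rng.normal(size=(4, 4, 9)),   # mid-level features
    "fc":    rng.normal(size=(2, 500)),    # task-specific top layer
}

def finetune_step(layers, grads, lr=0.01, trainable=None):
    """One SGD step; only layers listed in `trainable` are updated."""
    if trainable is None:                  # full fine-tuning: update everything
        trainable = list(layers)
    return {name: (w - lr * grads[name] if name in trainable else w)
            for name, w in layers.items()}

grads = {name: np.ones_like(w) for name, w in layers.items()}

full    = finetune_step(layers, grads)                    # all layers move
limited = finetune_step(layers, grads, trainable=["fc"])  # layers 1..L-1 frozen

assert not np.allclose(full["conv1"], layers["conv1"])    # full: conv1 updated
assert np.allclose(limited["conv1"], layers["conv1"])     # limited: conv1 frozen
assert not np.allclose(limited["fc"], layers["fc"])       # limited: top layer updated
```

Freezing layers 1, …, L − 1 is what shrinks the per-pass update from ~1.2e+06 parameters down to the order of the top layer's size.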

Time Series Analysis - Data, Methods, and Applications




Unlike "TL SVM", the other three frameworks learn their parameters using backpropagation. And, knowing that backpropagation requires abundant data for proper training, a lack of training data (2000 SMM instances) pushes the three frameworks to overfit and be less efficient.
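The "TL SVM" frameworks avoid this problem by skipping backpropagation entirely: the pre-trained CNN stays frozen as a feature extractor, and only a small linear classifier on top is trained. A hedged sketch of this idea (the feature extractor is a stand-in ReLU projection, the classifier a single-pass hinge-loss update rather than a full SVM solver; all sizes are illustrative except the 500-dimensional feature space from Table 4):

```python
import numpy as np

rng = np.random.default_rng(1)

def cnn_features(x, W):
    """Frozen pre-trained CNN used only as a feature extractor (W is never updated)."""
    return np.maximum(x @ W, 0.0)          # stand-in for the conv stack; ReLU features

# Hypothetical sizes: raw windows of 100 samples -> 500-dim feature vectors.
W_frozen = rng.normal(size=(100, 500))
X = rng.normal(size=(40, 100))             # toy acceleration windows
y = np.where(X.mean(axis=1) > 0, 1.0, -1.0)  # toy SMM / non-SMM labels

F = cnn_features(X, W_frozen)

# Linear max-margin-style classifier on top: only these 500 weights are learned,
# in a single pass over the data, with no backpropagation through the CNN.
w = np.zeros(500)
for f, t in zip(F, y):
    if t * (f @ w) <= 1.0:                 # hinge condition violated
        w += 0.01 * t * f                  # sub-gradient step on the hinge loss

assert w.shape == (500,)                   # exactly 500 trainable parameters
```

Because only the 500 top-level weights move, training needs far less data and memory than backpropagating through the whole network.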




Table 3.
Results of CNN approaches used in this experiment per domain (time or frequency), per subject, per study. Highest rates are in bold. Rows: CNN few data, TL full fine-tuning, TL limited fine-tuning, TL SVM similar domains, and TL SVM across domains, in both time- and frequency-domain; columns: subjects S1–S6 (Study 1), S1–S5 (Study 2), and the mean. [Per-cell rates are garbled in this extraction and cannot be reliably re-aligned; see the original chapter.]

In the time-domain, the small rate difference (2.21%) between "TL SVM across domains" and "TL SVM similar domains" suggests that the low- and mid-level feature space generated by human activities shares common details with the one generated by movements of specific atypical subjects. This is not the case for frequency-domain series, which can be explained by the difference in frequency range between human activities and SMMs. Indeed, the FFT amplitude of human activities is contained below 10 Hz; pre-training the CNN on human-activity frequency signals from 0 to 3 Hz rather than from 0 to 10 Hz results in imperfect human-activity features which, combined with the SVM, do not yield good classification results on the recognition of SMMs. If a new target learning task had data signals within the same frequency range as the data signals of the source learning task, "TL SVM across domains" would be expected to achieve the same performance as "TL SVM similar domains".
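This frequency-range argument can be checked numerically: for a toy "human activity" signal whose FFT amplitude lies below 10 Hz, a 0–3 Hz crop discards a substantial part of the spectrum. A small NumPy sketch (the sampling rate and component frequencies are assumed for illustration):

```python
import numpy as np

fs = 100.0                        # assumed accelerometer sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)      # 10 s of signal

# Toy "human activity" signal: components at 1.5 Hz and 8 Hz, both below 10 Hz.
x = np.sin(2 * np.pi * 1.5 * t) + 0.7 * np.sin(2 * np.pi * 8.0 * t)

amp = np.abs(np.fft.rfft(x))                  # one-sided FFT amplitude
freqs = np.fft.rfftfreq(x.size, d=1 / fs)     # matching frequency axis

def band_energy(lo, hi):
    """Spectral energy in the band [lo, hi) Hz."""
    m = (freqs >= lo) & (freqs < hi)
    return np.sum(amp[m] ** 2)

total = band_energy(0, fs / 2)
# Virtually all of the amplitude lies below 10 Hz ...
assert band_energy(0, 10) / total > 0.99
# ... but a 0-3 Hz crop misses the 8 Hz component, so features built from it are "imperfect".
assert band_energy(0, 3) / total < 0.75
```

The same check, applied to real activity recordings, would quantify how much spectral content a 0–3 Hz pre-training range throws away.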

• One advantage of "TL SVM similar domains" and "TL SVM across domains" is that they can be implemented on Android portable devices, as shown in Table 4. Indeed, an expert could receive continuous acceleration signals from the torso accelerometer of a subject and label them on the fly (as SMM/non-SMM) as the subject performs his activities/movements. This results in annotated time series which are then preprocessed and fed into either "TL SVM similar domains" or "TL SVM across domains" for training. A one-minute recording of these signals is sufficient to train one of the two frameworks. Afterwards, the framework is ready to recognize further SMMs on that same subject.

[…] this paper), whose goal is to eliminate noise from time series; this robust CNN is an algorithm-level technique which acts at the level of loss functions by controlling high error values caused by outliers.

Conflict of interest

The authors declare that they have no conflicts of interest.

Author details

Lamyaa Sadouk

Faculty of Science and Technology, University Hassan 1st, Settat, Morocco

\*Address all correspondence to: lamyaa.sadouk@gmail.com

© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
