#### **3.3 Human motion identification based on machine learning**

Many model construction techniques have been developed for time-series recognition [59, 78, 79], including K-Nearest-Neighbor (KNN) [80], Support Vector Machines (SVMs) [81, 82], neural networks, decision trees [83, 84], Bayesian networks, the Hidden Markov model (HMM), and LSTM-RNN.
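As a brief illustration of how such classifiers are typically applied to skeletal time series, the sketch below flattens fixed-length joint-coordinate sequences into feature vectors and compares a KNN and an SVM classifier. The data shapes, placeholder data, and scikit-learn usage are illustrative assumptions, not the pipeline used in this work.

```python
# Minimal sketch: comparing KNN and SVM on fixed-length skeletal sequences.
# Assumed (illustrative) layout: X has shape (n_samples, n_frames, n_joints, 3)
# holding (x, y, z) per joint per frame; y holds integer movement labels.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples, n_frames, n_joints = 200, 25, 25
X = rng.normal(size=(n_samples, n_frames, n_joints, 3))  # placeholder kinetics data
y = rng.integers(0, 5, size=n_samples)                    # placeholder movement identities

# Flatten each sequence into one feature vector so generic classifiers apply.
X_flat = X.reshape(n_samples, -1)
X_tr, X_te, y_tr, y_te = train_test_split(X_flat, y, test_size=0.3, random_state=0)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```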

In this work, the recognition accuracy of the aforementioned classifiers was evaluated on three benchmark datasets: *Dataset I: UTD Multimodal Human Action Dataset* (UTD MHAD [85]), *Dataset II: UTKinect-Action3D* [86], and *Dataset III: Tai-Chi Yang-Style 24 movement* [22, 41] (an in-house Kinect skeletal dataset collected for Tai-Chi training).

#### **Figure 6.**

*Normalizing the kinetics of a user with limited mobility: (a) compensation of disabled input channels via a deep neural network; (b) tCNN-enabled compensated kinetic status of a wheelchair user, who receives "virtual functional legs" (shown in yellow).*


#### **Figure 7.**

*AppEn and SampEn for 25 joints: comparison of an advanced user and a beginner (each subsequence consists of 25 frames); the beginner has larger entropy.*
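Figure 7 compares entropy measures across the 25 joints. As a rough illustration of how sample entropy (SampEn) could be computed per joint over a 25-frame subsequence, the sketch below uses a standard definition with embedding dimension *m* and tolerance *r*; the function name, parameter defaults, and example signal are assumptions for illustration, not the procedure used to produce the figure.

```python
import numpy as np

def sample_entropy(series, m=2, r=None):
    """Sample entropy of a 1-D series (illustrative implementation).

    m: embedding dimension; r: tolerance (defaults to 0.2 * std of the series).
    Returns -ln(A/B), where B and A count template matches of length m and m+1.
    """
    x = np.asarray(series, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)

    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(n - dim + 1)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance to all later templates (self-matches excluded).
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return np.inf if a == 0 or b == 0 else -np.log(a / b)

# Example: entropy of one joint's x-coordinate over a 25-frame subsequence.
t = np.linspace(0, 3 * np.pi, 25)
joint_x = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=25)
print(sample_entropy(joint_x, m=2))
```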

The experimental results showed that SVM and LSTM-RNN surpass the other classifiers; in particular, LSTM-RNN achieves superior recognition accuracy when the amount of training data is limited (e.g., 200 training samples). However, LSTM-RNN suffers from unsatisfactory time performance [35]. Scalable algorithms for temporal neural networks such as LSTM-RNN and the temporal convolutional network (tCNN) therefore need to be developed [46].
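For the recurrent classifier discussed above, a minimal PyTorch-style sketch of an LSTM sequence classifier over per-frame joint coordinates is shown below. The layer sizes, class count, and placeholder input are illustrative assumptions, not the configuration evaluated in this work.

```python
import torch
import torch.nn as nn

class SkeletonLSTM(nn.Module):
    """Illustrative LSTM classifier: a kinetics sequence of per-frame joint
    coordinates (batch, frames, joints * 3) is mapped to a movement identity."""

    def __init__(self, n_joints=25, hidden=128, n_classes=24):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_joints * 3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, frames, n_joints * 3)
        _, (h_n, _) = self.lstm(x)        # h_n: (1, batch, hidden), last hidden state
        return self.head(h_n[-1])         # class logits: (batch, n_classes)

# Example forward pass on placeholder data (batch of 8 sequences, 25 frames each).
model = SkeletonLSTM()
logits = model(torch.randn(8, 25, 25 * 3))
print(logits.shape)                       # torch.Size([8, 24])
```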

In this work, a musculoskeletal-biomechanics-guided loss function is used to formulate the objective of the kinetics classifier:

$$\mathcal{L}(\boldsymbol{\theta}) = L\left(f(\mathbf{X}, \boldsymbol{\theta}), \mathbf{y}\right) + \varrho \mathcal{R}(\boldsymbol{\theta}), \tag{1}$$

where $\mathbf{y}$ is the pre-determined movement identity; $f(\mathbf{X}, \boldsymbol{\theta})$ is the predicted movement identity of the kinetics sequence $\mathbf{X} = \left\{ \left\langle \left( x_t^k, y_t^k, z_t^k \right), \mathbf{f}_t^k \right\rangle \right\}_{t=t_0}^{t_m}$ (as defined in **Figure 5**, where $t$ is the time step ranging from $t_0$ through $t_m$ and $k$ is the joint's identity); $\boldsymbol{\theta} \in \Re^n$ denotes the parameters (weights and biases) of the neural network; $\mathcal{R}(\boldsymbol{\theta}): \Re^n \to \Re$ is the regularizer, whose importance is controlled by the regularization strength $\varrho \in \Re$; and $\mathcal{L}(\boldsymbol{\theta}): \Re^n \to \Re$ is the regularized loss. The corresponding optimization method is called a batch optimizer.
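A minimal PyTorch sketch of Eq. (1) is given below, assuming a cross-entropy data term $L$, an $L_2$ regularizer $\mathcal{R}(\boldsymbol{\theta})$, and full-batch gradient descent as the batch optimizer. The placeholder model, data shapes, and value of $\varrho$ are illustrative assumptions, and the biomechanics-guided terms of the actual loss are not reproduced here.

```python
import torch
import torch.nn as nn

def regularized_loss(model, X, y, rho=1e-3):
    """L(theta) = L(f(X, theta), y) + rho * R(theta), with R(theta) = sum ||theta||^2.
    The data term here is plain cross-entropy; the musculoskeletal-biomechanics
    guidance of the actual loss is not modeled in this sketch."""
    data_term = nn.functional.cross_entropy(model(X), y)
    reg_term = sum(p.pow(2).sum() for p in model.parameters())
    return data_term + rho * reg_term

# Full-batch ("batch optimizer") gradient descent over the whole training set.
model = nn.Sequential(nn.Flatten(), nn.Linear(25 * 25 * 3, 24))  # placeholder f(X, theta)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
X = torch.randn(200, 25, 25, 3)            # placeholder kinetics sequences
y = torch.randint(0, 24, (200,))           # placeholder movement identities
for _ in range(10):
    opt.zero_grad()
    loss = regularized_loss(model, X, y)
    loss.backward()
    opt.step()
```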

#### **3.4 Reconstruction of 4D instruction/feedback for users**

VIGOR can also be regarded as a real-time coaching system that helps users improve their physical rehabilitation movements for optimal clinical effect. Based on the measurement and recognition results discussed above, VTCS generates real-time 4D instructions or guidance for users over a virtual reality (VR) or augmented reality (AR) platform, as shown in the online video [22, 87, 88] referenced in our preliminary work.

### **4. Adaptive virtual limb generation**

To relieve the physical and psychological suffering of people with limited mobility, VIGOR develops an adaptive (versatile to various types of disability) and full-body-driven virtual limb generation system (all measurable body parts are used to formulate the virtual limbs). The related technical contributions include: (1) according to a specified kinetic script (e.g., dancing, running) and the user's physical condition, a hierarchical network is extracted from the human musculoskeletal network, which is composed of multiple body components (e.g., muscles, bones, and joints) that are biomechanically, functionally, or neurally correlated with each other and exhibit mostly non-divergent kinetic behaviors (see the schematic sketch below); (2) the generated limb can be reconstructed over the VR/AR system, the tactile actuator system, and the motoring system.
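The sketch below is a schematic illustration of the data structure only: it assumes the musculoskeletal network is represented as a weighted graph over joints and that the hierarchical network keeps the measurable joints and the edges whose kinetic correlation exceeds a threshold. The joint names, weights, and thresholding rule are illustrative assumptions, not the extraction procedure used in VIGOR.

```python
import networkx as nx

# Schematic sketch: the musculoskeletal network as a weighted graph over joints,
# where edge weights stand for kinetic correlation between body components.
musculoskeletal = nx.Graph()
musculoskeletal.add_weighted_edges_from([
    ("hip", "knee_left", 0.9), ("knee_left", "ankle_left", 0.8),
    ("hip", "spine", 0.7), ("spine", "shoulder_left", 0.4),
    ("shoulder_left", "elbow_left", 0.3),
])

def extract_hierarchical_network(graph, available_joints, min_correlation=0.5):
    """Keep only measurable joints and strongly correlated (non-divergent) links,
    yielding the sub-network that drives virtual limb generation."""
    sub = graph.subgraph(available_joints).copy()
    weak = [(u, v) for u, v, w in sub.edges(data="weight") if w < min_correlation]
    sub.remove_edges_from(weak)
    return sub

# Example: a user whose lower-body joints are measurable drives the virtual limb.
hier = extract_hierarchical_network(
    musculoskeletal, ["hip", "spine", "knee_left", "ankle_left"])
print(list(hier.edges(data="weight")))
```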
