**1.3 VIGOR's infrastructure**

VIGOR aims to offer users an intelligent, four-dimensional (4D), virtual-reality, active-orthosis-enabled generative modality with partial control. Partial control (e.g., via a virtual limb) means that VIGOR can be driven by only part of the inputs; in other words, VIGOR can tolerate and compensate for missing input when one or more input channels are disabled.

**Figure 2** shows the infrastructure of the VIGOR system. A deep-learning-based virtual coach, trained on a Tai-Chi master's kinetic data, is the core module of VIGOR. By combining the experience obtained via deep learning with related knowledge from fields such as biomechanics and medical pathology, VIGOR measures a user's movements, evaluates his/her performance against the Tai-Chi master's, and offers real-time visual and tactile feedback to the user. Going beyond an on-site real-time Tai-Chi instructor, VIGOR also adapts the master's movements to accommodate a wide range of mobility restrictions and improvements over time.

**Figure 2.** *Infrastructure of VIGOR.*
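The chapter does not specify how performance is scored against the master, but the comparison step can be sketched with a simple joint-angle metric. The following NumPy snippet is an illustrative sketch only (the function names, the angle-based metric, and the 0-100 scoring scale are our assumptions, not VIGOR's actual algorithm):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (radians) at joint b formed by 3D points a-b-c."""
    u, v = a - b, c - b
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def score_pose(user_joints, master_joints, triples):
    """Compare user and master poses at the listed (i, j, k) joint triples.

    Returns (score, mean_error): score is 100 for a perfect match and
    decreases linearly with the mean absolute joint-angle error.
    """
    errors = []
    for i, j, k in triples:
        ua = joint_angle(user_joints[i], user_joints[j], user_joints[k])
        ma = joint_angle(master_joints[i], master_joints[j], master_joints[k])
        errors.append(abs(ua - ma))
    mean_err = float(np.mean(errors))
    return max(0.0, 100.0 * (1.0 - mean_err / np.pi)), mean_err
```

In a real system such a score, computed per frame, could drive the visual and tactile feedback channels described above.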

### *VIGOR: A Versatile, Individualized and Generative ORchestrator to Motivate the Movement… DOI: http://dx.doi.org/10.5772/intechopen.96025*

The kinetic data for the Tai-Chi master and users are captured by different sensors, such as Microsoft Kinect and somatosensory sensors [39]. The fusion, transmission, storage, retrieval, management, and analytics [40] of sensory data are computation- and storage-intensive. In VIGOR, an edge-computing-enabled network is exploited to connect the user with the virtual-coach server. An edge server is employed to store and process the large volume of sensory data in real time [41]. Integrated with TensorFlow, a deep-learning library, VIGOR measures and predicts the kinetic behavior of its users.
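VIGOR's actual predictor is a deep network built with TensorFlow, whose architecture is not detailed here. Setting the deep model aside, the underlying kinetic-prediction task (forecast the next frame of joint coordinates from recent frames) can be illustrated with a dependency-free least-squares autoregressive sketch; all names below are ours, not VIGOR's API:

```python
import numpy as np

def fit_ar_predictor(trajectory, order=2):
    """Fit a linear one-step autoregressive predictor by least squares.

    trajectory: (T, D) array of joint coordinates over T frames.
    Returns weights W so that frame[t] ~= [frames t-order..t-1, bias] @ W.
    """
    T, _ = trajectory.shape
    # Stack `order` lagged copies of the trajectory as features, plus a bias.
    X = np.hstack([trajectory[i:T - order + i] for i in range(order)])
    X = np.hstack([X, np.ones((T - order, 1))])
    Y = trajectory[order:]
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def predict_next(trajectory, W, order=2):
    """Predict the frame following the trajectory's last `order` frames."""
    x = np.hstack([trajectory[-order:].ravel(), 1.0])
    return x @ W
```

A deep model plays the same role but captures nonlinear motion patterns that a linear predictor cannot.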

The system also provides the user with a multi-fold, panoramic 4D experience that includes visual and somatosensory information as well as direct physical support. 3D reconstruction and visualization with Unity3D allow users to place themselves in a variety of simulated spaces, with a personalized virtual Tai-Chi coach walking them through Tai-Chi motions in a 3D world, supported by a soft-actuator-based wearable device.

VIGOR is developed following the "5S criteria":

1. **Substantiation** (or personalization) - VIGOR provides each user with a personalized service according to his/her health condition and clinical requirements.
2. **Simplicity** - even untrained users can use VIGOR freely.
3. **Skimpiness** - only commodity hardware and software are used in VIGOR, so that the majority of people can afford it.
4. **Scalability** - VIGOR can satisfy the requirements of an increasing number of users.
5. **Speed** - real-time response is needed to satisfy the requirements of users.

### **1.4 Research objectives and function modules of VIGOR**

The major **objective** of VIGOR is to develop a state-of-the-art deep-learning system to help, push, and coach people, particularly those with mobility disabilities, so that they can engage in physical activities.


Pure machine-learning approaches ignore the fundamental laws of biomechanics and the clinical regulations governing human motion, and thus may result in ill-posed problems. Additionally, deeper and wider deep neural networks (DNNs) often require large sets of labeled data for effective training and suffer from extremely high computational complexity, preventing them from being deployed in real-time systems. As a result, there is a need to incorporate domain knowledge into DNNs [42, 43]. As one of the major contributions of this project, domain knowledge will be infused into DNNs through data augmentation, customized loss functions, or embedding a knowledge block into the network as an independent module (e.g., the dynamics-guided discriminator in the motion choreography module).
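Of the infusion strategies listed above, customizing the loss function is the easiest to sketch. The snippet below illustrates the idea with a hypothetical anatomical range-of-motion constraint added to an ordinary data-fitting term; the limit values, weighting, and function names are illustrative assumptions, not VIGOR's actual loss:

```python
import numpy as np

# Hypothetical anatomical limits (radians) for one joint, e.g., an elbow.
ANGLE_MIN, ANGLE_MAX = 0.0, 2.6

def data_loss(pred_angles, target_angles):
    """Ordinary data-fitting term: mean squared error."""
    return np.mean((pred_angles - target_angles) ** 2)

def biomech_penalty(pred_angles):
    """Knowledge term: penalize predictions outside the anatomical
    range of motion; zero inside the valid range."""
    below = np.clip(ANGLE_MIN - pred_angles, 0.0, None)
    above = np.clip(pred_angles - ANGLE_MAX, 0.0, None)
    return np.mean(below ** 2 + above ** 2)

def knowledge_infused_loss(pred_angles, target_angles, lam=10.0):
    """Data-fitting term plus a weighted biomechanics penalty."""
    return data_loss(pred_angles, target_angles) + lam * biomech_penalty(pred_angles)
```

Training against such a loss steers the network away from anatomically impossible motions even when the labeled data are sparse.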

Enabled by the deep neural network and multimodal human-machine-interaction techniques, the VIGOR system consists of the following function modules:


Each research objective along with the specific challenges and tasks will be described in more detail in Sections 2–5 individually.
