**4.1 Pipeline of adaptive virtual limb generation**

The proposed work employs deep learning techniques, such as autoencoders, to generate virtual limbs [89] from the observed kinetic behaviors of other body parts, based on the following hypotheses: (1) the human body consists of multiple components, such as muscles, bones, and joints, that are correlated with each other mechanically, neurally, and/or functionally; (2) deep learning techniques such as autoencoders can capture the kinetic patterns of human movement, as sketched below.
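To make the second hypothesis concrete, the following is a minimal sketch of an autoencoder trained to compress and reconstruct full-body postures, so that its latent code captures correlations between body components. PyTorch, the channel count `JOINT_DIM`, the code size `LATENT_DIM`, and all layer widths are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

# Illustrative dimensions: one posture frame of joint-angle channels
# compressed into a small latent code. Not values from the paper.
JOINT_DIM, LATENT_DIM = 48, 8

class KineticAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder compresses a posture vector into a low-dimensional
        # code that captures correlations between body components.
        self.encoder = nn.Sequential(
            nn.Linear(JOINT_DIM, 24), nn.ReLU(),
            nn.Linear(24, LATENT_DIM),
        )
        # Decoder reconstructs the full posture from the code.
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 24), nn.ReLU(),
            nn.Linear(24, JOINT_DIM),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = KineticAutoencoder()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a batch of recorded postures
# (random tensors stand in for real motion-capture data).
batch = torch.randn(32, JOINT_DIM)
optimizer.zero_grad()
loss = loss_fn(model(batch), batch)
loss.backward()
optimizer.step()
```

After training on recorded movement, a small reconstruction error indicates that the latent code has captured the kinetic pattern of the movement, which is the property the pipeline below relies on.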

**Figure 8(a)** shows the flowchart of adaptive virtual limb generation, which consists of the following critical steps: (1) formulating the human musculoskeletal network [91] according to the functional, mechanical, and neural correlations between body components (muscles, joints, and bones); (2) deriving a hierarchical network (organized as a forest data structure) from the human musculoskeletal network according to the physical status of the user, where the virtual limbs form the leaves of the hierarchical trees; (3) building a visible autoencoder neural network according to the hierarchical network, so that the kinetic behavior of the virtual limbs can be reconstructed from the kinetic behavior of the user's functional body parts measured by heterogeneous sensors; (4) training this visible autoencoder neural network on specific human movement scripts, such as walking, jogging, dancing, or other physical activities; and (5) rendering the kinematic behavior of the virtual limbs through VR/AR, tactile actuators, and active orthoses, which can directly stimulate the user (see the sketch after this paragraph). **Figure 8(b)** shows a screenshot of the virtual limb generation.
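The following is a minimal sketch of steps (3)-(5): an autoencoder is trained to reconstruct the full-body posture when the virtual-limb channels are masked out, and is then driven by sensor readings from the functional body parts to produce the virtual limb's kinematics. The framework (PyTorch), the channel split `MISSING`, the network widths, and the random stand-in data are all illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Illustrative channel layout: 48 joint channels in total, of which the
# last 8 belong to the missing limb to be generated virtually.
FULL_DIM = 48
MISSING = list(range(40, 48))      # hypothetical virtual-limb channels

net = nn.Sequential(
    nn.Linear(FULL_DIM, 24), nn.ReLU(),
    nn.Linear(24, 8), nn.ReLU(),
    nn.Linear(8, 24), nn.ReLU(),
    nn.Linear(24, FULL_DIM),
)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

mask = torch.ones(FULL_DIM)
mask[MISSING] = 0.0                # hide the virtual-limb channels at input

# Training on a movement script (random tensors stand in for walking or
# jogging data): the network sees only the functional body parts but must
# predict the complete posture, including the masked limb.
for step in range(100):
    full_posture = torch.randn(32, FULL_DIM)
    pred = net(full_posture * mask)
    loss = loss_fn(pred, full_posture)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Inference: readings from heterogeneous sensors on the functional body
# parts yield a kinematic trajectory for the virtual limb, which can then
# be rendered via VR/AR, tactile actuators, or active orthoses.
with torch.no_grad():
    sensed = torch.randn(1, FULL_DIM) * mask
    virtual_limb = net(sensed)[0, MISSING]
```

In this sketch the mask plays the role of the hierarchical network of step (2) in its simplest form, a fixed split between functional and virtual channels; the forest structure described above would instead impose a per-tree connectivity pattern on the encoder and decoder layers.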
