**4.1. The virtual myoelectric prosthesis**

Although VR has been extensively used as an aiding tool for users of prosthetic devices, interaction with the virtual world still needs to be improved in order to provide a truly immersive training environment. To that end, the research group headed by the authors has developed new techniques for VR interaction and for the detection and processing of EMG signals, so as to extract the commands issued by the user, which can in turn be used to control the movements of a device in a VR environment [21, 28, 30, 38]. However, although a purely non-immersive VR environment showed good results, it was thought that an Augmented Reality (AR) environment would provide a more realistic experience. Hence, an AR environment was designed in which images of the virtual device (the prosthesis) are combined with images of the real world, providing a realistic environment for training upper-limb prosthesis users [39]. A simplified block diagram of the system is shown in Figure 11.

Virtual and Augmented Reality: A New Approach to Aid Users of Myoelectric Prostheses 421


**Figure 11.** The authors' approach for a Virtual Myoelectric Prosthesis (extracted from [39]).

In the system, the user is fitted with a head-mounted device that includes a camera, for capturing the real images from the user's viewpoint, and a display to show the mixed (augmented: real and virtual) images. The EMG signals are collected and processed to generate inputs to the VR unit [21]. A processing center decides when to update both the virtual arm (Figure 12) and the augmented-reality image, and then sends them to the graphical user interface (the head-mounted display).

**Figure 12.** A Virtual Prosthesis designed by the authors (extracted from [39]).
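The processing center's update step can be sketched as follows. This is a minimal illustration only — the movement-class names and the step size are hypothetical, not taken from the authors' implementation — showing how each classified EMG movement could be turned into an incremental update of the virtual arm's joint angles.

```python
# Minimal sketch of the processing-center update step (illustrative only:
# class names and the step size are hypothetical, not the authors' code).

class VirtualArm:
    """Holds the joint state of the virtual prosthesis."""

    def __init__(self):
        self.joints = {"elbow": 0.0, "wrist": 0.0}  # angles in degrees

    def apply(self, movement, step=2.0):
        """Update the joint angles for one classified EMG movement."""
        updates = {
            "elbow_flexion": ("elbow", +step),
            "elbow_extension": ("elbow", -step),
            "wrist_pronation": ("wrist", +step),
            "wrist_supination": ("wrist", -step),
        }
        if movement in updates:  # ignore unrecognized classes
            joint, delta = updates[movement]
            self.joints[joint] += delta
        return dict(self.joints)
```

A render step would then redraw the virtual arm with the updated joint angles and mix it with the current camera frame before sending the composite image to the head-mounted display.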

Computational Intelligence in Electromyography Analysis – 420 A Perspective on Current Applications and Future Challenges


During operation, the camera captures the image and locates a marker placed at the user's shoulder. The algorithm then searches for the virtual object that corresponds to that marker (Figure 13) and inserts it into the real-world scene captured by the camera.

**Figure 13.** Image of a virtual object combined with the real-world scene – an outsider's point of view (extracted from [39]).
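The marker-to-object lookup described above can be sketched as below. This is a generic illustration, not the authors' tracking code: the marker ids and model names are hypothetical, and the pose-estimation step of a real fiducial tracker is abstracted into the `(marker_id, pose)` tuples it would report.

```python
# Hypothetical registry mapping fiducial marker ids to virtual models.
VIRTUAL_OBJECTS = {
    7: "virtual_prosthesis",  # marker worn at the user's shoulder
    12: "cube",
    23: "kettle",
}

def objects_to_render(detected_markers):
    """detected_markers: list of (marker_id, pose) tuples from the tracker.

    Returns the (model_name, pose) pairs to composite into the camera
    frame; markers with no registered virtual object are ignored.
    """
    return [(VIRTUAL_OBJECTS[mid], pose)
            for mid, pose in detected_markers
            if mid in VIRTUAL_OBJECTS]
```

Each returned model would then be rendered at its marker's estimated pose and blended with the live video, producing the mixed scene of Figure 13.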

As described in [39], the control inputs for the AR environment are generated from EMG signals collected from remnant muscles. The raw EMG signal, detected by surface electrodes, is conditioned and processed to identify which movement the user wants to perform. To do so, the areas of activity in the EMG data were detected (windowing) and the resulting signal was then processed to generate a set of features used by an artificial neural network classifier. Each EMG contraction was represented by a set of Auto-Regressive (AR) coefficients, calculated according to a modified algorithm described in [38]. According to the authors, a neural network was chosen as the classifier because of its ability to learn and later recognize, in real time, signals as belonging to the same class of movement. Moreover, depending on the level of amputation, different users may generate different levels of contraction of the remaining part of the limb for the same movement class; and even if a single user performs only isometric or isotonic contractions, no two contractions for the same movement will be identical. The neural network was trained with four classes of movements (elbow flexion, elbow extension, wrist pronation and wrist supination). The results showed near-perfect classifier performance (success rates from 95% to 100%). The output of the neural network is then used as the control input (position and motion) to the virtual device, which can be rendered and mixed with the real-world scene, as shown in Figure 14.
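The feature-extraction step can be illustrated with ordinary least-squares autoregressive modeling. Note that this is a generic sketch, not the modified algorithm of [38]: it fits order-p AR coefficients to a single windowed EMG segment, and those coefficients would form the feature vector fed to the neural-network classifier.

```python
def ar_coefficients(x, p=2):
    """Fit an order-p autoregressive model x[n] ~ sum_k a[k] * x[n-k]
    to one windowed signal segment by least squares.

    Solves the normal equations A a = b with Gaussian elimination,
    so only the standard library is needed.
    """
    n = len(x)
    # Normal equations for the regression of x[t] on its p past samples.
    A = [[sum(x[t - i - 1] * x[t - j - 1] for t in range(p, n))
          for j in range(p)] for i in range(p)]
    b = [sum(x[t] * x[t - i - 1] for t in range(p, n)) for i in range(p)]
    # Forward elimination with partial pivoting.
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution yields the AR coefficients (the feature vector).
    a = [0.0] * p
    for i in reversed(range(p)):
        a[i] = (b[i] - sum(A[i][j] * a[j] for j in range(i + 1, p))) / A[i][i]
    return a
```

On a signal that exactly obeys an AR(2) recursion, the fit recovers the generating coefficients; on real EMG windows the coefficients instead summarize the spectral shape of the contraction, which is what makes them usable as classifier features.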

**Figure 14.** User's point of view within the AR environment (extracted from [39]).

Note that this system allows the user to interact within the virtual environment: the virtual myoelectric prosthesis can touch and grab other virtual objects embedded in the real scene (such as the cube and the kettle in Figure 14). In addition, the real-time mixture of virtual objects with the real environment provides strong cognitive feedback, giving the feeling that it is almost possible to touch the virtual arm with the real one, and vice versa.

**6. Future directions**

Despite the progress achieved so far, the authors believe that, as technology advances, the use of virtual and augmented reality for controlling myoelectric prostheses should also undergo continuous improvement. Future developments should focus on issues such as: (i) improving the modeling of the virtual devices, in order to increase the sense of realism when compared to actual prostheses; (ii) new adaptive protocols for controlling the virtual prosthesis, so that it can emulate different strategies and different joint actuators; and (iii) the design of new devices to provide physiological feedback, allowing the user to "feel" what the virtual prosthesis is actually doing, thus increasing the feeling of a complete mix between the real and virtual worlds.

**Author details**

Alcimar Barbosa Soares\* and Adriano de Oliveira Andrade
*Laboratory of Biomedical Engineering, Faculty of Electrical Engineering,* 
*Federal University of Uberlândia, Brazil* 

Edgard Afonso Lamounier Júnior and Alexandre Cardoso
*Laboratory of Computer Graphics and Virtual Reality, Faculty of Electrical Engineering,* 
*Federal University of Uberlândia, Brazil* 

\*Corresponding Author

**Acknowledgement**

The authors would like to express their gratitude to "Coordenação de Aperfeiçoamento de Pessoal de Nível Superior" (CAPES – Brazil), "Conselho Nacional de Desenvolvimento Científico e Tecnológico" (CNPq – Brazil) and "Fundação de Amparo à Pesquisa do Estado de Minas Gerais" (FAPEMIG – MG – Brazil) for the financial support.

**7. References**

[1] Davoodi R, Loeb GE. Real-Time Animation Software for Customized Training to Use Motor Prosthetic Systems. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2012; 20(2) 134–142.

[2] Scheme E, Englehart K. Electromyogram Pattern Recognition for Control of Powered Upper-Limb Prostheses: State of the Art and Challenges for Clinical Use. Journal of Rehabilitation Research & Development 2011; 48(6) 643–660.

[3] Simon AM, Hargrove LJ, Lock BA, Kuiken TA. Target Achievement Control Test: Evaluating Real-time Myoelectric Pattern-recognition Control of Multifunctional Upper-limb Prostheses. Journal of Rehabilitation Research & Development 2011; 48(6) 619–628.
