**3. Virtual and augmented reality**

Computational Intelligence in Electromyography Analysis – 412 A Perspective on Current Applications and Future Challenges

muscle contraction; the electrical muscle activity is then captured by electrodes (normally placed in a socket attached to the stump), interpreted by customized programs in a microcontroller, and used to activate the actuator of the prosthesis.

Many myoelectric prostheses employ a type of control called "two-site two-state", in which a pair of electrodes is placed over two distinct muscles. The contraction of one of these muscles produces the opening of the hand; the antagonist muscle is used in the same way to control its closing. As pointed out by Scott and Parker [8], this approach works in a manner analogous to the human body, i.e., two antagonistic muscles (or groups of muscles) control the movement of a joint. However, as patients must learn to generate independent contractions of the muscles, which requires a high degree of concentration, the training can be lengthy and demand considerable mental effort. There are also situations in which it is not possible to find two available muscle groups, or in which there is more than one joint to be controlled. For these situations, other control approaches have been developed [17]. For instance, in the "one-site three-state" approach, a light contraction of a muscle produces the closing of the hand, a strong contraction opens it, and the absence of muscle activity stops the hand. Figure 1 shows an example of a hand prosthesis controlled by electromyographic signals captured by four pairs of dome electrodes distributed around the residual limb ([18]).

Currently there are a number of methods using proportional control based on the electrical activity of muscles to control the speed, torque and position of prosthetic joints. However, due to the nature of the myoelectric signal, errors and inaccuracies may occur [19]. Myoelectric signals can be detected using basically two types of electrodes: surface electrodes and intramuscular (needle) electrodes.

**Figure 1.** An experimental setup for a myoelectric prosthesis, developed at the University of New Brunswick, Canada (extracted from [18]).
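These control schemes can be sketched as simple threshold rules on the measured EMG amplitude. The following is an illustrative sketch only (thresholds, gains and function names are hypothetical; real systems calibrate them per user):

```python
def one_site_three_state(emg_amplitude, low=0.1, high=0.5):
    """One-site three-state control: map a single muscle's EMG amplitude
    (arbitrary units) to a hand command. Thresholds are illustrative."""
    if emg_amplitude < low:
        return "stop"    # no significant activity: hold position
    elif emg_amplitude < high:
        return "close"   # light contraction: close the hand
    else:
        return "open"    # strong contraction: open the hand


def proportional_speed(emg_amplitude, threshold=0.1, gain=1.5, max_speed=1.0):
    """Proportional control sketch: joint speed grows with the EMG amplitude
    above a noise threshold, saturating at the actuator's maximum speed."""
    return min(max(emg_amplitude - threshold, 0.0) * gain, max_speed)
```

In a real device these decisions run inside the microcontroller on features extracted from the raw signal (e.g., a smoothed amplitude envelope), not on instantaneous samples.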

Virtual Reality (VR) can be defined as an advanced computer interface in which the user can, in real time, navigate within a three-dimensional environment and interact with its virtual objects. To make such interaction intuitive and natural, multisensory devices are employed [22].

To illustrate this concept, consider Figure 2, in which a user is shown standing inside a research laboratory. However, since she is equipped with multisensory devices (a Head-Mounted Display, or HMD, and glove-mounted hand sensors), a computer-based system gives her the feeling of being immersed in a different environment.

The system, known as BioSimMER© (from Sandia National Laboratories) [23], is used to train rescue personnel to respond to terrorist attacks. The screen at the top shows the working environment displayed only to the eyes of the health professional, and the virtual patient exhibits realistic symptoms. Such facilities are not supported by traditional computer interfaces.

Therefore, to achieve the highly natural interface required by VR systems, it is important to provide the user with a feeling of immersion and the ability to interact. To meet these requirements, VR developers must guarantee: 1) realistic 3D images of objects rendered from the user's perspective; and 2) the ability to track the user's motions, particularly head and eye movements, and correspondingly adjust the images on the output device to reflect the change in perspective.
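The second requirement amounts to rebuilding the camera transform each frame from the tracked head pose. A minimal sketch of that step (function names and conventions are my own, not from the chapter):

```python
import math

def normalize(v):
    """Scale a 3-vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Build a 3x3 camera rotation (rows: right, up, -forward) from the
    tracked head position `eye` and gaze point `target`. A renderer would
    recompute this every frame so the image follows the user's head."""
    f = normalize([t - e for t, e in zip(target, eye)])  # forward direction
    r = normalize(cross(f, list(up)))                    # camera right axis
    u = cross(r, f)                                      # true up axis
    return [r, u, [-c for c in f]]
```

With the head at the origin looking down the negative z-axis, this yields the identity rotation, which is the conventional "rest" view in graphics APIs.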

Virtual and Augmented Reality: A New Approach to Aid Users of Myoelectric Prostheses 415

**Figure 2.** Experiment with virtual reality techniques and devices (extracted from [23]).

Nowadays, VR and AR systems have been intensively used for training and simulation. According to [27] and [28], the main reasons are:

• It provides "learning by doing" - according to pedagogical studies, the learning curve and the amount of knowledge acquired are intensified when the apprentice plays an active role during the process;

• It supports virtual and interactive experiments/simulations - replacing physical counterparts that could pose health hazards or be too expensive in real life;

• It allows training to be executed outside classes and clinics; and

• It inspires creativity.

Augmented Reality (AR) is a technique that allows the integration of virtual objects within a real physical environment. Interaction is again supported by multisensory equipment. Essentially, a real scene, captured by a digital camera, is "augmented" with the insertion of virtual objects [24]. Figure 3 illustrates this concept.

**Figure 3.** An example of Augmented Reality, extracted from [25]: (a) Positioning a fiducial marker on a mechanical part; (b) Heat distribution visualized through the user's glasses.

In Figure 3(a), an engineer uses a square fiducial marker to identify the heat distribution throughout a mechanical part, as shown in Figure 3(b). However, this "virtual" information can be seen only by him, through the equipment and glasses he carries.

A well-known framework to support AR is the ARToolKit ([26]). It provides Computer Vision techniques to calculate the position and orientation of a digital camera relative to fiducial markers. The augmentation is produced through a series of transformations, as shown in Figure 4. First, the real video image is thresholded into a binary image. Then, this image is processed to find black square regions (fiducial markers – regions whose outline can be fitted by four line segments) containing an image pattern, which is compared against patterns stored in a database. Next, the algorithm uses the known size of the squares and the orientation of the pattern as the basis for the coordinate frame, calculating the real position of the camera relative to the physical marker. Finally, the 3D objects are placed over the fiducial markers, and the resulting image is sent to the display.

**Figure 4.** ARToolKit workflow.
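The thresholding and pattern-matching stages of this pipeline can be sketched as follows. This is a toy illustration only: the real ARToolKit operates on video frames and also performs square detection and camera pose estimation, both omitted here, and the marker names and data are hypothetical:

```python
def binarize(gray, threshold=128):
    """Stage 1: turn a grayscale image (list of rows of 0-255 values)
    into a binary image, where 1 marks a dark pixel."""
    return [[1 if px < threshold else 0 for px in row] for row in gray]

def match_pattern(candidate, database):
    """Pattern-matching stage (sketch): compare the pattern sampled inside
    a detected square against stored marker patterns and return the id of
    the closest one (sum of absolute pixel differences as the distance)."""
    def distance(a, b):
        return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return min(database, key=lambda marker_id: distance(candidate, database[marker_id]))
```

In the full pipeline the matched marker's known physical size and orientation then anchor the coordinate frame used to render the 3D objects.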






The strategies adopted by systems like the ARToolKit are promising and relatively simple to incorporate into final applications. However, the need for physical markers can limit their application in many areas. Hence, interfaces able to represent the real environment, capture movements and sounds, and transform them into actions on virtual objects have been the focus of much recent research seeking a human-computer dialogue closer to natural. That is why the expression "natural user interface" has emerged as a new computer-interaction technology. It focuses on human abilities such as touch, vision, voice and higher cognitive functions, such as expression, perception and recall [29]. The main objective is to give physical meaning to digital information. In so doing, data manipulation with bare hands, gestures, voice commands and pattern recognition is supported.


Recently, Santos et al. [30] presented an application that uses gestures to interact with virtual objects in Augmented Reality, based on the Kinect© (a motion-sensing input device developed by Microsoft). The application is not limited by environment, lighting or skin color, and does not require fiducial markers. The system allows the user to perform operations on menus and interact with virtual objects solely by hand gestures (Figure 5).

**Figure 5.** Natural User Interface with Augmented Reality: (a) User selecting menu options; and (b) User manipulating a virtual object (extracted from [30]).

In recent years, a steady growth in the use of Virtual and Augmented Reality in health care has been observed [31, 32]. There are a number of examples in the literature. To illustrate the technique, let us consider two of them.

Payandeh and Shi [33] presented a mechanics-based interactive multi-modal environment designed to teach basic suturing and knotting techniques for simple skin or soft-tissue wound closure. Two haptic devices are used for force feedback, simulating the experience of suturing real tissue (Figure 6).

**Figure 6.** VR multi-modal experimental setup for simulating surgical sutures (extracted from [33]).

That realistic feeling was provided by a number of computer-based techniques: a mass-spring system to simulate the deformable tissue (skin), mechanics-based techniques to simulate the deformations of a linear object (the suturing material), and collision detection for the interactions between the soft tissue and the needle. Figure 7 shows a pre-wound scene (a) and the result after the virtual suture (b).

**Figure 7.** (a) Virtual pre-wound tissue for suturing; (b) Virtual wound closed (extracted from [33]).
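The mass-spring idea can be illustrated with a single surface node modeled as a damped spring: once a virtual tool releases it, the node settles back to its rest position. A minimal sketch (all constants are illustrative, not taken from [33]):

```python
def simulate_mass_spring(pos, vel, rest, k=40.0, c=2.0, m=0.05, dt=0.001, steps=2000):
    """Semi-implicit Euler integration of one tissue node: Hooke's-law
    spring force pulls the node toward `rest`, a damping term removes
    energy. Returns the node position after `steps` time steps."""
    for _ in range(steps):
        force = -k * (pos - rest) - c * vel  # spring restoring force + damping
        vel += (force / m) * dt              # update velocity first (semi-implicit)
        pos += vel * dt                      # then position
    return pos
```

A full simulator couples thousands of such nodes into a mesh and adds collision handling, but the per-node update is essentially this loop.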

Virtual and Augmented Reality have also been studied as tools for phobia treatment [34]. As an example, consider the system described in [35]. The project, conducted under the supervision of a psychologist, aims to design a system that gradually confronts the patient with the object of his phobia. Clinical studies have shown that some patients cannot handle the treatment, or do not progress in it, if exposed to a real arachnid in the initial sessions. Thus, AR is used to present the patient with a virtual object that resembles a spider (like a cartoon), and this object gradually becomes a very 'realistic' virtual one (built with photorealistic modeling techniques). Figure 8(a) shows a potential user wearing the system apparatus and Figure 8(b) shows the image seen by the user.

**Figure 8.** (a) User wearing HMD for arachnophobia treatment; (b) The AR image, as seen by the user.


Based on the discussions above, we can infer that VR and AR incorporate a number of features with great potential to overcome some of the difficulties associated with the training of prosthetic users.


channels, are required for the successful management of the prosthesis. To minimize the number of channels, the authors proposed a three-bit ternary EMG command strategy. The users were asked to produce EMG bursts (by sudden contraction of a single muscle) and, once proper EMG thresholds had been defined, each burst was classified into one of three levels. Each level was then assigned the digital value "0", "1" or "2" (no signal, low, high), corresponding to one ternary bit. By generating three bits in sequence, the user could produce up to 27 different combinations (commands) from a single muscle. However, since commands starting with "0" (i.e., "0XY") were not valid, the three-bit ternary strategy allowed the generation of 18 effective commands. This means that, from a single muscle, the user could control up to 18 different functions/actions of the prosthesis. That is, however, no easy task to learn. Hence, a special training device, based on a VR simulation of the multifunctional prosthesis, was created to enable the learning of that "EMG command language". Only after the training process was finished was the prosthesis fitted and real manipulative operation started. The authors report that all of the volunteers were able to successfully perform basic commands after about 45 minutes.

In similar fashion, Resnik *et al.* [37] show the use of VR as an aiding tool for training users of advanced upper-limb prostheses. The device, known as the DEKA Arm (DEKA Research & Development Corporation), offers users 10 powered degrees of movement (Figure 10a). A VR environment program (Figure 10b) was created to allow users to practice controlling an avatar, using the controls designed to operate the DEKA Arm in the real world.
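The ternary command scheme described above can be sketched as follows (thresholds and function names are hypothetical; the original work defines its own per-user calibration):

```python
def classify_burst(amplitude, low=0.2, high=0.6):
    """Map one EMG burst to a ternary digit: 0 = no signal, 1 = low, 2 = high.
    Thresholds are illustrative placeholders."""
    if amplitude < low:
        return 0
    return 1 if amplitude < high else 2

def decode_command(trits):
    """Turn three ternary digits into a command id, or None if invalid.
    Commands starting with 0 are rejected, leaving 2 * 3 * 3 = 18 commands."""
    if len(trits) != 3 or trits[0] == 0:
        return None
    value = trits[0] * 9 + trits[1] * 3 + trits[2]  # ternary to integer (9..26)
    return value - 9                                 # command ids 0..17

# Enumerating all 27 combinations confirms 18 of them are valid commands.
valid = [(a, b, c) for a in range(3) for b in range(3) for c in range(3)
         if decode_command([a, b, c]) is not None]
```

Rejecting the leading-zero commands also gives the controller a natural idle state: as long as the first burst is absent, no command can be triggered accidentally.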

**Figure 10.** (a) DEKA Arm displayed on manikin; (b) VR avatar (extracted from [37]).


The authors report that the VR environment allows a gradual acclimatization to the arm, as experience with the arm-control scheme prior to use of the physical arm allows a staged introduction of the new elements of the system. However, the system did not allow interaction with virtual objects; it was not possible, for instance, to manipulate an object with the virtual hand. Nevertheless, the system proved to be an important asset for upper-limb users who must master a large number of controls, and for those who need a structured learning environment due to cognitive deficits.
