Perspective Chapter: Classification of Grasping Gestures for Robotic Hand Prostheses Using Deep Neural Networks

*Ruthber Rodríguez Serrezuela, Enrique Marañón Reyes, Roberto Sagaró Zamora and Alexander Alexeis Suarez Leon*

## **Abstract**

This research compares the classification accuracy obtained with classical classification techniques and with the presented convolutional neural network for the recognition of hand gestures used in robotic prostheses for transradial amputees using surface electromyography (sEMG) signals. The first two classifiers are the most used in the literature: support vector machines (SVM) and artificial neural networks (ANN). A new convolutional neural network (CNN) architecture based on the AtzoriNet network is proposed to assess performance according to amputation-related variables. The results show that convolutional neural networks with a very simple architecture can produce accurate results comparable to those of the classical classification methods. The performance of the CNN is evaluated with different metrics and compared with CNNs proposed by other authors, providing good results relative to those reported in the literature.

**Keywords:** electromyography, convolutional neural networks, support vector machine, artificial neural network, underactuated hand prosthesis

## **1. Introduction**

Upper limb amputations are injuries that substantially limit a person's quality of life by drastically reducing the number of independent activities of daily living (ADL) they can perform. Current myoelectric prostheses are electronically controlled by the user's voluntary muscle contractions. A general scheme of how these and other devices that use biosignals work is presented in **Figure 1**. In this sense, the highest-performing prostheses for amputees follow this common pattern of development. There is a wide variety of very sophisticated myoelectric prostheses commercially available that use sEMG signals [1–5].

A relevant limitation in the development of pattern recognition methods for myoelectric control is that their tests are mainly performed offline. It is now established that high offline precision does not necessarily translate into accurate functional control of a physical prosthesis. In this sense, several recent studies have shown the discrepancy between online and offline performance metrics [7–11]. However, very few studies have published online validation results of pattern recognition, and even fewer in clinical settings, relating the variability of the signal and the performance of the classifiers to amputation-related parameters (disability index, remaining limb length, time since amputation, phantom limb sensation, etc.) [12–15].

**Figure 1.** *Most common configuration of human–machine interaction [6].*

On the other hand, the number of features extracted also depends on the number of EMG sensors and on the feature extraction strategy for each sensor. Alternatively, many investigations have implemented dimensionality reduction, which has been shown to be an effective feature projection method [16]. Among the most used methods are principal component analysis (PCA) [17–19], linear-nonlinear PCA composite analysis, self-organizing feature maps [16] and supervised discretization together with PCA [20, 21].

Convolutional neural networks have been applied to myoelectric control, with particular interest in inter-session and inter-subject performance, in addition to many other applications in biomedical signal processing [22–24]. Some authors have noted the advantages of these deep neural networks and their ability to learn to recognize hand gestures from groups of sEMG signals. Although the results obtained come from a small number of investigations, their possibilities are promising [25–27].

However, most of the research has been carried out on healthy subjects. In recent decades, different authors [28–31] have shown that the variation of the signal over time in amputee patients is even greater than in healthy subjects. The EMG signal is weaker due to the amputation of certain muscle groups, and as time since amputation elapses, the muscles become more atrophied and weak. There are also few databases of amputees, a situation that constitutes a significant obstacle for this research and for gesture recognition at the international level [29, 30]. Additionally, amputees' performance was found to be proportional to residual limb size, indicating that an anthropomorphic model might be beneficial [28–31]. These findings motivated the study of the variance of results between amputee patients and fit populations under disturbances of dynamic factors such as the length of the remaining limb, age and level of atrophy, among others. That is why the results obtained in amputee patients fall far from those reported for healthy subjects.

*Perspective Chapter: Classification of Grasping Gestures for Robotic Hand Prostheses… DOI: http://dx.doi.org/10.5772/intechopen.107344*

## **2. Materials and methods**

#### **2.1 Databases subjects**

Reviewing these databases reveals the characteristics of the populations involved and the signal capture protocols. The literature review showed that few databases collect sEMG data from a significant number of subjects without known prior deficiencies and with heterogeneous data; the most widely used is the NINAPRO database [32–34], which contains electromyography recordings made with the eight-sensor Thalmic Labs MYO system. The data in this repository are free to use and are intended for developing hand gesture movement classifiers [22]. The NINAPRO database, in its DB3 section, establishes the parameters with which the sEMG data of 11 subjects with transradial amputation were recorded [35].

In the DB3 dataset, as explained above, the transradial amputee wears two MYO cuffs side by side. The upper MYO cuff is placed closest to the elbow, with the first electrode at the radio-humeral joint, following the NINAPRO electrode configuration. The lower MYO cuff is placed just below the first one, closer to the amputation region (**Table 1**).

In order to build our own database, the subjects invited to participate in this stage were amputees without neurological deficiencies, following the population parameters used in the NINAPRO database [36, 37]. Ten male and female amputees ranging in age from 24 to 65 years participated in the experiments. The procedures were performed in accordance with the Declaration of Helsinki and were approved by the ethics committee of the Universidad del Tolima (approval number N-20160021). All subjects participated voluntarily, providing their written informed consent before the experimental procedures. Amputees with experience in the use of hand prostheses were included in the study, registering in advance their experience with passive or myoelectric prostheses.

**Table 1.** *Clinical characteristics of subjects with amputation. NinaPro DB3.*

**Aspects and demographic data to be recorded**: For each subject, age, sex and education level are recorded, along with amputation-related data: dominant hand, amputated side, year of amputation, cause, type of prosthesis used (if any) and level of amputation.

**Inclusion criteria**: Adults in an age range of 20–65 years, no history of neurological and/or psychiatric diseases, voluntary participation in the study and acceptance by the medical staff. Only the transradial level of amputation is considered; amputations above the elbow or beyond the wrist were not admitted to the study. Any noncompliance with these parameters constitutes a criterion for exclusion from the study. **Table 2** shows the characteristics of the amputee patients who participated in the trials.

#### **2.2 Sensor EMG MYO armband**

Data were recorded using the commercial MYO armband (MYO). MYO is a portable EMG sensor developed by Thalmic Labs with eight dry electrode channels sampled at 200 Hz. It is a low-cost, consumer-grade device with a nine-axis inertial measurement unit (IMU) [22] that connects wirelessly to the computer via Bluetooth. It is non-invasive and easier to use than conventional electrodes [38, 39]. Despite the low sampling frequency, its performance has been shown to be similar to that of full-band EMG recordings using conventional electrodes [22, 40], and the technology has been used in many studies [29, 35, 38] (**Figure 2**).

**sEMG recording:** Prior to the tests, the patients are instructed on the experimental procedure; as a first step, sensor operation is calibrated for both limbs. To record each gesture, the subjects sit comfortably in front of the computer with both elbows flexed at 90 degrees and are instructed to perform the gestures shown on the monitor; amputee patients perform them with the contralateral limb and with the amputated limb (**Figures 4** and **5**). Six movements were identified with the MYO sensor to achieve grip improvement: power grip (AP), palm inward (PI), palm outward (PO), open hand (MO), pincer grip (AT) and rest (RE) (**Figure 3**).

#### **Table 2.**

*Clinical characteristics of subjects with amputation.*

**Figure 2.**

*Signals acquisition through the application developed in Matlab 2020b.*

The graphic interface will provide the patient with the times for performing the tests and the state of rest (**Figure 5**). Amputee recordings were performed in repeated sessions for 1 week.


#### **Figure 3.**

*Gestures to identify with the MYO device.*

#### **Figure 4.**

*(a) Amputee patient in front of the computer with a graphic signal to perform the movements. (b) MYO device arrangement.*

#### **Figure 5.**

*User interface that indicates the imaginary movement to be performed and includes completion and rest time.*

The procedure used to capture the myoelectric signals is as follows: for each grip or gesture of the hand, 200 samples are taken during an interval of 30 seconds. Transitions between each of the six proposed gestures are separated by 1-minute rests, as recommended in [41]. sEMG signals were captured during several sessions and on different days of the week. These myoelectric signals are stored in a dataset for later offline processing.

#### **2.3 Signal pre-processing**

The segmentation and overlap methods used in this work improved training efficiency by increasing the number of training samples, following recent work such as [17, 20, 42].

#### **2.4 Feature extraction**

Each captured sEMG signal is subdivided into 200 ms windows. The signal captured by the MYO is sampled at 200 Hz [11, 21], so each 200 ms window contains 40 samples. Each window has a 50% overlap with the immediately previous one, which increases the number of samples and thus expands the database obtained. The extraction described here is applied to each of the MYO channels. The data obtained for each channel [19] are concatenated horizontally, yielding a database with 10 features per channel and a column with the information on the grasping gesture performed.
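The windowing scheme above can be sketched in a few lines. The function below is a minimal illustration, not the authors' code, assuming one 1-D channel sampled at 200 Hz:

```python
import numpy as np

def segment(signal, fs=200, win_ms=200, overlap=0.5):
    """Split one sEMG channel into fixed-length windows.

    At fs = 200 Hz, a 200 ms window holds 40 samples; a 50% overlap
    advances the window by 20 samples, roughly doubling the number
    of training examples extracted from a recording.
    """
    win = int(fs * win_ms / 1000)        # 40 samples per window
    step = int(win * (1 - overlap))      # 20-sample hop
    n = (len(signal) - win) // step + 1
    return np.stack([signal[i * step:i * step + win] for i in range(n)])

# 30 s of recording at 200 Hz -> 6000 samples per channel
windows = segment(np.random.randn(6000))
print(windows.shape)   # (299, 40)
```

The same segmentation is applied independently to each of the eight MYO channels before feature extraction.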

Different kinds of features are used by different researchers, such as mean absolute value (MAV), root mean square (RMS), autoregressive coefficients (AR), variance (VAR), standard deviation (SD), zero crossings (ZC), waveform length (WL), Willison amplitude (WA) and slope of the mean absolute value (MAVS). Features in the time domain were treated in [42]. These extracted features are used in the SVM and ANN classifiers, while the raw signals are fed to the CNN classifier.
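As a hedged sketch, four of the listed time-domain features can be computed per window as follows (illustrative only; the definitions follow the common forms in the EMG literature):

```python
import numpy as np

def td_features(w):
    """Compute four common time-domain features for one signal window."""
    mav = np.mean(np.abs(w))                              # mean absolute value (MAV)
    rms = np.sqrt(np.mean(w ** 2))                        # root mean square (RMS)
    zc = np.sum(np.diff(np.signbit(w).astype(int)) != 0)  # zero crossings (ZC)
    wl = np.sum(np.abs(np.diff(w)))                       # waveform length (WL)
    return np.array([mav, rms, zc, wl])

# Feature vectors from all eight MYO channels would then be
# concatenated horizontally into one row per window.
w = np.array([1.0, -0.5, 0.25, -0.75, 0.5])
print(td_features(w))
```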

#### **2.5 Classifiers**

Artificial neural networks (ANN) are nonlinear classifiers that simulate brain information processing through a series of weighted nodes, called neurons. Neurons are organized in layers and interconnected to create a network. ANNs use a nonlinear function of a linear combination of the inputs, where the coefficients of the linear combination are adaptive parameters. The basic model of an ANN can be described as a series of functional transformations. First, M linear combinations of the input variables *x*1, *x*2, … , *xD* are constructed as:

$$a\_{j} = \sum\_{i=1}^{D} w\_{ji}^{(1)} x\_{i} + w\_{j0}^{(1)} \tag{1}$$

where $j = 1, \ldots, M$, and the superscript (1) indicates that the corresponding parameters belong to the first layer of the network. The parameters $w\_{ji}^{(1)}$ are weights and the parameters $w\_{j0}^{(1)}$ are biases. The quantity $a\_{j}$ is called an activation, and each activation is transformed using a nonlinear, differentiable activation function [22].

**Table 3** highlights the hyperparameters chosen for the ANN classifier and the variations proposed for each of them. Among the hyperparameters varied are the weight optimizer, the activation function, the maximum number of iterations and the type of learning.
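A minimal scikit-learn sketch of such an ANN follows, on hypothetical stand-in data; the hyperparameter values illustrate one point of the kind of grid described above, not the exact Table 3 settings:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 32))      # stand-in feature vectors
y = rng.integers(0, 6, size=300)    # one of six gesture labels

# Three hidden layers of eight neurons, ADAM optimizer, ReLU activation.
clf = MLPClassifier(hidden_layer_sizes=(8, 8, 8),
                    activation="relu",
                    solver="adam",
                    learning_rate_init=1e-3,
                    alpha=1e-4,
                    max_iter=200,
                    random_state=0).fit(X, y)
pred = clf.predict(X[:5])
print(pred.shape)   # (5,)
```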

#### **2.6 Support vector machine**

Support vector machines are a very powerful method for solving classification problems and are highly efficient on multidimensional data sets. The method defines a hyperplane, or decision boundary, that separates the samples into two groups: those above the decision boundary are classified as positive and those below it as negative. The fundamental objective is to maximize the margin M, the distance between the separating hyperplane and the neighboring samples called support vectors (**Figure 6**) [43].

For the SVM classifier, the following kernels were used: linear, polynomial, Gaussian (RBF) and sigmoid, as shown in **Table 4**. In the linear and Gaussian kernels, the gamma parameter was set to 0.5. In the polynomial and sigmoid kernels, the degree parameter ranged between 0.5 and 3, respectively. For all kernels, the regularization constant C took values of 0.1, 1, 10 and 100, as recommended in the literature [44, 45].
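The kernel and C grid described above can be explored with a standard grid search; the snippet below is a sketch on synthetic stand-in data, not the study's dataset:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic six-class problem standing in for the extracted sEMG features.
X, y = make_classification(n_samples=240, n_features=32, n_informative=10,
                           n_classes=6, random_state=0)

grid = {
    "kernel": ["linear", "poly", "rbf", "sigmoid"],  # kernels from Table 4
    "C": [0.1, 1, 10, 100],                          # regularization constants
}
search = GridSearchCV(SVC(gamma=0.5), grid, cv=3).fit(X, y)
print(search.best_params_)
```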

#### **2.7 Convolutional neural networks**

A CNN is a deep learning algorithm capable of taking an input matrix of size M × N and assigning weights and biases in parallel under the constraints of a predictive problem [46], resulting in specific features. A convolutional layer performs a dot product between two arrays, where one array is the input and the other, the set of learnable parameters, is known as the kernel, producing an activation map, as shown below:


**Table 3.** *Hyper-parameters. ANN.*

#### **Figure 6.**

*Optimal separation hyperplane, for linearly separable classes.*


**Table 4.** *SVM classifier parameters.*

$$G[m,n] = (f \ast h)[m,n] = \sum\_{j=1}^{M} \sum\_{k=1}^{N} h[j,k] \, f[m-j, n-k] \tag{2}$$

where the input matrix is f and the kernel is denoted as h; m and n index the rows and columns of the output activation map, while j and k are the summation indices over the rows and columns of the kernel (**Figure 7**).
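Equation (2) describes a "valid" 2-D convolution; a direct numpy transcription (a didactic sketch, equivalent to sliding the flipped kernel over the input) is:

```python
import numpy as np

def conv2d(f, h):
    """Valid 2-D convolution of input f with kernel h, as in Eq. (2).

    Convolution flips the kernel; each output entry is the dot product
    of the flipped kernel with the input patch beneath it.
    """
    M, N = h.shape
    rows, cols = f.shape[0] - M + 1, f.shape[1] - N + 1
    h_flip = h[::-1, ::-1]
    G = np.empty((rows, cols))
    for m in range(rows):
        for n in range(cols):
            G[m, n] = np.sum(f[m:m + M, n:n + N] * h_flip)
    return G

f = np.arange(16, dtype=float).reshape(4, 4)
h = np.array([[1.0, 0.0], [0.0, -1.0]])
print(conv2d(f, h))   # each entry equals f[m+1, n+1] - f[m, n] = 5
```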

**Table 5** shows the parameters of the CNN selected for the experiment. The hyperparameters that yielded the best results, the batch size and the number of epochs, are highlighted. Batch sizes of 64, 128 and 256 were tested. The CNN architecture has four hidden convolutional layers and one output layer (**Figure 8**).

The first two hidden layers consist of 32 filters of size 8 × 1 and 3 × 3, respectively. The third consists of 64 filters of size 5 × 5. The fourth layer contains 64 filters of size 5 × 1, while the last one is a convolutional layer with six possible outputs with 1 × 1 filters, corresponding to the six gestures to classify. Each of these layers is followed by rectified linear units (ReLU) and a dropout layer with a probability of 0.15 of setting the output of a hidden unit to zero. A subsampling layer performs max pooling in a 3 × 3 window after the second and third layers. Finally, the last convolutional layer is followed by a softmax activation function (**Table 5**).

**Figure 7.** *Architecture of the convolutional neural network used.*

**Figure 8.** *Proposed model of CNN.*

#### **2.8 Metrics**

The confusion matrix is used to calculate many common classification metrics. The diagonal represents correct predictions and the other positions of the matrix indicate incorrect predictions. If a sample is positive and is classified as positive, it is considered a true positive (TP); if it is classified as negative, it is a false negative (FN). If a sample is negative and is classified as negative, it is a true negative (TN); if it is classified as positive, it is counted as a false positive (FP), a false alarm. The most common metrics are sensitivity (Se) and specificity (Sp), which indicate the ability of the CNN to identify hand gestures. Accuracy (ACC) is used to assess overall detection performance, and precision (Pr) is used to measure model quality in posture classification tasks. Likewise, the F1 score (F1) combines the precision and sensitivity of the classification into a single measure, with a maximum score of 1 (perfect precision and sensitivity) and a minimum of 0; overall, it reflects the accuracy and robustness of the model. In this work, five commonly used evaluation metrics were applied to evaluate the performance of the CNN: accuracy, precision, sensitivity, specificity and F1:

$$\text{Accuracy} \,(\text{ACC}) = \frac{\text{TP} + \text{TN}}{\text{TP} + \text{TN} + \text{FP} + \text{FN}} \tag{3}$$

$$Precision\ \left(\text{Pr}\right) = \frac{TP}{TP + FP} \tag{4}$$

$$\text{Sensitivity} \,(\text{Se}) = \frac{\text{TP}}{\text{TP} + \text{FN}} \tag{5}$$

$$\text{Specificity} \,(\text{Sp}) = \frac{\text{TN}}{\text{TN} + \text{FP}} \tag{6}$$

$$F1 \, score \, (F1) = \frac{2 \, TP}{2 \, TP + FP + FN} \tag{7}$$
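These metrics can be read off a confusion matrix one class at a time (one-vs-rest). Below is a small sketch, assuming rows are true labels and columns are predictions:

```python
import numpy as np

def class_metrics(cm, c):
    """Compute ACC, Pr, Se, Sp and F1 (Eqs. 3-7) for class c of a
    confusion matrix cm (rows: true labels, columns: predictions)."""
    TP = cm[c, c]
    FN = cm[c, :].sum() - TP
    FP = cm[:, c].sum() - TP
    TN = cm.sum() - TP - FN - FP
    acc = (TP + TN) / cm.sum()
    pr = TP / (TP + FP)
    se = TP / (TP + FN)
    sp = TN / (TN + FP)
    f1 = 2 * TP / (2 * TP + FP + FN)
    return acc, pr, se, sp, f1

# Toy two-class confusion matrix: 8 of 10 positives and 9 of 10
# negatives correctly classified.
cm = np.array([[8, 2], [1, 9]])
acc, pr, se, sp, f1 = class_metrics(cm, 0)
print(acc, se, sp)   # 0.85 0.8 0.9
```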


#### **Table 5.** *Hyper parameters tuning.*


## **3. Results**

The analysis in amputee patients is conditioned by the great variability of the sEMG signal. **Figure 9** shows this behavior of the sEMG signals for the power grip gesture in amputated subjects. The data are highly scattered and do not share the same symmetry, mean or standard deviation across patients.

#### **3.1 ANN classifier**

The results obtained with this classifier for the individual patients are shown in **Table 6**. Our own database was used for the test data set with the following configuration: optimizer ADAM, activation function ReLU, L2 regularization with a constant of 0.0001, a constant learning rate of 0.001, and three hidden layers, all with eight neurons.

On both data sets, that is, the test and trial data sets, the ANN classifier showed an increase in the accuracy metric when the number of training epochs was increased, reaching on average 76.66% at 100 epochs, with a minimum of 56.04% and a maximum of 85.77%. The average accuracy was consistent with the results reported by other authors for this classifier [35, 44, 45, 47].

**Figure 9.**

*Box and Whisker Plot of the sEMG for the power grip gesture in amputees.*


#### **Table 6.**

*Accuracy results with the ANN classifier on the test data set at 100 epochs.*

In **Table 6**, superior results can also be seen on subjects P01, P02, P04, P05, P07 and P10, which is outstanding, since their accuracy was above 80% with this classifier with a standard deviation of 9.23. This low standard deviation indicates that most of the accuracies obtained tend to be clustered close to their mean.

#### **3.2 SVM classifier**

As mentioned above, different kernels were used: linear, polynomial, Gaussian and sigmoid. Likewise, the regularization constant C was varied over 0.1, 1, 10 and 100. On both data sets, the SVM classifier with the RBF and sigmoid kernels gave the best results when evaluating the accuracy metric, reaching up to 80%. **Table 7** shows the results obtained with all the kernels.

#### **3.3 CNN Classifier**

**Table 8** shows the comparative results of the CNN classifier evaluated in different patients using regularization techniques such as early stopping, dropout and batch normalization. Early stopping applies rules to decide when it is time to stop training, so that the model neither overfits nor underfits the input data. The average time required to train each convolutional neural network was 1 hour and 25 minutes. The average time needed to test the network was 15.2 s using an NVIDIA Titan-V GPU with 12 GB of HBM2 RAM and 640 tensor cores.
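The early stopping rule mentioned above is typically implemented with a patience counter; a minimal framework-free sketch (illustrative, not the study's training code):

```python
def early_stop_epoch(val_losses, patience=5):
    """Return the epoch at which training stops: the first epoch after
    the validation loss has failed to improve for `patience` epochs."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, wait = loss, 0    # improvement: reset the counter
        else:
            wait += 1
            if wait >= patience:
                return epoch        # stop; weights from the best epoch are kept
    return len(val_losses) - 1      # never triggered: train to the end

losses = [1.0, 0.8, 0.7, 0.71, 0.72, 0.73, 0.74, 0.75]
print(early_stop_epoch(losses, patience=3))   # 5
```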

**Table 9** shows a summary of the different classifiers by means of the accuracy metric in the different patients. In patients P01 and P02, the best classifier is the ANN, although the accuracy shown by the CNN is acceptable. These patients have DASH indices of 45 and 19, respectively. They also present different levels of amputation, one at 10 cm from the elbow and the other at the level of the wrist, and amputation times of 1 and 30 years, correspondingly. These clinical factors affect the performance of the classifiers.

## **4. Discussion**

From the results obtained, the following points are analyzed. **Figure 10** presents the confusion matrices of the classifiers (SVM, ANN and CNN) for patient P09. Both the SVM and the ANN show a low number of hits across the different gestures in this patient, whose amputation is due to congenital factors that further increase the variability of the signal. Even though only one case is analyzed in this work, this type of behavior has been reported by other research works in patients of this type.

Once again, the performance of the SVM and the ANN is significantly affected. The results of the present work show a significant accuracy rate for the classification of various classes of amputated subjects in comparison with other studies carried out (**Table 10**).




**Table 7.**

*Results of the metrics with the SVM classifier in the specific patients.*



#### **Table 8.**

*Comparative results of the CNN classifier of the different patients at different epochs.*

#### *Human-Robot Interaction - Perspectives and Applications*


#### **Table 9.**

*Accuracy metric comparison between all classifiers. The values in bold represent the highest accuracy values recorded by each patient in the classifiers.*

#### **Figure 10.**

*Confusion matrices of the different classifiers for the patient P09 (a) SVM (b) ANN (c) CNN using the metric of the accuracy of the data corresponding to the training (session two).*


#### **Table 10.**

*Studies conducted using CNN as an EMG-based hand prosthesis movement classifier in healthy subjects and amputated subjects.*


## **5. Conclusions**

Over the past decade, deep learning and convolutional neural networks have revolutionized several fields of machine learning, including speech recognition and computer vision. Their use is therefore promising for obtaining better classification indexes for sEMG signals, provided the great variation of these signals with the clinical variables of the amputation is considered, all of which would contribute to closing the gap between the prosthesis market (which requires fast and robust control methods) and the results of recent scientific research in disability support technologies.

The protocol for obtaining sEMG measurements in amputee patients was applied, as well as the extraction and classification of the signal, all of which is consistent with the proposal for the integrated design of the prosthesis. A database of 10 amputee patients covering the six defined hand gestures was constructed. The data are publicly available in the repository of the Huila-Corhuila University Corporation (CORHUILA).

The classification accuracy obtained with CNN using the proposed architecture is 80.46%, but the most significant thing is its ability to obtain a higher performance in the classification between subjects in relation to parameters such as length of the remaining limb, years of amputation or disability index, compared with the results obtained by conventional classifiers such as the support vector machine and artificial neural networks.

## **Author details**

Ruthber Rodríguez Serrezuela1, Enrique Marañón Reyes2, Roberto Sagaró Zamora3\* and Alexander Alexeis Suarez Leon4

1 University Corporation of Huila, Colombia

2 Centro de Estudios de Neurociencias y Procesamiento de Imágenes y Señales, Universidad de Oriente, Santiago de Cuba, Cuba

3 Departamento de Mecánica y Diseño (MyD), Tribology Group, Universidad de Oriente, Santiago de Cuba, Cuba

4 Biomedical Engineering Department, Universidad de Oriente, Santiago de Cuba, Cuba

\*Address all correspondence to: sagaro@uo.edu.cu

© 2023 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## **References**

[1] Fukaya N, Asfour T, Dillmann R, Toyama S. Development of a five-finger dexterous hand without feedback control: The TUAT/Karlsruhe humanoid hand. In: Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on. IEEE. November 2013. pp. 4533-4540

[2] Diftler MA, Mehling JS, Abdallah ME, Radford NA, Bridgwater LB, Sanders AM. et al. Robonaut 2-the first humanoid robot in space. In: Robotics and Automation (ICRA), 2011 IEEE International Conference on 2011, May. IEEE. pp. 2178-2183

[3] Chen Z, Lii NY, Wimböck T, Fan S, Liu H. Experimental evaluation of Cartesian and joint impedance control with adaptive friction compensation for the dexterous robot hand DLR-HIT II. International Journal of Humanoid Robotics. 2011;**8**(04): 649-671

[4] Sun W, Kong J, Wang X, Liu H. Innovative design method of the metamorphic hand. International Journal of Advanced Robotic Systems. 2018; **15**(1):1729881417754154

[5] Available from: http://es.bebionic.com/ [May 1, 2018]

[6] Azorin José M, et al. La Interacción de Personas con Discapacidad con el Computador: Experiencias y Posibilidades en Iberoamérica. Programa Iberoamericano de Ciencia y Tecnología para el Desarrollo (CYTED). 2013. ISBN-10: 84-15413-22-X

[7] Song Y, Mao J, Zhang Z, Huang H, Yuan W, Chen Y. A novel multiobjective shielding optimization method: DNN-PCA-NSGA-II. Annals of Nuclear Energy. 2021;**161**:108461

[8] Al-Fawa'reh M, Al-Fayoumi M, Nashwan S, Fraihat S. Cyber threat intelligence using PCA-DNN model to detect abnormal network behavior. Egyptian Informatics Journal. 2022;**23** (2):173-185

[9] Jiang N, Vujaklija I, Rehbaum H, Graimann B, Farina D. Is accurate mapping of EMG signals on kinematics needed for precise online myoelectric control? IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2014;**22**(3):549-558

[10] Roche AD. Clinical perspectives in upper limb prostheses: An update. Current Surgery Reports. 2019;**7**:5. DOI: 10.1007/s40137-019-0227-z

[11] Hahne JM, Markovic M, Farina D. User adaptation in myoelectric manmachine interfaces. Scientific Reports. 2017;**7**(1):4437

[12] Hargrove LJ, Lock BA, Simon AM. Pattern recognition control outperforms conventional myoelectric control in upper limb patients with targeted muscle reinnervation. In: Proceedings of 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE. 2013. pp. 1599–1602

[13] Wurth SM, Hargrove LJ. A real-time comparison between direct control, sequential pattern recognition control and simultaneous pattern recognition control using a Fitts' law style assessment procedure. Journal of Neuroengineering and Rehabilitation. 2014;**11**(1):91

[14] Kuiken TA, Miller LA, Turner K, Hargrove LJ. A comparison of pattern recognition control and direct control of a multiple degree-of-freedom transradial prosthesis. IEEE Journal of Translational Engineering in Health and Medicine. 2016;**4**:1-8

[15] Chu JU, Moon YJ, Lee SK, Kim SK, Mun MS. A supervised feature-projection-based real-time EMG pattern recognition for multifunction myoelectric hand control. IEEE/ASME Transactions on Mechatronics. 2007;**12**(3):282-290

[16] Chu J-U, Moon I, Mun M-S. A realtime EMG pattern recognition system based on linear-nonlinear feature projection for a multifunction myoelectric hand. IEEE Transactions on Biomedical Engineering. 2006;**53**:2232-2239

[17] Guler NF, Kocer S. Classification of EMG signals using PCA and FFT. Journal of Medical Systems. 2005;**29**(3): 29241-29250

[18] Smith RJ, Tenore F, Huberdeau D, Etienne-Cummings R, Thakor NV. Continuous decoding of finger position from surface EMG signals for the control of powered prostheses. In: Proceedings of 30th Annual International Conference of the IEEE EMBS. Vancouver, British Columbia. August 20–25, 2008

[19] Wang JZ, Wang RC, Li F, Jiang MW, Jin DW. EMG signal classification for myoelectric teleoperating a dexterous robot hand. In: Proceedings of 27th Annual International conference of the IEEE EMBS; Shanghai, China. January 17–18, 2006

[20] Kiatpanichagij K, Afzulpurkar N. Use of supervised discretization with PCA in wavelet packet transformation-based surface electromyogram classification. Biomedical Signal Processing and Control. 2009;**4**(2):127-138

[21] Hargrove L, Guangline L, Englehart K, Hudgins B. Principal Component's analysis preprocessing for improved classification accuracies in pattern-recognition-based myoelectric control. IEEE Transactions on Biomedical Engineering. 2019;**56**(5): 1407-1414

[22] Côté-Allard U, Fall CL, Drouin A, Campeau-Lecours A, Gosselin C, Glette K, et al. Deep learning for electromyographic hand gesture signal classification using transfer learning. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2019; **27**(4):760-771

[23] Amamcherla N, Turlapaty A, Gokaraju B. A machine learning system for classification of emg signals to assist exoskeleton performance. In 2018 IEEE Applied Imagery Pattern Recognition Workshop (AIPR). IEEE. October, 2018. pp. 1-4

[24] Boostani R, Moradi MH. Evaluation of the forearm EMG signal features for the control of a prosthetic hand. Physiological Measurement. 2003;**24**:309-319

[25] Côté-Allard U, et al. Transfer learning for sEMG hand gestures recognition using convolutional neural networks. In: 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC); Banff Center, Banff, Canada. October 5-8, 2017

[26] Li C, et al. PCA and deep learning based myoelectric grasping control of a prosthetic hand. Biomedical Engineering Online. 2018;**17**:107. DOI: 10.1186/s12938-018-0539-8

[27] Wei W, Wong Y, Du Y, Hu Y, Kankanhalli M, Geng W. A multi-stream convolutional neural network for sEMG-based gesture recognition in muscle-computer interface. Pattern Recognition Letters. 2019;**119**:131-138

[28] Franti E, et al. Methods of acquisition and signal processing for myoelectric control of artificial arms. Romanian Journal of Information Science and Technology. 2012;**15**(2):91-105

[29] Cognolato M, Atzori M, Marchesin C, Marangon S, Faccio D, Tiengo C, et al. Multifunction control and evaluation of a 3D printed hand prosthesis with the Myo armband by hand amputees. bioRxiv. 2018:445-460

[30] Díaz-Amador R, Mendoza-Reyes MA, Cárdenas-Barreras JL. Reducing the effects of muscle fatigue on upper limb myoelectric control using adaptive LDA. Ingeniería Electrónica, Automática y Comunicaciones. 2019;**40**(2):10-21

[31] Campbell E, Phinyomark A, Al-Timemy AH, Khushaba RN, Petri G, Scheme E. Differences in EMG feature space between able-bodied and amputee subjects for myoelectric control. In: 2019 9th International IEEE/EMBS Conference on Neural Engineering (NER). IEEE. 2019. pp. 33-36

[32] Yang Z, Jiang D, Sun Y, Tao B, Tong X, Jiang G, et al. Dynamic gesture recognition using surface EMG signals based on multi-stream residual network. Frontiers in Bioengineering and Biotechnology. 2021;**9**

[33] Bao T, Zaidi SAR, Xie S, Yang P, Zhang ZQ. A CNN-LSTM hybrid model for wrist kinematics estimation using surface electromyography. IEEE Transactions on Instrumentation and Measurement. 2020;**70**:1-9

[34] Liu J, Chen W, Li M, Kang X. Continuous recognition of multifunctional finger and wrist movements in amputee subjects based on sEMG and accelerometry. The Open Biomedical Engineering Journal. 2016;**10**:101

[35] Atzori M, Gijsberts A, Castellini C, Caputo B, Hager AGM, Elsig S, et al. Electromyography data for non-invasive naturally-controlled robotic hand prostheses. Scientific Data. 2014;**1**(1):1-13

[36] Bird JJ, Kobylarz J, Faria DR, Ekárt A, Ribeiro EP. Cross-domain MLP and CNN transfer learning for biological signal processing: EEG and EMG. IEEE Access. 2020;**8**:54789-54801

[37] Akhlaghi N, Dhawan A, Khan AA, Mukherjee B, Diao G, Truong C, et al. Sparsity analysis of a sonomyographic muscle–computer interface. IEEE Transactions on Biomedical Engineering. 2019;**67**(3):688-696

[38] Rabin N, Kahlon M, Malayev S, Ratnovsky A. Classification of human hand movements based on EMG signals using nonlinear dimensionality reduction and data fusion techniques. Expert Systems with Applications. 2020;**149**:113281

[39] Tsinganos P, Cornelis B, Cornelis J, Jansen B, Skodras A. Deep Learning in EMG-based Gesture Recognition. In: PhyCS. 2018. pp. 107-114

[40] Ramírez-Martínez D, Alfaro-Ponce M, Pogrebnyak O, Aldape-Pérez M, Argüelles-Cruz AJ. Hand movement classification using Burg reflection coefficients. Sensors. 2019;**19**(3):475

[41] Dirgantara GP, Basari B. Optimized circuit and control for prosthetic arm based on myoelectric pattern recognition via power spectral density analysis. In: AIP Conference Proceedings. AIP Publishing LLC. 2019;**2092**(1):020013

[42] Benatti S, Milosevic B, Farella E, Gruppioni E, Benini L. A prosthetic hand body area controller based on efficient pattern recognition control strategies. Sensors. 2017;**17**(4):869

[43] Ortiz-Catalan M, Rouhani F, Branemark R, Hakansson B. Offline accuracy: A potentially misleading metric in myoelectric pattern recognition for prosthetic control. In: Proceedings of 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE. 2015. pp. 1140-1143

[44] Wang S, Chen B. Split-stack 2D-CNN for hand gestures recognition based on surface EMG decoding. In: 2020 Chinese Automation Congress (CAC). IEEE. November 2020. pp. 7084-7088

[45] Côté-Allard U, Gagnon-Turcotte G, Laviolette F, Gosselin B. A low-cost, wireless, 3-D-printed custom armband for sEMG hand gesture recognition. Sensors. 2019;**19**(12):2811

[46] Hassan HF, Abou-Loukh SJ, Ibraheem IK. Teleoperated robotic arm movement using electromyography signal with wearable Myo armband. Journal of King Saud University-Engineering Sciences. 2020;**32**(6):378-387

[47] Ozdemir MA, Kisa DH, Guren O, Onan A, Akan A. EMG based hand gesture recognition using deep learning. In: 2020 Medical Technologies Congress (TIPTEKNO). IEEE. November 2020. pp. 1-4

[48] Chen L, Fu J, Wu Y, Li H, Zheng B. Hand gesture recognition using compact CNN via surface electromyography signals. Sensors. 2020;**20**(3):672

[49] Tsinganos P, Cornelis B, Cornelis J, Jansen B, Skodras A. Data augmentation of surface electromyography for hand gesture recognition. Sensors. 2020; **20**(17):4892

## **Chapter 4**
