**3.1 Pattern recognition system**

Feature extraction was carried out for each posture using a series of time-domain features, processed in Matlab, for six hand motions or positions. **Figure 9** shows the system behavior through the developed GUI. Below, the eight myoelectric channels are shown. Likewise, at the bottom left, real-time motions or gesture types can be observed.

**Figure 10** displays the results of feature extraction, in this case for the power grasp gesture. The least useful features are deleted to decrease the

**Figure 9.** *Feature extraction for the open hand gesture through the GUI.*

**Figure 10.** *Feature extraction for the power grasp gesture.*

*Control Strategy for Underactuated Multi-Fingered Robot Hand Movement Using… DOI: http://dx.doi.org/10.5772/intechopen.93767*

computational time, especially in real time. It can be observed that the characteristic with the lowest accuracy (37.18%) was the Wilson amplitude for the identification of pronation, and the best identification (96.45%) was achieved by the mean absolute value (MAV) in a 50 ms window.
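The time-domain features named in this section can be sketched directly. A minimal Python version follows (the chapter's processing was done in Matlab); the Wilson-amplitude threshold is an assumed value:

```python
import numpy as np

def extract_features(window, wamp_threshold=0.01):
    """Time-domain EMG features over one analysis window."""
    diff = np.diff(window)
    return {
        "MAV": float(np.mean(np.abs(window))),               # mean absolute value
        "WL": float(np.sum(np.abs(diff))),                   # waveform length
        "ZC": int(np.sum(window[:-1] * window[1:] < 0)),     # zero crossings
        "WAMP": int(np.sum(np.abs(diff) > wamp_threshold)),  # Wilson amplitude
        "VAR": float(np.var(window, ddof=1)),                # variance
    }

# At the Myo's 200 Hz sampling rate, a 50 ms window holds 10 samples per
# channel and a 250 ms window holds 50.
fs, n = 200, 10
window = np.sin(2 * np.pi * 5 * np.arange(n) / fs)  # synthetic 10-sample window
features = extract_features(window)
```

Each of the eight Myo channels yields one such feature vector per window; concatenating them gives the classifier input.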

**Figure 11** and **Table 1** show the feature effectiveness per hand gesture. From **Figure 11** and **Table 1**, it is concluded that the maximum percentage in this 50 ms window, 96.55%, was obtained with the Wilson amplitude characteristic in the grasp gesture. Similarly, the lowest percentage in this window, 37.18%, was also obtained with the same characteristic but in the pronation gesture. Likewise, the MAV in this 50 ms window presents an average of 82.31%, the highest among all characteristics.

**Figure 12** and **Table 2** show the feature effectiveness for 100 ms windows. It is concluded that the maximum percentage in this 100 ms window, 99.25%, was obtained with the MAV characteristic in the repose gesture. Similarly, the lowest percentage in this window, 42.68%, was obtained with the VAR characteristic in the extension gesture. Likewise, the MAV in this 100 ms window presents an average of 85.36%, the highest among all characteristics.

**Figure 13** and **Table 3** show the feature effectiveness for 250 ms windows. It is concluded that the maximum percentage in this window, 99.53%, was obtained with the MAV characteristic in the repose gesture. Similarly, the lowest percentage in this window, 37.18%, was obtained with the Wilson amplitude in the grasp gesture. Likewise, the MAV in this window presents an average of 86.06%, the highest among all characteristics.

**Figure 11.** *Feature extraction of myoelectric signals for the six gestures at a 50 ms window.*


#### **Table 1.**

*Feature records for selected gestures at a 50 ms window.*

**Figure 12.**

*Feature extraction of myoelectric signals for the six gestures at a 100 ms window.*


#### **Table 2.**

*Feature records for selected gestures at a 100 ms window.*

**Figure 13.** *Feature extraction of myoelectric signals for the six gestures at a 250 ms window.*

From the previous data, and from the probability matrices obtained for each of the characteristics, it can be concluded that the time-domain feature with the highest accuracy was the mean absolute value (MAV), which guaranteed the highest probabilities of success compared to the other characteristics. Similarly, as in [18, 28], it is conclusive that with just one measure, such as the MAV, the different movements can be fully identified.

Likewise, such features show a greater percentage of accuracy as the window length increases in the samples analyzed by the different time-domain features. These results match those of other works, such as [15–17].



**Table 3.**

*Feature records for selected gestures at a 250 ms window.*

#### **3.2 Classification**

Regarding the neural network, created by using the Levenberg-Marquardt algorithm, a root mean squared error (RMSE) of 2.9437 × 10<sup>−9</sup> was obtained, on the order of the expected value (10<sup>−9</sup>), in only 16 training epochs (**Figure 14**).

The training correlation coefficient was 94.94% (**Figure 15**). The error in the training set (and therefore the estimate of the real error) depends mostly on the particular samples chosen for training and for testing (which are complementary, since they are mutually exclusive).
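As a rough sketch of this stage: the chapter trains the network in Matlab with Levenberg-Marquardt, while the toy version below substitutes plain gradient descent on a one-hidden-layer feedforward network with synthetic data, just to show the 8-features-in, 6-gestures-out shape. All sizes and data here are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 8 channel features in, 6 gesture classes out.
X = rng.normal(size=(120, 8))
y = rng.integers(0, 6, size=120)
T = np.eye(6)[y]                                 # one-hot targets

W1 = 0.5 * rng.standard_normal((8, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 6)); b2 = np.zeros(6)

losses, lr = [], 0.1
for _ in range(500):
    H = np.tanh(X @ W1 + b1)                     # hidden layer
    Z = H @ W2 + b2
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)            # softmax output
    losses.append(float(-np.mean(np.sum(T * np.log(P + 1e-12), axis=1))))
    dZ = (P - T) / len(X)                        # backpropagate cross-entropy
    dH = (dZ @ W2.T) * (1 - H**2)
    W2 -= lr * (H.T @ dZ); b2 -= lr * dZ.sum(axis=0)
    W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(axis=0)
```

The training loss decreases monotonically from near ln 6 (random guessing among six gestures); Levenberg-Marquardt reaches a comparable optimum in far fewer epochs, which is why the chapter reports convergence in only 16.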

#### **3.3 CAD design**

The designed prototype is an anthropomorphic hand able to perform three types of hand grasping (tip, spherical, and cylindrical).

The direct and inverse kinematics of the prototype were developed based on the anthropomorphism of the robotic hand. **Figure 16** shows the index finger prosthesis prototype model used to calculate the Denavit-Hartenberg (DH) parameters. The dimensions of the phalanges are as follows: *L*<sub>1</sub> = 2 in, *L*<sub>2</sub> = 1.37 in, and *L*<sub>3</sub> = 1 in.

**Figure 14.** *Training algorithm used in ANN validation.*

**Figure 15.** *Correlation between the experimental values and the ANN predicted values.*

**Figure 16.**

*Index finger prosthesis prototype model of a robotic hand.*

The Denavit-Hartenberg (DH) parameters of the finger are shown in **Table 4**:


#### **Table 4.**

*Denavit Hartenberg (DH) parameters of the finger.*

Direct kinematics gives the position and orientation of the distal phalanx, which is:

$$P_x = L_3(C_1C_{23} - S_1S_{23}) + L_2C_{12} + L_1C_1 \tag{10}$$

$$P_y = L_3(S_1C_{23} + C_1S_{23}) + L_2S_{12} + L_1S_1 \tag{11}$$
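A quick numeric check of eqs. (10) and (11), reading the C/S shorthand of eq. (12) as cosines and sines of summed joint angles, and using the phalanx lengths given above:

```python
import math

L1, L2, L3 = 2.0, 1.37, 1.0    # phalanx lengths [inch]

def fingertip(q1, q2, q3):
    """Distal-phalanx position from eqs. (10)-(11) with summed joint angles."""
    px = L3 * math.cos(q1 + q2 + q3) + L2 * math.cos(q1 + q2) + L1 * math.cos(q1)
    py = L3 * math.sin(q1 + q2 + q3) + L2 * math.sin(q1 + q2) + L1 * math.sin(q1)
    return px, py

px, py = fingertip(0.0, 0.0, 0.0)   # fully extended finger
```

For the fully extended finger this returns the total length L1 + L2 + L3 = 4.37 in along x.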


where

$$S_i = \sin(q_i), \quad C_i = \cos(q_i), \quad S_{ij} = \sin(q_i + q_j), \quad C_{ij} = \cos(q_i + q_j) \tag{12}$$

The inverse kinematics of the finger is

$$q_1 = \tan^{-1}\left(\frac{P_y}{P_x}\right) - \tan^{-1}\left(\frac{L_2\sin(q_2) + L_3\sin(q_2 + q_3)}{L_1 + L_2\cos(q_2) + L_3\cos(q_2 + q_3)}\right) \tag{13}$$

$$q_2 = \cos^{-1}\frac{-2L_1L_2 + 2L_2L_3 \pm \sqrt{\left(2L_1L_2 + 2L_2L_3\right)^2 - 16L_1L_3\left(L_1^2 + L_2^2 + L_3^2 - P_x^2 - P_y^2 - 2L_1L_3\right)}}{8L_1L_2} \tag{14}$$

and

$$q_3 = kq_2, \quad k \approx \frac{7}{11} \tag{15}$$
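Because of the coupling q3 = kq2, the fingertip distance depends on q2 alone, so the inverse kinematics can also be solved numerically: bisect q2 until the reach matches the target distance, then recover q1 as in eq. (13). A sketch, reading eqs. (10)-(11) with summed joint angles:

```python
import math

K = 7.0 / 11.0                 # underactuation coupling q3 = K*q2, eq. (15)
L1, L2, L3 = 2.0, 1.37, 1.0    # phalanx lengths [inch]

def forward(q1, q2, q3):
    """Fingertip position per eqs. (10)-(11)."""
    px = L3 * math.cos(q1 + q2 + q3) + L2 * math.cos(q1 + q2) + L1 * math.cos(q1)
    py = L3 * math.sin(q1 + q2 + q3) + L2 * math.sin(q1 + q2) + L1 * math.sin(q1)
    return px, py

def reach(q2):
    """MCP-to-fingertip distance; depends only on q2 via the coupling."""
    q3 = K * q2
    return math.sqrt(L1**2 + L2**2 + L3**2
                     + 2 * L1 * L2 * math.cos(q2)
                     + 2 * L2 * L3 * math.cos(q3)
                     + 2 * L1 * L3 * math.cos(q2 + q3))

def inverse(px, py):
    """Bisect q2 (reach is decreasing on [0, pi/2]), then recover q1, cf. eq. (13)."""
    r = math.hypot(px, py)
    lo, hi = 0.0, math.pi / 2
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if reach(mid) > r else (lo, mid)
    q2 = 0.5 * (lo + hi)
    q3 = K * q2
    q1 = math.atan2(py, px) - math.atan2(
        L2 * math.sin(q2) + L3 * math.sin(q2 + q3),
        L1 + L2 * math.cos(q2) + L3 * math.cos(q2 + q3))
    return q1, q2, q3

# Round trip: a pose inside the workspace should be recovered exactly.
q1, q2, q3 = inverse(*forward(0.3, 0.5, K * 0.5))
```

The round trip recovering (q1, q2) = (0.3, 0.5) confirms that the closed-form q1 expression and the coupled reach equation are mutually consistent.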

The dynamic model of an n-joint robot, in Lagrange's formulation, can be written as:

$$\mathbf{M}(q)\ddot{q} + \mathbf{C}(q, \dot{q})\dot{q} + \mathbf{G}(q) = \boldsymbol{\tau} \tag{16}$$

where *q* is the vector of joint variables, *τ* is the vector of generalized forces acting on the manipulator, *M*(*q*) is the inertia matrix, *C*(*q*, *q̇*) is the matrix of centripetal and Coriolis forces, and *G*(*q*) is the gravity vector [28]:

$$L(q, \dot{q}) = K(q, \dot{q}) - U(q) \tag{17}$$

$$\frac{d}{dt}\left(\frac{\partial L(q,\dot{q})}{\partial \dot{q}_i}\right) - \frac{\partial L(q,\dot{q})}{\partial q_i} = \tau_i, \quad i = 1, \dots, 3 \tag{18}$$

The kinetic energy of the finger can be expressed as

$$K\_1 = \frac{1}{2} m\_1 c\_1^2 \dot{q}\_1^2 + \frac{1}{2} I\_1 \dot{q}\_1^2 \tag{19}$$

$$K_2 = \frac{1}{2}m_2\left(c_2^2\dot{q}_2^2 - 2c_2^2\dot{q}_1\dot{q}_2 - 2c_2l_1\cos(q_2)\,\dot{q}_1\dot{q}_2 + c_2^2\dot{q}_1^2 + 2c_2l_1\cos(q_2)\,\dot{q}_1^2 + l_1^2\dot{q}_1^2\right) + \frac{1}{2}I_2\left(\dot{q}_1 - \dot{q}_2\right)^2 \tag{20}$$

$$\begin{aligned} K_3 &= \frac{1}{2}m_3\Big[l_1^2\dot{q}_1^2 + l_2^2(\dot{q}_1 - \dot{q}_2)^2 + c_3^2(\dot{q}_1 - \dot{q}_2 + \dot{q}_3)^2 + 2l_1l_2\cos(q_2)\,\dot{q}_1(\dot{q}_1 - \dot{q}_2) \\ &\quad + 2l_1c_3\cos(q_2 - q_3)\,\dot{q}_1(\dot{q}_1 - \dot{q}_2 + \dot{q}_3) + 2l_2c_3\cos(q_3)\,(\dot{q}_1 - \dot{q}_2)(\dot{q}_1 - \dot{q}_2 + \dot{q}_3)\Big] \\ &\quad + \frac{1}{2}I_3\left(\dot{q}_1 - \dot{q}_2 + \dot{q}_3\right)^2 \end{aligned} \tag{21}$$

The potential energy in the analyzed finger can be expressed as

$$U_1 = -m_1gc_1\cos(q_1) \tag{22}$$

$$U_2 = -m_2gl_1\cos(q_1) - m_2gc_2\cos(q_1 - q_2) \tag{23}$$

$$U_3 = -m_3gl_1\cos(q_1) - m_3gl_2\cos(q_1 - q_2) + m_3gc_3\cos(q_1 - q_2 + q_3) \tag{24}$$

Solving Lagrange's equations, the dynamic model of the system is

$$\begin{aligned}
\tau_1 &= m_3l_1c_3\cos(q_2 - q_3)\,(\ddot{q}_1 - \ddot{q}_2 + \ddot{q}_3) - m_3l_1c_3\sin(q_2 - q_3)(\dot{q}_1 - \dot{q}_2 + \dot{q}_3)(\dot{q}_2 - \dot{q}_3) \\
&\quad + m_3l_1c_3\cos(q_2 - q_3)\,\ddot{q}_1 - m_3l_1c_3\sin(q_2 - q_3)(\dot{q}_2 - \dot{q}_3)\,\dot{q}_1 \\
&\quad - \left[2m_3l_2c_3\sin(q_3)\,\dot{q}_3 + 2l_1(m_2c_2 + m_3l_2)\sin(q_2)\,\dot{q}_2\right]\dot{q}_1 \\
&\quad + \left[2m_3l_2c_3\cos(q_3) + 2l_1(m_2c_2 + m_3l_2)\cos(q_2) + m_3\left(l_1^2 + l_2^2 + c_3^2\right) + m_1c_1^2 + m_2l_1^2 + m_2c_2^2 + I_1 + I_2 + I_3\right]\ddot{q}_1 \\
&\quad + \left[2m_3l_2c_3\sin(q_3)\,\dot{q}_3 + l_1(m_2c_2 + m_3l_2)\sin(q_2)\,\dot{q}_2\right]\dot{q}_2 + \left[m_3l_2c_3\cos(q_3) + m_3c_3^2 + I_3\right]\ddot{q}_3 \\
&\quad - m_3l_2c_3\sin(q_3)\,\dot{q}_3^2 - \left[2m_3l_2c_3\cos(q_3) + l_1(m_2c_2 + m_3l_2)\cos(q_2) + m_3\left(l_2^2 + c_3^2\right) + m_2c_2^2 + I_2 + I_3\right]\ddot{q}_2 \\
&\quad + (m_1c_1 + m_2l_1 + m_3l_1)\,g\sin(q_1) + (m_2c_2 + m_3l_2)\,g\sin(q_1 - q_2) + m_3c_3\,g\sin(q_1 - q_2 + q_3)
\end{aligned} \tag{25}$$

$$\begin{aligned}
\tau_2 &= -m_3l_1c_3\cos(q_2 - q_3)\,\ddot{q}_1 + m_3l_1c_3\sin(q_2 - q_3)(\dot{q}_2 - \dot{q}_3)\,\dot{q}_1 \\
&\quad + \left[2m_3l_2c_3\sin(q_3)\,\dot{q}_3 + l_1(m_2c_2 + m_3l_2)\sin(q_2)\,\dot{q}_2\right]\dot{q}_1 \\
&\quad - \left[2m_3l_2c_3\cos(q_3) + l_1(m_2c_2 + m_3l_2)\cos(q_2) + m_3\left(l_2^2 + c_3^2\right) + m_2c_2^2 + I_2 + I_3\right]\ddot{q}_1 \\
&\quad - 2m_3l_2c_3\sin(q_3)\,\dot{q}_2\dot{q}_3 + \left[2m_3l_2c_3\cos(q_3) + m_3\left(l_2^2 + c_3^2\right) + m_2c_2^2 + I_2 + I_3\right]\ddot{q}_2 \\
&\quad - \left[m_3l_2c_3\cos(q_3) + m_3c_3^2 + I_3\right]\ddot{q}_3 + m_3l_2c_3\sin(q_3)\,\dot{q}_3^2 \\
&\quad + l_1(m_2c_2 + m_3l_2)\sin(q_2)\left(\dot{q}_1^2 - \dot{q}_1\dot{q}_2\right) + m_3l_1c_3\sin(q_2 - q_3)(\dot{q}_1 - \dot{q}_2 + \dot{q}_3)\,\dot{q}_1 \\
&\quad - (m_2c_2 + m_3l_2)\,g\sin(q_1 - q_2) - m_3c_3\,g\sin(q_1 - q_2 + q_3)
\end{aligned} \tag{26}$$

$$\begin{aligned} \tau_3 &= m_3l_1c_3\cos(q_2 - q_3)\,\ddot{q}_1 - m_3l_1c_3\sin(q_2 - q_3)(\dot{q}_2 - \dot{q}_3)\,\dot{q}_1 + \left[m_3c_3^2 + I_3\right]\ddot{q}_3 \\ &\quad - m_3l_2c_3\sin(q_3)(\dot{q}_1 - \dot{q}_2)\,\dot{q}_3 + \left[m_3l_2c_3\cos(q_3) + m_3c_3^2 + I_3\right](\ddot{q}_1 - \ddot{q}_2) \\ &\quad - m_3c_3\left\{-g\sin(q_1 - q_2 + q_3) + \left[l_1\sin(q_2 - q_3)\,\dot{q}_1 - l_2\sin(q_3)(\dot{q}_1 - \dot{q}_2)\right](\dot{q}_1 - \dot{q}_2 + \dot{q}_3)\right\} \end{aligned} \tag{27}$$

To track position trajectories, a PD control with gravity compensation is used (**Figure 17**) [22], where

$$\tau = K_p\tilde{q} + K_v\dot{\tilde{q}} + G(q) \tag{28}$$

with $\tilde{q} = q_d - q$ the joint position error.
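The gravity-compensation term G(q) in eq. (28) comes from the potential energies (22)–(24). A minimal numeric sketch, assuming hypothetical phalanx masses (the chapter does not list them) and centers of mass at mid-phalanx, differentiates U(q) by finite differences:

```python
import math

g = 9.81
m = (0.05, 0.04, 0.03)         # hypothetical phalanx masses [kg]
l = (0.0508, 0.0348, 0.0254)   # phalanx lengths [m] (2, 1.37, 1 inch)
c = tuple(li / 2 for li in l)  # centers of mass, assumed at mid-phalanx

def U(q):
    """Total potential energy, eqs. (22)-(24)."""
    q1, q2, q3 = q
    u1 = -m[0] * g * c[0] * math.cos(q1)
    u2 = -m[1] * g * l[0] * math.cos(q1) - m[1] * g * c[1] * math.cos(q1 - q2)
    u3 = (-m[2] * g * l[0] * math.cos(q1) - m[2] * g * l[1] * math.cos(q1 - q2)
          + m[2] * g * c[2] * math.cos(q1 - q2 + q3))
    return u1 + u2 + u3

def G(q, h=1e-6):
    """Gravity vector G_i = dU/dq_i by central finite differences."""
    out = []
    for i in range(3):
        qp, qm = list(q), list(q)
        qp[i] += h
        qm[i] -= h
        out.append((U(qp) - U(qm)) / (2 * h))
    return out

gvec = G((0.3, 0.2, 0.1))      # gravity torques at a slightly flexed pose
```

At the fully extended horizontal reference (q = 0) all gravity torques vanish, as expected from the cosine terms.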


**Figure 17.** *Proportional derivative (PD) control with gravity compensation.*

**Figure 18.** *PD control simulation in SimMechanics.*

**Figure 18** shows how the SimMechanics tool is used to generate the trajectories of each finger. Once the trajectories of each finger have been generated, the workspace of each of them is obtained.
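A simple stand-in for this trajectory-generation step, assuming cubic joint profiles with zero boundary velocities (the chapter does not specify the profile used in SimMechanics):

```python
def cubic_trajectory(q0, qf, T):
    """Cubic joint trajectory q(t) with zero start and end velocity."""
    a2 = 3 * (qf - q0) / T**2
    a3 = -2 * (qf - q0) / T**3
    def q(t):
        return q0 + a2 * t**2 + a3 * t**3
    return q

q = cubic_trajectory(0.0, 1.1, 0.05)          # flex a joint to 1.1 rad in 0.05 s
samples = [q(i * 0.005) for i in range(11)]   # sampled setpoints for the controller
```

The sampled setpoints rise monotonically from 0 to 1.1 rad, giving the PD controller a smooth reference instead of a step.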

## **3.4 Arduino connections to generate the interface with LabVIEW**

**Figure 19** shows the Arduino-computer and Arduino-servomotor connections as the fundamental links to achieve intercommunication and produce grasp

**Figure 19.** *Programming Arduino Uno, LabView, and DC motors.*

movements such as cylindrical, spherical, and tip grasps. The figure also shows the simulation in the Proteus package.

**Figure 20** shows the workspace of the index finger of the prototype robotic prosthesis, generated from SimMechanics. The implementation of the robotic hand prosthesis required intensive learning of the different hand postures. A LabVIEW application was developed to perform the corresponding grasps.

**Figure 21** shows the flowchart of the application developed in LabVIEW. Several tests with the anthropomorphic underactuated robotic hand were carried out to confirm that the modeled task was correctly performed by the finger joints for each specific hand gesture.

**Figure 22** shows the trajectories that were generated with the developed software and implemented in LabVIEW to reproduce the movement of the underactuated anthropomorphic robotic prosthesis.

**Figure 20.** *Index finger workspace of the robotic prosthesis prototype.*


#### **Figure 21.**

*Flowchart of the developed application in LabView.*

**Figure 22.** *Anthropomorphic underactuated finger trajectories.*

**Figure 23.** *Cartesian error in the PD control.*

**Figure 24.** *Position of the fingers in different gestures. (a) Open hand, (b) precision grip, and (c) closed hand.*

Two PD controller tuning methods were implemented: the parameters were first tuned manually by trial and error, and then an automatic adjustment was applied, minimizing the integral of the tracking error by least squares. The PD controller makes fingertip tracking possible, allowing flexion and extension motions of the finger, so that the finger reaches the target point without significant overshoot, with a Cartesian error on the y axis of 1.85 mm, in less than 0.05 s. As shown in **Figure 23**, the values obtained are *k* = 1, *ζ* = 0.01, *C* = 0.12, γ = 0.0015, and the gains found were *Kp* = 0.004 and *Kd* = 0.035, respectively.
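To see why this control law settles without overshoot, a single-joint numeric sketch of eq. (28) can help. The inertia, gravity coefficient, and gains below are illustrative assumptions, not the chapter's identified values:

```python
import math

# Single-joint sketch of tau = Kp*(qd - q) - Kv*qdot + G(q); with a constant
# setpoint qd, the error-rate term reduces to -Kv*qdot.
I, mgc = 1e-4, 5e-3            # joint inertia [kg m^2] and m*g*c [N m], assumed
Kp, Kv = 0.5, 0.02             # assumed PD gains (the chapter reports Kp = 0.004, Kd = 0.035)
qd = 1.0                       # desired joint angle [rad]

q, qdot, dt = 0.0, 0.0, 1e-4
for _ in range(20000):         # 2 s of simulated time, semi-implicit Euler
    tau = Kp * (qd - q) - Kv * qdot + mgc * math.sin(q)  # gravity cancelled exactly
    qdot += dt * (tau - mgc * math.sin(q)) / I           # plant: I*qddot = tau - G(q)
    q += dt * qdot
```

With the gravity term cancelled, the closed loop reduces to I q̈ + Kv q̇ + Kp(q − qd) = 0; the assumed gains make it overdamped, so the joint settles at qd without overshoot.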

Finally, **Figure 24** shows the position of the fingers for the different gestures, according to the designed control system.

### **4. Conclusions**

The Myo armband is a wireless portable device developed by Thalmic Labs, capable of recording EMG through eight stainless-steel surface electrodes at a sampling rate of 200 Hz. In addition, the Myo armband has a nine-axis inertial measurement unit (IMU), haptic feedback, and Bluetooth communication. These characteristics, combined with a compact design that easily adjusts to the forearm while maintaining the distance between electrodes, lead to an acquisition system that is easier to use for prosthetic systems.

Mean absolute value (MAV), waveform length (WL), zero crossing (ZC), Wilson amplitude (WA), and variance were used to accomplish feature extraction with the Myo armband device over different time windows. Feature extraction through the MAV allows the comparison between the values determined experimentally and those predicted by the ANN, with a high level of effectiveness (94.94%).


A multilayer network topology (feedforward with back-propagation) was used to design the artificial neural network. A mean square error of 1.2041 × 10<sup>−19</sup> was obtained using the Levenberg-Marquardt training algorithm; this value was lower than the desired one (10<sup>−3</sup>) in only 16 training epochs.

A prototype hand prosthesis was developed based on the anthropometry, kinematics, and dynamics of the hand joints, defining the main dimensions of the joints as well as the finger trajectories needed to guarantee the different hand gestures. The direct and inverse kinematic models of the prototype robotic hand prosthesis, based on the DH parameters, were obtained. The simulations of the workspace and the PD control of the hand were performed, finding a Cartesian error on the y axis of 1.85 mm in a time of less than 0.05 s. This allows the tracking of targets attainable by the fingertips, developing flexion and extension movements of the finger so that the fingertip reaches the point without significant overshoot.

A control system was developed based on the Arduino microcontroller architecture. The designed system allows generating the joint trajectories of the hand prosthesis prototype.
