*Service Robotics*

Amputation, spinal cord injury, brachial plexus injury, and traumatic brain injury cause a loss of connection between the brain and the extremity, and the residual or weakened extremity can no longer function like a healthy, intact limb. The lost structure and function of the extremity are traditionally replaced by fitting prosthetic and orthotic devices or rehabilitation aids. A conventional prosthesis is a purely mechanical device that provides only basic function; similarly, an orthosis supports the weakened part without completely mimicking the lost segment. Biomechatronics, a sub-discipline of mechatronics concerned with developing mechatronic systems that assist or restore function to the human body, has given prosthetics and orthotics a new direction. A biomechatronic system has four units: biosensors, mechanical sensors, a controller, and an actuator [6]. Biosensors detect the user's intentions from biological signals of the nervous or muscular system. The controller acts as a translator between the biological and electronic structures and also monitors the activities of the biomechatronic device. Mechanical sensors measure data about the device itself and relay it to the biosensor or controller. The actuator is an artificial muscle (robotic mechanism) that produces force or movement to aid or replace native human body function. Biomechatronics is applied in orthotics, prostheses, exoskeletons, rehabilitation robots, and neuroprostheses. Robots are intelligent devices that readily meet the requirements of rehabilitation: they perform cyclic movements, offer better control over the applied forces, accurately reproduce the required forces in repetitive exercises, and remain precise across different situations [7].

**2. History of artificial intelligence (AI) in prosthetics and orthotics**

The first intelligent prosthesis was developed by Chas. A. Blatchford & Sons, Ltd. in 1993 [8], and the improved version released in 1995 was named the Intelligent Prosthesis Plus [9]. In 1998 Blatchford developed the Adaptive prosthesis, combining hydraulic, pneumatic, and microprocessor actuation mechanisms. The first fully microprocessor-controlled knee, known as the C-Leg, was developed by Ottobock in 1997 [10]. The Rheo Knee and Power Knee, developed by Össur in 2005 and 2006 respectively, use onboard AI mechanisms [11]. In late 2011 Össur introduced the world's first bionic leg with a robotic mechanism, known as the Symbionic Leg; in the same period Ottobock launched the Genium X3, which allows backward walking and provides intuitive, natural motion during the gait cycle [12]. In 2015 the Blatchford group introduced the Linx, the world's first fully integrated limb, with seven sensors and four CPUs throughout the body of the leg. It coordinates and synchronizes the knee and ankle joints by sensing and analyzing data on the user's movement, activities, environment, and terrain, making standing up or walking on a ramp more natural. The iWalk BiOM, the world's first bionic foot-and-calf system, commercially available from 2011 and developed by Dr. Hugh Herr, uses a robotic mechanism with a proprietary algorithm to replicate the function of muscle and tendon [13, 14]. Commercially available microprocessor-controlled feet include the Meridium (Ottobock, Germany), Elan (Blatchford, UK), Proprio (Össur, Iceland), Triton Smart Ankle (hereinafter referred to as TSA; Ottobock, Germany), and Raize (Fillauer, USA), available in the market from 2011 [15]. The first commercially available bionic hand, launched by Touch Bionics in 2007, has individually powered digits and a thumb with a choice of grips. The design was later extended with a rotating thumb in the i-limb ultra and i-limb revolution, which work with the Biosim and My i-limb apps [16].
Bebionic, manufactured by RSL Steeper, became commercially available in 2010 after its launch at the World Congress, and in 2017 it was acquired by Ottobock. The bebionic3 allows 14 different grips with two thumb positions [17]. The Michelangelo hand, developed by Ottobock and first fitted in 2010, is a fully articulated robotic hand with an electronically actuated thumb [18]. The concept of brain

**3. Basic concepts of AI and machine learning (ML)**

#### **3.1 Machine learning**

Machine learning combines elements of mathematics, statistics, and computer science, and is helping to drive advances in artificial intelligence. It is the study of computer algorithms that improve through experience. It is a subset of AI, as shown in **Figure 1**. ML methods are generally categorized into two types: supervised and unsupervised learning [20, 21].

#### *3.1.1 Supervised learning*

Supervised learning trains a model on a set of inputs with known outputs, learning a function that maps input to output; the trained model can then discover the pattern in new sets of data.

Example 1: A model for a microprocessor knee joint is trained on numerous labeled inputs of knee-angle variation in the different sub-phases of the gait cycle, and then applied to a new amputee to predict new data through a phase-dependent pattern-recognition approach.

Example 2: Intuitive myoelectric prostheses or pattern-recognition-controlled prostheses, and FES (functional electrical stimulation).

Pattern recognition is the automatic recognition of patterns, applied in data analysis, signal processing, etc.; when the algorithm is trained from labeled data, it is supervised learning. Once the model is successfully trained, it can be used to predict new data. The ultimate goal of this kind of ML is to develop a successful predictor function. Models whose dependent variable is discrete or categorical are known as classification algorithms, and those with continuous values as regression algorithms. Three basic steps are followed to finalize a model: training, validation, and application of the algorithm to new data. Algorithms used for supervised learning include support vector machines, linear regression, and linear discriminant analysis (LDA). This is error-based learning.
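The three steps above (train, validate, apply) can be sketched with a toy nearest-centroid classifier; this is an illustration only, and the single gait feature per sample and all numeric values are invented, not taken from real data.

```python
# Toy illustration of the supervised-learning workflow: train, validate,
# then apply the model to new, unlabeled data. The feature values and
# class labels are invented for illustration.

def train(samples):
    """Compute one centroid (mean feature value) per class label."""
    sums, counts = {}, {}
    for x, label in samples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, x):
    """Classify x by the nearest class centroid."""
    return min(centroids, key=lambda label: abs(x - centroids[label]))

# 1. Training: labeled data (feature, class).
training_data = [(0.9, "stance"), (1.1, "stance"), (4.8, "swing"), (5.2, "swing")]
model = train(training_data)

# 2. Validation: held-out labeled data.
validation_data = [(1.0, "stance"), (5.0, "swing")]
accuracy = sum(predict(model, x) == y for x, y in validation_data) / len(validation_data)

# 3. Application: predict the class of a new, unlabeled sample.
new_label = predict(model, 4.5)
```

A real classifier such as an SVM or LDA replaces the centroid rule, but the train/validate/apply workflow is the same.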

Example: Predicting a model that relates a patient's energy consumption with a transfemoral prosthesis as a function of walking velocity on level surfaces.

The linear regression model for the above statement is:

$$Y = b + aX + e \tag{1}$$

*Y* = Energy consumption (dependent variable)

*b* = Y intercept

*a* = Slope of the Line

*X* = Walking velocity (independent variable)

*e* = Error
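Fitting Equation (1) by ordinary least squares can be sketched in a few lines; the velocity and energy numbers below are invented purely to illustrate the procedure, not measured data.

```python
# Fit Y = b + aX by ordinary least squares (pure Python).
# X = walking velocity (m/s), Y = energy consumption; toy numbers only.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope a = cov(X, Y) / var(X); intercept b = mean(Y) - a * mean(X).
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

velocity = [0.8, 1.0, 1.2, 1.4]   # X, independent variable
energy = [3.1, 3.5, 3.9, 4.3]     # Y, dependent variable (toy data)
a, b = fit_line(velocity, energy)

predicted = b + a * 1.1           # predicted energy at 1.1 m/s
```

The error term *e* of Equation (1) is whatever the fitted line does not explain; with the exactly linear toy data above it is zero.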

The logistic regression model is used to model the probability of a certain class or event, such as pass/fail, win/loss, or healthy/sick. The predicted probability falls between 0 and 1, and the dependent variable is categorical.

Example: Predicting successful or failed prosthetic rehabilitation using a 50-meter walk test on a level surface: patients who complete the test, with comfortable use of any assistive device, are considered successful, and those who cannot complete the 50-meter walk test are considered failed.

The model is expressed in terms of the probability (*p*) of passing: those who complete the 50-meter test pass, and those who cannot cross 50 meters fail.

The model for this statement is:

$$\ln\left(\frac{p}{1-p}\right) = a + bX \tag{2}$$

*p* = proportion of patients who cross the 50-meter mark

1 − *p* = proportion of patients who do not

Dependent variable (predicted) = ln(*p*/(1 − *p*)), the log-odds

Independent variable *X* = type of prosthesis
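Equation (2) can be sketched by inverting the log-odds with the logistic (sigmoid) function; the coefficients *a* and *b* below are assumed, illustrative values, not fitted to any data.

```python
import math

# Equation (2): ln(p / (1 - p)) = a + b*X. Given coefficients a and b,
# the probability p of passing the 50-meter walk test is recovered with
# the logistic (sigmoid) function.

def pass_probability(x, a, b):
    log_odds = a + b * x            # the linear predictor of Eq. (2)
    return 1.0 / (1.0 + math.exp(-log_odds))

a, b = -1.0, 2.0                    # illustrative coefficients, not fitted
p = pass_probability(0.5, a, b)     # X = 0.5 (e.g., a coded prosthesis type)
```

Whatever the coefficients, the output always lies strictly between 0 and 1, which is why the logistic model suits categorical pass/fail outcomes.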

#### *3.1.2 Unsupervised learning*

Unsupervised learning algorithms find structure in unknown or unlabeled data without requiring any supervision from humans. They work on their own to gather information, which allows them to perform more complex tasks than supervised learning. Cluster analysis and k-means are methods used to form patterns from new data.

Example: An intent-detection algorithm that works on unlabeled data against a reference pattern is an unsupervised learning method used in microprocessor knees.
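The k-means method mentioned above can be sketched in one dimension: unlabeled sensor readings are grouped into k clusters with no human-provided labels. The readings and initial centers are invented for illustration.

```python
# Minimal 1-D k-means (pure Python): alternate between assigning each
# value to its nearest center and moving each center to the mean of its
# assigned values. The readings below are invented.

def kmeans_1d(values, centers, iterations=10):
    clusters = [[] for _ in centers]
    for _ in range(iterations):
        # Assignment step: attach each value to its nearest center.
        clusters = [[] for _ in centers]
        for v in values:
            idx = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        # Update step: move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

readings = [0.9, 1.1, 1.0, 5.1, 4.9, 5.0]    # unlabeled data
centers, clusters = kmeans_1d(readings, [0.0, 6.0])
```

After convergence each center sits at the mean of its cluster; an intent-detection system could treat such clusters as candidate movement modes.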

#### *3.1.3 Reinforcement learning (RL)*

Reinforcement learning is concerned with how a software agent should take actions in an environment to maximize a cumulative reward. The agent learns from the consequences of its actions, choosing between its past experience and new choices through trial-and-error learning; it is generally output-based learning. The components of RL are the agent and the environment. The agent (learner) learns a policy (π) by observing and interacting with the environment. All the possible steps the agent can take during learning are known as "actions", and the current condition returned by the environment is the "state".

#### *Application of Artificial Intelligence (AI) in Prosthetic and Orthotic Rehabilitation DOI: http://dx.doi.org/10.5772/intechopen.93903*

The approach the agent uses to determine the next action based on the current state is known as the "policy". The agent receives either a reward or a penalty for each action it performs; the reward is the immediate return from the environment that appraises the last action. The agent's goal is to maximize the reward over its set of actions. To obtain the optimal action values, the agent balances exploration and exploitation: exploration is about capturing more information from the environment, while exploitation uses already known information to obtain the reward.

Example: Learning from demonstration (LfD) with a myoelectric prosthesis. In this method the policy that determines the next action is learned in different ways: from demonstrations provided by the prosthetist, from the actions of a similar prosthetic user, or from the intact-limb movement of the prosthetic user. During a demonstration, the sequence of state-action pairs is recorded for training the prosthetic limb. When learning from the intact limb, the movements of the amputated side and the intact limb happen simultaneously: the intact limb is the training limb, and the prosthetic limb on the amputated side is the control limb. During the training procedure, the agent (the learner, i.e., the amputee) is asked to perform the same motion with both limbs, and the information from the training limb creates a prosthetic policy that maps states to actions for the control limb. After training, the robotic prosthesis can use its learned, state-conditional policy. The training arm demonstrates the desired movement, position, and grasp pattern to the robotic (control) arm. During the initial training the opening of the prosthetic hand may not match the training limb, but as training proceeds, the gradual opening of the hand acts as a reward that lets the agent pick up the appropriate movement and position for the required hand opening and the proportional control needed for graded prehension. The schematic diagram of the Bento Arm using reinforcement learning is shown in **Figure 2** [22]. Another example of the exploration-and-exploitation strategy is finding the exact placement of surface electrodes on the residual limb of an amputee. This is a trial-and-error method in which surface electrodes are placed at different locations around the residual limb to obtain the action potential needed to operate the prosthetic hand.

**Figure 2.** *Schematic diagram of flow of information with bento arm [22].*
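The core LfD idea, recording state-action pairs from the training limb and then mapping the control limb's state to an action, can be sketched with a toy nearest-state lookup. This is an illustration only, not the Bento Arm's actual controller: real systems use multidimensional EMG and joint-angle features, and the scalar states and action names below are invented.

```python
# Sketch of learning from demonstration (LfD): state-action pairs
# recorded from the training (intact) limb define a policy that the
# control (prosthetic) limb queries by nearest recorded state.

demonstrations = [
    (0.0, "hand_open"),      # (state, action) pairs recorded in training
    (0.5, "hand_open"),
    (1.0, "hand_close"),
    (1.5, "hand_close"),
]

def policy(state):
    """Return the action demonstrated at the nearest recorded state."""
    nearest = min(demonstrations, key=lambda pair: abs(pair[0] - state))
    return nearest[1]

action = policy(1.2)   # control limb queries the learned policy
```

More demonstrations densify the state space, so the policy generalizes more smoothly to states the training limb never visited exactly.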

The simultaneous display of the residual muscle's EMG signal and the operation of the connected prosthetic hand provides visual feedback to the amputee and the prosthetist. Based on this feedback, the prosthetist keeps exploring new electrode sites on the residual limb until optimal placement is achieved. The technique also helps the amputee learn how much muscle contraction is needed to operate the prosthesis, and the opening and the sequence of different grasping patterns act as rewards for performing more complex activities. In some cases, experienced users or prosthetists rely on exploitation rather than exploring new electrode sites, based on their past learning and experience. Other examples include adaptive switch-controlled myoelectric prostheses, powered leg prostheses, etc.
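The exploration/exploitation trade-off in the electrode-placement example can be sketched as an epsilon-greedy bandit: each "arm" is a hypothetical electrode site, and the reward stands in for signal quality at that site. The site names, quality values, and noise model are all invented for illustration.

```python
import random

# Epsilon-greedy sketch of exploration vs. exploitation. With probability
# epsilon the agent explores a random electrode site; otherwise it
# exploits the site with the best estimated value so far.

random.seed(0)
true_quality = {"site_a": 0.2, "site_b": 0.8, "site_c": 0.5}  # hidden from agent
estimates = {site: 0.0 for site in true_quality}
counts = {site: 0 for site in true_quality}
epsilon = 0.1

for _ in range(500):
    if random.random() < epsilon:
        site = random.choice(list(true_quality))           # explore
    else:
        site = max(estimates, key=estimates.get)           # exploit
    reward = true_quality[site] + random.gauss(0, 0.05)    # noisy feedback
    counts[site] += 1
    # Incremental mean update of the estimated value for this site.
    estimates[site] += (reward - estimates[site]) / counts[site]

best_site = max(estimates, key=estimates.get)
```

A purely exploiting agent could lock onto the first acceptable site, just as an experienced prosthetist may reuse a known placement; occasional exploration is what lets the agent discover a better one.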

#### **3.2 Deep learning**

Deep learning is a form of machine learning that uses both supervised and unsupervised methods; it is a subset of machine learning and of AI. It uses artificial neural networks (ANNs) with representation learning. The ANN is inspired by the human brain's neural network, although the brain's network is dynamic (plastic) and analog while an ANN is static and symbolic. An ANN can learn, memorize, and generalize, being a simplified model of a biological neural system, and ANNs are especially effective for problems involving pattern recognition and matching, clustering, and classification. A standard ANN consists of three kinds of layers: input, output, and hidden; the output of one layer can serve as the input of the next. A simple neural network is shown in **Figure 3** [23]. When many hidden layers are present, the ANN is known as a deep neural network (DNN), which can be trained to solve difficult problems. Deep learning models yield results more quickly than standard machine learning approaches. A function propagates through the ANN from the input layer to the output layer, and its mathematical representation is:

$$s = f\left(\varphi(w, x)\right) \tag{3}$$
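Equation (3), in which φ is a weighted combination of the inputs and *f* is the transfer (activation) function, can be sketched as the forward pass of a single neuron; the weights, bias, and inputs below are illustrative values, not taken from a real network.

```python
import math

# Forward propagation of Equation (3): s = f(phi(w, x)).

def phi(w, x):
    """Linear combination of weights and inputs, plus a bias term w[-1]."""
    return sum(wi * xi for wi, xi in zip(w, x)) + w[-1]

def f(z):
    """Sigmoid transfer function, squashing any input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

x = [0.5, -1.0]        # inputs
w = [2.0, 1.0, 0.5]    # two weights plus a bias
s = f(phi(w, x))       # output of one neuron
```

A network stacks such neurons into layers, so the outputs *s* of one layer become the inputs *x* of the next, which is exactly the layered propagation described above.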


(*s* = output, *x* = input, *w* = the weight of the link between input and transfer function, φ(*w*, *x*) = a linear combination of *w* and *x*, *f*(.) = the transfer function.)

Example: EEG-based pattern recognition, which uses a brain-computer interface (BCI) to control a prosthetic arm, neuroprostheses, etc.

**Figure 3.**
*Layers of ANN (artificial neural network) [23].*

#### **3.3 Other artificial intelligence (AI) techniques**

Artificial intelligence is machine intelligence that simulates human intelligence: the machine is programmed to think and act like a human. It includes reasoning, knowledge representation, planning, learning, natural language processing, perception, the ability to move and manipulate objects, and many more subjects. AI has four main components: expert systems, heuristic problem solving, natural language processing (NLP), and vision. In humans, the eyes, ears, and other organs act as sensors, while the hands, legs, mouth, and other body parts act on instructions as effectors; a robotic agent similarly substitutes cameras and infrared range finders for the sensors and various motors for the effectors. A software agent has encoded bit strings as its percepts and actions. The similarity between human and artificial intelligence is shown in **Table 1**. By function, AI can be divided into two categories: symbolic learning (SL) and machine learning (ML). SL performs functions like image processing through computer vision and understands the environment through robotics. ML computes over large amounts of data to solve problems in terms of pattern recognition; statistical machine learning underlies speech recognition and natural language processing. Deep learning recognizes objects by computer vision through convolutional neural networks (CNN) and memorizes the past through recurrent neural networks (RNN). The schematic diagram of AI and its functions is shown in **Figure 4**.

| Human can perform | AI can perform |
|---|---|
| Understand the environment | Robotics |
| Eye can see | Computer vision or symbolic vision |
| Recognize the scene and create an image | Image processing by symbolic learning |
| Speak and listen | Speech recognition based on statistical learning systems |
| Write and learn | Natural language processing (NLP) |
| Human brain formed by networks of neurons | Artificial neural networks |
| Human memorizes the past | Recurrent neural networks (RNN) can use previous output as input, so they remember data |
| Recognize objects | Convolutional neural networks (CNN) recognize an object and differentiate it from others |
| Ability to recognize patterns | Pattern recognition by machine learning |

**Table 1.**
*Similarity between human intelligence and artificial intelligence (AI).*


The methods or techniques used in AI are classifiers and prediction. A classifier is an algorithm that implements classification; classifiers include the perceptron, naïve Bayes, decision trees, logistic regression, k-nearest neighbors, ANN/DL, and support vector machines [24]. The perceptron is the basic building block of the neural network; it breaks a complex network down into smaller, simpler pieces. Classifiers used in myoelectric prosthetic hands include the LDA classifier, the quadratic discriminant classifier, and multilayer perceptron neural networks with linear activation functions. LDA (linear discriminant analysis) is a simple classifier that helps reduce the dimensionality of the input for the neural network model.

**Figure 4.** *AI and its functions.*

Prediction is a method of producing a pattern as noise-free output data from input data, using a model in the hidden layer.

Examples: EMG-CNN-based prosthetic hands, EEG-based mind-controlled prostheses with sensory feedback, robotic arms, and exoskeleton orthoses.
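The perceptron named above as the basic building block can be sketched with its classic learning rule: weights are nudged on every misclassified sample until a linear decision boundary separates the classes. The two-input AND data below is a standard toy problem, not prosthetics data.

```python
# Minimal perceptron: a step-activation neuron trained with the
# perceptron learning rule (w <- w + lr * error * input).

def train_perceptron(data, lr=0.1, epochs=20):
    w = [0.0, 0.0]   # weights
    b = 0.0          # bias
    for _ in range(epochs):
        for x, target in data:
            # Step activation: fire (1) if the weighted sum exceeds 0.
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Logical AND: a linearly separable toy problem.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
```

A single perceptron can only learn linearly separable problems; stacking many of them into hidden layers, as in the multilayer perceptron classifiers mentioned above, removes that limitation.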
