*Restoring Independent Living after Disability Using a Wearable Device: A Synergistic…*
*DOI: http://dx.doi.org/10.5772/intechopen.86011*

*Assistive and Rehabilitation Engineering*

and/or strategy necessary to respond to this sensory system [22]. This is very similar to infants, who learn in a non-instructional manner rich in sensory experience, using a feedforward-feedback sampling process [23]. As in infants, the presence of such plasticity may provide an opportunity for functional recovery after stroke, if the most appropriate strategies are learnt and the maladaptive ones unlearnt [24].

**2. The synergistic physio-neuro (SynPhNe) learning model**

**2.1 Learning in babies**

There is now a growing understanding of how the body affects learning. The embodiment hypothesis proposes that the sensorimotor activity of a person interacting with the environment is central to the development of intelligence [23]. In this field of study, the six principles of learning that babies instinctively follow can be summarized as follows [23]:

1. Being multi-modal.
2. Being incremental.
3. Being physical.
4. Exploring.
5. Being social.
6. Learning a language (symbolic representation).

#### *2.1.1 Being multi-modal*

A multi-modal experience of the world is achieved in humans through the sensory system, which is made up of a vast array of sensors providing vision, audition, touch, smell, balance, and proprioception. Any single function can be accomplished by more than one signal configuration from the neurons, and different neuron clusters need not be limited to a single function. This type of redundancy ensures continuity of function, where parts of the network can learn from each other without an external teacher.

The second characteristic is the time-locked correlation between several simultaneous inputs, which is a powerful tool for representation, both singly and in combination with various events and objects in the environment [25]. In real time these activities are mapped to each other to discover "higher order regularities," for example, using a combination of touch and vision to understand texture or transparency.

#### *2.1.2 Being incremental*

In non-incremental learning, the entire training set is usually fixed and then presented in its entirety or randomly sampled. However, it seems that systematic changes in the input patterns and their overlapping occurrence in time play a large part in determining the development process. As a child grows, vision starts to couple with hearing and helps organize attention. In hearing-impaired babies, we see disorganized attention and consequently slower learning (this is common in stroke cases, where patients experience sensory overload and cognitive deficiencies). Co-ordination is a form of mapping in multi-modal learning, and the way the modalities map changes over developmental time, using either changing patterns or additional sensory inputs which infants become able to provide themselves through physical exploration. Shifts in inputs thus result from the infant's own behavior. Using the body and moving from one place to another presents new spatial-temporal patterns and alters the infant's perception of "objects, space and self." Experimental studies show that one of the factors that strongly influence biological intelligence is "ordering the training experiences in the right way" [26].

#### *2.1.3 Being physical*

Experiments by Ballard et al. [27] and Baldwin [28] show that children off-load short-term memory to the world by linking objects and events to locations, using attention to selectively point to the world. This is an easy way to build coherence in the cognitive system and to keep the contents of different information clusters separate from each other.

#### *2.1.4 Exploring*

Initially the baby does not know what there is to learn. Babies can discover both the tasks to be learned and the solutions to those tasks through exploration or non-goal-directed action. One form of exploration is spontaneous movement. As they contact objects in the environment, they progress from non-reaching to reaching. Thus, they seem to move from arousal, to exploration, to a selection of solutions from whatever space they can explore, which initially is limited. This type of learning is possible because of the multi-modal sensory system that builds maps from time-locked correlations, starting with smaller spatial maps and expanding to larger ones.

#### *2.1.5 Being social*

In early interaction with their mothers, infants learn from a pattern of activity that tightly couples vision, audition, and touch to behavior. Mother and infant imitate each other to reinforce this coupling. A mature social partner can also build a cognitive framework by weaving their own behavior around the child's natural activity patterns. This is done by automatically selecting those patterns which they consider meaningful and helpful for the baby. They also serve to direct attention to an object or event to strengthen the coupling, in both its spatial and temporal aspects. The baby frequently looks for physical and directional support to manage the risks around exploration, to rest when tired, and to crystallize goals through such imitation and coupling.

#### *2.1.6 Learning a language*

Language can be a regularity that is a "shared communicative system." It is also a symbol system in which the relation between a symbol and events in the world is mainly arbitrary; for example, there is no relation between the word "dog" and what it represents, so knowing the word does not tell us about the animal. DeLoache [29] demonstrated the way children use scale models and pictures as symbols which are not too life-like. Children first learn subtle regularities from the words they absorb, and slowly this creates in them the ability to learn a word in one trial and perform higher-order generalization. Efficient learning through a form of language thus itself becomes learned behavior.

While newborn babies show non-goal-directed exploratory behavior, they soon graduate to more goal-directed behavior. These goals are a result of a decision-making process which takes inputs from their emotions, knowledge, intelligence, and social partners (in this case, perhaps parents or elder siblings). The mature partner moderates the child's emotions and value system and, therefore, his or her early decisions during the learning process. This may be done through instruction, dialogue, feedback, and body language.

When this is considered in the context of a stroke patient, the goals he or she sets for recovery would be influenced by the same factors, and more so with increasing disability and physical and emotional dependence. If we break down the learning process into its two broad components, exploratory and goal-directed, then one can line up the two components as illustrated in **Figure 1**. The patient formulates a goal (such as recovery of a specific function like eating) and can begin exploratory learning in that specific context. However, cognitive as well as physical and social constraints may exist due to post-stroke disability. If a technology could augment these aspects so that constraints are reduced through an appropriately designed user interface, it may facilitate such a patient re-booting how he or she learnt as a baby.

The goal dictates the quality, direction, and extent of the exploration. In stroke patients, the immediate and longer-term goals that the patient sets for himself or herself could significantly affect the extent and speed of recovery [30]. Behavior generation is built around a distributed network of responses such as approach, play, avoidance of obstacles, and attention requisition, all of which may be adversely affected after stroke. Behaviors may excite or inhibit each other, with non-conflicting behaviors firing motor commands so that brain and muscle complement each other in real time.

#### **2.2 Integrating learning into functional recovery**

In a learning environment which requires multiple repetitions, not all of which are identical, as in re-learning a skill, **Figure 1** forms the basic element of the learning iterations. Several iterations will be required as part of the exploratory strategy over time, which may be represented by a cyclic model as shown in **Figure 2**. In this figure, the feedback and feedforward loops drive subsequent iterations, which may be similar or dissimilar. Goals and decisions, as a feedforward, drive multi-modal exploration. Incremental changes or achievements seen at brain and body levels through measurable and quantifiable feedback drive modifications in belief systems, thus impacting goals and decisions for further learning.
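As a concrete illustration, the cyclic model just described can be sketched as a short iterative loop. The sketch below is a toy, not part of the SynPhNe implementation: the function names, the 0-1 belief scale, and the exponential belief-update rule are all assumptions made purely for illustration of how feedforward (goals and beliefs drive exploration) and feedback (measured change updates beliefs) alternate across iterations.

```python
def run_learning_cycle(goal, iterations, explore, measure):
    """Iterate the feedforward-feedback cycle: goals drive exploration,
    measured feedback updates beliefs, and beliefs shape the next attempt.
    All names and the update rule are hypothetical, for illustration only."""
    belief = 0.5  # initial confidence in the current strategy (assumed 0..1 scale)
    history = []
    for _ in range(iterations):
        attempt = explore(goal, belief)          # feedforward: goal-directed exploration
        feedback = measure(attempt)              # feedback: quantified brain/body change
        belief = 0.9 * belief + 0.1 * feedback   # incremental belief update (assumed rule)
        history.append((attempt, feedback, belief))
    return history

# Toy usage: exploration quality scales with belief; feedback saturates at 1.0.
log = run_learning_cycle(goal=1.0, iterations=3,
                         explore=lambda g, b: g * b,
                         measure=lambda a: min(a, 1.0))
```

Each tuple in the returned history corresponds to one iteration of the cycle, so similar and dissimilar repetitions can both be represented by how `explore` varies its output across calls.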

**Figure 1.** *A composite learning behavior using the mind and physical body in a multi-modal fashion for goal-oriented exploration.*

**Figure 2.** *The proposed natural learning model using iterative, incremental changes.*

However, such faculties of learning available to a normal person may or may not be available to a stroke patient. A typical model, adapted from Ito et al. [31], of how stroke affects the human system and results in motor function impairment is shown in **Figure 3**, with an augmentation of the impaired feedforward and feedback superimposed. In this figure, the pathways for motor commands from the motor cortex and for proprioceptive feedback from the musculoskeletal system are disrupted; hence, an alternate pathway is proposed, shown by the "motor intention" and "motor actuation" blocks. This is a popular model implemented by the rehabilitation robotics community and by those adopting the stimulation approach. Motor intention is usually sensed by a brain-computer interface or artificially induced by stimulation methods such as transcranial magnetic stimulation. Motor actuation is achieved by either electrical stimulation or mechanically driven robotic movement. Intention and actuation are typically bridged by some adaptive algorithm which may be based on feature extraction, a control strategy, and a feedback loop.

**Figure 3.** *The self-regulated model of recovery of motor impairment after stroke, adapted from Ito et al. [31].*


Current technology, however, is not able to address the complex issue of hand function, which involves overlapping neuro-physio strategies and multiple degrees of freedom. At most, simple movements may be possible [32], which have been shown not to adequately impact function for the highly heterogeneous stroke-affected population. Gross movements can be expected to improve with a very high number of repetitions, thus enabling the brain to rewire itself in a limited way. However, there is poor evidence that such gross movement practice translates significantly into function. Therefore, a modification to the above model is proposed, incorporating the feedforward and feedback elements modeled in **Figure 2** as a form of augmentation to help overcome the deficits through the learning route.
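A minimal sketch of the intention-to-actuation bridge described above is given below. The mean-amplitude feature and the fixed threshold are illustrative stand-ins for the feature extraction and control strategy of a real system; the function names and the 0.5 threshold are assumptions of this sketch, not parameters of the cited systems.

```python
# Sketch of the motor intention -> adaptive algorithm -> motor actuation bridge.
# The feature (mean absolute amplitude) and fixed threshold are illustrative
# stand-ins for real feature extraction and a real control strategy.

def extract_feature(intention_window):
    """Toy feature: mean absolute amplitude of the sensed intention signal."""
    return sum(abs(x) for x in intention_window) / len(intention_window)

def bridge_intention_to_actuation(intention_window, threshold=0.5):
    """Map a sensed intention epoch to an actuation command (e.g. for
    electrical stimulation or a robotic driver) via a threshold rule."""
    feature = extract_feature(intention_window)
    if feature >= threshold:
        return {"command": "actuate", "gain": min(feature, 1.0)}
    return {"command": "idle", "gain": 0.0}
```

In a real pipeline the threshold rule would be replaced by an adaptive classifier or controller closed around the feedback loop, but the shape of the bridge (epoch in, command out) stays the same.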

The augmented feedback may be delivered visually via a muscle-brain-computer interface. The feedforward in the form of appropriate audio-visual inputs, which lead the human to attempt a series of desired actions through imitation, is known to facilitate recovery [33]. Moreover, there is evidence of perception transferring to action and more importantly, from action to perception [34]. The augmented feedback is expected to drive motor intention and exploration while the feedforward is expected to prime the brain for motor actuation and goal directed learning through imitation. From a functional improvement perspective, the augmented feedback may be customized for a person using time-locked parameters as follows:


The brain and the body are inseparably linked, and both contribute significantly to the occurrence of neuroplasticity and the improvement of health parameters [35]. Based on this understanding of how human learning may be applied practically in the context of post-stroke rehabilitation, this study was conceived with the following assumptions:


This paper describes a bio-mechatronics approach to understanding where re-learning is misled or failing and uses a "feedforward-feedback" modality to help chronic stroke subjects train gross movements (as measured by the Fugl-Meyer Upper Extremity Motor Assessment scale) and functional, timed-task capabilities (as measured by the Action Research Arm Test). The SynPhNe system employs learning and training principles similar to those which babies seem to use in the design of its user interface, to leverage the mechanism of "self-regulation" or "self-correction." The study explores to what extent such real-time "self-correction" alone, in the absence of any form of external stimulation or robotic assistance, impacts the recovery of functional ability in the stroke impaired, as a prelude to building a safe, effective, easy-to-use technology which would be useful for patients to augment therapy hours at home.

**3. Methods**

**3.1 Technology description**

*3.1.1 Design principles for SynPhNe system*

At the time of development, the motor theory, learning principles, and stroke rehabilitation challenges listed in Section 1 suggested that the SynPhNe rehabilitation platform should facilitate such learning while keeping in mind the constraints faced by stroke patients:

• EEG and EMG biofeedback with video-based feed-forward provided the multi-modal environment.

• Incremental learning: use of biofeedback to highlight small changes in the muscle and brain signals with their transitions, and associating these with the gross movements and tasks performed with various degrees of success.

• Exploratory learning: using the hand for real-world tasks perceived as important but difficult (for example, use of chopsticks), as well as understanding how to achieve various relaxation and attention states while in dynamic movement using the feedforward-feedback modality.

• Simulation of a "mature social partner" or instructor, perhaps in the form of an instructor-led video which a patient could watch and follow, and a smiley icon which indicates successful management of the desired brain state while executing physical tasks.

• Teaching a new, universal language, i.e., making the subject aware of how to interpret and self-regulate muscle and brain activity at a signal level.

• Following the cyclic learning process shown in **Figure 2** as a sensory-led, intuitive, self-sustaining, and reinforcing cycle.

*3.1.2 System description*

The wearable data capture unit (WDCU) acquires data from eight channels of EMG through an arm gear and eight channels of EEG through a head gear and transmits the data simultaneously to the PC over a USB cable (**Figure 4**). The design of this arm gear, along with the design and testing of the amplification circuit, has been previously reported in a separate paper by the authors [37]. The software running on the PC processes these signals from 16 channels and combines them in a time-locked manner for presentation on the screen as real-time feedback showing muscle over-activation and under-activation as cartoon characters (the EMG signal as agonist-antagonist koala bears climbing up or down a tree, the EEG signal as a smiley face). While the EMG signals are used as feedback by squaring and averaging the amplitude within a running window updated every 10 milliseconds, the EEG signals were converted to frequency bands using a Fast Fourier Transform and the alpha
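The two feedback computations just described can be sketched as follows. This is a minimal illustration, not the system's documented processing chain: the sampling rates, the non-overlapping-window simplification, and the 8-12 Hz alpha-band edges are assumed values chosen only to make the sketch concrete.

```python
# Sketch of the two feedback computations: a mean-square EMG envelope over a
# running window (one value per 10 ms), and EEG alpha-band power via an FFT.
# Sampling rates, window handling, and band edges are illustrative assumptions.
import numpy as np

def emg_envelope(samples, fs=1000, window_ms=10):
    """Mean of the squared amplitude in consecutive windows of `window_ms`."""
    n = int(fs * window_ms / 1000)              # samples per window
    usable = len(samples) - len(samples) % n    # drop any incomplete tail window
    windows = np.asarray(samples[:usable]).reshape(-1, n)
    return (windows ** 2).mean(axis=1)

def alpha_band_power(eeg, fs=250, band=(8.0, 12.0)):
    """Total spectral power inside the (assumed 8-12 Hz) alpha band of one epoch."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].sum()
```

A real implementation would update the EMG window in a sliding fashion every 10 ms and track band power epoch by epoch, but the per-window and per-band arithmetic is the same as above.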
