**Meet the editor**

Dr. Seyyed Abed Hosseini received his BSc and MSc degrees in Electrical Engineering and Biomedical Engineering in 2006 and 2009, respectively, and his PhD degree in Electrical Engineering from the Ferdowsi University of Mashhad, Iran, in 2016. He has 10 years of teaching experience and 1 year of industry experience. He has published over 55 peer-reviewed articles and book chapters in the field of emotion and attention studies. He is a lecturer at the Research Center of Biomedical Engineering (RCBME), Islamic Azad University, Mashhad Branch, Iran, and a senior researcher at the Center of Excellence on Soft Computing and Intelligent Information Processing (SCIIP), Iran. His research interests include cognitive neuroscience, electroencephalography (EEG) studies, magnetoencephalography (MEG) studies, functional magnetic resonance imaging (fMRI), event-related potential (ERP) signals, emotion recognition, seizure detection, seizure prediction, brain-computer interfaces (BCI), and neurofeedback. The book *Cognitive and Computational Neuroscience - Principles, Algorithms and Applications* introduces different soft computing approaches and technologies for the identification of different brain states, from their historical development, focusing particularly on recent developments in the field and its specialization within neuropsychology, cognitive science, neuroscience, and engineering.

Contents

**Preface XI**

Chapter 1 **Introductory Chapter: Cognitive and Computational Neuroscience - Principles, Algorithms, and Applications 1**
Seyyed Abed Hosseini

Chapter 2 **Convergence of Action, Reaction, and Perception via Neural Oscillations in Dynamic Interaction with External Surroundings 7**
Daya Shankar Gupta and Silmar Teixeira

Chapter 3 **Cognitive and Computational Neuroscience: Principles, Algorithms, and Applications in Surveillance Context 27**
Lozada Torres Edwin Fabricio, Martínez Campaña Carlos Eduardo and Gómez Alvarado Héctor Fernando

Chapter 4 **Spiking Central Pattern Generators through Reverse Engineering of Locomotion Patterns 41**
Andrés Espinal, Marco Sotelo-Figueroa, Héctor J. Estrada-García, Manuel Ornelas-Rodríguez and Horacio Rostro-Gonzalez

Chapter 5 **Characterizing Motor System to Improve Training Protocols Used in Brain-Machine Interfaces Based on Motor Imagery 57**
Luz Maria Alonso-Valerdi and Andrés Antonio González-Garrido

Chapter 6 **Computational Models of Consciousness-Emotion Interactions in Social Robotics: Conceptual Framework 79**
Remigiusz Szczepanowski, Małgorzata Gakis, Krzysztof Arent and Janusz Sobecki



## Preface

*Neuroscience* is a discipline that employs the tools and language of anatomy, linguistics, physiology, biochemistry, pharmacology, neurology, molecular biology, philosophy, biomedical engineering, psychology, and psychiatry. *Cognitive neuroscience* relates cognitive and behavioral functions to the underlying brain mechanisms. *Computational neuroscience* uses data to construct rigorous computational or mathematical models of brain function. Put together, these disciplines are the key to explaining the relationship between the brain and behavior. Cognitive and computational neuroscience can be considered at three main scales: (1) the microscopic scale - the activity and function of a single neuron; (2) the mesoscopic scale - the activity of a local group of neurons; and (3) the macroscopic scale - tissues, organs, organ systems, and the organism.

The cognitive and computational neuroscience research profile of this project is characterized by four main research lines: (1) language and communication; (2) perception, cognition, reasoning, action, and control mechanisms; (3) memory and plasticity; and (4) brain networks and neuronal communication. The project was undertaken at the request of IntechOpen. During this period of intensive effort, all the chapters were reviewed and revised to meet the high-quality standards of IntechOpen and my vision for the whole concept of the chapters.

This book provides readers with the most recent evidence from different disciplines of brain studies, contributed by a wide range of researchers, in an integrative way toward *Cognitive and Computational Neuroscience - Principles, Algorithms, and Applications*. The hope is that the information provided in this book will trigger new research that helps to connect basic neuroscience to clinical medicine.

I would like to thank Ms. Marijana Francetic for her valuable comments and suggestions to improve the quality of this book. I would also like to thank all the authors, without whose cooperation I would not have been able to complete this work.

I would like to thank Dr. Mohammad-Ali Khalilzadeh for his excellent guidance and support during this process. I also benefited from debating issues with my friends and family.

I hope you enjoy your reading.

**Dr. Seyyed Abed Hosseini**
Ferdowsi University of Mashhad
Mashhad, Iran

**Chapter 1**

**Introductory Chapter: Cognitive and Computational Neuroscience - Principles, Algorithms, and Applications**

Seyyed Abed Hosseini

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/intechopen.72824

© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **1. Cognitive and computational neuroscience: principles, algorithms, and applications**

The term "computational neuroscience" was introduced by Schwartz [1] through the organization of a conference in California in 1985. Cognitive and computational neuroscience evaluates different brain functions (e.g., attention, emotion, perception, learning, consciousness, anesthesia, cognition, and memory) in terms of the brain's information processing [2]. It is an interdisciplinary field that links the diverse backgrounds of neuroscience, cognitive science, psychology, mathematics, biomedical engineering, computer science, robotics, and physics. The main idea of this book is therefore to present a general framework for researchers from these diverse fields.

#### **2. Related works**

Cognitive and computational neuroscience has many medical and engineering applications, such as rehabilitation [3], psychology and psychiatric disorders (e.g., depression, chronic addiction, post-traumatic stress disorder, dementia, attention deficit hyperactivity disorder, and autism) [4], brain-computer interfaces [3, 5], human-computer interaction [6], neurofeedback [7, 8], marketing [9], robotics [10], and decision-making [11]. Research in cognitive and computational neuroscience is categorized into four main topics: experimental neuroscience (e.g., electrophysiology, neurons, synapses, synaptic plasticity, memory, conditioning, learning, consciousness, vision, and neuroimaging), theoretical neuroscience (e.g., models of neurons, single-neuron modeling, spiking networks, network dynamics, behaviors of brain networks, mathematical models of brain activity, sensory processing, and connectivity analysis), dynamical systems (e.g., synchronization, oscillators, pattern formation, and chaos), and computational intelligence (e.g., neural networks, graph theory, reinforcement learning, pattern recognition, evolutionary computation, information theory, statistics, and signal processing).
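The feature-extraction side of this toolbox can be made concrete with a small example. The sketch below computes Lempel-Ziv (LZ76) complexity, an information-theoretic measure of the kind applied to EEG signals in work cited in this chapter; the median-threshold binarization step and the worked string are illustrative assumptions, not details taken from any particular cited study.

```python
def lempel_ziv_complexity(sequence: str) -> int:
    """Count the phrases in the LZ76 parsing of a symbol string.

    Each new phrase is the shortest prefix of the remaining string
    that has not yet appeared in everything parsed so far; more
    phrases means a less regular signal.
    """
    i, n, phrases = 0, len(sequence), 0
    while i < n:
        length = 1
        # Extend the candidate phrase while it still occurs in the
        # history (overlap with the phrase itself is allowed).
        while (i + length <= n
               and sequence[i:i + length] in sequence[:i + length - 1]):
            length += 1
        phrases += 1
        i += length
    return phrases


def binarize(samples) -> str:
    """Threshold a signal at its (upper) median -- a common first
    step before computing LZ complexity on EEG samples."""
    median = sorted(samples)[len(samples) // 2]
    return "".join("1" if x > median else "0" for x in samples)


# The classic worked example from Lempel and Ziv (1976):
# "0001101001000101" parses into 0|001|10|100|1000|101.
print(lempel_ziv_complexity("0001101001000101"))  # -> 6
```

A constant signal yields very few phrases while an irregular one yields many, which is what makes the count usable as a classification feature; in practice the raw count is commonly normalized (e.g., by n/log2(n)) before being fed to a classifier.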

Suitable brain signals and images are acquired with either invasive or non-invasive techniques. Non-invasive techniques, such as electroencephalography (EEG) [12, 13], event-related potentials (ERPs) [14, 15], magnetoencephalography (MEG) [3, 16], functional magnetic resonance imaging (fMRI) [17], positron emission tomography (PET) [18], transcranial direct current stimulation (tDCS) [19], and transcranial magnetic stimulation (TMS) [20], are generally preferred.

This section presents a detailed discussion of previous related work on different methods for epilepsy and seizure detection along with different machine-learning approaches. In one study, Hosseini et al. [21] proposed a qualitative and quantitative analysis of EEG signals for epileptic seizure recognition. Hosseini et al. [22] proposed an approach for seizure and epilepsy recognition using chaos analysis of EEG signals. Hosseini [23] proposed a hybrid method based on higher order spectra (HOS) for recognition of seizure and epilepsy using EEG and electrocorticography (ECoG) signals.

Several studies have presented functional models, conceptual models, bio-inspired frameworks, signal processing approaches, image processing approaches, and electrophysiology studies of cognitive processes, including emotion, stress, and attention. In one study, Hosseini et al. [24, 25] proposed a labeling approach for EEG signals in the emotional stress state using self-assessment and psychophysiological signals. Hosseini [26] and Hosseini et al. [27–29] presented an HOS approach for emotional stress detection using EEG signals. Hosseini et al. [30, 31] designed an emotion recognition system using entropy analysis of EEG signals. Hosseini et al. [32] proposed an improved model of the behavioral calcium channels in hippocampal CA1 cells during stress.

In another study, Hosseini et al. [33] proposed an emotional stress recognition system using psychophysiological and EEG signals. Hosseini et al. [34] proposed different features, including the fractal dimension, wavelet coefficients, and Lempel-Ziv complexity of EEG signals, for emotional stress recognition. Hosseini et al. [35] presented a cognitive and computational framework of brain activity during emotional stress. Hosseini et al. [36] presented a cognitive and computational framework of brain activity in the emotional stress state. Hosseini [37] proposed attention and emotion recognition systems based on biological images and signals. Hosseini and Naghibi [38] proposed a computationally improved model of brain activity in the visual attentional state. Hosseini [39] proposed a computationally bio-inspired model of brain activity in the selective attentional state and its application for estimating the depth of anesthesia.

This chapter attempts to introduce the different approaches, principles, applications, and theories in cognitive and computational neuroscience from their historical development, focusing particularly on the recent development of the field and its specialization within psychology, computational neuroscience, and engineering.

#### **Author details**

Seyyed Abed Hosseini

Address all correspondence to: hosseyni@mshdiau.ac.ir

Research Center of Biomedical Engineering, Mashhad Branch, Islamic Azad University, Mashhad, Iran

#### **References**

[1] Schwartz EL. Computational Neuroscience. Cambridge: MIT Press; 1993

[2] Churchland PS, Koch C, Sejnowski TJ. What is computational neuroscience? In: Schwartz EL, editor. Computational Neuroscience. Cambridge: MIT Press; 1993. pp. 46-55

[3] Hosseini SA, Naghibi-Sistani MB, Akbarzadeh-T MR. A two-dimensional brain-computer interface based on visual selective attention by magnetoencephalograph (MEG) signals. Tabriz Journal of Electrical Engineering. 2015;**45**(2):65-74

[4] Swanson J, Castellanos FX, Murias M, LaHoste G, Kennedy J. Cognitive neuroscience of attention deficit hyperactivity disorder and hyperkinetic disorder. Current Opinion in Neurobiology. 1998;**8**(2):263-271

[5] Hosseini SA, Akbarzadeh-T MR, Naghibi-Sistani MB. Hybrid approach in recognition of visual covert selective spatial attention based on MEG signals. In: IEEE International Conference on Fuzzy Systems (FUZZ). Istanbul: IEEE; 2015. DOI: 10.1109/FUZZ-IEEE.2015.7337958

[6] Dix A. Human-Computer Interaction. Berlin: Springer; 2009

[7] Subramanian L et al. Real-time functional magnetic resonance imaging neurofeedback for treatment of Parkinson's disease. Journal of Neuroscience. 2011;**31**(45):16309-16317

[8] Ros T, Baars BJ, Lanius RA, Vuilleumier P. Tuning pathological brain oscillations with neurofeedback: A systems neuroscience framework. Frontiers in Human Neuroscience. 2014;**8**:1008

[9] Khushaba RN, Wise C, Kodagoda S, Louviere J, Kahn BE, Townsend C. Consumer neuroscience: Assessing the brain response to marketing stimuli using electroencephalogram (EEG) and eye tracking. Expert System Application. 2013;**40**(9):3803-3812

[10] McFarland DJ, Wolpaw JR. Brain-computer interface operation of robotic and prosthetic devices. Computer. 2008;**41**(10):52-56

[11] Sanfey AG. Social decision-making: Insights from game theory and neuroscience. Science. 2007;**318**(5850):598-602


[12] Hosseini SA. Quantification of EEG signals for evaluation of emotional stress level [MSc thesis]. Biomedical Department, Faculty of Engineering, Islamic Azad University Mashhad Branch; 2009

[13] Schomer DL, Da Silva FL. Niedermeyer's Electroencephalography: Basic Principles, Clinical Applications, and Related Fields. Philadelphia: Lippincott Williams & Wilkins; 2012

[14] Hosseini SA, Akbarzadeh-T MR, Naghibi-Sistani MB. Evaluation of visual selective attention by event related potential analysis in brain activity. Tabriz Journal of Electrical Engineering. 2015;**46**(1):13-24

[15] Rugg MD, Coles MG. Electrophysiology of Mind: Event-Related Brain Potentials and Cognition. Oxford: Oxford University Press; 1995

[16] Hansen P, Kringelbach M, Salmelin R. MEG: An Introduction to Methods. Oxford: Oxford University Press; 2010

[17] Huettel SA, Song AW, McCarthy G. Functional Magnetic Resonance Imaging. Vol. 1. Sunderland: Sinauer Associates; 2004

[18] Bailey DL, Townsend DW, Valk PE, Maisey MN. Positron Emission Tomography. Berlin: Springer; 2005

[19] Fregni F, Boggio PS, Nitsche M, Pascual-Leone A. Transcranial direct current stimulation. The British Journal of Psychiatry. 2005;**186**(5):446-447

[20] George MS, Nahas Z, Lisanby SH, Schlaepfer T, Kozel FA, Greenberg BD. Transcranial magnetic stimulation. Neurosurgery Clinics of North America. 2003;**14**(2):283-301

[21] Hosseini SA, Akbarzadeh-T MR, Naghibi-Sistani MB. Qualitative and quantitative evaluation of EEG signals in epileptic seizure recognition. International Journal of Intelligent System Application. 2013;**6**:41-46

[22] Hosseini SA, Akbarzadeh-T M-R, Naghibi-Sistani M-B. Methodology for epilepsy and epileptic seizure recognition using chaos analysis of brain signals. In: Kolomvatsos K, Anagnostopoulos C, Hadjiefthymiades S, editors. Intelligent Technologies and Techniques for Pervasive Computing. Chapter 2. IGI Global; May 2013. pp. 20-36. DOI: 10.4018/978-1-4666-4038-2.ch002

[23] Hosseini SA. A hybrid approach based on higher order spectra for clinical recognition of seizure and epilepsy using brain activity. Journal of Basic and Clinical Neuroscience. 2017;**8**(6)

[24] Hosseini SA, Naghibi-Sistani M-B. Classification of emotional stress using brain activity. In: Gargiulo GD, McEwan A, editors. Applied Biomedical Engineering. Chapter 14. InTech; August 2011. pp. 313-336. DOI: 10.5772/18294

[25] Hosseini SA, Khalilzadeh MA. Emotional stress recognition system using EEG and psychophysiological signals: Using new labelling process of EEG signals in emotional stress state. In: 2010 International Conference on Biomedical Engineering and Computer Science (ICBECS). Wuhan, China: IEEE; 2010. pp. 1-6. DOI: 10.1109/ICBECS.2010.5462520

[26] Hosseini SA. Classification of brain activity in emotional states using HOS analysis. International Journal of Image Graphics Signal Process. 2012;**4**(1):21

[27] Hosseini SA, Khalilzadeh MA, Naghibi-Sistani MB, Niazmand V. Higher order spectra analysis of EEG signals in emotional stress states. In: 2010 Second International Conference on Information Technology and Computer Science (ITCS). Kiev, Ukraine: IEEE; 2010. pp. 60-63. DOI: 10.1109/ITCS.2010.21

[28] Hosseini SA, Khalilzadeh MA, Homam M. Emotional stress detection using nonlinear and higher order spectra features in EEG signal. Journal of Electrical Engineering. 2009;**39**(2):13-24

[29] Hosseini SA, Khalilzadeh MA. Qualitative and quantitative evaluation of EEG signals in emotional states through higher order spectra. In: 3rd Iranian Congress on Fuzzy and Intelligent Systems. Yazd: Intelligent Systems Scientific Society of Iran; 2009

[30] Khalilzadeh MA, Homam SM, Hosseini SA, Niazmand V. Qualitative and quantitative evaluation of brain activity in emotional stress. Iranian Journal of Neurology. 2010;**8**(28):605-618

[31] Hosseini SA, Naghibi-Sistani MB. Emotion recognition method using entropy analysis of EEG signals. International Journal of Image Graphics Signal Process. 2011;**3**(5):30

[32] Hosseini SA, Khalilzadeh MA, Homam SM. Modeling of the behavioral calcium channels in the hippocampus cells during stress. Iranian Journal of Biomedical Engineering. 2010;**4**:23-32

[33] Hosseini SA, Khalilzadeh MA, Changiz S. Emotional stress recognition system for affective computing based on bio-signals. Journal of Biological Systems. 2010;**18**(Spec. 01):101-114

[34] Hosseini SA, Khalilzadeh MA, Naghibi-Sistani MB, Homam SM. Emotional stress recognition using a new fusion link between electroencephalogram and peripheral signals. Iranian Journal of Neurology. 2015;**14**(3):142

[35] Hosseini SA, Khalilzadeh MA, Homam M. A cognitive and computational model of brain activity during emotional stress. Advances Cognitive Science. 2010;**12**(2):1-14

[36] Hosseini SA, Khalilzadeh MA, Homam SM, Azarnoosh M. Presenting a cognitive map and computational model of the brain activity in emotional stress state. Journal of Advances Cognitive Science. 2010;**12**(1):1-16

[37] Hosseini SA. Introductory chapter: Emotion and attention recognition based on biological signals and images. In: Hosseini SA, editor. Emotion and Attention Recognition Based on Biological Signals and Images. InTech; 2017. pp. 1-3. DOI: 10.5772/66483

[38] Hosseini SA, Naghibi-Sistani M-B. A computationally improved model of brain activity in visual attentional state. Advances Cognitive Science. 2017;**19**(1):1-13

[39] Hosseini SA. A computationally inspired model of brain activity in selective attentional state and its application for estimating the depth of anesthesia [PhD thesis]. Electrical Department, Faculty of Engineering, Ferdowsi University of Mashhad; 2016

**Chapter 2**

**Convergence of Action, Reaction, and Perception via Neural Oscillations in Dynamic Interaction with External Surroundings**

Daya Shankar Gupta and Silmar Teixeira

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/intechopen.76397

© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### **Abstract**

There has been considerable interest in the role of the time-dimension in the functions of the brain, which has largely been limited to time perception and the timing of behavior. However, during the past few years it has become increasingly clear that the role of the time-dimension extends to other complex cognitive functions, such as the motor control of a vehicle, sensory perception, and the processing of imageries, to name a few. The accurate representation of the time-dimension is important for several neural mechanisms, which include temporal coupling, coincidence detection, and the processing of Shannon information. These mechanisms play key roles in processing information during the interaction of the brain with the physical surroundings.

**Keywords:** temporal processing of information, temporal coupling, time-dimension in the brain, neural clocks, timing behavior, muscle synergy, action-reaction

#### **1. Introduction**

Physical time-dimension is an integral part of information processing in the brain by virtue of its role in representing information as a spike pattern on the time axis [1]. Accordingly, the final product of information processing taking place in the brain, such as perception, action, or interaction with the external environment, is dependent on the accurate representation of the time-dimension in neural circuits. The physical surroundings with which humans interact are four-dimensional: three geometric dimensions and the time-dimension. Psychological time has been a subject of intellectual curiosity for most of known history, but the time-dimension has been studied as the fourth physical dimension only during the last century [2, 3]. The time-dimension, unlike other physical qualities, is never perceived as a novelty but only reported as the flow of time [1], and therefore, it is not easy to study by observation alone.


Time-dimension plays a key role in many aspects of information processing in the brain. Temporal coupling of two or more events, neural or physical, occurs when they share the same coordinate on the time axis, provided that the time axis is represented accurately for all events. This allows the binding of events at the level of neural circuits or of the external physical surroundings. External events that are temporally coupled after processing in neural circuits lead to the subjective experience of the entire repertoire of sensory inputs.

In coincidence detection, the coincident activation of a third neuron by two oscillator circuits is proposed to play a role in the analysis of the frequency of auditory tones in the brain stem [4, 5]. Mental time travel is a fundamental ability of the human brain to "mentally" relocate oneself to a time point in the past or future [6]. Mental time travel allows the projection of the self into the past or future by reference to a temporal order of events, which is processed by the hippocampus [7].

#### **2. Interaction with external physical surroundings**

Interval timing functions of the brain have arguably played a key role in the survival of the human species during most of their existence. Humans have been sustained by hunting and foraging [8], which requires interaction with external objects with a variety of physical characteristics, such as speed, hardness, elasticity, and matter state—fluid or solid. These interactions are both (a) feedforward motor and (b) sensory—scene searching and sensory input. The feedforward motor control of external objects will be processed by sensory inputs resulting from reactional forces and impedance control (**Figure 3**). Mental time travel, by virtue of the ability to store the temporal order of events, helps to provide priors for the control of interaction with the environment as events unfold.

**Figure 1.** Depicts the modular neural clock mechanism. The modular clock mechanism shown here is a prototype for the modular connections that form networks, processing information during interactions with the external physical environment. The neural clock mechanism (C) can be synchronized with motor circuits (A) or sensory circuits (B) by low-frequency oscillations (broken rectangular lines).

In addition, these activities require multiple temporal couplings of sensory inputs with motor responses at the level of a single individual. The use of limited energy stores by muscles must be optimized, which will constrain the central nervous system to recruit only certain muscle activation patterns.

Ability of individuals to communicate in groups, which is important for foraging, depends upon the temporal coupling of same brain circuits of different individuals to same stimuli, which may be hand gestures. Such brain circuits, found in the posterior parietal and motor areas, form mirror mechanisms in monkeys [9–11]. This is supported by imaging studies done in humans, which showed that the posterior parietal areas and premotor areas became active during action-observation and imitation [12, 13]. Moreover, rich reciprocal connections are present between different areas of the posterior parietal cortex and the premotor cortex in monkeys, which provide the anatomical basis of the mirror neuron mechanism [14, 15].

#### **3. Representation of time-dimension in the brain**


The representation of a time unit by regular events is inherent in the definition of a regular event, which repeats itself after the same interval every time. Time units, such as seconds, measured by the swings of a pendulum in a mechanical clock, can help in the measurement of a duration by counting the number of seconds or swings of the pendulum. Using this analogy, a neural temporal unit is defined as the interval between two adjacent regular spikes or spike bursts and is proposed to represent time units in neural circuits [16].

According to the pacemaker-accumulator model, when neural temporal units are added (or counted) by the accumulator, it processes neural time intervals for various subjective or motor tasks. According to this model, if the neural temporal units represented by neural oscillators in the brain's timing circuits are smaller on the physical time scale, then the subjective time reported in the task will be greater than the elapsed physical time. This is the result of a greater number of neural temporal units being present within a given external time duration. As predicted by the pacemaker-accumulator model, a greater number of neural temporal units within a timed interval will lead to subjective overestimation of intervals. This is supported by a study in which entrainment using visual flickers with faster frequency increased time measurement in a time reproduction task [17]. Entrainment by faster flickers increases the frequency of neural oscillators in the brain, which leads to a smaller temporal unit—a result of entrainment by a faster rate of oscillations. Another study used auditory click trains to increase the speed of neural clocks, examined their effect on a pair-wise duration comparison and a verbal time estimation task, and arrived at similar conclusions [18]. However, not all entrainment studies agree with this conclusion [19]. Thus, a different role of the neural oscillator has been suggested within the modular clock model [16]. According to this formulation, the role of rhythmic activity is only to represent a physical property of the time-dimension in various neural clock mechanisms. Rhythmic activities are shown to be important for cognitive functions and various forms of behavior, as reviewed by Herbst and Landau [20], but their precise role is yet to be understood.
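The accumulator arithmetic above can be made concrete with a small numerical sketch (the function names and rates are illustrative assumptions, not values from the chapter):

```python
def accumulate_ticks(duration_s, pacemaker_hz):
    """Count pacemaker pulses (neural temporal units) within a physical duration."""
    return int(duration_s * pacemaker_hz)

def subjective_duration(duration_s, pacemaker_hz, calibrated_hz):
    """Read out the accumulated count using the calibrated unit size.

    If entrainment speeds the pacemaker above its calibrated rate, more
    units fit into the same physical interval, so the readout
    overestimates elapsed time, matching the flicker result [17].
    """
    ticks = accumulate_ticks(duration_s, pacemaker_hz)
    return ticks / calibrated_hz

baseline = subjective_duration(2.0, pacemaker_hz=10.0, calibrated_hz=10.0)
entrained = subjective_duration(2.0, pacemaker_hz=12.0, calibrated_hz=10.0)
```

With the pacemaker entrained from 10 Hz to 12 Hz while the readout is still calibrated at 10 Hz, a 2.0 s physical interval is reported as 2.4 s, i.e., subjectively overestimated.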

#### **4. Modular connections of neural clocks: basis for timing functions**

A modular model of distributed neural clocks is proposed by Gupta [16] for the interval timing functions of the brain, such as timed motor movements, time reproduction, and time estimation. As depicted in the schematic in **Figure 1**, the proposed neural clock mechanism has three main modular components [16]: (1) a calibration module, the sensory and motor circuits of the brain that are involved in feedback interaction with the external four-dimensional surroundings; (2) an endogenous neural oscillator to represent physical time in neural circuits; and (3) a clock mechanism for timing the behavioral response.


The functional role of the calibration module in the neural clock mechanism is to transfer information about the physical time from external surroundings into neural circuits. Physical time information is transferred into neural circuits when motor and sensory information is processed during an interaction of the brain with external surroundings. During the feedback process resulting from the interaction with physical surroundings, circuits associated with motor and sensory functions produce neuronal activities that parallel the interactions between effector organs, muscles, sensory organs, and external physical objects. A comparison of the intervals between changes external to the body and the intervals between corresponding feedback changes in neural activities in the brain is proposed to serve as a basis for the calibration of neural clocks.
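As a rough illustration of this calibration idea (a minimal sketch under assumed numbers, not the authors' algorithm), one can estimate the physical duration of a neural temporal unit by pairing externally timed intervals, known from sensorimotor feedback, with the internal tick counts spanning them:

```python
def calibrate_tick(external_intervals_s, internal_tick_counts):
    """Estimate the physical duration of one neural temporal unit.

    Each external interval (seconds, known from feedback during
    interaction with the surroundings) is paired with the number of
    internal oscillator ticks counted over that interval; the averaged
    ratio calibrates the tick against physical time.
    """
    estimates = [ext / ticks
                 for ext, ticks in zip(external_intervals_s, internal_tick_counts)]
    return sum(estimates) / len(estimates)

# Feedback from three interactions: external intervals of 1.0 s, 2.0 s,
# and 0.5 s, spanned by 10, 21, and 5 internal ticks respectively.
tick_s = calibrate_tick([1.0, 2.0, 0.5], [10, 21, 5])
```

The slight disagreement among the three ratios stands in for the drift that repeated feedback interactions would gradually correct.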

The endogenous neural oscillator is the second component of the proposed modular neural clock mechanism. Neural oscillators are rhythmic neural activities within the brain, such as neural oscillations, periodic bursts, or rhythmic circuits. The idea of a neural oscillator representing the time-dimension is very old and is based on the intuitive role of the pendulum in mechanical clocks. Treisman [21] originally proposed the pacemaker-accumulator model, according to which a neural oscillator generates pulses that are accumulated by a counter to encode time intervals in neural circuits [22].

Instead of serving as the source for temporal units for pulse accumulation in the Treisman model, the endogenous neural oscillator in the modular clock mechanism only represents a property of physical time. Further, note that the periodicity of the endogenous oscillator does not simply represent a number that is added numerically to process time intervals for neural or psychological processes. However, as mentioned later, the numerical quantification of time intervals in neural processes is likely encoded by spike patterns and their temporal relationship. Neural oscillators, representing physical time, along with the calibration module and various task-specific circuits, synchronously generate information in networks, forming modular clock mechanism (**Figure 1**) to encode timed behavior.

Task-relevant neural clock is the third module, which is generally a part of local circuits, distributed across the central nervous system. For example, the neural timers for visual time reproduction tasks in seconds range are present in the right dorsolateral prefrontal cortex [23–25].

At present, it is not clear how neural patterns representing information are coded and decoded to result in behavior, such as timing movements or time estimation. It is likely that a combination of different patterns, such as spike patterns, logic states of neural circuits, and the ramping activity of neurons, is important for coding and decoding information, leading to timed behavioral responses [16].

Quantitative measurements, such as time intervals, are likely represented in neural circuits in numerical representations [26], such as spike patterns, which can be read as the binary numbers [1, 27]. Studies suggest that the information about behaviorally relevant quantities such as timing behavior is not represented by the rates of spikes but rather by the intervals between their arrivals at synapses [26]. Coincidental activation of neurons by two different sources in a periodicity analysis model is proposed by Langner and Bahmer [28] for analysis of auditory signals in the brain stem.
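The idea of reading a spike pattern "as a binary number" [1, 27] can be sketched as follows; the cycle length, the one-bit-per-cycle convention, and the function name are illustrative assumptions, not a mechanism specified in the chapter:

```python
def spikes_to_binary(spike_times_s, cycle_s, n_cycles):
    """Read a spike pattern as a binary number, one bit per oscillation cycle.

    A cycle containing at least one spike contributes a 1-bit, an empty
    cycle a 0-bit; earlier cycles are the more significant bits.
    """
    bits = 0
    for k in range(n_cycles):
        start, end = k * cycle_s, (k + 1) * cycle_s
        bit = any(start <= t < end for t in spike_times_s)
        bits = (bits << 1) | int(bit)
    return bits

# Spikes land in cycles 0, 1, and 3 of four 25 ms cycles:
# pattern 1101 read as a binary number.
value = spikes_to_binary([0.010, 0.030, 0.080], cycle_s=0.025, n_cycles=4)
```

The same spike train read at a different cycle length yields a different number, which is one way the intervals between spike arrivals, rather than the spike rate, could carry the quantity.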

Although the neurobiological basis of the information processing underlying the timing of behavior remains far from clear [26], some consensus is present; for example, neurons encode sensory information using a small number of active neurons, a strategy called sparse coding [29]. Independent activation of a small number of neurons is consistent with the cytoarchitectonic data that show a low level of connectivity among the neurons of the cortex.

#### **5. Cytoarchitectural basis of modular connections**


Modular connections of local circuits forming dynamic networks are supported by cytoarchitectural and electrophysiological data from the study of the cortex. The cerebral cortex is divided into tiny computational units of mm-range size, called canonical microcircuits [30]. The neurons forming the canonical microcircuits have limited but conserved patterns of inputs and outputs [31]. Although the neurons within a canonical microcircuit are interconnected in specific patterns, the connectivity rates between most neuron pairs in the cortex are very low, less than 10–20% in most cases [31]. Due to the low level of connectivity among neurons, different combinations of multiple canonical microcircuits can be configured into a large variety of neuronal circuits and, therefore, provide the ability to perform a wide variety of computations. This feature is particularly useful for allowing small areas of the cortex to act as relatively independent modules in neural networks.

Moreover, inputs relayed to the cortex are organized in spatial patterns. This, combined with a little direct interaction between the canonical circuits in horizontal direction, results in independence of small cortical areas, which allows small cortical areas to act as relatively independent local circuits or modules. Local circuits are interconnected by synchronization, processing information to allow the brain to interact with the external physical world. The synchronization of local circuits is due to the oscillating states of excitability and inhibition, which allows neurons to fire during a specific phase of a long-range oscillation when neurons are excitable—coupling the modules of a neural network [27, 32, 33]. Periodic excitability of neurons during synchronization, due to pacing by inhibitory neurons, produces oscillating extracellular currents that are recorded as neural oscillations [34], which show different patterns during different behaviors. The behavioral significance of synchronization is due to the temporal coupling of neural events that underlie action and perception.
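The phase coupling that binds local circuits into a network can be illustrated with a generic two-oscillator, Kuramoto-type sketch; this is a standard abstraction, not a model proposed in the chapter, and all parameters are illustrative:

```python
import math

def synchronize(freq_a_hz, freq_b_hz, coupling, steps=20000, dt=1e-4):
    """Two phase oscillators standing in for two local circuits.

    With sufficient coupling the phase difference locks, so both
    circuits are excitable at the same moments -- the temporal coupling
    that lets modules of a network exchange spikes during the excitable
    phase of a shared oscillation. Returns |sin(dphi/2)|, ~0 when
    phase-locked in phase.
    """
    pa, pb = 0.0, math.pi / 2  # start out of phase
    wa, wb = 2 * math.pi * freq_a_hz, 2 * math.pi * freq_b_hz
    for _ in range(steps):  # forward-Euler integration over 2 s
        diff = pb - pa
        pa += dt * (wa + coupling * math.sin(diff))
        pb += dt * (wb - coupling * math.sin(diff))
    return abs(math.sin((pb - pa) / 2))

locked = synchronize(10.0, 10.5, coupling=20.0)   # near zero: phase-locked
unlocked = synchronize(10.0, 10.5, coupling=0.0)  # large: phases drift apart
```

Even with slightly different natural frequencies (10 vs. 10.5 Hz), the coupled pair settles into a fixed phase relationship, whereas the uncoupled pair drifts.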

#### **6. Role of oscillations in the representation of time-dimension**

Since the discovery of neural oscillations in electroencephalography (EEG) by Hans Berger at the University of Jena, our understanding of their importance in cognitive functions has grown exponentially. Theoretical considerations as well as experimental evidence suggest that the time-dimension is represented in the central nervous system by the rhythmic activity of neural oscillations [1, 16, 27]. The importance of neural oscillations in cognitive functions became known when a synchronized neuronal firing pattern, tightly correlated with the phase and amplitude of an oscillatory local field potential in the cat visual cortex, was reported in 1989 by Gray and Singer [35]. Furthermore, the stimuli were correlated to the amplitude of the oscillatory field in specific neuron clusters [35]. Neural oscillations represent a common unit of the physical time-dimension in information processing when they synchronize different parts of the brain into networks [27].

of the primate brain. Survival in many circumstances depends on the temporal coupling of action with the perception during the interaction with the external physical surroundings.

Convergence of Action, Reaction, and Perception via Neural Oscillations in Dynamic Interaction…

http://dx.doi.org/10.5772/intechopen.76397

13

#### **6. Role of oscillations in the representation of time-dimension**

Since the discovery of neural oscillations in electroencephalography (EEG) by Hans Berger at the University of Jena, our understanding of their importance in cognitive functions has grown exponentially. Theoretical considerations as well as experimental evidence suggest that time-dimension is represented in the central nervous system by the rhythmic activity of neural oscillations [1, 16, 27]. The importance of neural oscillations in cognitive functions became known when a synchronized neuronal firing pattern, tightly correlated with the phase and amplitude of an oscillatory local field potential in the cat visual cortex, was reported in 1989 by Gray and Singer [35]. Furthermore, the stimuli were correlated with the amplitude of the oscillatory field in specific neuron clusters [35]. Neural oscillations represent a common unit of physical time-dimension in information processing when they synchronize different parts of the brain into networks [27].

An accumulating body of evidence now suggests that beta-range neural oscillations represent physical time information in the brain [16, 19, 27, 36–40]. A recent study concluded that beta oscillations play an important role in the retention and manipulation of time information held in working memory [37]. A causal relationship between beta oscillations and the control of movements has been shown [41], which further suggests that beta oscillations are responsible for coupling the neural-timer mechanism with the motor circuits for the control of movements.

#### **7. Representation of time-dimension in lower motor circuits**

Central pattern generators (CPGs) are networks of neurons in the spinal cord forming oscillators that play a role in hierarchical control, generating rhythmic motor activities in animals, such as walking and chewing [42]. The rhythmic activity of CPG networks, according to the formulations of the distributed modular clock mechanism, represents time-dimension in spinal cord motor circuits that helps to control the temporal characteristics of locomotion. Although CPG activity is observed after deafferentation or spinal cord injury, the sensory inputs, especially proprioceptive signals, are crucial for its role in locomotion [43]. The function of proprioceptive signals is likely the calibration of the neural temporal units represented by the rhythmic activity of the CPG [16].

The evidence for a direct role of spinal cord CPG networks in human locomotion is scant and mostly indirect [44, 45]. Some beneficial effects are seen in spinal cord injury patients following locomotor training [46], which can be attributed in part to plastic changes in spinal cord CPG networks that update physical time-dimension information from sensory, especially proprioceptive, inputs during the training sessions [16].

#### **8. Role of temporal coupling in information processing**

Representation of time-dimension in the brain is important for human and nonhuman primates' ability to survive. As argued earlier, the representation of time-dimension in neural circuits plays a key role in the information processing underlying complex cognitive functions of the primate brain. Survival in many circumstances depends on the temporal coupling of action with perception during the interaction with the external physical surroundings.

The degree of coupling between action and perception may vary depending upon the demands of a task, such as speed and the cost of failure. To couple feedforward motor output with sensory inputs on a small temporal scale, a more accurate representation of time information in neural circuits is required.

#### **9. Role of coincidence detection in information processing**

Coincidence detection refers to the occurrence of an event only when two or more events take place synchronously. Oscillations are hypothesized to play a role in decoding the temporal information in ramping neuronal activities [16], which are commonly observed in the cortex [47–50]. Coincidence detection would play a role in generating the information that produces timed behavior. This information is processed when a coincidence detector neuron is stimulated both by excitatory presynaptic terminals controlled by gamma oscillations [51] and by an increasing excitatory input coming from a ramping neuronal activity (**Figure 2**). This coincidence detection model is based on the periodicity-analyzing model for auditory signals in the brain stem proposed by Langner and Bahmer [28].

**Figure 2.** This figure shows how coincidence activation can result in the analysis of temporal information represented by neuronal ramping activities, commonly observed in the cortex. The ramping neuronal activity provides an increasing input to a neuron (B) that is synchronized by a high frequency nested within a low frequency (C). The neuron synchronized by (C) is excited by high-frequency gamma oscillations (periodicity τhigh) within a specific phase of a low-frequency oscillation (periodicity τlow). The excitation of neuron (B) will result in an activity (coincidence detection) only if three events are simultaneous: (1) input from the neuron with climbing output (A) after it reaches a particular level of activity; (2) a phase of the long-range oscillation that allows gamma cycles; (3) the excitatory phase of the gamma cycles. Thus, the integration period resulting from coincidence analysis is represented by the formula shown within the figure, where m and n are integers.
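A minimal numerical sketch of this three-condition coincidence rule is shown below; the frequencies, thresholds, and linear ramp are illustrative assumptions, not values taken from the model:

```python
import numpy as np

# Illustrative parameters (assumed, not from the chapter).
dt = 0.001                      # 1 ms time step
t = np.arange(0.0, 1.0, dt)     # one second of simulated activity
f_low, f_high = 5.0, 40.0       # nested slow and gamma-range rhythms

ramp = t / t[-1]                            # climbing neuronal activity (A)
slow = np.sin(2 * np.pi * f_low * t)        # long-range oscillation (C)
gamma = np.sin(2 * np.pi * f_high * t)      # nested gamma oscillation

# Coincidence detection: the neuron (B) responds only when all three
# events are simultaneous -- suprathreshold ramping input, permissive
# slow-oscillation phase, and the excitatory phase of the gamma cycle.
coincide = (ramp > 0.6) & (slow > 0.7) & (gamma > 0.7)

first_response = t[np.argmax(coincide)]     # earliest coincidence time
```

Because the ramp gates the response, the first coincidence cannot occur before the ramp crosses threshold, which is how ramping activity is read out as timed behavior.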


#### **10. Input of time-dimension during cochlear processing contributes to sound perception**

Psychoacoustical studies have indicated that the perception of speech is not adequately accounted for by place-frequency mechanisms [52]; the temporal information represented in sounds is also important for the perception of speech [52]. It is therefore noteworthy that recent theoretical work and a growing number of experimental studies indicate that time-dimension is an integral part of the information processing underlying perceptual functions of the cortex [16, 27].

Most natural sounds are modulated in amplitude, which can be explained by a mixture of sound waves of slightly different frequencies producing destructive interference near the tails and summation near the peak in the center [53, 54]. Thus, time-dimension is represented by the modulation frequency in addition to the fine oscillations of air pressure causing sound waves. The oscillations at both frequencies, forming the structure of natural sounds, represent physical time-dimension [16]. The processing of sound waves by the cochlea produces amplitude modulation (AM) signals in the brain stem. The spike structure of AM signals is phase-locked to the changes in pressure produced by amplitude-modulated sound waves during transduction by the cochlea. Studies also suggest the presence of a tonotopic organization of subpopulations of neurons tuned to modulation frequencies [53], consistent with the transduction of the time information in modulated sound waves, which is later processed in the auditory cortical areas contributing to the perceptual qualities of sound.
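The amplitude modulation that arises from mixing two slightly different frequencies can be checked directly; the tone frequencies and sample rate below are arbitrary illustrative choices:

```python
import numpy as np

fs = 8000                          # sample rate in Hz (illustrative)
t = np.arange(0.0, 1.0, 1.0 / fs)
f1, f2 = 440.0, 444.0              # two slightly different frequencies

mix = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Trigonometric identity: the sum equals a carrier at the mean
# frequency whose amplitude is modulated at half the difference
# frequency, producing beats at f2 - f1 = 4 Hz.
carrier = np.sin(2 * np.pi * (f1 + f2) / 2 * t)
envelope = 2 * np.cos(2 * np.pi * (f2 - f1) / 2 * t)

# Near the envelope zeros the two waves cancel (destructive
# interference); near its peaks they sum.
```

The slow envelope and the fast carrier are the two timescales of oscillation that, as described above, jointly carry physical time-dimension in natural sounds.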

#### **11. Role of time-dimension in movements**

#### **11.1. What are muscle synergies?**

Muscle synergies represent the central nervous system's response to the redundancy problem in motor movements: there are many more degrees of movement possible than there are muscle activation patterns that can produce movements [55, 56]. Several studies in vertebrates and invertebrates demonstrate the presence of elements, or muscle synergies, from which complex patterns of motor movements can be constructed [56–58].

#### **11.2. Computational models of muscle synergies**

According to the computational models of muscle synergy, a synergy can be described as a D-dimensional vector field, where D is the number of muscles involved in the movements [59, 60]. The level of contraction of each muscle, represented by a weighting coefficient (Wi) multiplied by a coefficient (Ci) to yield Ci(t)Wi, represents one of the dimensions. The level of contraction (Ci·Wi, where Wi is the weight of contraction) is referred to as a synergy in the following computational models. The synergies are extracted using non-negative matrix factorization of EMG patterns. Prior to the analysis, the EMG data are adjusted either by subtracting the tonic component of EMG activity responsible for postural activity and balance [60] or by normalizing the EMG data as discussed earlier [61].

The maximum number of dimensions (D) in the muscle synergies after extraction is the number of all muscles that could play a role in the movement. After the extraction of muscle synergies from EMG patterns, the minimum number (N) of synergies is computed such that they account for 80–90% of the variability of the movements, using either the coefficient of determination (CD) or the variability accounted for (VAF) criterion. It has been observed that a small number of extracted muscle synergies can account for most of the variability of movements, while the others account for the remaining 10–20% of the variability [59]. This provides a solution to the redundancy problem: a few patterns of movement are preferred over the far larger number of possible movements.
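The extraction pipeline described above (non-negative matrix factorization plus a VAF threshold) can be sketched on synthetic data. The plain multiplicative-update NMF, the synthetic "EMG" matrix, and the 90% threshold are all illustrative stand-ins, not the pipeline of the cited studies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 8 muscles, 200 samples, generated from
# 3 "true" synergies (not real EMG recordings).
D, T, N_true = 8, 200, 3
emg = rng.random((D, N_true)) @ rng.random((N_true, T))

def nmf(V, n, iters=500, eps=1e-9):
    """Plain Lee-Seung multiplicative-update NMF: V ~ W @ C."""
    local = np.random.default_rng(1)
    W = local.random((V.shape[0], n))   # synergy weights (D x n)
    C = local.random((n, V.shape[1]))   # time-varying coefficients
    for _ in range(iters):
        C *= (W.T @ V) / (W.T @ W @ C + eps)
        W *= (V @ C.T) / (W @ C @ C.T + eps)
    return W, C

def vaf(V, W, C):
    """Variability accounted for by the rank-n reconstruction."""
    resid = V - W @ C
    return 1.0 - np.sum(resid**2) / np.sum(V**2)

# Choose the smallest number of synergies reaching ~90% VAF,
# mirroring the CD/VAF criterion described above.
for n in range(1, D + 1):
    W, C = nmf(emg, n)
    if vaf(emg, W, C) >= 0.90:
        break
n_synergies = n
```

Because the synthetic data are built from three synergies, far fewer than D components suffice, which is the redundancy-reduction effect the text describes.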

#### *11.2.1. Spatial or time-invariant synergy model*


$$\mathbf{m}(t) = \sum_{i=1}^{N} C_i(t)\,\mathbf{W}_i \tag{1}$$

In a time-invariant synergy, there is a fixed level of contraction (weights) of the muscles (M1, M2, …, Mn), denoted Wi for the ith synergy. Each time-invariant ith synergy is activated by the nervous system temporally, represented by a time-varying coefficient Ci(t) for the ith synergy at time t. In the ith muscle synergy, the fixed weights of contraction of the muscles are multiplied by the time-varying non-negative coefficient Ci to obtain the muscle activation contributed by each synergy Wi [58]. Time-invariant or spatial synergies can be described at a more abstract level as the uniform modulation of a D-dimensional vector field wherein the amplitude of the vector is the fixed levels of contraction of the individual muscles.
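Eq. (1) can be made concrete with toy numbers; the weights and coefficient profiles below are invented purely for illustration:

```python
import numpy as np

# Illustrative numbers only: 4 muscles, 2 time-invariant synergies.
W = np.array([[0.8, 0.1],
              [0.6, 0.3],
              [0.1, 0.9],
              [0.0, 0.7]])          # fixed weights W_i (muscles x N)

t = np.linspace(0, 1, 100)
C = np.vstack([np.sin(np.pi * t),   # C_1(t): rises then falls
               t])                  # C_2(t): ramps up

# Eq. (1): m(t) = sum_i C_i(t) * W_i -- a weighted mixture of fixed
# synergy vectors, modulated over time by the coefficients.
m = W @ C                           # muscle activations (muscles x time)
```

Each column of `m` is a muscle activation pattern built from only N = 2 numbers (the coefficients), which is the dimensionality reduction the synergy model provides.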

#### *11.2.2. Time-varying synergy model*

$$\mathbf{m}(t) = \sum_{i=1}^{N} C_i\,\mathbf{W}_i(t - t_i) \tag{2}$$

In time-varying synergy, there are multiple waveforms corresponding to muscle contractions or vector amplitudes of individual synergies. Synchronous contraction waveforms of different muscles are multiplied by the same coefficient (Ci) for the ith synergy and shifted in time by a delay ti and added to generate muscle activity pattern [59].
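Eq. (2) can likewise be sketched numerically: each synergy is a short waveform per muscle that is scaled by its coefficient and shifted by its delay before summation. The bell-shaped waveform and all numbers are illustrative assumptions:

```python
import numpy as np

# Illustrative sketch of Eq. (2): 3 muscles, 2 time-varying synergies.
T = 200
base = np.exp(-0.5 * ((np.arange(50) - 25) / 8.0) ** 2)  # bell waveform
W1 = np.vstack([1.0 * base, 0.5 * base, 0.2 * base])     # synergy 1
W2 = np.vstack([0.1 * base, 0.9 * base, 0.6 * base])     # synergy 2

def place(W, c, shift, T):
    """Scale a synergy waveform by c and delay it by `shift` samples."""
    out = np.zeros((W.shape[0], T))
    out[:, shift:shift + W.shape[1]] = c * W
    return out

# m(t) = sum_i C_i * W_i(t - t_i): shifted, scaled, then summed.
m = place(W1, c=1.0, shift=20, T=T) + place(W2, c=0.7, shift=90, T=T)
```

Shifting the same waveforms by different delays t_i while keeping their within-synergy shape fixed is what distinguishes this model from the spatial synergy model of Eq. (1).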

#### **11.3. Recruitment of muscle synergies: a computational strategy by the brain to optimize motor movements within the constraints of musculoskeletal system, energy cost, and external physical surroundings**

The computational models of muscle synergies reveal the modular control of muscles during movements. Muscle synergy recruitment in the computational model can be explained by two orthogonal components. The spatial synergy model (Eq. 1) suggests that signals originating from a cortical circuit (from convergent to divergent; defined as type I) control multiple muscles by sending signals of different strengths to different muscles with a fixed amplitude relationship between them, while another circuit (defined as type II) modulates the activity of type I circuits, producing a time-varying modulation of fixed levels of muscle contractions constrained by the amplitude relationship, reflecting the spatial demands of the task. During a movement, synergies are recruited by sending signals from type II circuits to type I circuits. When several synergies (Wi; type I circuits) are recruited by several corresponding sources of time-varying signals (type II circuits), the movement is constructed by a computational mechanism. The orthogonal components of spatial synergies are detected by decomposing EMG patterns.


The type II circuits, which produce time-varying signals, represent time-dimension in the information processing that underlies the control of movements. Due to the representation of time-dimension, type II circuits are likely to be controlled by dynamic inputs, such as speed and temporal coupling. Type I circuits are responsible for representing the spatial directions of movements. Muscle contractions represented by type I circuits determine the set of conditions (limb and joint positions) for the specific directions required by external task conditions. Furthermore, type I and type II circuits are orthogonal, or statistically independent, which is suggested by the decomposition of EMG patterns by matrix factorization.

It is noteworthy that movements in humans are smooth. This suggests that muscles are not controlled individually by independent feedback processes, which would increase variations between the contraction states of individual muscles, thereby decreasing the smoothness of movements. Instead, the muscles in each spatial synergy are controlled by a single signal (represented by the time-varying coefficient in Eq. 1), which reduces the number of variables being controlled, leading to an overall reduction in variations within movements. Thus, muscle synergies represent simpler computational solutions implemented by the central nervous system for controlling movements. Moreover, only a limited number of synergies can be recruited within the constraints of the musculoskeletal system.

#### **11.4. Control of muscle synergy recruitment**

According to the classical view of the control of reaching movements to catch a ball, a neural representation of the endpoint of the task is created, such as the hand meeting the ball in physical space to execute a catch [62, 63]. This neural representation is formed by an initial approximation, which then evolves temporally [63]. The motor movements are produced by the central nervous system recruiting a limited number of synergies, which can be directed by the neural representation of the prospective endpoint of the task. State estimators, which are discussed later, play a key role in optimum feedback control, as they predict the neural representation of the endpoint. The time-varying coefficient represents the time-dimension in the information processing underlying movements, as it is modulated with time. Thus, the time-varying signal (Ci) helps to determine the speed of movement, while the other orthogonal component, the muscle synergies, is a response to spatial and musculoskeletal constraints.

Temporo-parietal cortical areas are believed to play a significant role in the feedback processes that help to represent the musculoskeletal system in the external four-dimensional environment [64, 65]. Studies of the computational models of muscle synergies indicate that the nervous system recruits a limited number of synergies, which it optimizes according to a temporally evolving map of the neural representation of the hand meeting the ball. The recruitment of muscle synergies is also supported by the numerous specific, reciprocal connections between the regions of the parietal cortical areas and the frontal motor areas [66]. These specific, multiple, reciprocal connections may form the basis for the recruitment of muscle synergies as well as for the temporally evolving neural representation of the endpoint of the task. Furthermore, it is likely that the direct effect of the state estimator module of optimum feedback control is on the neural representation of the endpoint of the task rather than on muscle activities. The muscles are controlled from frontal motor areas, which are functionally connected to temporo-parietal areas where the multimodal integration of sensory information (proprioceptive, vestibular, and visual) takes place [64, 65].

#### **11.5. Optimum feedback control theory**


Although motor movements in humans are smooth, motor performance shows a large variability from trial to trial. This large variability in movements is a reflection of inherent noise in the motor circuits, also called signal-generated noise, in addition to the noise present in sensory circuits and in the external sources of sensory inputs. Optimum feedback control is used by the central nervous system to modify the feedback signal so as to control some index of motor function, such as minimization of endpoint errors or achievement of a maximum jump. The control of motor outcome is optimum when it meets the spatiotemporal constraints of a task, such as the hand meeting a ball during a catch. As a result, there is a decrease in variations along the trajectory of the task; however, this is accompanied by an increase in variations along task-irrelevant trajectories. State estimator functions provide outputs to the feedback controller, which helps to produce the optimum outcome [67].

A modification of optimum feedback control would consider the feedforward motor output via synergy recruitment. The state estimator would recruit a limited number of muscle synergies that account for 80–90% of the variability of the movements according to VAF or CD criteria. A mechanism to explain the control of the recruitment of synergies must take into account the constraints that limit movements. Muscle contraction velocity profiles are bell-shaped, due to the constraints of viscoelastic properties as well as the biochemical events underlying the sliding filament mechanism. The allowed muscle contractions must minimize the energy cost of using skeletal muscles, which is imposed as a heritable feature on neural circuits by evolutionary pressures. Furthermore, only certain movements are allowed by the joints, depending on various factors, such as the shape and structure of the joints. The function of the state estimator depends upon sensory inputs and the efferent copy of cortical motor commands, which provide an estimation of online changes occurring in the state of the musculoskeletal system [67].
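As a sketch of how the number of synergies might be chosen by a VAF criterion, the following extracts synergies from a synthetic EMG matrix with plain nonnegative matrix factorization (multiplicative updates) and keeps the smallest number reaching 90% VAF. The data, solver, and threshold are illustrative assumptions, not the procedure of any study cited here:

```python
import numpy as np

def extract_synergies(emg, n_syn, n_iter=500, seed=0):
    """Factor emg (muscles x time) ~= W (muscles x n_syn) @ C (n_syn x time)."""
    rng = np.random.default_rng(seed)
    n_muscles, n_samples = emg.shape
    W = rng.random((n_muscles, n_syn)) + 1e-6
    C = rng.random((n_syn, n_samples)) + 1e-6
    for _ in range(n_iter):  # multiplicative NMF updates
        C *= (W.T @ emg) / (W.T @ W @ C + 1e-12)
        W *= (emg @ C.T) / (W @ C @ C.T + 1e-12)
    return W, C

def vaf(emg, W, C):
    """Variability accounted for by the reconstruction W @ C."""
    residual = emg - W @ C
    return 1.0 - (residual ** 2).sum() / (emg ** 2).sum()

# Synthetic EMG generated from 3 hidden synergies (hypothetical data).
rng = np.random.default_rng(1)
emg = rng.random((8, 3)) @ rng.random((3, 200)) + 0.01 * rng.random((8, 200))

# Recruit the smallest set of synergies whose reconstruction reaches 90% VAF.
for n_syn in range(1, 8):
    W, C = extract_synergies(emg, n_syn)
    if vaf(emg, W, C) >= 0.90:
        break
```

In practice, EMG envelopes are rectified and low-pass filtered before factorization; the sketch skips preprocessing to keep the VAF-based stopping rule in focus.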

Please note that sensory inputs and efferent copies of motor commands must be integrated with the time-dimension in order to serve the state estimator for the optimum control of motor movements via synergy recruitment. The early online control of movements likely involves the integration of unimodal sensory state estimates instead of a single multimodal state estimate [68]. Studies have shown that during a reaching movement task, joint angle variability peaks midway through the task [69], but there is high accuracy at the endpoint [70], which suggests optimum feedback control of movements.
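One common way to sketch such a state estimator is a Kalman-style update that blends the forward-model prediction (driven by the efference copy of the motor command) with sensory feedback, each weighted by its reliability. This scalar sketch uses assumed noise variances, not parameters from [67, 68]:

```python
def estimate_step(x_est, P, u, y, Q=0.01, R=0.05):
    """One cycle of state estimation for a scalar state x.

    x_est, P : previous estimate and its uncertainty
    u        : efference copy of the motor command (predicted change)
    y        : sensory measurement of the state
    Q, R     : assumed motor (process) and sensory noise variances
    """
    # Predict with the internal forward model driven by the efference copy.
    x_pred = x_est + u
    P_pred = P + Q
    # Correct with sensory input; K weighs prediction against sensation.
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (y - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

# Starting from an uncertain estimate, uncertainty shrinks toward a
# steady state as motor and sensory information are integrated over time.
x, P = 0.0, 1.0
for _ in range(50):
    x, P = estimate_step(x, P, u=0.0, y=0.0)
print(round(P, 3))  # 0.018
```

The gain K rises when sensory noise R is small and falls when it is large, which is one way to read the finding that early corrections may rely on separate unimodal estimates rather than a single fused one.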

Optimum feedback control depends on a set of distributed circuits, among which the primary motor cortex appears to play a key role. It has been shown that the primary motor cortex receives inputs from several brain areas, including the premotor cortex, primary somatosensory cortex, posterior parietal cortex, and pathways via the thalamus from the cerebellum, which form some of the structures involved in optimum feedback control. A recent study also shows that if the feedback controller is represented in the primary motor cortex, optimum feedback control describes multiple representations of preferred directions of torque or movements in which muscles are most active [71]. This is consistent with the role of the primary motor cortex as the site where individual synergies, involved in constructing voluntary movements, are recruited by the activation of individual spatial synergies by time-varying signals.

#### **12. Controlling a motor vehicle: a special case of convergence of action, reaction, and perception**

Newton's third law of motion states that for every action, there is an equal and opposite reaction. Newton's third law can be applied to analyze the interaction of the brain with the four-dimensional physical surroundings. The action, which results from movements, produces reactional forces on the human body, resulting in changes in the activity of mechanoreceptors, which then modify the activity of the musculoskeletal system responsible for movements in a feedback process. Thus, pairs of action and reaction forces during interaction between primates and their environment lead to changes in the activities of sensory and motor functions of the brain via feedback processes. In addition, successful interaction between the brain and the external environment depends on several other factors, which include the representation of time and spatial dimensions in neural circuits of the brain. During this interaction, there is a complex interplay between feedforward motor processes and sensory inputs. An important example of such a complex interaction is a driver controlling a motor vehicle (**Figure 3**). The control of the vehicle is due to two main motor actions: pressing the gas pedal to change the speed of the vehicle and turning the steering wheel to change direction. The motor control of the vehicle, which is the feedforward motor prediction based on internally generated top-down feedforward output, is directed by a negative feedback process involving sensory inputs of different modalities.
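The negative feedback loop described here, a feedforward pedal command corrected by the sensed speed error, can be caricatured as a simple discrete control loop. All dynamics and gains are invented for illustration:

```python
def drive_to(target_speed, steps=500, gain=0.2, drag=0.05):
    """Feedforward pedal command corrected by sensory feedback of speed."""
    speed, pedal = 0.0, 0.0
    for _ in range(steps):
        error = target_speed - speed         # sensed mismatch (negative feedback)
        pedal += gain * error                # adjust the feedforward motor command
        speed += 0.1 * pedal - drag * speed  # toy vehicle dynamics with drag
    return speed

print(round(drive_to(30.0), 1))  # converges close to 30.0
```

Because the pedal command integrates the error, the loop drives the steady-state error to zero, much as continuous sensory correction keeps the driver near the intended speed despite disturbances.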

**Figure 3.** It depicts the interaction between feedforward motor output and feedback sensory input resulting from the stimulation of proprioceptors by reactional forces acting on the foot and hand. Visual information from the scene is actively gathered to provide additional but crucial sensory input.

Visual input, after processing in the posterior parietal cortex, is relayed to the primary motor cortex, which is important for the control of movements with the help of visual cues. The premotor cortex is involved with the planning of movements and internally generated signals, while the primary somatosensory cortex plays a role in processing inputs related to the perturbations of the musculoskeletal system during movements.

Scene processing involves recognizing the environment, searching for information in the environment, and navigating through the environment [72]. Visual scene-selective regions are the occipital place area, the parahippocampal place area, and the retrosplenial complex, which lie on the lateral occipital, ventral temporal, and medial parietal cortical surfaces, respectively [72, 73]. The parahippocampal place area is believed to reflect a wide range of properties, such as spatial frequency, orientation, texture, object identity, as well as the size of a space [72]. A recent study has shown that the networking of the visual cortex with the retrosplenial cortex is important for mental time travel, which would play a role in constructing prospective scenes [74]; for example, seeing an increasing number of brake lights suggests slowing traffic. The self-projection in time, called mental time travel, relies on various neural structures, encoding memory, mental imagery, and self, which is critical for judgments during interactions with the physical environment [6]. Mental time travel recruits a network that includes the anteromedial temporal, posterior parietal, inferior frontal, temporo-parietal, and insular cortices [6].

#### **13. Biochemical and genetic basis of inter-individual variations in timing functions**

A recent review, which examined a large number of studies, suggests a key role of the dopaminergic system in various temporal functions of the brain [75]. These conclusions are based on various combinations of genetic, pharmacological, physiological, and psychophysical evidence [75]. This review suggests an important role of certain key molecules, which influence the concentration of synaptic dopamine, in various timing functions of the brain. These molecules include catechol-O-methyltransferase (COMT), which degrades synaptic dopamine, and the dopamine transporter (DAT), which removes dopamine from the synaptic cleft. Studies of common gene polymorphisms, the COMT gene (COMT Val158Met) and the DAT gene (SLC6A3 3'-VNTR variant), suggest the involvement of the dopaminergic system in time perception.
Study of the effects of dopamine agonists, such as cocaine and methamphetamine, and dopamine antagonists, such as haloperidol, on the peak interval task suggests that both attentional and clock mechanisms are dependent on dopaminergic neurotransmission to some extent [76]. As argued earlier, cognitive functions are functionally dependent on timing, as the time-dimension is a part of the physical environment with which the brain interacts. In addition, neural oscillations, which represent the time-dimension in neural circuits [16], help to form networks that form the basis of perception and action coupling [27]. Thus, we propose that the dopaminergic system is one of the main chemical bases of timing circuits. This is consistent with the anatomical evidence showing the extensive presence of dopamine terminals in layer I, with a more specific presence in the deeper cortical layers V and VI, which have neurons projecting to the thalamus and striatum [77]. Dopamine can also play roles in the maintenance of homeostasis of neural circuits via cortico-striatal-thalamic-cortical loops [16]. This is consistent with the importance of the dopaminergic system in cognitive and timing functions [78]. This view is further supported by the well-known role played by an abnormal dopaminergic system in schizophrenia, which is a disorder of cognitive functions and time perception. Furthermore, genetic variations in the expression of molecules related to the dopaminergic system in the brain are likely to contribute significantly to the variations in timing and cognitive functions between individuals.

#### **14. Conclusion**

The time-dimension plays a key role in all aspects of brain function. However, the study of the time-dimension has focused on a limited number of functions, which include timing behavior, subjective time perception, and temporal order. In this book chapter, we have extended the importance of the time-dimension to the study of other aspects, such as movements and perception, and highlighted the importance of the temporal coupling of neural and physical events during the interaction with the external environment.

#### **Author details**

Daya Shankar Gupta<sup>1</sup>\* and Silmar Teixeira<sup>2</sup>

\*Address all correspondence to: dayagup@gmail.com

1 Department of Biology, Camden County College, Blackwood, NJ, USA

2 Brain Mapping and Plasticity Laboratory, Federal University of Piauí (UFPI), Parnaíba, Brazil

#### **References**

[1] Gupta DS, Merchant H. Editorial: Understanding the role of the time-dimension in the brain information processing. Frontiers in Psychology. 2017;**8**:240

[2] Neville EH. The Fourth Dimension. Cambridge: Cambridge University Press; 1921. p. 4

[3] Petkov V, Minkowski H. Minkowski spacetime: A hundred years later. In: Fundamental Theories of Physics. Vol. 165. Dordrecht/New York: Springer; 2010. p. 1 (Online resource, xlii, 326 pages)

[4] Langner G. Evidence for neuronal periodicity detection in the auditory system of the Guinea fowl: Implications for pitch analysis in the time domain. Experimental Brain Research. 1983;**52**(3):333-355

[5] Langner G. Neuronal mechanisms for pitch analysis in the time domain. Experimental Brain Research. 1981;**44**(4):450-454

[6] Arzy S et al. Subjective mental time: The functional architecture of projecting the self to past and future. European Journal of Neuroscience. 2009;**30**(10):2009-2017

[7] Lehn H et al. A specific role of the human hippocampus in recall of temporal sequences. The Journal of Neuroscience. 2009;**29**(11):3475-3484

[8] Maisels CK. The Emergence of Civilization: From Hunting and Gathering to Agriculture, Cities, and the State in the Near East. New York: Routledge; 1990. p. 395

[9] Gallese V et al. Action recognition in the premotor cortex. Brain. 1996;**119**(Pt 2):593-609

[10] Rizzolatti G et al. Premotor cortex and the recognition of motor actions. Cognitive Brain Research. 1996;**3**(2):131-141

[11] Liebal K, Müller C, Pika S. Gestural Communication in Nonhuman and Human Primates. Benjamins Current Topics. Philadelphia: John Benjamins Publishing Company; 2007. xiv, 284 p.

[12] Caspers S et al. ALE meta-analysis of action observation and imitation in the human brain. Neuroimage. 2010;**50**(3):1148-1167

[13] Molenberghs P, Cunnington R, Mattingley JB. Brain regions with mirror properties: A meta-analysis of 125 human fMRI studies. Neuroscience and Biobehavioral Reviews. 2012;**36**(1):341-349

[14] Grefkes C, Fink GR. The functional organization of the intraparietal sulcus in humans and monkeys. Journal of Anatomy. 2005;**207**(1):3-17

[15] Rozzi S, Coude G. Grasping actions and social interaction: Neural bases and anatomical circuitry in the monkey. Frontiers in Psychology. 2015;**6**:973

[16] Gupta DS. Processing of sub- and supra-second intervals in the primate brain results from the calibration of neuronal oscillators via sensory, motor, and feedback processes. Frontiers in Psychology. 2014;**5**:816

[17] Kanai R et al. Time dilation in dynamic visual display. Journal of Vision. 2006;**6**(12):1421-1430

[18] Penton-Voak IS et al. Speeding up an internal clock in humans? Effects of click trains on subjective duration. Journal of Experimental Psychology. Animal Behavior Processes. 1996;**22**(3):307-320

[19] Wiener M, Kanai R. Frequency tuning for temporal perception and prediction. Current Opinion in Behavioral Sciences. 2016;**8**:1-6

[20] Herbst SK, Landau AN. Rhythms for cognition: The case of temporal processing. Current Opinion in Behavioral Sciences. 2016;**8**:85-93

[21] Treisman M. Temporal discrimination and the indifference interval. Implications for a model of the "internal clock". Psychological Monographs. 1963;**77**(13):1-31

[22] Fontes R et al. Time perception mechanisms at central nervous system. Neurology International. 2016;**8**(1):5939

[23] Jones CR et al. The right dorsolateral prefrontal cortex is essential in time reproduction: An investigation with repetitive transcranial magnetic stimulation. Experimental Brain Research. 2004;**158**(3):366-372

[24] Koch G et al. Underestimation of time perception after repetitive transcranial magnetic stimulation. Neurology. 2003;**60**(11):1844-1846

[25] Ustun S, Kale EH, Cicek M. Neural networks for time perception and working memory. Frontiers in Human Neuroscience. 2017;**11**:83

[26] Gallistel CR. The coding question. Trends in Cognitive Sciences. 2017;**21**(7):498-508

[27] Gupta DS, Chen L. Brain oscillations in perception, timing and action. Current Opinion in Behavioral Sciences. 2016;**8**:161-166

[28] Langner G, Benson C. The Neural Code of Pitch and Harmony. Vol. XIV. Cambridge: Cambridge University Press; 2015. 227 pages

[29] Olshausen BA, Field DJ. Sparse coding of sensory inputs. Current Opinion in Neurobiology. 2004;**14**(4):481-487

[30] Miller KD. Canonical computations of cerebral cortex. Current Opinion in Neurobiology. 2016;**37**:75-84

[31] Harris KD, Shepherd GM. The neocortical circuit: Themes and variations. Nature Neuroscience. 2015;**18**(2):170-181

[32] Poppel E. A hierarchical model of temporal perception. Trends in Cognitive Sciences. 1997;**1**(2):56-61

[33] Engel AK, Fries P, Singer W. Dynamic predictions: Oscillations and synchrony in top-down processing. Nature Reviews. Neuroscience. 2001;**2**(10):704-716

[34] Buzsaki G, Watson BO. Brain rhythms and neural syntax: Implications for efficient coding of cognitive content and neuropsychiatric disease. Dialogues in Clinical Neuroscience. 2012;**14**(4):345-367

[35] Gray CM, Singer W. Stimulus-specific neuronal oscillations in orientation columns of cat visual cortex. Proceedings of the National Academy of Sciences of the United States of America. 1989;**86**(5):1698-1702

[36] Chang A, Bosnyak DJ, Trainor LJ. Unpredicted pitch modulates beta oscillatory power during rhythmic entrainment to a tone sequence. Frontiers in Psychology. 2016;**7**:327

[37] Chen Y, Huang X. Modulation of Alpha and Beta oscillations during an n-back task with varying temporal memory load. Frontiers in Psychology. 2015;**6**:2031

[38] Kononowicz TW, van Rijn H. Single trial beta oscillations index time estimation. Neuropsychologia. 2015;**75**:381-389

[39] Cirelli LK et al. Beat-induced fluctuations in auditory cortical beta-band activity: Using EEG to measure age-related changes. Frontiers in Psychology. 2014;**5**:742

[40] Bartolo R, Merchant H. Beta oscillations are linked to the initiation of sensory-cued movement sequences and the internal guidance of regular tapping in the monkey. The Journal of Neuroscience. 2015;**35**(11):4635-4640

[41] Feurra M et al. Frequency-dependent tuning of the human motor system induced by transcranial oscillatory potentials. The Journal of Neuroscience. 2011;**31**(34):12165-12170

[42] Haghpanah SA, Farahmand F, Zohoor H. Modular neuromuscular control of human locomotion by central pattern generator. Journal of Biomechanics. 2017;**53**:154-162

[43] MacKay-Lyons M. Central pattern generation of locomotion: A review of the evidence. Physical Therapy. 2002;**82**(1):69-83

[44] Molinari M. Plasticity properties of CPG circuits in humans: Impact on gait recovery. Brain Research Bulletin. 2009;**78**(1):22-25

[45] Iosa M et al. Editorial: Neuro-motor control and feed-forward models of locomotion in humans. Frontiers in Human Neuroscience. 2015;**9**:306

[46] Behrman AL, Harkema SJ. Locomotor training after human spinal cord injury: A series of case studies. Physical Therapy. 2000;**80**(7):688-700

[47] Leon MI, Shadlen MN. Representation of time by neurons in the posterior parietal cortex of the macaque. Neuron. 2003;**38**(2):317-327

[48] Lebedev MA, O'Doherty JE, Nicolelis MA. Decoding of temporal intervals from cortical ensemble activity. Journal of Neurophysiology. 2008;**99**(1):166-186

[49] Schneider BA, Ghose GM. Temporal production signals in parietal cortex. PLoS Biology. 2012;**10**(10):e1001413

[50] Durstewitz D. Neural representation of interval time. Neuroreport. 2004;**15**(5):745-749

[51] Fries P. Rhythms for cognition: Communication through coherence. Neuron. 2015;**88**(1):220-235

[52] Rosen S. Temporal information in speech: Acoustic, auditory and linguistic aspects. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences. 1992;**336**(1278):367-373

[53] Eguia MC, Garcia GC, Romano SA. A biophysical model for modulation frequency encoding in the cochlear nucleus. Journal of Physiology, Paris. 2010;**104**(3-4):118-127

[54] Joris PX, Schreiner CE, Rees A. Neural processing of amplitude-modulated sounds. Physiological Reviews. 2004;**84**(2):541-577

[55] Bernshtein NA. The Co-ordination and Regulation of Movements. 1967

[56] Ting LH, McKay JL. Neuromechanics of muscle synergies for posture and movement. Current Opinion in Neurobiology. 2007;**17**(6):622-628

[57] Flash T, Hochner B. Motor primitives in vertebrates and invertebrates. Current Opinion in Neurobiology. 2005;**15**(6):660-666

[58] Ivanenko YP et al. Temporal components of the motor patterns expressed by the human spinal cord reflect foot kinematics. Journal of Neurophysiology. 2003;**90**(5):3555-3565

[59] d'Avella A, Saltiel P, Bizzi E. Combinations of muscle synergies in the construction of a natural motor behavior. Nature Neuroscience. 2003;**6**(3):300-308

[60] d'Avella A, Lacquaniti F. Control of reaching movements by muscle synergy combinations. Frontiers in Computational Neuroscience. 2013;**7**:42

[61] Burden A. How should we normalize electromyograms obtained from healthy participants? What we have learned from over 25 years of research. Journal of Electromyography and Kinesiology. 2010;**20**(6):1023-1035

[62] Cisek P, Grossberg S, Bullock D. A cortico-spinal model of reaching and proprioception under multiple task constraints. Journal of Cognitive Neuroscience. 1998;**10**(4):425-444

[63] Franklin DW et al. Temporal evolution of spatial computations for visuomotor control. The Journal of Neuroscience. 2016;**36**(8):2329-2341

[64] Borra E, Luppino G. Functional anatomy of the macaque temporo-parieto-frontal connectivity. Cortex. 2017;**97**:306-326

[65] Kheradmand A, Winnick A. Perception of upright: Multisensory convergence and the role of temporo-parietal cortex. Frontiers in Neurology. 2017;**8**:552

[66] Rizzolatti G, Luppino G, Matelli M. The organization of the cortical motor system: New concepts. Electroencephalography and Clinical Neurophysiology. 1998;**106**(4):283-296

[67] Scott SH. Optimal feedback control and the neural basis of volitional motor control. Nature Reviews Neuroscience. 2004;**5**(7):532-546

[68] Oostwoud Wijdenes L, Medendorp WP. State estimation for early feedback responses in reaching: Intramodal or multimodal? Frontiers in Integrative Neuroscience. 2017;**11**:38

[69] Kruger M, Eggert T, Straube A. Joint angle variability in the time course of reaching movements. Clinical Neurophysiology. 2011;**122**(4):759-766

[70] Gordon J, Ghilardi MF, Ghez C. Accuracy of planar reaching movements. I. Independence of direction and extent variability. Experimental Brain Research. 1994;**99**(1):97-111

[71] Ueyama Y. Optimal feedback control to describe multiple representations of primary motor cortex neurons. Journal of Computational Neuroscience. 2017;**43**(1):93-106

[72] Malcolm GL, Groen II, Baker CI. Making sense of real-world scenes. Trends in Cognitive Sciences. 2016;**20**(11):843-856

[73] Kveraga K, Bar M. Scene Vision: Making Sense of What We See. Vol. VIII. Cambridge: The MIT Press; 2014. p. 312

[74] Villena-Gonzalez M et al. Individual variation in the propensity for prospective thought is associated with functional integration between visual and retrosplenial cortex. Cortex. 2018;**99**:224-234

[75] Marinho V et al. The dopaminergic system dynamic in the time perception: A review of the evidence. The International Journal of Neuroscience. 2018;**128**(3):262-282

[76] Buhusi CV, Meck WH. Differential effects of methamphetamine and haloperidol on the control of an internal clock. Behavioral Neuroscience. 2002;**116**(2):291-297

[77] Haber SN. The place of dopamine in the cortico-basal ganglia circuit. Neuroscience. 2014;**282**:248-257

[78] Ortuno F et al. Functional neural networks of time perception: Challenge and opportunity for schizophrenia research. Schizophrenia Research. 2011;**125**(2-3):129-135

**Chapter 3**

#### **Cognitive and Computational Neuroscience: Principles, Algorithms, and Applications in Surveillance Context**

Lozada Torres Edwin Fabricio, Martínez Campaña Carlos Eduardo and Gómez Alvarado Héctor Fernando

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/intechopen.73035

© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### **Abstract**

Today, working with human behavior is vitally important, especially when we consider the impact of neuroscience on security systems. Conventionally, the responsibility of monitoring falls to a human agent (the vigilant). A vigilant, however, cannot stay alert at all times: he can remain attentive for only about 20 minutes while monitoring four cameras simultaneously, after which the surveillance task ceases to make sense. This reveals one of the shortcomings of surveillance (SV) systems. Whether a surveillance system can warn of an activity or situation is as important as the selection of the technological elements that allow it to be captured. Security systems based on intelligent technologies have developed rapidly in recent times: detection and identification of car registration numbers, detection of static objects on tracks, and detection of pedestrians circulating on prohibited routes. The reuse of methodologies, procedures, and ontologies is described in this chapter of the book.

**Keywords:** cognitive, surveillance, applications, algorithms, behavior

#### **1. Introduction**

The analysis of video sequences is a very important topic when using this strategy for the surveillance of secure places. The problem consists in understanding any real-life event recorded by a video camera; this process of recognizing persons, objects, vehicles, dangerous places, alarms, etc. is the principal goal of this work [1]. We use the term monitoring to conceptualize the process of collection and selection of activities according to the relevance of the situation that needs to be identified. This process starts from the monitoring of signals or images that allow

characterization of the situation of interest. Thus, the general objective of this monitoring is to identify suspicious situations, verifying that activities and situations are normal and reporting any abnormalities that may occur [2].

Ubiquitous computing also supports the development of surveillance (SV) systems through the recognition of activities at a high semantic level, based on multisensory monitoring that goes from the capture of the signal to its interpretation. Through the different stages, the abstracted information provided by the sensors is associated with the activities that happen in the scene: a person watching television, a person taking their medicine, a person calling by phone. In general, this type of multisensory recognition has been applied to infer activities of people's daily lives [3]. The analysis of multisensory information requires a high degree of abstraction, since the low semantic level does not provide the detail needed to understand what happened in a scenario. This semantic gap is clearly visible in SV systems that process multisensory signals, as they pass directly from the sensory signal to the interpretation of the situation. This interpretation always depends on the knowledge, expressive capacity, and specific language of the annotator. Some studies propose solutions to eliminate the semantic gap. Most of them are based on structures that start from the low semantic level and build up to a high level, allowing quality descriptions that help in the search and retrieval of activities and situations in SV systems [4, 5]. In the next section, we explain the solutions to SV based on cognitive neurosciences.
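To make the low-to-high semantic mapping concrete, the following minimal Python sketch abstracts raw sensor events into named activities through declarative rules. The event names and rules are illustrative assumptions, not the rules of any cited SV system:

```python
# Bridging the "semantic gap": raw sensor events (low semantic level)
# are abstracted into named activities (high semantic level).
# Each rule maps a set of co-occurring low-level events to an activity label.
RULES = {
    frozenset({"person_on_sofa", "tv_powered_on"}): "person watching television",
    frozenset({"cabinet_opened", "pill_box_lifted"}): "person takes their medicine",
    frozenset({"phone_lifted", "speech_detected"}): "person calls by phone",
}

def infer_activities(events):
    """Return every high-level activity whose required events all occurred."""
    observed = set(events)
    return [label for required, label in RULES.items() if required <= observed]

print(infer_activities(["tv_powered_on", "person_on_sofa", "phone_lifted"]))
# → ['person watching television']
```

A real system would of course operate on detector outputs rather than hand-named events, but the abstraction step, from signal-level detections to activity labels, is the same.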

#### **2. SIMDA project**

In spite of all the research efforts, it has not been possible to integrate SV systems into a single functional structure, an idea that would allow improvements in the interpretation of situations in a scenario. There are architectures that group multisensory systems in order to help the human operator make decisions when identifying a situation of interest, but such a theme must be developed from the combination of technologies. On this topic, the SIMDA group has carried out projects that propose the integration of different technologies and the semantic conceptualization of situations [4]: AVANZA, CICYT 2004, CICYT 2007, and INT3. The contribution of INT3 is fundamental to this work, as it produced Horus, a multisensory framework for monitoring and detecting activities that integrates multisensory systems into a single processing unit, as shown in **Figure 1**.

**Figure 1.** Horus modular system (Source: Castillo et al. (2012)).

As shown in **Figure 1**, Horus is a modular architecture for the management of multisensory inputs, incorporating a conceptualization model that allows sharing information of interest among multiple scenarios. The multisensory sources are mainly image sensors, since they are the most widespread for monitoring tasks, although other sensor technologies, such as wireless sensor networks (WSN), are also integrated as INT3-Horus generic objects. The framework is distributed and hybrid: the remote nodes perform not only the lower-level processing but also data acquisition, while a central node is responsible for collecting the information and fusing it. The proposal consists in the identification of objects and the monitoring of human behavior [5]. This is eminently complex, since there are multiple objects and types of behavior. The model has input and output interfaces, which allow reuse and adaptability, all based on the security model in video surveillance systems. For this it is necessary to obtain elements such as entities, procedures, and the relationships between them. In event-based systems, the MVC (model-view-controller) provides information about changes in the application and a representation that adapts to the needs of the user. The model receives inputs to the application and interacts with it to update the objects and represent the new information [5]. We propose to work with knowledge structures, which collect generalities and particularities of the situations of interest in order to automatically identify them in the monitored scenarios [6, 7]. During the last decade, ontologies have been used in applications for natural language processing, e-commerce, intelligent information integration, information consultation, database integration, bioinformatics, education, and the semantic web, among other areas. These ontologies provide a vocabulary and an organization of concepts that represent a conceptual framework for the analysis, discussion, or consultation of information from a scenario. But there is a need to perform reasoning tasks, whose modules or tools must be integrated into a single conceptual, methodological, and technological framework. These modules must be coupled to the Horus framework in order to infer activities of the high semantic level (see **Figure 2**).
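The distributed/hybrid pattern described for Horus, remote nodes that acquire data and do the low-level processing while a central node collects and fuses their partial results, can be sketched as follows. The class and field names are illustrative assumptions, not the actual INT3-Horus API:

```python
# Sketch of the Horus-style distributed pattern: edge nodes abstract raw
# readings into detection events; a central node aggregates and fuses them.
class RemoteNode:
    def __init__(self, node_id, sensor_type):
        self.node_id = node_id
        self.sensor_type = sensor_type  # e.g. "camera" or "wsn"

    def process(self, raw_samples):
        # Low-level processing happens at the edge: here, a trivial
        # threshold turns raw readings into detection events.
        return [{"node": self.node_id, "type": self.sensor_type, "value": v}
                for v in raw_samples if v > 0.5]

class CentralNode:
    def __init__(self):
        self.detections = []

    def collect(self, node_output):
        # The central node receives only already-abstracted events.
        self.detections.extend(node_output)

    def fuse(self):
        # Coalesce per-sensor events into one shared picture, keyed by modality.
        by_type = {}
        for d in self.detections:
            by_type.setdefault(d["type"], []).append(d)
        return by_type

cam = RemoteNode("cam-1", "camera")
wsn = RemoteNode("wsn-7", "wsn")
hub = CentralNode()
hub.collect(cam.process([0.2, 0.9, 0.7]))
hub.collect(wsn.process([0.6]))
print(sorted(hub.fuse().keys()))  # camera and wsn events, fused centrally
```

The design point is the one made in the text: the framework stays hybrid because abstraction is pushed to the remote nodes, so the central node works with events rather than raw signals.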

**Figure 2.** Semantic model: the knowledge structure is used to model scenarios, activities, and situations.

The hypothesis to be verified is that, through the design of ontologies and semantic technologies that are easy to use, reuse, and modulate, it is possible to infer situations of a high semantic level and to rapidly prototype SV systems with a level of abstraction similar to that of a human agent. To verify our hypothesis, the ontology must fulfill the following aspects:

**1.** It is a semantic multisensory referential framework, which reduces the difficulty of working with different types of signals from different sensors; together with the semantic conceptualization of the signal, it allows obtaining the appropriate characteristics of the behaviors and activities developed in the context under study.

**2.** Systems based on ontologies conceptualize the information that comes from the case under investigation. Applying this theory to video surveillance systems, it is possible through semantics to infer and perform retrospective analysis, that is, to analyze the activities even after they have occurred.

**3.** Import ontologies. The ontology adapts its structure to be combined with other ontologies developed in different domains, in order to reuse representations of knowledge from different areas of science.

**4.** Conceptualize and infer activities. This is the process in which the knowledge of the expert is used to conceptualize activities, or in which rules or axioms for inference are applied to the activities recorded in the scenario. We are tasked with inferring activities of the medium and high semantic levels, while Horus is tasked with the low semantic level.

**5.** Conceptualize and infer situations. This allows the vigilant or expert to establish the relationships between the activities that happen in the scenario and to design rules and semantic axioms to infer a situation.

An alternative is to have the semantic capacity needed to adapt the structure to the appearance of new conceptualizations of activities and situations, as a product of the learning acquired by computer algorithms. Based on the situations of interest, there are two types of tasks: (a) conceptualize and model the knowledge of the human expert, when it exists (depending on the basic activities recognizable from the sensors or from the processing of video images), and (b) conceptualize and model situations where the knowledge does not exist, although it is possible to find it in records (case bases) of the situations to be identified. In this case, the required process is particularly complex [8, 9] and requires intelligent algorithms for identification. Both tasks (a) and (b) are studied in this research, since we work with the knowledge of the expert when he can clearly describe the scenarios and situations, as well as with scenarios where the available expert knowledge is imprecise and the situations of interest must be found automatically. Here, situations are composed of activities that individually are not clearly suspicious but that, when analyzed in a certain sequence and with a certain repetition, do turn out to be.

The use of SV systems based on closed-circuit television (CCTV) cameras has grown exponentially over the last decade. In particular, concern for security as a result of emerging international terrorism has led experts to anticipate a greater diffusion of these systems, as well as their integration into a global remote monitoring network [10], which analyzes the latest advances in multisensory SV systems used by the companies that produce this type of technology, with an emphasis on their manufacturing, added value, differences from other products, and use. This analysis focuses on SV systems based on cameras and sensors for surveillance. Its results allow answering questions such as the following: Would it be useful to be able to track people across different areas and places? Is it possible to check for false alarms in establishments, or simply to monitor a shop from the comfort of home?

Applications of this type are already commercially available, allowing access from a single control center to the images of CCTV systems in various geographically distributed environments. For example, a synchronized video acquisition system is developed to interoperate with SV systems in order to act as an object trajectory server. It consists of a series of navigation instruments that allow the direct geo-referencing, in post-processing, of each image captured by the video camera, within a common reference time for all navigation instruments and all sensors used in the video capture system. Remote sensing provides information about an object or surface through the analysis and processing of the data supplied by different synchronized sensors. In addition, it associates the time and the Global Positioning System (GPS) position with the image generated by the video [7]. As a result, such systems are capable of analyzing the video of different subsystems and interpreting what happens in the images. Example applications of video acquisition and synchronization systems include real-time monitoring of traffic conditions, forest fire control, natural disaster monitoring, and geo-referenced video projection in public virtual globes such as Google Earth and Virtual Visualizations [7].
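The "sequence and repetition" criterion described above, individually harmless activities that become a suspicious situation when they recur in a given order, can be sketched as a small Python rule. The pattern and activity names are illustrative assumptions, not rules from the chapter's ontology:

```python
# A situation is flagged when a pattern of activities occurs, in order,
# at least a minimum number of times within the observed activity stream.
def contains_subsequence(stream, pattern):
    """True if pattern occurs in stream in order (not necessarily adjacent)."""
    it = iter(stream)
    return all(step in it for step in pattern)  # 'in' consumes the iterator

def detect_situation(stream, pattern, min_repeats):
    """Flag a situation when the pattern repeats at least min_repeats times."""
    count, rest = 0, list(stream)
    while contains_subsequence(rest, pattern):
        count += 1
        # Consume one occurrence so repetitions are counted disjointly.
        for step in pattern:
            rest.remove(step)  # removes the earliest matching activity
    return count >= min_repeats

loitering = ["approach_door", "look_around", "walk_away"]
log = ["approach_door", "look_around", "walk_away",
       "enter_shop",
       "approach_door", "look_around", "walk_away"]
print(detect_situation(log, loitering, min_repeats=2))  # → True
```

A single pass through the loitering pattern would not raise an alarm; it is the repetition that makes the situation suspicious, which is exactly the point made in the text.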

#### **3. Ontology and agents**

Surveillance systems based on cameras have been analyzed using three different methods: the first was developed using expert knowledge, the second was learned from recorded videos, and the third was developed as a refinement taking into account evaluation against ground truth. This project was deployed at Madrid-Barajas airport, where the technology is used to support ground traffic management within the Advanced Surface Movement system. Model-Based Reasoning (MBR) modeling of semantic reasoning allows the resolution of problems in the identification of activities in space and time. This is a basic theme, since temporal management must be strictly linked to the occurrence of the facts. However, there are still two fundamental problems in this application: the degree of dependency between the model in use and the domain, and the reuse of the systems when the domains change. The aim of the implementation is to help solve the problem of temporal diagnosis for environments of high conceptual complexity by integrating MBR and ontologies for domain knowledge representation. In a traditional security and vigilance system, the caretaker stays alert to what happens in the zone that needs security. In this kind of system, the quality of the vigilance is directly related to the human capacities of the caretaker, which are augmented with the use of security cameras, motion sensors, etc. A minimum requirement for security systems is the ability to analyze multiple objects or groups of objects in real time [8, 11]. The main objective of this study is to calculate parameters for the performance evaluation of the tracking system in order to identify alarming human behavior. Here it is necessary to consider a sequence of previously recorded videos, as well as subsequent processes in which a human operator annotates the images and the places marked on each video frame. Using an ontology with agents to run the system, we propose the following system (**Figure 3**).
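The evaluation parameters mentioned above, comparing tracker output against the operator-annotated ground truth, can be sketched at the frame level as follows. The data layout (a mapping from frame index to the set of tracked object identifiers) and the choice of precision/recall as the reported parameters are illustrative assumptions:

```python
# Frame-level evaluation of a tracker against human-annotated ground truth.
def evaluate_tracking(ground_truth, tracker_output):
    """ground_truth / tracker_output: {frame_index: set of object ids}."""
    tp = fp = fn = 0
    for frame in ground_truth:
        truth = ground_truth[frame]
        found = tracker_output.get(frame, set())
        tp += len(truth & found)   # correctly tracked objects
        fp += len(found - truth)   # spurious detections (false alarms)
        fn += len(truth - found)   # missed objects
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {"precision": precision, "recall": recall}

gt  = {0: {"person1"}, 1: {"person1", "car3"}, 2: {"car3"}}
out = {0: {"person1"}, 1: {"person1"}, 2: {"car3", "person9"}}
print(evaluate_tracking(gt, out))  # → {'precision': 0.75, 'recall': 0.75}
```

A production evaluation would match detections to annotations by bounding-box overlap rather than by identifier, but the derived parameters are computed the same way.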

**Figure 3.** The sensors detect an object or person; the ontology then uses this information to determine the actions and to answer with the rules.

The first step uses only two classes of the CARETAKER ONTOLOGY (**Figure 2**), person and object, to determine how the system functions and to learn from it. Without using speech recognition and writing recognition (see Crubézy, O'Connor, Buckeridge, Pincus, and Musen: Ontology-Centered Syndromic Surveillance for Bioterrorism), the structure is similar to the bioterrorism ontology (**Figure 4**).

**Figure 4.** Our CARETAKER ONTOLOGY.

The CARETAKER ONTOLOGY has other classes and properties (**Figure 5**).

Now, in this selection, the classes used are person and object. To use these classes, the images captured by the sensors at the office must be processed to recognize people, objects, and activities. In CARETAKER it is possible to see these elements of the ontology (see A Real-Time Scene Understanding System for Airport Apron Monitoring: AVITRACK Project); the AVITRACK project has the same structure for its ontology.

Scene analysis terminology:

Scene: Place of development of activities and events.

Physical object of interest: A thing being tracked in the study area. The relationships between such objects yield the contextual object, since it arises from the relationship between physical objects. The movement of this type of object can be random and unprovoked.

Contextual object: An object of a scene conditioned to the appearance of activities and events.

Objective tracked: The object of interest located in the area of interest, directly related to semantic tracking.

ROI: Region of interest. The context, area, or region of interest.

Oscillating movement: Constant movement related to the fact; it can be provoked or not.

After this work, using CARETAKER, we have proposed a syntax to describe states, events, and activities. These meta-concepts are described with a name and four parts:

Physical objects: Semantically related, they produce facts.

Components: What allows obtaining a description of a context.

Prohibited components: Those that do not correspond to the scene, activity, or context.

Restrictions: Relationships between the concepts that allow obtaining basic characteristics of the scene.

Description of a model and an associated instance, respectively, of a primitive state, a primitive event, and an event composed of multiple agents (see (CAREGIVER)). The code to conceptualization:

**PrimitiveState**\_**Inside\_zone**

**Figure 5.** CARETAKER ONTOLOGY.

Cognitive and Computational Neuroscience: Principles, Algorithms, and Applications…

http://dx.doi.org/10.5772/intechopen.73035

35

**PhysicalObjects**\_ (p: Person, z: Zone)

```
package surveillanceontology;

import jade.content.Concept;

public class Persona implements Concept {

    private String nombres;

    public String getNombres() {
        return nombres;
    }

    public void setNombres(String n) {
        nombres = n;
    }
}
```
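As a minimal illustration of how such a generated bean is used (this usage example is our own sketch, not taken from the CARETAKER sources; the `Concept` interface is stubbed so the snippet compiles without the JADE library on the classpath):

```java
// Self-contained sketch: a stand-in for jade.content.Concept so the
// example compiles without the JADE jar.
interface Concept {}

class Persona implements Concept {
    private String nombres;
    public String getNombres() { return nombres; }
    public void setNombres(String n) { nombres = n; }
}

public class PersonaDemo {
    public static void main(String[] args) {
        Persona p = new Persona();          // bean generated from the ontology class "person"
        p.setNombres("Hector");             // slot filled from the recognition step
        System.out.println(p.getNombres()); // the agent reads the slot back; prints "Hector"
    }
}
```

In the real system the bean travels inside agent messages, so the getters and setters are the only interface the agents need.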


```
Constraints:
(p in z)
Instance:
S1: PrimitiveState_Inside_zone_(Hector, ZonaProhibida)
CompositeEvent SigOfficeEntrance
PhysicalObjects:
(e: Persona[worker], r: Persona[Hector])
Components:
((c1: PrimitiveState Inside_zone(e, "Back_Counter"))
(c2: PrimitiveEvent Changes_zone(r, "Entrance_Zone", "Front_Count"))
(c3: PrimitiveState Inside_zone(e, "Safe"))
(c4: PrimitiveState Inside_zone(r, "Safe")))
Constraints:
((duration-of(c3) >= 1 second)
(c2 during c1)
(c2 before c3)
(c1 before c3)
(c2 before c4)
(c4 during c3))
Instance:
```
**e2:** CompositeEvent SigOfficeEntrance

Ontology has two uses: (a) semantics for the occurrence of events, through states and events of low granularity in which the occurrence of physical events is highlighted. Here, we describe the attributes of each thing and the relationships between them, in order to obtain a clear description of the scene under study. The levels of implementation allow the use of restrictions based on time and space, which means that the states and events occurring in the video can have their own location and semantic description. This basic corpus can be refined depending on the needs. Refined concepts are more difficult to extract from videos; for example, they may need posture-analysis algorithms. The concept "holding an object" is perceived differently depending on the posture but also on the properties of the held object. Holding a gun is perceptually different from holding luggage.

The proposed corpus should be seen as an extendable basis. The issue is now to define tools and protocols to allow a collaborative extension of the corpus.

#### **4. Alarm-agent-caretaker**

This work uses the Protégé software to manage the CARETAKER ontologies and configures the ontology for our purposes. It is then necessary to transform the classes of the CARETAKER ONTOLOGY into Java classes using the BeanGenerator plugin. This is important because the agents use the ontology as Java code when the alarm triggers the agent. This generation puts the data into the Java CARETAKER ONTOLOGY, and the ontology returns the decisions to the agent. Communication therefore takes place in two directions: first from sensor to agent to ontology, and second from the ontology back to the agent.
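The sensor-to-agent path relies on labels produced by the recognition step. A rough sketch of the check an agent performs (the file name `file.txt`, the label strings, and the 10-second period come from the description later in this chapter; everything else, including the class and method names, is our own assumption):

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: the alarm agent re-reads the label file written by the
// recognition step and reacts when a known label appears.
public class LabelPollerSketch {
    static String checkLabels(String fileContents) {
        for (String label : new String[] {"person recognition",
                                          "object recognition", "actions"})
            if (fileContents.contains(label)) return label;
        return null;
    }

    public static void main(String[] args) throws Exception {
        Path file = Files.createTempFile("labels", ".txt"); // stand-in for file.txt
        Files.writeString(file, "person recognition: Hector");
        System.out.println(checkLabels(Files.readString(file)));
        // A real agent would repeat this check every 10 seconds,
        // e.g. with a scheduled executor, and raise the alarm on a match.
    }
}
```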

#### **5. The style**


When the sensor captures a person, object, or action, the system uses these steps to determine which of the three it is. The data are then sent to the CARETAKER ONTOLOGY for decision making. The equation for binarizing the image is:

$$\text{img}(i,j) = \begin{cases} 1 & \text{if } \text{imgG}(i,j) > \text{threshold}\\ 0 & \text{if } \text{imgG}(i,j) < \text{threshold} \end{cases}, \qquad \text{imgN}(i,j) = 1 - \text{img}(i,j) \tag{1}$$

On the other hand, detecting movement requires establishing the difference between the background image (principal image) and the image captured by the sensors, using the following equation to obtain that difference:

$$
suma = \sum_{m}\sum_{n} \left(resta_{mn}\right) \tag{2}
$$
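The two equations can be sketched in a few lines of code. This is an illustrative implementation under our own assumptions (the frame values, the threshold, and the use of an absolute difference are made up for the example, not taken from the chapter):

```java
// Sketch of Eqs. (1) and (2): binarize a grayscale frame against a
// threshold ("umbral"), then sum the per-pixel difference with the
// background frame to quantify movement.
public class MotionSketch {
    // Eq. (1): img(i,j) = 1 if imgG(i,j) > threshold, else 0.
    static int[][] binarize(int[][] imgG, int threshold) {
        int[][] img = new int[imgG.length][imgG[0].length];
        for (int i = 0; i < imgG.length; i++)
            for (int j = 0; j < imgG[0].length; j++)
                img[i][j] = imgG[i][j] > threshold ? 1 : 0;
        return img;
    }

    // Eq. (2): suma = sum over m,n of resta(m,n), here the absolute
    // difference between the background and the current binary frame.
    static int suma(int[][] background, int[][] current) {
        int s = 0;
        for (int m = 0; m < background.length; m++)
            for (int n = 0; n < background[0].length; n++)
                s += Math.abs(background[m][n] - current[m][n]);
        return s;
    }

    public static void main(String[] args) {
        int[][] frame = {{200, 10}, {30, 180}}; // toy 2x2 grayscale frame
        int[][] bg    = {{0, 0}, {0, 0}};       // empty background
        int[][] bin = binarize(frame, 128);     // {{1,0},{0,1}}
        System.out.println(suma(bg, bin));      // prints 2
    }
}
```

A large `suma` relative to an empty background indicates movement in the scene.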

This methodology was used in this work to recognize these activities: **1.** person in the office, **2.** persons in the office, and **3.** objects on the desk.


At this moment the system determines whether the person is in the office, whether an object has been removed from the office, or whether another action has occurred in the scene. The system keeps the words person and object in a file together with the saved name; for example, in **Figure 3** the system recognizes the people, and the image is labeled with the words person recognition. The process is the same for objects and actions.

When the system recognizes a person in the class (CARETAKER ONTOLOGY), an instance is created, because it is necessary to identify whether or not the person has permission for this site. At this moment the alarm is generated: the system recognizes a person and applies the rule. If the person is in the office after 21:00, the system calls the security officer in charge to check on the person at the place. Security is a subclass of the person class.
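The after-hours rule can be sketched as follows (the method names and the notification step are illustrative assumptions; only the 21:00 cutoff comes from the text):

```java
import java.time.LocalTime;

// Sketch of the alarm rule: a recognized person in the office after
// 21:00 triggers a call to the security subclass of person.
public class AlarmRuleSketch {
    static boolean alarm(boolean personRecognized, LocalTime now) {
        return personRecognized && now.isAfter(LocalTime.of(21, 0));
    }

    public static void main(String[] args) {
        System.out.println(alarm(true, LocalTime.of(22, 30))); // prints true
        System.out.println(alarm(true, LocalTime.of(9, 0)));   // prints false
    }
}
```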

The action checks whether the person is in the ontology, thanks to the connection between person recognition and the decisions taken after the alarm. At the same time, when the security person checks the people recognized at the site, the message comes back before the person is checked.

A difficult remaining problem is the segmentation process. Indeed, in order to be classified, images have to be segmented to allow descriptor computation. The symbolic description made by the expert may help to find the image processing tasks required for extracting the pertinent information from the provided images.

As an example, an object described with the "granulated texture" concept may be segmented with a texture-based segmentation algorithm. The regions of interest selected by the expert (see (1)) in this work use img(i, j) to correct this problem.

The system uses natural language for decisions: the agents are programmed using JADE (Java Agent Development Framework), and natural language is coded for communication between the agents:

Jade enterprise

Alarmado (generates the alarm)

Security central (receives the alarm and uses the ontology for making decisions)

On the other hand, the system uses the saved file to make the sensors communicate with the alarm agent. This is important because the alarm has code lines that check every 10 seconds whether file.txt contains one of the words person recognition, object recognition, or actions after the preceding process has run.

#### **Author details**

Lozada Torres Edwin Fabricio1, Martínez Campaña Carlos Eduardo1 and Gómez Alvarado Héctor Fernando2\*

\*Address all correspondence to: hf.gomez@uta.edu.ec

1 Universidad Regional Autónoma de los Andes, Ambato, Ecuador

2 Universidad Técnica de Ambato, Huachi Chico, Ambato, Ecuador

#### **References**

[1] Collins RT, Lipton AJ, Kanade T. A System for Video Surveillance and Monitoring. CiteSeer; 2000. pp. 50-62

[2] Waters S. How to Identify Shoplifters [Online]. October 27, 2016. Available from: https://www.thebalance.com/how-to-identify-shoplifters-2890263 [Accessed: March 20, 2013]

[3] Chikhaoui B, Wang S, Pigot H. A new algorithm based on sequential pattern mining for person identification in ubiquitous environments. In: Proceedings of the 4th International Workshop on Knowledge Discovery from Sensor Data (ACM SensorKDD '10); 2010. pp. 20-28

[4] Simon F, Zhuang Y. A security model for detection suspicious patterns in physical environment. In: Third International Symposium of Information Assurance and Security; 2007. pp. 221-226

[5] Castillo JC, Fernandez-Caballero A, López MT. A review on intelligent monitoring and activity interpretation. Revista Iberoamericana de Inteligencia Artificial. 2017;**20**(59):53-69

[6] Bremond J, Corvee E, Patiño J, Thonnat M. CARETAKER Project [Online]. 2008. Available from: http://www-sop.inria.fr/members/Francois.Bremond/topicsText/caretakerProject.html [Accessed: July 8, 2011]

[7] Town C. Ontological inference for image and video analysis. Machine Vision and Applications. 2006;**17**:94-115

[8] Sunico V. Reconocimiento de Imágenes: Usuarios, Segmentos de Usuarios, Gestos, Emociones [Online]. 2008. Available from: http://jm.sunico.org/4007/06/48/reconocimiento-de-imagenes-usuarios-segmentos-de-usuarios-gestos-emociones-y-empatia/ [Accessed: September 21, 2010]

[9] Hu W, Wang L, Maybank S. Survey on visual surveillance of object motion and behaviors. IEEE Transactions on Systems, Man and Cybernetics, Part C. 2004;**34**(3):334-354

[10] Honovich J. IPSurveillance [Online]. 2010. Available from: http://ipvideomarket.info/ [Accessed: November 21, 2010]

[11] Albusac Jiménez JA. Vigilancia Inteligente: Modelado de Entornos Reales e Interpretación de Conductas para la Seguridad. España: Castilla-La Mancha; 2008


**Chapter 4**


#### **Spiking Central Pattern Generators through Reverse Engineering of Locomotion Patterns**

DOI: 10.5772/intechopen.72348

Andrés Espinal, Marco Sotelo-Figueroa, Héctor J. Estrada-García, Manuel Ornelas-Rodríguez and Horacio Rostro-Gonzalez

Additional information is available at the end of the chapter


#### Abstract

In robotics, methods have been proposed for the locomotion of nonwheeled robots based on artificial neural networks; those built with plausible neurons are called spiking central pattern generators (SCPGs). In this chapter, we present a generalization of reported deterministic and stochastic reverse engineering methods for automatically designing SCPG-based locomotion systems for legged robots; such methods create a spiking neural network capable of endogenously and periodically replicating one or several rhythmic signal sets when a spiking neuron model and one or more locomotion gaits are given as inputs. The designed SCPGs have been implemented in different robotic controllers for a variety of robotic platforms. Finally, some aspects for improving and/or complementing these SCPG-based locomotion systems are pointed out.

Keywords: central pattern generators, spiking neural networks, reverse engineering, metaheuristic, locomotion patterns

#### 1. Introduction

Since its beginning, robotics has been a research field strongly influenced by nature. For robotic platforms that do not use wheels to displace themselves, researchers have taken inspiration not only from the physical form of living beings as archetypes of their designs (e.g., legged, finned or winged robotic platforms), but also from the mechanisms that allow their locomotion (e.g., walking, swimming or flying), mainly known as central pattern generators (CPGs) [1]. In biology, the basis of CPGs was laid down in Brown's studies of how the rhythmic movements of locomotion in living beings are created [2]. In [3], Brown experimentally discovered that neurons which inhibit each other periodically generate rhythmic activities that control the bending and tension of the muscles involved in locomotion. Moreover, CPGs have been linked to other unconscious activities besides locomotion, such as swallowing, digestion, heart beating and breathing [4, 5].

© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Furthermore, CPGs have become a suitable alternative to nonbiologically inspired methods for locomotion systems of nonwheeled robotic platforms [6], owing to several interesting features of CPGs such as adaptability, rhythmicity, stability and variety [7]. CPG-based locomotion systems have been successfully designed and implemented at software and/or hardware levels for different nonwheeled robotic platforms [8], such as walking robots (biped [9], quadruped [10], hexapod [11] and octopod [12]), swimming robots [13] and flying robots [14], among others (i.e., snake robots [15] and salamander robots [16]). Although a vast amount of work on CPG-based locomotion systems has been reported in the state of the art, there is no general, standard methodology for building CPGs [6]; however, working with CPGs commonly involves the following three phases [7]: (1) choosing the processing unit model, the coupling type and the connectivity topology (modeling and analysis); (2) dealing with parameter tuning, usually solved by optimization methods, and with gait transition, to handle variation in gaits such as type or frequency (modulation); and (3) executing the designed CPG at the software and/or hardware level (implementation).

In this chapter, we focus on locomotion systems for legged robots that are based on spiking central pattern generators (SCPGs) and on reverse engineering methods for automatically designing them. The study and implementation of SCPGs as locomotion systems have been barely explored compared with other CPGs, which are built with oscillators or low-plausibility neuron models (see [6, 7, 17] as reference). SCPGs are built with plausible neuron models known as spiking neurons, the models that define the third generation of artificial neural networks [18]; these neuron models naturally receive and send spatio-temporal information, as required for generating the rhythmic patterns of CPGs. SCPGs have been designed and implemented as locomotion systems for robotic platforms such as bipeds [19–21, 25], quadrupeds [23, 25] and hexapods [22, 24–27], where the design methodologies used in [19–21, 27] tend to follow the phases proposed in [7], while in [22–26] reverse engineering methods are used. Basically, a reverse engineering method for designing SCPG-based locomotion systems uses either deterministic or stochastic optimization methods which, given an input set of discretized rhythmic signals and a fixed spiking neuron model, are capable of defining a spiking neural network (SNN), including both synaptic connections and weights, that endogenously and periodically replicates the input set of discretized rhythmic signals, which contribute to the locomotion of a robotic platform. Herein, we present a generalization of reverse engineering methods for designing SCPG-based locomotion systems for robotic platforms, based on the implementation details of the reviewed works.
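The reverse-engineering idea can be illustrated with a deliberately tiny toy: given a target periodic spike pattern, search for synaptic weights so that a discrete threshold-spiking network endogenously replays the pattern. The 2-neuron alternating "gait", the weight set {-1, 0, 1} and the exhaustive search are our own illustrative assumptions; the chapter's actual methods use real spiking neuron models with deterministic solvers or metaheuristics:

```java
import java.util.Arrays;

public class ScpgSketch {
    static final double THETA = 0.5; // firing threshold

    // One synchronous update: neuron i fires iff its weighted input >= THETA.
    static int[] step(int[][] w, int[] spikes) {
        int[] next = new int[spikes.length];
        for (int i = 0; i < spikes.length; i++) {
            double sum = 0;
            for (int j = 0; j < spikes.length; j++) sum += w[i][j] * spikes[j];
            next[i] = sum >= THETA ? 1 : 0;
        }
        return next;
    }

    // Enumerate weight matrices over {-1,0,1} until one maps every
    // pattern column to its successor (cyclically).
    static int[][] reverseEngineer(int[][] pattern) {
        int n = pattern[0].length;
        int[] flat = new int[n * n];
        Arrays.fill(flat, -1);
        while (true) {
            int[][] w = new int[n][n];
            for (int k = 0; k < n * n; k++) w[k / n][k % n] = flat[k];
            boolean ok = true;
            for (int t = 0; t < pattern.length; t++)
                if (!Arrays.equals(step(w, pattern[t]),
                                   pattern[(t + 1) % pattern.length])) { ok = false; break; }
            if (ok) return w;
            int k = 0; // odometer increment over {-1,0,1}^(n*n)
            while (k < n * n && flat[k] == 1) flat[k++] = -1;
            if (k == n * n) return null; // no solution in this weight set
            flat[k]++;
        }
    }

    public static void main(String[] args) {
        int[][] gait = {{1, 0}, {0, 1}}; // alternating left/right rhythm
        int[][] w = reverseEngineer(gait);
        int[] s = gait[0];
        for (int t = 0; t < 4; t++) { // the network replays the rhythm endogenously
            s = step(w, s);
            System.out.println(Arrays.toString(s));
        }
    }
}
```

Once weights are found, no external input is needed: the network's own spikes drive the next step, which is the defining property of an SCPG.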

#### 2. Robotic platforms and controllers

Nowadays, there is a variety of robotic platforms, each with particular technical specifications in design, ways of displacement and so on. In the reviewed works, three types of legged robotic platforms have mainly been used for real implementations of SCPG-based locomotion systems: hexapod, quadruped and biped robots. The robotic platforms used have 3 degrees of freedom (DOFs), or servomotors, per leg; that is, the hexapod has 18 DOFs, the quadruped has 12 DOFs and the biped has 6 DOFs. However, for the hexapod and the quadruped, just two DOFs per leg were used, namely the two closest to the body of the robot and directly related to its movement. Figure 1 shows the robotic platforms with a specific label identifying the position of their respective servomotors.

which inhibiting each other, generate periodically rhythmic activities that control bending and tension of muscles involved in locomotion. Moreover, GPGs have been linked to other unconscious activities besides locomotion such as swallowing, digestion, heart beating and

42 Cognitive and Computational Neuroscience - Principles, Algorithms and Applications

Furthermore, CPGs have become a suitable alternative to non-biologically inspired methods for the locomotion systems of nonwheeled robotic platforms [6]; this is due to several interesting features of CPGs such as adaptability, rhythmicity, stability and variety [7]. CPG-based locomotion systems have been successfully designed and implemented at the software and/or hardware level for different nonwheeled robotic platforms [8], such as walking robots (biped [9], quadruped [10], hexapod [11] and octopod [12]), swimming robots [13] and flying robots [14], among others (e.g., snake robots [15] and salamander robots [16]). Although a vast amount of work on CPG-based locomotion systems has been reported in the state of the art, there is no general, standard methodology for building CPGs [6]; however, working with CPGs commonly involves the following three phases [7]: (1) choosing the processing-unit model, the coupling type and the connectivity topology (modeling and analysis); (2) dealing with parameter tuning, usually solved by optimization methods, and with gait transition, to handle variation of gaits in type or frequency (modulation); and (3) executing the designed CPG at the software and/or hardware level (implementation).

In this chapter, we focus on locomotion systems for legged robots that are based on spiking central pattern generators (SCPGs) and on reverse engineering methods for designing them automatically. The study and implementation of SCPGs as locomotion systems has been barely explored compared with other CPGs, which are built with oscillators or less plausible neuron models (see [6, 7, 17] for reference). SCPGs are built with plausible neuron models known as spiking neurons, the models that define the third generation of artificial neural networks [18]; these neuron models naturally receive and send spatio-temporal information, as required for the rhythmic patterns generated by CPGs. SCPGs have been designed and implemented as locomotion systems for robotic platforms such as bipeds [19–21, 25], quadrupeds [23, 25] and hexapods [22, 24–27]. The design methodologies used in [19–21, 27] tend to follow the phases proposed in [7], while [22–26] use reverse engineering methods. Basically, a reverse engineering method for designing SCPG-based locomotion systems uses either a deterministic or a stochastic optimization method which, given an input set of discretized rhythmic signals and a fixed spiking neuron model, is capable of defining a spiking neural network (SNN), including both synaptic connections and weights, that endogenously and periodically replicates the input set of discretized rhythmic signals contributing to the locomotion of a robotic platform. Herein, we present a generalization of these reverse engineering methods based on the details of the implementations in the reviewed works.

Nowadays, there is a variety of robotic platforms, each with particular technical specifications in design, ways of displacement and so on. In the reviewed works, the real implementations used the robotic platforms shown in Figure 1.

#### 2. Robotic platforms and controllers

Servomotors in the robotic platforms are handled by a processing unit into which, in the reviewed works, an SCPG is embedded to provide a locomotion mechanism to the robot. Figure 2 presents the electronic boards that have been used as processing units: Arduino (Figure 2a), Field-Programmable Gate Array (FPGA) (Figure 2b) and Raspberry Pi 3 Model B (Figure 2c) boards. Also, a servo controller board (Figure 2d) is required to send the output of the processing units to the servomotors of the robotic platforms.

Figure 1. Robotic platforms from the Lynxmotion company. (a) Phoenix hexapod robot model, (b) symmetric quadruped robot model and (c) Brat biped robot model. The servomotors used in their locomotion are marked and labeled with ovals, where the letters C, F and A stand for coxa, femur and ankle, respectively, the letters L and R represent the left and right sides, and the numbers indicate position (taken from [25]).

Figure 2. Boards for robot control. Processing units: (a) Arduino board, BotBoarduino for Lynxmotion robots, (b) FPGA Spartan 6 XEM6310-LX45 board from OpalKelly and (c) Raspberry Pi 3 Model B board. Servo controller: (d) SSC-32 board.

Figure 3. System block diagram of the robotic controller coupled with the servomotors of the robotic platforms through a servo controller.

Basically, the integration of the processing boards and platforms works as follows: an SCPG embedded in a processing board generates rhythmic signals, which are sent to the legs through a servo controller. The servo controller converts the spiking activity generated by the SCPG into a control signal (voltage). The transmission is carried out using the RS-232 communication protocol. Figure 3 shows a block diagram of this integration.

Thus, an important aspect of achieving locomotion in robotic platforms is to design an SCPG according to the capabilities of the processing board, which, in the reviewed works, must exactly replicate and periodically generate specific rhythmic patterns. In Section 3, we describe in detail the functionality of SCPGs and the methods used for designing them.
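As an illustration of this integration, the sketch below formats the group-move commands understood by the SSC-32 servo controller (ASCII `#<ch>P<pulse>...T<ms>` followed by a carriage return, per the Lynxmotion SSC-32 manual). The channel map and pulse widths here are assumptions made for the example, not values from the reviewed works.

```python
# Sketch: formatting SSC-32 group-move commands from SCPG spike output.
# The channel numbers and pulse widths below are illustrative assumptions.

def ssc32_group_move(positions, time_ms):
    """Build one SSC-32 command that moves several servos simultaneously.

    positions: dict {channel: pulse_width_us}; pulse width is typically
    500-2500 us, with 1500 us as the centre position.
    """
    parts = [f"#{ch}P{pw}" for ch, pw in sorted(positions.items())]
    return "".join(parts) + f"T{time_ms}\r"

def leg_command(coxa_spike, femur_spike, coxa_ch=0, femur_ch=1):
    """Map a spike (1) / no-spike (0) event of one leg to servo positions:
    coxa forward on spike, backward otherwise; femur down on spike, up
    otherwise (assumed pulse widths)."""
    coxa_pw = 1700 if coxa_spike else 1300
    femur_pw = 1300 if femur_spike else 1700
    return ssc32_group_move({coxa_ch: coxa_pw, femur_ch: femur_pw}, 200)

print(leg_command(1, 0))  # -> "#0P1700#1P1700T200\r"
```

In a real setup the resulting string would be written to the RS-232 port (e.g., with pyserial), which the SSC-32 then turns into PWM control signals for the servomotors.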


Spiking Central Pattern Generators through Reverse Engineering of Locomotion Patterns

http://dx.doi.org/10.5772/intechopen.72348



#### 3. SCPG-based locomotion system

The SCPGs reviewed in this chapter generate discrete rhythmic signals endogenously; in other words, each periodic signal of a locomotion gait is represented by a spike train whose spike times occur periodically. This idea was first presented by Rostro-Gonzalez et al. in [22], where SCPGs built with discrete spiking neurons (Section 3.2.1) were automatically designed using a deterministic reverse engineering method (Section 4.1) to imitate the walking forms of a stick insect. Based on the step sketches of stick-insect walking forms reported in [28], Rostro proposed three sets of discrete rhythmic signals as locomotion gaits (Section 3.1) to achieve hexapod robot locomotion, designing one SCPG for each of them.

Figure 4 schematizes Rostro-Gonzalez's approach to achieving locomotion for six-legged robots through discrete events over time by means of SNNs. Figure 4a presents a walking form of the stick insect reported in [28]; black rectangles represent a leg on the ground, while white ones represent a leg in the air. The L1 row is marked with a dotted rectangle to exemplify how a step of a real hexapod insect is interpreted and extended to a step of a hexapod robot. Notice in Figure 4b that the leg sketch coincides with a black rectangle in Figure 4a when the leg is on the ground; at this point the leg displacement that contributes to the whole walking action occurs. Figure 4c shows the rhythmic signals over time that move a leg according to the locomotion gait in Figure 4a: the coxa moves forward on spike events and backward in their absence, while the femur moves down on spike events and up in their absence. Finally, the combination of such movements, according to the presence and absence of spikes for all legs, produces the locomotion of the legged robot.

Figure 4. Representation of the extrapolation of steps observed in a hexapod insect into discrete rhythmic signals for SCPG-based locomotion design. (a) Walking form of the stick insect reported in [28], (b) extrapolation of a step into the position of a leg over time and (c) proposal of a step (femur row) with additional information (coxa row) represented as spike trains; in each row, darker rectangles represent a spike and lighter ones the absence of a spike.

Later, Espinal et al. generated SCPG-based locomotion systems for quadruped and hexapod robots based on Rostro-Gonzalez's idea, using the same discrete spiking neuron model (Section 3.2.1) and a stochastic reverse engineering method (Section 4.2) to design them. SCPG-based locomotion systems were designed and implemented for quadruped robots in [23] and hexapod robots in [24]; the difference with Rostro's work is that more compact SNN topologies were achieved, and a single SCPG capable of generating the three original locomotion gaits was obtained for hexapod robots and extended to quadruped ones as well.

Subsequently, SCPG-based locomotion systems for hexapods, quadrupeds and bipeds were designed by Guerra-Hernandez et al. in [25]. That work proposed a locomotion gait for biped robots following Rostro's idea, and the SNNs were designed using the discrete spiking neuron model and a variant of the stochastic reverse engineering method (Section 4.2) proposed by Espinal et al.

More recently, in [26], Perez-Trujillo et al. designed SCPG-based locomotion systems for hexapod robots based on the works of Rostro-Gonzalez, Espinal and Guerra-Hernandez. The contribution of Perez-Trujillo's work was to build SCPG-based locomotion systems with a nondiscrete spiking neuron model (Section 3.2.2), using a variant stochastic reverse engineering method (Section 4.2).

The reviewed works are summarized in Table 1, including the robotic platforms and processing boards as well as the reverse engineering method and spiking neuron model used.

The following subsections complement the description of SCPG-based locomotion systems. Section 3.1 shows the different rhythmic signal sets reported for each locomotion gait of the robotic platforms. Section 3.2 describes the spiking neuron models that have been used to define SNNs as SCPGs for robot locomotion.
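Concretely, the discrete rhythmic signals just described can be held as binary spike matrices and decoded into joint movements. The two-leg fragment below is invented for illustration and is not one of the gait matrices of [22, 23, 26]:

```python
import numpy as np

# Sketch: a locomotion pattern as a binary matrix, one row per servomotor
# signal and one column per discrete time step (1 = spike). The pattern is
# a made-up two-leg fragment, purely illustrative.
pattern = {
    "L1_coxa":  np.array([1, 1, 0, 0, 1, 1, 0, 0]),
    "L1_femur": np.array([1, 1, 0, 0, 1, 1, 0, 0]),
    "R1_coxa":  np.array([0, 0, 1, 1, 0, 0, 1, 1]),
    "R1_femur": np.array([0, 0, 1, 1, 0, 0, 1, 1]),
}

def decode(name, z):
    """Translate one spike train into joint movements: the coxa moves to
    the front on a spike and to the back otherwise; the femur moves down
    on a spike and up otherwise."""
    if "coxa" in name:
        return ["front" if s else "back" for s in z]
    return ["down" if s else "up" for s in z]

for name, z in pattern.items():
    print(name, decode(name, z))
```

An SCPG designed by the reverse engineering methods of Section 4 must reproduce exactly such a matrix, column by column, in an endless loop.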


Table 1. Summary of legged-robot locomotion system configurations and reverse engineering methods.

| Author/work | Robotic platform | Robotic controller | Spiking neuron model | Reverse engineering method |
|---|---|---|---|---|
| Rostro et al. [22] | Hexapod | FPGA | BMS neuron | Deterministic |
| Espinal et al. [23] | Quadruped | Arduino | BMS neuron | Stochastic |
| Espinal et al. [24] | Hexapod | FPGA | BMS neuron | Stochastic |
| Guerra et al. [25] | Hexapod, quadruped, biped | FPGA | BMS neuron | Variant stochastic |
| Perez et al. [26] | Hexapod | Raspberry Pi 3 | LIF neuron | Variant stochastic |

Figure 5. Hexapod robot locomotion gaits proposed in [22] (figures taken from [26]). (a) Walk gait, (b) jog gait and (c) run gait.

Figure 6. Quadruped robot locomotion gaits proposed in [23] (figures taken from [26]). (a) Walk gait, (b) jog gait and (c) run gait.

Figure 7. Biped walking locomotion gait proposed in [26].

#### 3.1. Locomotion patterns

The locomotion patterns contain the discrete rhythmic signals for each servomotor of a robotic platform. Each pattern defines a specific locomotion gait that an SCPG must replicate endogenously and periodically. Besides, the patterns are used by the reverse engineering methods (Section 4) to design the SCPGs. Figures 5–7 show the discrete rhythmic signals designed for hexapod, quadruped and biped robots, respectively; each row corresponds to the servomotor with the same label on the robotic platform.

#### 3.2. Spiking neuron models

#### 3.2.1. Beslon-Mazet-Soula neuron model

The Beslon-Mazet-Soula (BMS) neuron [29] is a discrete-time model derived from a well-known spiking neuron, the integrate-and-fire model [30]. The BMS neuron model is defined by Eqs. (1) and (2), which describe the behavior of the $i$-th neuron over time $k$; the former models its membrane potential $V_i$, and the latter defines its firing state $Z_i$.


$$V_i[k] = \gamma V_i[k-1](1 - Z_i[k-1]) + \sum_{j=1}^{N} W_{ij} Z_j[k-1] + I_i^{ext} \tag{1}$$

In Eq. (1), $\gamma \in [0, 1]$ represents the leak factor. The number of spiking neurons in the network is given by $N$. The synaptic strength weights are given by $W_{ij}$. $I_i^{ext}$ is an external stimulus, either varying or constant; because CPGs produce periodic patterns endogenously, $I_i^{ext} = 0$.

$$Z\_i[k] = \begin{cases} 1 & \text{if } V\_i[k] \ge \theta \\ 0 & \text{otherwise} \end{cases} \tag{2}$$



For Eq. (2), the fixed firing threshold is given by $\theta$. The firing state of Eq. (2) enters Eq. (1) both to track spike occurrences ($Z_i[k]$) and to reset the membrane potential of the $i$-th neuron through the factor $(1 - Z_i[k-1])$.
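As a minimal illustration of Eqs. (1) and (2), the following sketch steps a hand-wired two-neuron BMS network. The weights are invented for the example (not obtained by reverse engineering); mutual excitation plus the post-spike reset term yields an anti-phase rhythm with no external input:

```python
import numpy as np

# Hand-wired two-neuron BMS network (illustrative weights).
gamma, theta = 0.5, 1.0                  # leak factor and firing threshold
W = np.array([[0.0, 1.2],                # W[i, j]: synapse from neuron j to i
              [1.2, 0.0]])
V = np.zeros(2)
Z = np.array([1.0, 0.0])                 # initial firing state seeds the rhythm
I_ext = np.zeros(2)                      # CPGs run endogenously: I_ext = 0

history = []
for k in range(8):
    V = gamma * V * (1 - Z) + W @ Z + I_ext   # Eq. (1): membrane potential
    Z = (V >= theta).astype(float)            # Eq. (2): firing state
    history.append(Z.copy())

print(np.array(history, dtype=int).T)    # one row of spike events per neuron
```

With these weights the two neurons fire in strict alternation, which is the kind of endogenous periodic pattern an SCPG must sustain.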

#### 3.2.2. Integrate-and-fire neuron model

The integrate-and-fire (I&F) neuron [30] basically models the evolution of its membrane potential over time as a resistor-capacitor (RC) circuit. In particular, the current-based leaky integrate-and-fire (LIF) model, the "if\_curr\_exp" model in the PyNN library [31], is a LIF neuron with a fixed firing threshold and exponentially decaying postsynaptic currents, given in Eq. (3); besides, the model requires tau\_refrac to define the refractory period and v\_thresh to set the fixed firing threshold.

$$\frac{dv}{dt} = \frac{i_e + i_i + i\_offset + i\_inj}{c\_m} + \frac{v\_rest - v}{tau\_m} \tag{3}$$

In Eq. (3), the membrane potential is represented by $v$. The excitatory and inhibitory synaptic currents $i_e$ and $i_i$ are modelled by the differential equations in Eqs. (4) and (5), respectively. The i\_offset term stands for a base input current, and i\_inj is an external current injection; both are added at each time step, but $i\_inj = 0$ due to the nature of CPGs.

$$\frac{di_e}{dt} = -\frac{i_e}{tau\_syn\_E} \tag{4}$$

$$\frac{di_i}{dt} = -\frac{i_i}{tau\_syn\_I} \tag{5}$$

Finally, tau\_syn\_E and tau\_syn\_I, in Eqs. (4) and (5), are the excitatory and inhibitory input current decay time constants.
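Eqs. (3)–(5) can be integrated with a simple forward-Euler loop. The sketch below uses the PyNN-style parameter names with illustrative values (not those used in [26]), and omits the refractory period for brevity:

```python
# Forward-Euler sketch of Eqs. (3)-(5) with "if_curr_exp"-style names.
# Parameter values are illustrative defaults only.
dt = 0.1                                  # integration step (ms)
tau_m, c_m = 20.0, 1.0                    # membrane time constant, capacitance
v_rest, v_reset, v_thresh = -65.0, -65.0, -50.0
tau_syn_E, tau_syn_I = 5.0, 5.0
i_offset, i_inj = 1.0, 0.0                # i_inj = 0: the CPG runs endogenously

v, i_e, i_i = v_rest, 0.0, 0.0
spikes = []
for step in range(2000):                  # simulate 200 ms
    i_e += dt * (-i_e / tau_syn_E)        # Eq. (4): excitatory current decay
    i_i += dt * (-i_i / tau_syn_I)        # Eq. (5): inhibitory current decay
    v += dt * ((i_e + i_i + i_offset + i_inj) / c_m
               + (v_rest - v) / tau_m)    # Eq. (3): membrane potential
    if v >= v_thresh:                     # fixed firing threshold (v_thresh)
        spikes.append(step * dt)
        v = v_reset                       # reset after the spike
print(f"{len(spikes)} spikes in 200 ms")
```

With i\_offset large enough to push the steady-state potential above v\_thresh, the neuron fires tonically, which is the regime exploited when LIF neurons are wired into an SCPG.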

#### 4. Reverse engineering methods for designing SCPGs

In this section, we describe reverse engineering methods for automatically designing SNNs by defining both their topology and their synaptic weights. The reviewed methods can be generalized by the diagram shown in Figure 8.


Figure 8. Diagram of reverse engineering method for designing SCPG.


Basically, the generalization defines an input-process-output system: its inputs are a fixed spiking neuron model (Section 3.2) and one or more sets of discrete rhythmic signals (Section 3.1); the process is defined by an optimization method, which can be deterministic (Section 4.1) or stochastic (Section 4.2); and the output is generally a partially connected, directed and weighted graph that defines all aspects of an SNN that behaves as an SCPG.
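This input-process-output view can be summarized as a function signature; the names below are hypothetical, intended only to show how the pieces of Figure 8 fit together:

```python
# Hypothetical sketch of the input-process-output generalization (Figure 8).
# Real instantiations plug in the BMS or LIF model (Section 3.2) and a
# deterministic or stochastic optimizer (Section 4), and return the
# weighted, directed connectivity of the SNN.
from typing import Callable, Dict, List, Tuple

SpikeTrains = Dict[str, List[int]]        # target rhythmic signals (0/1)
Synapse = Tuple[int, int, float]          # (pre, post, weight)
Optimizer = Callable[[Callable, SpikeTrains], List[Synapse]]

def design_scpg(neuron_model: Callable, targets: SpikeTrains,
                optimize: Optimizer) -> List[Synapse]:
    """Output: a partially connected, directed, weighted graph (edge list)."""
    return optimize(neuron_model, targets)

# Placeholder optimizer, only to show the call shape:
snn = design_scpg(lambda v: v, {"L1_coxa": [1, 0, 1, 0]},
                  lambda model, t: [(0, 1, 1.2), (1, 0, 1.2)])
print(snn)
```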

Next, each optimization method is briefly described, and its strengths and weaknesses are pointed out.

#### 4.1. Deterministic method: Simplex method

This method, originally proposed by Rostro in [34], was created to build SNNs that replicate recorded biological neural dynamics. It was developed to work with the BMS spiking neuron model (Section 3.2.1) and is described in [22] as follows.

First, Eq. (1) must be rewritten in the form expressed in Eq. (6).

$$V_i[k] = \sum_{j=1}^{N} W_{ij} \sum_{\tau=0}^{\tau_{ik}} \gamma^{\tau} Z_j[k - \tau - 1] + I_{ik\tau} \tag{6}$$

where $I_{ik\tau} = \sum_{\tau=0}^{\tau_{ik}} \gamma^{\tau} I_i^{ext}[k - \tau]$ and $\tau_{ik}$ counts the time steps elapsed since the last firing (reset) of neuron $i$ before time $k$; see [34] for the derivation details. With Eqs. (1) and (2), a linear programming system can be formulated to determine the synaptic connections and weights of SNNs that replicate locomotion gaits represented as rhythmic spiking dynamics, without knowing the evolution of the membrane potentials $V_i[k]$ of the spiking neurons in the network. We start from the firing conditions $Z_i[k] = 0 \Leftrightarrow V_i[k] < \theta$ and $Z_i[k] = 1 \Leftrightarrow V_i[k] \ge \theta$, where $\theta = 1$ for simplification. Both conditions can be written as the single inequality $(2Z_i[k] - 1)(V_i[k] - \theta) \ge 0$.

By substituting Eq. (6) into this inequality, we obtain a linear programming system [35], given by the expression in Eq. (7).


$$
\varepsilon\_i = A\_i w\_i + b\_i \ge 0 \tag{7}
$$
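Eq. (7), with $A_i$, $w_i$ and $b_i$ spelled out below, is a plain linear feasibility problem, so any LP solver can play the role of the simplex method. As a toy sketch for a single neuron with SciPy (the target trains and $\gamma$ are illustrative, not a gait from the reviewed works):

```python
import numpy as np
from scipy.optimize import linprog

# Toy sketch of Eq. (7) for ONE neuron i: find weights w_i such that
# (2 Z_i[k] - 1)(V_i[k] - theta) >= 0 at every k, with theta = 1 and
# I_ext = 0. The target spike trains below are illustrative only.
gamma = 0.5
Z = np.array([[1, 0, 1, 0, 1, 0],     # neuron 0 (the one being fitted)
              [0, 1, 0, 1, 0, 1]])    # neuron 1 (presynaptic)
N, T = Z.shape
i = 0

rows_A, rows_b = [], []
for k in range(1, T):
    # tau_ik: steps since neuron i's last spike (reset) strictly before k
    past = np.where(Z[i, :k] == 1)[0]
    tau_ik = k - past[-1] - 1 if len(past) else k - 1
    sign = 2 * Z[i, k] - 1
    # accumulated, leak-discounted presynaptic activity, as in Eq. (6)
    acc = [sum(gamma**t * Z[j, k - t - 1] for t in range(tau_ik + 1))
           for j in range(N)]
    rows_A.append([sign * a for a in acc])
    rows_b.append(sign * (0 - 1))     # b entry: (2 Z_i[k] - 1)(I_iktau - 1)

A, b = np.array(rows_A), np.array(rows_b)
# Feasibility as an LP: minimise 0 subject to A w + b >= 0, i.e. -A w <= b
res = linprog(c=np.zeros(N), A_ub=-A, b_ub=b,
              bounds=[(None, None)] * N, method="highs")
print("feasible:", res.status == 0, "weights:", res.x)
```

One constraint row is produced per time step; stacking the systems of all neurons yields the weights of the whole SNN at once, which is exactly the one-shot property listed among the strengths of the deterministic method.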

of expected words of two grammars is as follows: id1, weight<sup>1</sup>

are just syntactically correct or any generated word is just valid.

can replicate different locomotion gaits.

5. Discussion and conclusion

• Strengths:

• Weaknesses:

[24]) and Differential Evolution [42] (used in [25, 26]).

The connectivity defined by a word has at least one connection and a maximum of connections according to number of neurons into the SNNs. The main difference between both representations is that words of CG representation are syntactically and semantically correct, this means that any generated word is valid and there are not repeated indexes, while words of CFG BNF

The fitness function is usually defined by the problem; to solve this, three fitness functions based on SPIKE [39] (used in [23, 24, 26]) and Victor-Purpura distances [40] (used in [25]) to compare similarity between generated spike train and target spike train to guide the search process have been proposed. Basically, first and second distance-based fitness functions are for generating one SCPG per locomotion gait; the difference is that the second one looks for minimal presynaptic connectivity and the first one does not care about number of presynaptic connections. The third distance-based fitness function allows to generate a single SCPG, which

The search engine in GE is usually a metaheuristic algorithm, which tries to improve the quality of solutions. For reviewed works, three different algorithms have been used: Univariate Marginal Distribution Algorithm [40] (used in 23), (1 + 1)-Evolution Strategy [41] (used in

• It can design SNNs which use either BMS spiking neuron model or LIF neuron model. • It can handle design criteria to design compact SNN topologies or SNN, which can

• The process must be executed several times to build a single SNN; due to that, it

Nowadays, autonomous robot locomotion is still a valid problem that has been partially solved in robotics. Particularly, locomotion of nonwheeled robotic platforms is a problem highly susceptible for trying to be solved by means of bioinspired algorithms known as CPGs. However, sometimes working with CPGs may represent a problem itself since its design; this is due to the different choices that must be made before implement the CPG according to [7]. In

For implementation details, see [23–25]. Next, the features of this method are listed:

replicate different neural dynamics or locomotion gaits.

defines synaptic connection and weights one neuron at time.

• It has not been tested on other design problems than design SCPGs.

zfflfflfflfflfflfflffl}|fflfflfflfflfflfflffl{ 1st configured synapse

Spiking Central Pattern Generators through Reverse Engineering of Locomotion Patterns

∣⋯∣ idn, weightn zfflfflfflfflfflfflffl}|fflfflfflfflfflfflffl{ <sup>n</sup>th configured synapse

http://dx.doi.org/10.5772/intechopen.72348

.

51

where

$$\bullet \quad A\_i = \begin{pmatrix} \cdots & \cdots & \cdots \\ \cdots & (2Z\_i[k]-1) \sum\_{\tau=0}^{\tau\_k} \gamma^{\tau} Z\_j[k-\tau-1] & \cdots \\ \cdots & \cdots & \cdots \end{pmatrix}.$$


Solving the aforementioned formulated linear programming problem in Eq. (7), by using a simple method (or any existing linear programming solver), we obtain the synaptic weights of all neurons and, indirectly, a SNN topology.

Next, the features of this method are listed as follows:

	- The definition of whole SNN is made by executing the method once.
	- It has been successfully used as a reverse engineering method for designing SNNs, which replicate recorded biological neural dynamics.
	- It can design SNN by using only BMS spiking neuron models.
	- It generates one SNN for replicating just one neural dynamic pattern.

#### 4.2. Stochastic method: Grammar-based genetic programming

For stochastic reverse engineering methods, evolutionary algorithms have been used; particularly, a variant of well-known genetic programming [36] called grammatical evolution (GE) [37]. Practically, GE is an optimization tool that searches approximated optimal solutions by representing them indirectly for a given problem; thus, working with GE to solve a specific problem requires four components: problem instance(s), representation of solutions, a fitness function to evaluate solution's quality and a search engine.

For the SCPG design problem, two types of representations have been proposed: one as a Context-Free Grammar (CFG) in Backus-Naur Form (BNF) and another as a Christiansen Grammar (CG) to use GE (in [25, 26]) and a variant called Christiansen Grammar Evolution (CGE) [38] (in [23, 24]), respectively. In general, both representations describe languages that define the presynaptic connectivity (including weights) of a specific spiking neuron; the common structure of expected words of two grammars is as follows: id1, weight<sup>1</sup> zfflfflfflfflfflfflffl}|fflfflfflfflfflfflffl{ 1st configured synapse ∣⋯∣ idn, weightn zfflfflfflfflfflfflffl}|fflfflfflfflfflfflffl{ <sup>n</sup>th configured synapse . The connectivity defined by a word has at least one connection and a maximum of connections according to number of neurons into the SNNs. The main difference between both representations is that words of CG representation are syntactically and semantically correct, this means that any generated word is valid and there are not repeated indexes, while words of CFG BNF are just syntactically correct or any generated word is just valid.

The fitness function is usually defined by the problem; to solve this, three fitness functions based on SPIKE [39] (used in [23, 24, 26]) and Victor-Purpura distances [40] (used in [25]) to compare similarity between generated spike train and target spike train to guide the search process have been proposed. Basically, first and second distance-based fitness functions are for generating one SCPG per locomotion gait; the difference is that the second one looks for minimal presynaptic connectivity and the first one does not care about number of presynaptic connections. The third distance-based fitness function allows to generate a single SCPG, which can replicate different locomotion gaits.

The search engine in GE is usually a metaheuristic algorithm, which tries to improve the quality of solutions. For reviewed works, three different algorithms have been used: Univariate Marginal Distribution Algorithm [40] (used in 23), (1 + 1)-Evolution Strategy [41] (used in [24]) and Differential Evolution [42] (used in [25, 26]).

For implementation details, see [23–25]. Next, the features of this method are listed:


• The process must be executed several times to build a single SNN, because it defines synaptic connections and weights one neuron at a time.

• It has not been tested on design problems other than the design of SCPGs.

#### 5. Discussion and conclusion

Nowadays, autonomous robot locomotion is still a valid problem that has been only partially solved in robotics. In particular, locomotion of nonwheeled robotic platforms is a problem highly amenable to being solved by means of the bioinspired algorithms known as CPGs. However, working with CPGs can sometimes be a problem in itself, starting from their design; this is due to the different choices that must be made before implementing the CPG, according to [7]. In this chapter, we have explored research on the field of SCPGs, a particular type of CPG that has been barely explored to date. SCPGs are built with spiking neurons, a plausible neuron model that handles information similar to that observed in biological neural systems. We specifically focus on SCPGs designed by approaches that make it possible to dispense with human experts explicitly defining each CPG design phase. These works use reverse engineering methods to solve the SCPG design problem as an optimization problem. By means of these methods, weighted and directed graphs are generated as SNNs, which endogenously generate the rhythmic discrete signals that allow locomotion of legged robots.

Biological CPGs do not work in isolation; they depend on information exchange with other parts of the central nervous system [32], and external afferent inputs are even used to shape their outputs [33]. Based on the aforementioned, the next step for SCPG-based locomotion systems could be their integration into navigation systems, endowing them with sensors so as to build more robust and plausible bioinspired algorithms.

Finally, there are other reasons to keep studying and implementing SCPGs that go far beyond the locomotion of nonwheeled robots: their possible application in other areas, such as medicine, in the development of prosthetic robotic devices for patients with spinal damage or amputated limbs [20].

#### Acknowledgements

The authors wish to thank the Consejo Nacional de Ciencia y Tecnología (CONACyT) for the support provided through the Computational Neuroscience: Theory of Neuromorphic Systems Development project, No. 1961, the University of Guanajuato, and the National Technology of Mexico. Horacio Rostro-Gonzalez wishes to thank the University of Guanajuato for the support provided through a sabbatical year.

#### Author details

Andrés Espinal<sup>1</sup>\*, Marco Sotelo-Figueroa<sup>1</sup>, Héctor J. Estrada-García<sup>2</sup>, Manuel Ornelas-Rodríguez<sup>3</sup> and Horacio Rostro-Gonzalez<sup>4,5</sup>

\*Address all correspondence to: aespinal@ugto.mx

1 Department of Organizational Studies, DCEA-University of Guanajuato, Guanajuato, Mexico

2 Department of Electrical Engineering, DICIS-University of Guanajuato, Salamanca, Mexico

3 Division of Postgraduate Studies and Research, ITL-National Technology of Mexico, Leon, Mexico

4 Department of Electronic Engineering, DICIS-University of Guanajuato, Salamanca, Mexico

5 Neuroscientific System Theory, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany

Spiking Central Pattern Generators through Reverse Engineering of Locomotion Patterns

http://dx.doi.org/10.5772/intechopen.72348

#### References

[1] Floreano D, Ijspeert AJ, Schaal S. Robotics and neuroscience. Current Biology. 2014;24(18):R910-R920. DOI: 10.1016/j.cub.2014.07.058

[2] Brown TG. The intrinsic factors in the act of progression in the mammal. Proceedings of the Royal Society of London. Series B, Containing Papers of a Biological Character. 1911;84(572):308-319. DOI: 10.1098/rspb.1911.0077

[3] Brown TG. On the nature of the fundamental activity of the nervous centres; together with an analysis of the conditioning of rhythmic activity in progression, and a theory of the evolution of function in the nervous system. The Journal of Physiology. 1914;48(1):18-46. DOI: 10.1113/jphysiol.1914.sp001646

[4] Marder E, Bucher D. Central pattern generators and the control of rhythmic movements. Current Biology. 2001;11(23):R986-R996. DOI: 10.1016/S0960-9822(01)00581-4

[5] Patel LN. Central pattern generators: Optimisation and application. In: Chiong R, editor. Nature-Inspired Algorithms for Optimisation. Berlin, Heidelberg: Springer Berlin Heidelberg; 2009. pp. 235-260. DOI: 10.1007/978-3-642-00267-0\_8

[6] Ijspeert AJ. Central pattern generators for locomotion control in animals and robots: A review. Neural Networks. 2008;21(6):642-653. DOI: 10.1016/j.neunet.2008.03.014

[7] Yu J, Tan M, Chen J, Zhang J. A survey on CPG-inspired control models and system implementation. IEEE Transactions on Neural Networks and Learning Systems. 2014;25(3):441-456. DOI: 10.1109/TNNLS.2013.2280596

[8] Barron-Zambrano JH, Torres-Huitzil C. CPG implementations for robot locomotion: Analysis and design. In: Dutta A, editor. Robotic Systems - Applications, Control and Programming. InTech; 2012. pp. 161-182. DOI: 10.5772/25827

[9] Endo G, Morimoto J, Matsubara T, Nakanishi J, Cheng G. Learning CPG-based biped locomotion with a policy gradient method: Application to a humanoid robot. The International Journal of Robotics Research. 2008;27(2):213-228. DOI: 10.1177/0278364907084980

[10] Liu C, Chen Q, Wang D. CPG-inspired workspace trajectory generation and adaptive locomotion control for quadruped robots. IEEE Transactions on Systems, Man, and Cybernetics. Part B, Cybernetics. 2011;41(3):867-880. DOI: 10.1109/TSMCB.2010.2097589

[11] Barron-Zambrano JH, Torres-Huitzil C. FPGA implementation of a configurable neuromorphic CPG-based locomotion controller. Neural Networks. 2013;45:50-61. DOI: 10.1016/j.neunet.2013.04.005

[12] Inagaki S, Yuasa H, Suzuki T, Arai T. Wave CPG model for autonomous decentralized multi-legged robot: Gait generation and walking speed control. Robotics and Autonomous Systems. 2006;54(2):118-126. DOI: 10.1016/j.robot.2005.09.021

[13] Jia X, Chen Z, Petrosino JM, Hamel WR, Zhang M. Biological undulation inspired swimming robot. In: Okamura A, editor. Robotics and Automation (ICRA), 2017 IEEE International Conference on; 29 May-3 June 2017; Singapore. IEEE; 2017. pp. 4795-4800. DOI: 10.1109/ICRA.2017.7989558

[14] Chung SJ, Dorothy M. Neurobiologically inspired control of engineered flapping flight. Journal of Guidance, Control, and Dynamics. 2010;33(2):440-453. DOI: 10.2514/1.45311

[15] Wang Z, Gao Q, Zhao H. CPG-inspired locomotion control for a snake robot basing on nonlinear oscillators. Journal of Intelligent and Robotic Systems. 2017;85(2):209-227. DOI: 10.1007/s10846-016-0373-9

[16] Ding R, Yu J, Yang Q, Tan M. Dynamic modelling of a CPG-controlled amphibious biomimetic swimming robot. International Journal of Advanced Robotic Systems. 2013;10(4):11. DOI: 10.5772/56059

[17] Wu Q, Liu C, Zhang J, Chen Q. Survey of locomotion control of legged robots inspired by biological concept. Science in China Series F: Information Sciences. 2009;52(10):1715-1729. DOI: 10.1007/s11432-009-0169-7

[18] Maass W. Networks of spiking neurons: The third generation of neural network models. Neural Networks. 1997;10(9):1659-1671. DOI: 10.1016/S0893-6080(97)00011-7

[19] Lewis MA, Tenore F, Etienne-Cummings R. CPG design using inhibitory networks. In: Proceedings of the 2005 IEEE International Conference on Robotics and Automation. ICRA 2005; 18-22 April 2005; Barcelona, Spain. IEEE; 2005. pp. 3682-3687. DOI: 10.1109/ROBOT.2005.1570681

[20] Russell A, Orchard G, Etienne-Cummings R. Configuring of spiking central pattern generator networks for bipedal walking using genetic algorithms. In: IEEE International Symposium on Circuits and Systems. ISCAS 2007; 27-30 May 2007; New Orleans, LA, USA. IEEE; 2007. pp. 1525-1528. DOI: 10.1109/ISCAS.2007.378701

[21] Russell A, Orchard G, Dong Y, Mihalas S, Niebur E, Tapson J, et al. Optimization methods for spiking neurons and networks. IEEE Transactions on Neural Networks. 2010;21(12):1950-1962. DOI: 10.1109/TNN.2010.2083685

[22] Rostro-Gonzalez H, Cerna-Garcia PA, Trejo-Caballero G, Garcia-Capulin CH, Ibarra-Manzano MA, Avina-Cervantes, et al. A CPG system based on spiking neurons for hexapod robot locomotion. Neurocomputing. 2015;170:47-54. DOI: 10.1016/j.neucom.2015.03.090

[23] Espinal A, Rostro-Gonzalez H, Carpio M, Guerra-Hernandez EI, Ornelas-Rodriguez M, Puga-Soberanes HJ, et al. Quadrupedal robot locomotion: A biologically inspired approach and its hardware implementation. Computational Intelligence and Neuroscience. 2016;2016:14. DOI: 10.1155/2016/5615618

[24] Espinal A, Rostro-Gonzalez H, Carpio M, Guerra-Hernandez EI, Ornelas-Rodriguez M, Sotelo-Figueroa M. Design of spiking central pattern generators for multiple locomotion gaits in hexapod robots by Christiansen grammar evolution. Frontiers in Neurorobotics. 2016;10:13. DOI: 10.3389/fnbot.2016.00006

[25] Guerra-Hernandez EI, Espinal A, Batres-Mendoza P, Garcia-Capulin C, Romero-Troncoso R, Rostro-Gonzalez H. A FPGA-based neuromorphic locomotion system for multi-legged robots. IEEE Access. 2017;5:8301-8312. DOI: 10.1109/ACCESS.2017.2696985

[26] Newton AJH, Seidenstein AH, McDougal RA, Pérez-Cervera A, Huguet G, Tere M, et al. 26th annual computational neuroscience meeting (CNS\*2017): Part 3. BMC Neuroscience. 2017;18(Suppl 1):96-176. DOI: 10.1186/s12868-017-0372-1

[27] Cuevas-Arteaga B, Dominguez-Morales JP, Rostro-Gonzalez H, Espinal A, Jimenez-Fernandez AF, Gomez-Rodriguez F, et al. A SpiNNaker application: Design, implementation and validation of SCPGs. In: Rojas I, Joya G, Catala A, editors. 14th International Work-Conference on Artificial Neural Networks (IWANN 2017); June 14-16; Cadiz, Spain. Cham: Springer; 2017. pp. 548-559. DOI: 10.1007/978-3-319-59153-7\_47

[28] Grabowska M, Godlewska E, Schmidt J, Daun-Gruhn S. Quadrupedal gaits in hexapod animals - inter-leg coordination in free-walking adult stick insects. Journal of Experimental Biology. 2012;215(24):4255-4266. DOI: 10.1242/jeb.073643

[29] Soula H, Beslon G, Mazet O. Spontaneous dynamics of asymmetric random recurrent spiking neural networks. Neural Computation. 2006;18(1):60-79. DOI: 10.1162/089976606774841567

[30] Gerstner W, Kistler WM. Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press; 2002. 480 p

[31] Davison A, Brüderle D, Eppler JM, Kremkow J, Muller E, Pecevski D, et al. PyNN: A common interface for neuronal network simulators. Frontiers in Neuroinformatics. 2009;2:11. DOI: 10.3389/neuro.11.011.2008

[32] Arena P. The central pattern generator: A paradigm for artificial locomotion. Soft Computing - A Fusion of Foundations, Methodologies and Applications. 2000;4(4):251-266. DOI: 10.1007/s005000000051

[33] MacKay-Lyons M. Central pattern generation of locomotion: A review of the evidence. Physical Therapy. 2002;82(1):69-83. DOI: 10.1093/ptj/82.1.69

[34] Rostro-Gonzalez H, Cessac B, Viéville T. Parameter estimation in spiking neural networks: A reverse-engineering approach. Journal of Neural Engineering. 2012;9(2):026024. DOI: 10.1088/1741-2560/9/2/026024

[35] Bixby RE. Implementing the simplex method: The initial basis. ORSA Journal on Computing. 1992;4(3):267-284. DOI: 10.1287/ijoc.4.3.267

[36] Koza JR. Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press; 1992. 819 p

[37] Ryan C, Collins JJ, O'Neill M. Grammatical evolution: Evolving programs for an arbitrary language. In: Banzhaf W, Poli R, Schoenauer M, Fogarty TC, editors. Proceedings Genetic Programming: First European Workshop, EuroGP'98; Paris, France; April 14-15, 1998. Berlin, Heidelberg: Springer Berlin Heidelberg; 1998. pp. 83-96. DOI: 10.1007/BFb0055930

[38] Ortega A, De La Cruz M, Alfonseca M. Christiansen grammar evolution: Grammatical evolution with semantics. IEEE Transactions on Evolutionary Computation. 2007;11(1):77-90. DOI: 10.1109/TEVC.2006.880327

[39] Kreuz T, Chicharro D, Houghton C, Andrzejak RG, Mormann F. Monitoring spike train synchrony. Journal of Neurophysiology. 2013;109(5):1457-1472. DOI: 10.1152/jn.00873.2012

[40] Simon D. Evolutionary Optimization Algorithms. John Wiley & Sons; 2013. 772 p

[41] Engelbrecht AP. Computational Intelligence: An Introduction. 2nd ed. Chichester: Wiley; 2007. 628 p

[42] Feoktistov V. Differential Evolution: In Search of Solutions. New York: Springer; 2006. 195 p

**Chapter 5**

**Characterizing Motor System to Improve Training Protocols Used in Brain-Machine Interfaces Based on Motor Imagery**

Luz Maria Alonso-Valerdi and Andrés Antonio González-Garrido

http://dx.doi.org/10.5772/intechopen.72667

Additional information is available at the end of the chapter

#### **Abstract**

Motor imagery (MI)-based brain-machine interface (BMI) is a technology under development that actively modifies users' perception and cognition through mental tasks so as to decode their intentions from their neural oscillations, thereby bringing about some kind of activation. So far, MI as a control task in BMIs has been seen as a skill that must be acquired, but neither user conditions nor controlled learning conditions have been taken into account. As the motor system is a complex mechanism trained over a lifetime, and MI-based BMI attempts to decode motor intentions from neural oscillations in order to put a device into action, motor mechanisms should be considered when prototyping BMI systems. It is hypothesized that the best way to acquire MI skills is to follow the same rules humans obey to move around the world. On this basis, new training paradigms consisting of ecological environments, identification of control tasks according to the ecological environment, transparent mapping, and multisensory feedback are proposed in this chapter. These new MI training paradigms take advantage of users' previous knowledge and facilitate the generation of mental images due to the automatic development of sensory predictions and motor behavior patterns in the brain. Furthermore, the effectuation of MI as an actual movement would make users feel that their mental images are being executed, and the resulting sensory feedback may allow the forward model to readjust the imaginary movement in course.

**Keywords:** motor system, forward model, inverse model, brain-machine interfaces, training protocols

© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### **1. Introduction**

Electroencephalography (EEG) has become a standard brain imaging tool due to its viability for recording brain activity. Typically, EEG has been analyzed by quantifying and qualifying neural oscillations. In broad terms, "neural oscillations" can be defined as spatial, temporal, and spectral patterns that are associated with particular perceptual, cognitive, motor, and emotional processes [1]. A large and growing body of literature has investigated neural oscillations as an EEG feature in all directions: from neurological to technological perspectives. In terms of Neurosciences, research into brain responses has a long history. Brain responses have traditionally been studied on the basis of event-related experiments, where time-locked and phase-locked responses (i.e., event-related potentials) along with time-locked but not necessarily phase-locked responses (i.e., event-related (de)synchronization) have been essentially analyzed [2, 3]. In the case of technology, research into neural interfaces has taken a leading role. A neural interface is a system that makes it possible to reintegrate the sensory-motor loop by directly accessing brain information. There are three main types of neural interfaces: (1) sensory interfaces, which artificially activate the sensory system; (2) cognitive interfaces, which try to re-establish the communication of neural networks; and (3) motor interfaces, which translate neural oscillations into control commands for a device of interest [4]. In particular, motor interfaces are known as brain-machine interfaces (BMI).
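For illustration, the band-power drop behind event-related desynchronization can be quantified with the classical percentage ratio, ERD% = (reference power - event power) / reference power x 100; the sampling rate and mu-band limits below are common but illustrative choices, not values from this chapter:

```python
# Sketch of event-related desynchronization (ERD) quantification.
# Band limits (8-12 Hz, a typical mu band) and window placement are
# illustrative assumptions.
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean power of 1-D signal x within [lo, hi] Hz via the FFT periodogram."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return float(np.mean(np.abs(X[band]) ** 2))

def erd_percent(ref, event, fs, lo=8.0, hi=12.0):
    """Positive values mean the rhythm desynchronized during the event."""
    p_ref = band_power(ref, fs, lo, hi)
    p_event = band_power(event, fs, lo, hi)
    return 100.0 * (p_ref - p_event) / p_ref
```

Halving the amplitude of a 10 Hz rhythm during the event, for example, quarters its band power and yields an ERD of 75%.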


Although BMI research was first undertaken in the 1960s, these systems are still laboratory prototypes because (1) it is unknown how EEG features are linked to perception and cognition (human side); (2) users have not been involved in the system design, are usually not well instructed, and are not guided during the user-system adaptation (human-machine interaction); and (3) computational decoding of EEG signals is not yet efficient enough (system side). In particular, active systems have been much more challenging to set up since they depend on the user's mental effort rather than on spontaneous neural responses, as reactive systems do [7]. However, active systems can be controlled by "real" user intentions since they do not depend on external stimulation as reactive systems do. Furthermore, mental tasks strengthen other neural mechanisms apart from those related to the mental task per se. A case in point is motor imagery (MI). MI refers to the generation and maintenance of imaginary movements. As MI is motor activity, MI-related mental tasks activate the central and peripheral nervous systems almost to the same extent that actual movements do [8–10]. This property of MI-related tasks increases the technical and clinical applicability of BMIs, as well as our interest in restricting the scope of the present chapter to MI-based BMIs.

Over the past few years, the user has been identified as the main component of the MI-based BMI structure, yet one who has been frequently ignored in the BMI design [11, 12]. According to [13, 14], there are three factors and three conditioners that directly influence user performance in an MI-based BMI. See **Figure 2**. On the one hand, *factors* have been categorized into (1) user state, (2) user traits, and (3) user conditions. A *user state* can be regarded as the result of many physiological and psychological processes that regulate brain and body in an attempt to put the individual in an optimal condition to meet environmental demands [15]. User state includes emotions, such as mood, and cognition, such as motivation, mastery, confidence, competence, self-efficiency, and fear. *User traits* refer to the behavior, capabilities, and abilities that define a person, including personality (tension and self-reliance) and cognitive profile (attention span, attentional abilities, attitude toward work, memory span, visual-motor coordination, learning style, and abstractedness). *User conditions* are associated with demographic information, such as age and gender, and lifestyle, such as playing musical instruments, practicing sports, playing video games, typing, and full-body movement either for work or entertainment. On the other hand, *conditioners* have been grouped into (1) user-technology relationship, (2) attention, and (3) spatial abilities. The *user-technology relationship* is the level of computer anxiety and sense of agency that a user possesses. Computer anxiety refers to the fear and tension produced by the use of technology, while sense of agency is the belief and feeling of being the entity who is causing an action. The *attention system* is responsible for maintaining a state of vigilance (alerting function), selecting information (orienting function), and managing mental resources, which are moreover limited (executive control). Finally, *spatial abilities* are considered the skill to generate, maintain, scan, and manipulate mental images. Spatial

BMIs are a technology under development that modifies users' perception and cognition in order to decode their intentions from their neural oscillations, thereby bringing about some kind of activation. See **Figure 1**. Human perception and cognition can be manipulated actively through mental tasks, or reactively by applying external stimulation (visual, auditory, or somatosensory). Users' intentions are typically decoded by reducing EEG signal noise, extracting neurophysiological features associated with the mental task or the external stimulus in use, and adapting a mathematical model by using those features [5]. Once a model has been calibrated, the BMI attempts to predict users' intentions so as to bring about activation in different ways, including neurorehabilitation, communication, neuro-prosthesis, domotic environments, or neuro-feedback [6]. In a nutshell, BMI has been the operationalization of Neuroscience research advances.

**Figure 1.** Structure of a brain-machine interface (BMI). The basic structure of a BMI comprises the user, a control task (endogenous or exogenous), data acquisition, signal processing, feature extraction, dimensional reduction, classification, activation (e.g., neuro-feedback, neuro-prosthesis, domotic environments), and feedback.
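The decoding chain just described (noise reduction, feature extraction, model calibration, prediction) can be sketched end to end; the FFT band-pass, log-variance features, and nearest-class-mean classifier below are common MI-BMI simplifications chosen for illustration, not this chapter's prescribed pipeline, and all signals are synthetic:

```python
# Toy sketch of an MI-BMI decoding chain: band-limit the EEG, extract
# log-variance features (a common proxy for mu/beta band power), and
# calibrate a minimal classifier (nearest class mean).
import numpy as np

def bandpass_fft(x, fs, lo=8.0, hi=30.0):
    """Crude FFT-mask band-pass keeping lo-hi Hz (here the mu/beta range)."""
    X = np.fft.rfft(x, axis=-1)
    freqs = np.fft.rfftfreq(x.shape[-1], d=1.0 / fs)
    X[..., (freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(X, n=x.shape[-1], axis=-1)

def log_var_features(trials, fs):
    """trials: (n_trials, n_channels, n_samples) -> (n_trials, n_channels)."""
    filtered = bandpass_fft(trials, fs)
    return np.log(np.var(filtered, axis=-1) + 1e-12)

def fit_class_means(features, labels):
    """Calibration step: one mean feature vector per class."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(features, means):
    """Prediction step: nearest class mean in feature space."""
    classes = sorted(means)
    d = np.stack([np.linalg.norm(features - means[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]
```

In a real system, the classifier would be calibrated on labeled training trials and then applied online to decode the user's intention from each new EEG segment.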

**1. Introduction**

machine interfaces (BMI).

Electroencephalography (EEG) has become a standard brain imaging tool due to its viability for recording the brain activity. Typically, EEG has been analyzed by quantifying and qualifying neural oscillations. In broad terms, "neural oscillations" can be defined as spatial, temporal, and spectral patterns that are associated with particular perceptual, cognitive, motor, and emotional processes [1]. A large and growing body of literature has investigated neural oscillations as EEG feature in all directions: from neurological to technological perspectives. In terms of Neurosciences, research into brain response has a long history. Brain responses have been traditionally studied on the basis of event-related experiments, where time-locked and phase-locked responses (i.e., event-related potentials) along with time-locked but not necessary phase-locked responses (i.e., event-related (de) synchronization) have been essentially analyzed [2, 3]. In the case of technology, research into neural interfaces has taken a leading role. A neural interface is a system that permits to reintegrate the sensory-motor loop, accessing directly to brain information. There are three main types of neural interfaces: (1) sensory interfaces, which artificially activate the sensory system; (2) cognitive interfaces, which try to re-establish the communication of the neural networks; and (3) motor interfaces, which translate neural oscillations into control commands for a device of interest [4]. In particular, motor interfaces are known as brain-

58 Cognitive and Computational Neuroscience - Principles, Algorithms and Applications

BMIs are technologies, still under development, that manipulate users' perception and cognition in order to decode their intentions from their neural oscillations and thereby produce some kind of activation. See **Figure 1**. Human perception and cognition can be manipulated actively through mental tasks, or reactively by applying external stimulation (visual, auditory, or somatosensory). Users' intentions are typically decoded by reducing EEG signal noise, extracting neurophysiological features associated with the mental task or the external stimulus in use, and fitting a mathematical model to those features [5]. Once the model has been calibrated, the BMI attempts to predict users' intentions so as to produce activation in different ways, including neurorehabilitation, communication, neuro-prosthesis, domotic environments, or neuro-feedback [6]. In a nutshell, the BMI is the operationalization of advances in Neuroscience research.
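The calibrate-then-predict loop just described (denoise, extract neurophysiological features, fit a model, then decode intentions) can be sketched in a few lines. This is a minimal illustration rather than the method of any particular BMI: the mu/beta band choices, the FFT band-power features, and the nearest-centroid classifier are all illustrative assumptions.

```python
import numpy as np

def bandpower(epoch, fs, band):
    """Mean spectral power of one EEG epoch (1-D array) within [lo, hi) Hz."""
    freqs = np.fft.rfftfreq(epoch.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2
    lo, hi = band
    return psd[(freqs >= lo) & (freqs < hi)].mean()

def features(epoch, fs):
    """Log band-power features; mu (8-12 Hz) and beta (13-30 Hz) are an
    illustrative choice of bands for motor-related activity."""
    return np.log([bandpower(epoch, fs, (8, 12)), bandpower(epoch, fs, (13, 30))])

def calibrate(epochs, labels, fs):
    """Fit a toy model: one mean feature vector (centroid) per intention."""
    X = np.array([features(e, fs) for e in epochs])
    y = np.array(labels)
    return {c: X[y == c].mean(axis=0) for c in set(labels)}

def predict(model, epoch, fs):
    """Decode the user's intention as the class with the closest centroid."""
    f = features(epoch, fs)
    return min(model, key=lambda c: np.linalg.norm(f - model[c]))
```

In an online system, `predict` would run on each incoming EEG window, and its output would drive the activation stage (e.g., a neurofeedback display or a prosthesis command).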

**Figure 1.** Structure of a brain-machine interface (BMI). The basic structure of a BMI comprises the user, a control task (endogenous or exogenous), data acquisition, signal processing, feature extraction, dimensionality reduction, classification, activation (e.g., neuro-feedback, neuro-prosthesis, domotic environments), and feedback.

Although BMI research was first undertaken in the 1960s, these systems are still laboratory prototypes because (1) it is unknown how EEG features are linked to perception and cognition (human side); (2) users have not been involved in system design, are usually not well instructed, and are not guided during user-system adaptation (human-machine interaction); and (3) computational decoding of EEG signals is not efficient enough (system side). In particular, active systems have been much more challenging to set up, since they depend on the user's mental effort rather than on the spontaneous neural responses that reactive systems exploit [7]. However, active systems can be controlled by "real" user intentions, since they do not depend on external stimulation as reactive systems do. Furthermore, mental tasks strengthen other neural mechanisms beyond those related to the mental task per se. A case in point is motor imagery (MI). MI refers to the generation and maintenance of imaginary movements. As MI is motor activity, mental tasks related to MI activate the central and peripheral nervous systems almost to the same extent that actual movements do [8–10]. This property of MI-related tasks increases the technical and clinical applicability of BMIs, and motivates limiting the scope of the present chapter to MI-based BMIs.

Over the past few years, the user has been identified as the main component of the MI-based BMI structure, yet one who has frequently been ignored in BMI design [11, 12]. According to [13, 14], there are three factors and three conditioners that directly influence user performance in an MI-based BMI. See **Figure 2**. On the one hand, *factors* have been categorized into (1) user state, (2) user traits, and (3) user conditions. A *user state* can be regarded as the result of the many physiological and psychological processes that regulate brain and body in an attempt to put the individual in an optimal condition to meet environmental demands [15]. User state includes emotions such as mood, and cognitions such as motivation, mastery, confidence, competence, self-efficacy, and fear. *User traits* refer to the behaviors, capabilities, and abilities that define a person, including personality (tension and self-reliance) and cognitive profile (attention span, attentional abilities, attitude toward work, memory span, visual-motor coordination, learning style, and abstractedness). *User conditions* are associated with demographic information such as age and gender, and with lifestyle, such as playing musical instruments, practicing sports, playing video games, typing, and full-body movement for work or entertainment. On the other hand, *conditioners* have been grouped into (1) user-technology relationship, (2) attention, and (3) spatial abilities. The *user-technology relationship* is the level of computer anxiety and sense of agency that a user possesses. Computer anxiety refers to the fear and tension produced by the use of technology, while sense of agency is the belief and feeling of being the entity who is causing an action. The *attention system* is responsible for maintaining a state of vigilance (alerting function), selecting information (orienting function), and managing mental resources, which are, moreover, limited (executive control). Finally, *spatial abilities* are considered the skill to generate, maintain, scan, and manipulate mental images. Spatial abilities can be of two types: small-scale and large-scale. Small-scale abilities refer to generating and transforming small shapes and easy-to-handle objects, whereas large-scale abilities refer to spatial navigation [14].


Characterizing Motor System to Improve Training Protocols Used in Brain-Machine Interfaces…

http://dx.doi.org/10.5772/intechopen.72667

61


**Figure 2.** Factors and conditioners that directly influence the user performance during a brain-machine communication. This model has been constructed on the basis of the theoretical framework presented in [13, 14].


As can be seen, evolutionary genetics, skill acquisition across the lifespan, and sensory-cognitive information and resources determine the production quality of motor mental images, which in turn determines the modulation level of the EEG signals used to decode user intentions. The modulation level of EEG signals due to MI activity determines whether good or poor brain-computer communication is established. On this basis, the present chapter gives an account of movement production (Section 2), provides an overview of neural oscillations associated with movement production (Section 3), and explores the ways in which humans produce movements (Section 4) so as to propose new training protocols based on how humans learn, predict, and act (Section 5). MI is a skill that must be acquired, and possibly an optimal way to fulfill this task is to follow the same rules humans follow when they interact with their environment.

### **2. Movement production**

#### **2.1. Generating movements**

Most human behaviors involve motor function, which implies the complex and coordinated participation of several anatomic structures. The brain integrates information from different sensory systems in order to construct specific internal representations of the environment. These representations allow the individual to organize, coordinate, and execute purposefully designed motor plans aimed at maintaining internal stability and achieving different specific goals.

The motor plan is conceived in the cerebral cortex. The primary motor cortex, or M1, is located in the frontal lobe of the brain, and its main role is to generate the neural impulses that control the performance of movement on the contralateral side of the body. This is possible because M1 has a particular somatotopic representation of the body parts, in which the parts with more complex movements—e.g. the hands—have larger representations. Moreover, the posterior parietal cortex, the premotor cortex, and the supplementary motor area also participate, using visuospatial information to plan complex movements and build the sensory guidance of each movement. These brain regions, commonly known as secondary motor cortices, send information to both the primary motor cortex and brainstem motor structures in order to control motor performance. They accomplish this goal by acting on corticofugal neurons that give rise to corticospinal projections (the corticospinal tract), which ultimately end at striated muscle [16, 17].

On the other hand, the "basal ganglia" (striatum and globus pallidus) and related nuclei (subthalamic nucleus, substantia nigra, and pedunculopontine nucleus) constitute a group of subcortical nuclei primarily engaged in motor control, while also playing important roles in motor learning, executive functions, behavior, and emotion. These complementary pathways control posture and balance and coarse movements of the proximal muscles, and coordinate head, neck, and eye movements in response to visual targets. See **Figure 3** for illustrative purposes [18].


**Figure 3.** Schematic and simplified representation of the dynamic underlying a goal-directed movement, highlighting the most relevant neural substrates involved in this complex process. The term "switching cost" refers to the cost of adjusting the mental control setting to novel demands.

Despite the concurrence of multiple parallel loops and re-entering circuits that functionally engage complex temporary associations across an extended repertoire of neural structures, a regular movement is effortlessly carried out by healthy adults, thanks to the continuous converging stream of visual, somatosensory, and postural information to the cerebral systems underpinning motor acts. In fact, the motor system is hierarchically organized such that the primary motor cortex and several premotor areas crucial for planning and coordinating movement sequences are directly connected with brain stem and spinal cord structures via neural projections. These connections allow the upper brain structures to dynamically control the peripheral muscles, whereas several feedback circuits provide useful ascending information that serves to maintain or adjust the motor commands if the situation demands it.


#### **2.2. Categorizing the movements**

In general, movements have been categorized as a) *reflexive*: involuntary coordinated patterns of muscle contraction and relaxation, predominantly based on spinal cord mechanisms; b) *rhythmic* (e.g. quadrupedal locomotion): repetitive motor patterns involving spinal cord and brain stem circuits; and c) *voluntary*: goal-directed mechanisms involving extended motor cortical areas, brain stem, cerebellum, basal ganglia, and the pyramidal and extra-pyramidal pathways, among many others [19].

Learning refines the motor programs underlying voluntary movements. Several studies have shown significant changes in the anatomic maps of motor programs through learning, usually referred to as "implicit," a term used to describe changes that cannot be explicitly captured in general statements (e.g. trying to verbally explain how one learned to ride a bicycle).

#### **2.3. Motor intention**

Over the last few years, abundant empirical evidence has accumulated showing that goal-directed and non-goal-directed movements have different neural correlates. This simple distinction has enormous implications for the understanding of behavioral actions, and for neurorehabilitation in general.

An essential challenge in the area of perception and motor control has been to identify the sensory-motor and cognitive processes associated with accurate goal-directed movements. In this context, it is important to note that motor behaviors are based on strategies developed over a lifetime of interacting with objects in the environment, and that these strategies are not always conscious. They almost instantaneously weigh variables such as body posture, cognitive evaluation, emotional attributes, the position of the target at movement initiation, the trajectory and speed of the movement, and gravity effects, among several others.

Meaningful goal-directed movements have been studied in several contexts and sensory pathways, using a wide variety of experimental tasks. However, a clear timeline of the brain's functional engagement in support of these movements has yet to be delineated.

Briefly, information from the spatial senses converges within the parietal cortex, where it is integrated and transformed into motor-relevant reference coordinates. This information is sent to the premotor cortex and integrated with information from prefrontal cortex about action goals and contexts before final motor output is sent to primary motor cortex, transmitted via the corticospinal tracts, and then modulated by the cerebellum and basal ganglia [20, 21].

The premotor regions have an important role to play in motor planning and in outlining the motor sequence of a forthcoming goal-directed action. In this sense, decreased activity in the parietal operculum has been correlated with activity in the lateral premotor cortex, the medial cingulate motor cortex, and the supplementary motor cortices. In addition, the left dorsolateral prefrontal cortex, the anterior cingulate motor cortex and, bilaterally, the insular cortices are also functionally involved [22, 23].

The cerebellum is another important region that is thought to represent the timing of our goal-directed actions. The neocerebellum has been found to relate with the control and planning of voluntary movements while the intermediate cerebellum seems to be involved in regulating the quality of the movement. It has been argued that the cerebellum is a key predictive component in the conceptualization of the internal models of motor control, probably due to its extensive projections, through the thalamus, to the premotor and prefrontal cortices [24].

#### **2.4. Imagining a movement**


Mental imagery is a multimodal construct supporting the formation and maintenance of inner representations of previously perceived images or feelings, or of foreseen upcoming events, in the absence of external sensory input. Within this framework, MI refers to the dynamic state or mental representation of a given motor action that is rehearsed in working memory without any explicit motor act [25].

Ample empirical evidence indicates that imagined stimuli are treated in the same way as direct sensory stimulation, because they engage in multisensory interactions with stimuli that are directly perceived. In this regard, during real and imagined movements, brain functional activity seems to converge on related neural networks. It is not surprising, then, that MI draws on the same neural circuits used in actual perception and motor control, involving networks associated with memory and emotion.

Depending on the MI task used, multiple neuroimaging studies have revealed the functional participation of the motor, premotor, and supplementary motor cortices, which are consistently activated during motor imagery and are also major components of the interconnected network for motor intention. In addition, other structures play an important functional role in motor imagery, as is the case with the cerebellum, the basal ganglia, the superior and inferior parietal lobules, and the precuneus. Therefore, several authors concur that, when performing MI, the main differences from actual motor performance probably lie in the inhibition of the motor commands that would trigger movement [26].

#### **2.5. Observing a movement**

MI and movement observation (MO) have traditionally been studied as separate processes that activate the motor system without any actual motor execution. However, motor and perceptual action representations are so closely interrelated that perceiving another person's action triggers representations comparable to those engaged when performing the action oneself. This effect has been called "motor resonance."


In terms of neural substrates, evidence indicates that observing a movement produces significant activation in the caudal supplementary motor area, bilateral cerebellum, and precuneus, also involving the basal ganglia, the inferior parietal cortex, the ventral premotor cortex, and the left insula. On this subject, cortico-motor activity is significantly increased when MI and MO are combined, compared to either MI or MO alone. This has led to the theory that they are concurrent processes, in which action representation might be implemented by the dynamic interaction between perceptual and executive brain networks, opening interesting possibilities for practitioners in motor learning and rehabilitation settings [27–29].

#### **2.6. Motor disabilities**

Motor impairment, or physical disability, is a common outcome of a wide range of diseases and health conditions, affecting almost one in eight adults in America. In broad terms, disability is understood as the inability to engage in any substantial gainful activity in view of confirmable physical or mental impairment(s) lasting not less than 12 months. Physical disability encompasses limitations in individual physical functioning, mobility, dexterity, or stamina. Such disabilities are rarely confined to a particular disturbance of motor capabilities; they also impact the psychological, social, and economic well-being, and the quality of life, of the affected individuals [30].

Motor disabilities can be broadly divided into two major groups, according to their inflicting conditions: a) traumatic injuries and b) congenital conditions and diseases.

#### *2.6.1. Traumatic injuries*

The limitations subsequent to traumatic injuries are not confined to motor impairments. Neurological sequelae can also involve cognitive impairments or sensory disabilities such as post-neurotrauma deafness and blindness, limb deformation or amputation, and paralysis subsequent to spinal cord injury, which can affect both arms and legs (quadriplegia), both legs (paraplegia), or a more unusual combination of limbs.

#### *2.6.2. Congenital conditions and diseases*

Several congenital conditions, such as cerebral palsy and muscular dystrophy, can lead to different types of motor disabilities. In addition, several degenerative nerve diseases (e.g. Parkinson's disease, multiple sclerosis, and amyotrophic lateral sclerosis/Lou Gehrig's disease) and other neurological conditions (e.g. stroke, central nervous system vascular accidents, and peripheral neuropathies) can also produce different degrees of motor disability, even including an extreme form of motor impairment termed "locked-in syndrome," in which voluntary control of almost all muscles is lost while normal cognitive functioning is retained.

### **3. Neural oscillations in movement production and beyond**

#### **3.1. Overview**
Sensory stimulation, cognitive activities, and motor behavior result in amplitude suppression or amplitude enhancement of the EEG signals, depending on the degree of synchronization of the neural oscillations. This degree of synchronization is reflected in various frequency bands. Moreover, the synchronization mechanism of the neural oscillations does not only reflect the processing of physical and psychological events; it also appears prior to the event occurrence. The association of this EEG modulation with specific events is known as event-related oscillation (ERO). EROs can be of two types: event-related synchronization (ERS) and event-related desynchronization (ERD). If EEG rhythms increase their synchrony, and thus their amplitude, an ERS arises; otherwise, an ERD appears. ERS reflects awake-restful states, inhibition processes, rebound events, attention-related demands (e.g., attentive expectation of relevant stimulus omission, working memory activation, and episodic short-term memory tasks), and cognitive-mnemonic processes. Conversely, ERD is involved in the processing of sensory and cognitive information, and the production of motor behavior [31, 32].

EROs are characterized by four parameters: spatial location, magnitude, latency, and reactive frequency band. Among these parameters, frequency is the key to understanding how humans interact with their environment. Historically, neural oscillations have been studied in five frequency bands: delta, theta, alpha, beta, and gamma [33].

#### *3.1.1. Delta band oscillations*

Delta band oscillations (below 4 Hz) are indicative of deep sleep in adults and appear during long attention tasks [34]. They have also been found to carry information pertaining to different movements around a joint, such as extension and flexion of the wrist [35].

#### *3.1.2. Theta band oscillations*

Theta band oscillations resonate in the 4–8 Hz frequency band and emanate from the frontal midline during audio-visual information encoding, attention demands, memory retrieval, and cognitive load. Moreover, these oscillations are enhanced after practice on the cognitive tasks at hand. They are more prevalent when the individual is focused and relaxed, and prolonged activity is related to selective attention [33, 36]. The upper theta band (6–8 Hz) is also known as the lower-1 alpha band and generally reflects levels of alertness [31].

At a neurophysiological level, anticipating sensory events resets the phase of slow, delta-theta (2–8 Hz) activity before the stimulus occurs [32].

#### *3.1.3. Alpha band oscillations*

Alpha band rhythms oscillate at frequencies between 8 and 12 Hz and may come from frontal, temporal, parietal, and occipital regions. Overall, enhancement and suppression of alpha band rhythms are respectively associated with top-down and bottom-up information processing [37].


Sensory stimulation, cognitive activities, and motor behavior result in amplitude suppression or amplitude enhancement of EEG signals, depending on the degree of synchronization of the neural oscillations. This degree of synchronization is reflected in various frequency bands. Moreover, the synchronization mechanism of the neural oscillations does not only reflect the processing of physical and psychological events; it also appears prior to the event occurrence. The association of this EEG modulation with specific events is known as event-related oscillation (ERO). EROs can be of two types: event-related synchronization (ERS) and event-related desynchronization (ERD). If EEG rhythms increase their synchrony, and thus their amplitude, an ERS arises; otherwise, an ERD appears. ERS reflects awake-restful states, inhibition processes, rebound events, attention-related demands (e.g., attentive expectation of relevant stimulus omission, working memory activation, and episodic short-term memory tasks), and cognitive-mnemonic processes. Conversely, ERD is involved in the processing of sensory and cognitive information and in the production of motor behavior [31, 32].
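The ERD/ERS magnitudes discussed here are conventionally quantified as the percentage change of band power with respect to a reference interval. The following numpy sketch illustrates that classic computation; the band-filtered input, the baseline window, and the simulated amplitude drop are illustrative assumptions, not values taken from the text:

```python
import numpy as np

def erd_ers_percent(epochs, baseline):
    """Percentage band-power change relative to a reference interval.

    epochs   : (n_trials, n_samples) band-filtered EEG from one channel
    baseline : slice selecting the reference samples within each epoch

    Negative values indicate ERD (power decrease, desynchronization);
    positive values indicate ERS (power increase, synchronization).
    """
    power = np.mean(epochs ** 2, axis=0)   # trial-averaged instantaneous power
    ref = np.mean(power[baseline])         # mean power over the reference interval
    return 100.0 * (power - ref) / ref

# Toy data: amplitude halves after sample 500, so power drops by ~75% (an ERD)
rng = np.random.default_rng(0)
trials = rng.standard_normal((50, 1000))
trials[:, 500:] *= 0.5
curve = erd_ers_percent(trials, baseline=slice(0, 500))
```

By construction the curve averages to zero over the baseline and settles near -75% afterward, mimicking a desynchronization.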

EROs are characterized by four parameters: spatial location, magnitude, latency, and reactive frequency band. Among these parameters, frequency is the key to understanding how humans interact with their environment. Historically, neural oscillations have been studied in five frequency bands: delta, theta, alpha, beta, and gamma [33].
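As a concrete companion to this taxonomy, a raw trace can be split into the five bands with a simple FFT mask. The band edges below are conventional approximations (exact limits vary across studies and are not fixed by the text):

```python
import numpy as np

# Approximate, conventional band edges in Hz (assumed for illustration)
BANDS = {"delta": (0.5, 4.0), "theta": (4.0, 8.0), "alpha": (8.0, 12.0),
         "beta": (12.0, 30.0), "gamma": (30.0, 45.0)}

def band_decompose(signal, fs):
    """Split a 1-D EEG trace into the five canonical bands via FFT masking."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    spectrum = np.fft.rfft(signal)
    return {name: np.fft.irfft(spectrum * ((freqs >= lo) & (freqs < hi)),
                               n=signal.size)
            for name, (lo, hi) in BANDS.items()}

# Sanity check: a pure 10 Hz sinusoid should land almost entirely in alpha
fs = 250
t = np.arange(4 * fs) / fs          # 4 s of data sampled at 250 Hz
x = np.sin(2 * np.pi * 10.0 * t)
parts = band_decompose(x, fs)
```

In practice FIR/IIR filters are preferred over hard FFT masks, which ring on non-stationary data; the mask simply keeps the sketch dependency-free.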

#### *3.1.1. Delta band oscillations*

Delta band oscillations (below 4 Hz) are indicative of deep sleep in adults and appear during long attention tasks [34]. They have also been found to carry information pertaining to different movements around a joint, such as extension and flexion of the wrist [35].

#### *3.1.2. Theta band oscillations*

Theta band oscillations resonate in the 4–8 Hz frequency band and emanate from the frontal midline during audio-visual information encoding, attention demands, memory retrieval, and cognitive load. Moreover, these oscillations are enhanced after practice on the cognitive tasks at hand. They are more prevalent when the individual is focused and relaxed, and prolonged activity is related to selective attention [33, 36]. The upper theta band (6–8 Hz) is also known as the lower-1 alpha band and generally reflects levels of alertness [31].

At a neurophysiological level, anticipating sensory events resets the phase of slow, delta-theta (2–8 Hz) activity before the stimulus occurs [32].

#### *3.1.3. Alpha band oscillations*

Alpha band rhythms oscillate at frequencies between 8 and 12 Hz and may come from frontal, temporal, parietal, and occipital regions. Overall, enhancement and suppression of alpha band rhythms are respectively associated with top-down and bottom-up information processing [37].

According to their functional roles, alpha band rhythms can be categorized into mu, occipital, and tau rhythms. Mu rhythms (or sensory-motor rhythms) arise from the sensory-motor cortex at both bandwidths, 8–12 and 16–24 Hz. Enhancement and suppression of mu rhythms are due to sensory stimulation, motor activity, cognitive processes, and emotional influences [31]. In particular, synchronization of mu rhythms increases in line with attention demands, sensory encoding, inhibition processes, and rebound events. On the other hand, suppression of mu rhythms responds to the complexity, level of difficulty, and relevance of the tasks in progress. The nature of the task is reflected in the bandwidth reactivity. For example, general tasks associated with arousal, attention, effort, and expectancy produce lower alpha (8–10 Hz) desynchronization widespread over the whole scalp. In contrast, specific tasks related to information processing, selective attention, and motor activity elicit topographically restricted upper alpha (10–12 Hz) desynchronization [31, 38, 39]. Note that "information processing" here refers to feature extraction, stimulus identification, and response preparation.

Occipital alpha rhythms respond to the mental effort expended on processing relevant visual stimuli. Maximum suppression of occipital alpha rhythms is expected between 200 and 300 ms after stimulus onset. The ERD effect then moves toward parietal regions, reaching a longer duration than over occipital regions, particularly within the lower alpha frequency band [33].

Tau alpha rhythms (or mid-temporal third rhythms) have been associated with auditory stimulation and, as expected, originate over the temporal lobe. However, these rhythms are hardly recorded over the scalp owing to anatomical limitations [31].


Characterizing Motor System to Improve Training Protocols Used in Brain-Machine Interfaces…

http://dx.doi.org/10.5772/intechopen.72667



#### *3.1.4. Beta band oscillations*

Beta band oscillations function as a resetting mechanism, which permits neural networks to work repeatedly. These oscillations also play an important role in the top-down process that takes place during predictions [32]. According to their topographical origin, they can be identified as central, frontal, and occipital beta band oscillations. *Central beta band oscillations* are related to cognitive-motor tasks and to relaxation states preceded by strong activations. *Frontal beta band oscillations* occur mainly around 19 Hz, are post-stimulus events, and are associated with stimulus assessment, level of difficulty, and decision making. Finally, *occipital beta band rhythms* occur in response to visual stimulation followed by mental relaxation [33].

#### *3.1.5. Gamma band oscillations*

Gamma band rhythms oscillate near 30 Hz during linguistic processing of meaningful words and near 40 Hz during sensory encoding, perceptual-cognitive functions, and motor behaviors. In sensory-motor tasks, the 40 Hz rhythms are phase-locked to the stimulus, are short-lasting, and appear 100 ms post-stimulus. In perceptual-cognitive tasks, by contrast, they are induced, late-appearing oscillations that may achieve maximum synchrony over the fronto-central region [36, 40, 41].
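The phase-locked versus induced distinction drawn above is commonly quantified with inter-trial phase coherence (ITC): phase-locked (evoked) activity keeps the same phase across stimulus-locked trials, whereas induced activity does not. A minimal sketch under those assumptions (single channel, single analysis frequency, toy signals):

```python
import numpy as np

def itc_at(epochs, fs, freq):
    """Inter-trial phase coherence at one frequency.

    epochs : (n_trials, n_samples) array time-locked to stimulus onset.
    Returns a value in [0, 1]: near 1 when the oscillation is phase-locked
    to the stimulus across trials (evoked), near 0 when its phase varies
    from trial to trial (induced).
    """
    t = np.arange(epochs.shape[1]) / fs
    coeff = epochs @ np.exp(-2j * np.pi * freq * t)  # one-bin DFT per trial
    phasors = coeff / np.abs(coeff)                  # keep phase, drop amplitude
    return float(np.abs(np.mean(phasors)))

rng = np.random.default_rng(1)
fs, n = 250, 250
t = np.arange(n) / fs
locked = np.array([np.sin(2 * np.pi * 40.0 * t) for _ in range(60)])
induced = np.array([np.sin(2 * np.pi * 40.0 * t + rng.uniform(0, 2 * np.pi))
                    for _ in range(60)])
itc_locked = itc_at(locked, fs, 40.0)    # identical phase across trials
itc_induced = itc_at(induced, fs, 40.0)  # random phase across trials
```

The phase-locked 40 Hz signal yields an ITC near 1, while the induced version, identical in power but with a random phase per trial, yields a value near 0.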

According to [32], while predicting *when* predominantly involves low-frequency oscillations, predicting *what* points to a combined role of gamma and beta oscillations.

#### **3.2. EROs in voluntary movements**


Voluntary movements are produced in three phases: planning, execution, and recovery. During the three phases, voluntary movements provoke EROs within alpha, beta, and gamma bands. These EROs are also generated in the course of imaginary movements to some extent. In this section, the generation of EROs during voluntary movement is first explained, and then how these EROs are reproduced during MI is described.

#### *3.2.1. ERD within the upper alpha band*

Voluntary movements result in a somato-topically specific and topographically restricted desynchronization of the upper alpha band over the sensory-motor cortical area. This desynchronization starts around 2 seconds prior to movement onset over the contralateral side and becomes bilaterally symmetrical immediately before execution of movement. This ERD shows a slow recovery in the period of 2 or 3 seconds following the movement [42].

#### *3.2.2. ERS within the upper alpha band*

During movement preparation and execution, desynchronization of the upper alpha band is often accompanied by ERS over occipital areas. This ERS can also appear after movement over areas that displayed ERD before. It has been hypothesized that this ERS is produced by deactivated cortical areas. The ERD/ERS effect within the upper alpha band is known as focal ERD/surround ERS [43].

#### *3.2.3. EROs within the Rolandic beta band*

Desynchronization of the Rolandic beta band starts around 1 second prior to movement onset over the contralateral sensorimotor area, becoming bilaterally symmetrical during movement execution. This beta ERD recovers in less than 1 second, much faster than upper alpha ERD. After the beta ERD recovery, an ERS around 20 Hz appears. Note that this beta ERS occurs while the upper alpha ERD exists. This post-movement beta ERS is a relatively robust phenomenon because it has been found after finger, hand, arm, and foot movements. However, it is larger in hand movements. The beta ERS is dominant over the contralateral primary sensorimotor area and has a maximum around 1 second post-movement [43].

#### *3.2.4. ERS within the gamma band*

Voluntary movements also provoke ERS within the gamma band. Gamma reactivity is predominantly generated over the primary sensorimotor area. The location of gamma ERS varies with the type of movement, i.e., it is somato-topically distributed. Gamma ERS appears as a sharp power increase around 36 Hz shortly before movement onset; however, this is rarely found in the human EEG. Gamma ERS also shows a maximum around 40 Hz during execution of movement. This ERS is considered a stage of active information processing [44].

#### **3.3. EROs in imaginary movements**

As MI relies on the same mechanism as actual movements, it is not surprising to observe similar EROs during imaginary movements. Specifically, upper alpha ERD during MI is very similar to upper alpha ERD observed during the planning phase of motor executions, i.e., it is locally restricted to the contralateral sensorimotor areas. In both cases, ERD may reflect a type of readiness or pre-setting of neural networks in sensory-motor areas. Similarly, Rolandic beta ERD appears during MI as it does during movement preparation. After the MI activity, the Rolandic beta ERS found post-movement over the pre-central region of the brain is reflected as well [43, 44]. These EROs are illustrated in **Figure 4**.


#### **3.4. Neural oscillations in BMI**

As mentioned above, a BMI is an emerging technology that aims to achieve interaction between humans and their environment by making use of their neural oscillations. In order to establish brain-machine communication, systems can employ MI-related mental tasks to modulate the user's neural oscillations and thus extract rich information to generate an action. As discussed in this section, MI activity produces well-established ERD/ERS patterns, which have been used to a moderate extent to control BMIs. However, it is also well known that MI-based BMIs require long training sessions and are not suitable for all people [45–47].

From **Figure 2**, it can be clearly seen that the level of synchronization of neural oscillations depends on a wide variety of factors and conditioners. Human-environment interaction through a BMI engages a large number of sensory, cognitive, and motor processes, which modulate neural oscillations beyond the MI-related mental tasks. Typically, BMI users are rigorously trained to master MI skills, so as to magnify the ERD/ERS effects on the scalp, regardless of evolutionary genetics, skill acquisition along the lifespan, and the sensory-cognitive information and resources available at a given time (**Figure 2**). BMI is not only associated with the analysis of EEG signals prior to, during, and after MI activity [48–50]; it is also related to the user's previous knowledge, current state, and profile, and to environmental conditions: factors that determine the level of synchrony of neural oscillations as well.
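To make the ERD/ERS-to-command mapping concrete, the sketch below decodes two classes (rest vs. MI) from the log band power of a single channel with a Fisher discriminant. This is a deliberately minimal illustration, not the pipeline of any particular BMI: real systems add spatial filtering (e.g., CSP), multiple channels, and cross-validation, and the simulated ERD (halved amplitude during MI) is an assumption chosen so the classes separate:

```python
import numpy as np

def log_bandpower(epochs):
    """Log-variance of band-filtered epochs: one power feature per trial."""
    return np.log(np.var(epochs, axis=1))

def fit_lda(X, y):
    """Two-class Fisher discriminant: w = pooled_cov^-1 (mean1 - mean0)."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    pooled = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    pooled = np.atleast_2d(pooled / (len(X0) + len(X1) - 2))
    w = np.linalg.solve(pooled, m1 - m0)
    b = -float(w @ (m0 + m1)) / 2.0
    return w, b

def predict(X, w, b):
    return (X @ w + b > 0).astype(int)

# Simulated mu-band epochs: MI trials carry an ERD (halved amplitude)
rng = np.random.default_rng(2)
rest = rng.standard_normal((40, 500))        # class 0: rest, higher mu power
mi = 0.5 * rng.standard_normal((40, 500))    # class 1: motor imagery, ERD
X = log_bandpower(np.vstack([rest, mi]))[:, None]
y = np.array([0] * 40 + [1] * 40)
w, b = fit_lda(X, y)
acc = float(np.mean(predict(X, w, b) == y))
```

Training accuracy on this toy set is near perfect because the simulated ERD is large; real MI data are far noisier, which is one reason for the long training sessions mentioned above.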

**Figure 4.** Neural oscillations in MI activity. Similar to voluntary movements, MI is produced in three phases: planning, execution, and recovery. During the three phases, MI modulates neural oscillations in alpha, beta, and gamma bands.

### **4. Sensory-motor system in motor skill acquisition**


Movement is the means whereby individuals interact with other individuals and their environment. Most motor information is gathered over the human lifetime, although a few motor skills are genetically and evolutionarily inherited. In general, movements are skills acquired by learning, and they are the result of transforming sensory and cognitive inputs into motor outputs. As the motor system is a complex mechanism trained over a lifetime, and an MI-based BMI attempts to decode motor intentions from neural oscillations in order to put a device into action, motor mechanisms should be considered when prototyping BMI systems. Understanding motor processing and control, and thereafter including such motor mechanisms in the BMI architecture, could help solve BMI drawbacks at the source. On this basis, the main issues addressed in this section are: (1) how humans execute movements, (2) the relevance of somatosensory information in movement processing and control, and (3) the role of MI in the sensory and motor systems.

#### **4.1. Modular selection and identification for control (MOSAIC) model**

Movements are skills that humans need to acquire over their lifetime through a trial-and-error process, which depends on the reduction of kinematic (geometry and speed) and dynamic (force) errors detected through somatosensory channels, primarily visual and proprioceptive ones. Eventually, movements become habitual behaviors.

To produce a movement, prediction (forward model) turns motor intentions into expected sensory-cognitive consequences, whereas control (inverse model) turns desired consequences into motor commands. This model is known as the modular selection and identification for control (MOSAIC) model. The transformation from sensory-cognitive into motor signals according to the MOSAIC model is as follows. Firstly, motor behavior patterns are predicted according to previously acquired knowledge (memory), and simultaneously, sensory predictions are made by scanning the working environment (context). Secondly, motor behavior patterns and sensory predictions are used to make a motor prediction. Thirdly, those predictions are turned into movements, thereby modifying the working environment. Finally, environmental changes cause sensory feedback used to adjust motor behavior [51, 52].

The motor system depends on several forward models that run simultaneously. Each of those forward models is paired with a corresponding inverse model, as illustrated in **Figure 5A**. Note that the controller of the inverse model weights its output in accordance with the match between sensory feedback and motor prediction. In this way, every forward-inverse model pair contributes to motor execution in accordance with the environmental demands [53].
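The responsibility weighting just described can be sketched as one step of a toy MOSAIC controller: each forward model predicts the sensory feedback, a softmax over prediction errors assigns responsibilities, and the paired inverse-model commands are blended accordingly. The scalar state, linear gains, and `beta` temperature are illustrative assumptions, not part of the published model's parameterization:

```python
import numpy as np

def mosaic_step(x, x_next, forward_gains, inverse_gains, beta=5.0):
    """One responsibility-weighted step of a toy MOSAIC controller.

    Each module i predicts the sensory feedback as forward_gains[i] * x.
    Modules whose prediction best matches the observed feedback x_next get
    the highest responsibility (softmax over prediction errors), and the
    blended motor command is the responsibility-weighted sum of the paired
    inverse-model outputs inverse_gains[i] * x_next.
    """
    preds = forward_gains * x                 # forward-model predictions
    logits = -beta * (preds - x_next) ** 2    # small error -> high score
    resp = np.exp(logits - logits.max())
    resp /= resp.sum()                        # softmax responsibilities
    command = float(np.sum(resp * inverse_gains * x_next))
    return resp, command

# Two candidate contexts, e.g. manipulating a light vs. a heavy object
forward_gains = np.array([1.0, 2.0])   # module dynamics (illustrative)
inverse_gains = np.array([1.0, 0.5])   # paired controllers (illustrative)
resp, cmd = mosaic_step(x=1.0, x_next=2.0,
                        forward_gains=forward_gains,
                        inverse_gains=inverse_gains)
# The second module predicted the observed feedback exactly, so it dominates
```

Because the second forward model's prediction matches the feedback, its responsibility approaches 1 and the blended command is driven almost entirely by its paired inverse model, mirroring the context-dependent weighting described above.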

#### **4.2. Sensory feedback in the motor system**

Sensory feedback, the result of environmental changes caused by motor execution, is not only compared with the motor prediction to readjust motor execution. Sensory information collected from the working environment also leads to perceptual learning. From **Figure 5A**, it can be seen that sensory information feeds forward into the forward model. This means that sensory

(encompassed under the user condition category) influences directly user performance. To date, user ability to produce motor mental images has been somehow quantified psychologically and neuro-physiologically to evaluate the user potential to control a MI-based BMI [13]. However, the role of context, along with sensory prediction, has been overlooked. Based on **Figure 5B**, the production of imaginary movements depends on both motor repertoires built along lifetime and environmental conditions. Furthermore, if a MI-based BMI attempts to put into action MI tasks, the resulting environmental changes will necessarily produce sensory feedback that must be collected, and then, provided to the forward model in order to readjust MI activity. That is, MI is a mental rehearsal that proceeds from forward motor model, which is intended to be effected through BMI, and which should be readjusted by sensory feedback. Following this line of though, it is proposed to restructure current training paradigms used to train BMI users on the basis of forward model, sensory feedback, and perceptual learning.

Characterizing Motor System to Improve Training Protocols Used in Brain-Machine Interfaces…

http://dx.doi.org/10.5772/intechopen.72667

71

**5. Toward training paradigms based on how human learn, predict,** 

**5.1. How to design new paradigms based on the sensory-motor system functioning** 


**Figure 5.** Motor system according to the modular selection and identification for control (MOSAIC) model, and sensory feedback. (A) System diagram to execute actual movements. (B) System diagram to achieve interaction with the environment through MI-based BMI by both sensory and motor systems.

feedback is used to make new sensory predictions, and it influences motor behaviors. As learning is a process that involves changes in behavior arising from interaction with the environment [52], sensory feedback not only confirms or contradicts motor predictions but also promotes perceptual learning.

Recent neuroimaging evidence suggests that perceptual learning promotes neural plasticity over sensory-motor cortices and increases connectivity between these brain areas. Furthermore, the effect of perceptual learning is durable [54, 55]. This means that somatosensory function plays a vital role in motor (re)learning. As motor skill (re)acquisition is determined by both the sensory and motor systems, MI-based BMIs should be designed in terms of both systems. At present, only the motor system is considered in the BMI architecture. However, if sensory feedback is properly given, perceptual learning will be gained, which in turn will achieve the acquisition of MI skills.

#### **4.3. Motor imagery as a result of sensory and motor systems**

Up to now, MI as a control task in BMIs has been seen as a skill that must be acquired, but neither user conditions nor controlled learning conditions have been taken into account. Only recently, given that MI-based control cannot be achieved by every user at all times [7, 11], have those two conditioners started to be investigated [14].

Turning now to **Figure 5B**, it can be seen that MI is managed by forward models, a fact that has been shown in previous neurophysiological studies [56, 57]. This indicates that MI depends on sensory predictions and motor behavior patterns, proceeding respectively from context scanning and previous knowledge. As illustrated in **Figure 2**, the previous knowledge of users (encompassed under the user-condition category) directly influences user performance. To date, the user's ability to produce motor mental images has been quantified psychologically and neurophysiologically to evaluate the user's potential to control an MI-based BMI [13]. However, the role of context, along with sensory prediction, has been overlooked. Based on **Figure 5B**, the production of imaginary movements depends both on motor repertoires built over a lifetime and on environmental conditions. Furthermore, if an MI-based BMI attempts to put MI tasks into action, the resulting environmental changes will necessarily produce sensory feedback that must be collected and then provided to the forward model in order to readjust MI activity. That is, MI is a mental rehearsal that proceeds from the forward motor model, is intended to be effected through the BMI, and should be readjusted by sensory feedback. Following this line of thought, it is proposed to restructure the current training paradigms used to train BMI users on the basis of the forward model, sensory feedback, and perceptual learning.
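The predict-compare-readjust loop just described can be sketched in code. The linear forward model, the learning rate, and the three-dimensional command space below are illustrative assumptions rather than part of the MOSAIC formulation; the sketch only shows how a prediction error derived from sensory feedback can readjust subsequent predictions.

```python
import numpy as np

# Minimal sketch of a forward-model loop: predict the sensory consequence of a
# (real or imagined) movement, compare it with the sensory feedback actually
# collected, and use the prediction error to readjust future predictions.
# The linear model and learning rate are illustrative assumptions.

rng = np.random.default_rng(0)

class ForwardModel:
    def __init__(self, n_dims, lr=0.1):
        self.W = np.zeros((n_dims, n_dims))  # motor command -> predicted sensation
        self.lr = lr

    def predict(self, motor_command):
        return self.W @ motor_command

    def update(self, motor_command, sensory_feedback):
        # The prediction error drives the readjustment (an LMS-style correction).
        error = sensory_feedback - self.predict(motor_command)
        self.W += self.lr * np.outer(error, motor_command)
        return np.linalg.norm(error)

# The "environment" turns motor commands into sensations by a fixed rule that
# the forward model internalizes through repeated interaction.
true_map = rng.standard_normal((3, 3))
model = ForwardModel(n_dims=3)

for trial in range(200):
    command = rng.standard_normal(3)   # selected (imaginary) movement
    feedback = true_map @ command      # sensory consequence of the movement
    err = model.update(command, feedback)

print(f"final prediction error: {err:.4f}")  # small once the model has adapted
```

The point of the sketch is the loop structure, not the learning rule: without the feedback term, the model would never readjust its predictions, which is exactly the situation of current BMIs that provide no proper sensory feedback.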

### **5. Toward training paradigms based on how humans learn, predict, and act**


The interest in MI-based BMIs has been growing exponentially. Although the idea of direct brain-machine communication is very attractive on its own, the use of BMIs as a tool in neuroscience to investigate sensorimotor transformations of the nervous system has magnified BMI research [58]. So far, the major issue under debate in BMI research has been system performance. As discussed herein, user conditioners and factors are closely associated with system performance (**Figure 2**), and in turn, all those conditioners and factors are related to the acquisition of MI skills. If imaginary movements became automatic, brain-machine communication would be natural and efficient. It is hypothesized that the best way to acquire MI skills is to follow the same rules humans obey to move around the world. Hereunder, new training paradigms based on the sensory-motor system are proposed.

#### **5.1. How to design new paradigms based on the sensory-motor system functioning to achieve MI skill acquisition?**

Similar to actual movements, imaginary movements are predicted in line with motor repertoires built over a lifetime and with sensory predictions made through context scanning (**Figure 5B**). Therefore, the first step in designing a training paradigm is to create a favorable and familiar environment, which provides at first glance sufficient sensory information about which imaginary movements are needed to interact with that environment. This first step refers to the creation of an *ecological environment*. The second step is to identify the necessary imaginary movements in line with the nature of the working environment. Note that the selected imaginary movements are used to modulate EEG signals, and thus to gain control of the system. For this reason, the selected imaginary movements are known as *control tasks*. The third step is to modify the working environment as if the imaginary movements were actually being executed. This achieves consistency between what is imagined and how that mental image is effectuated. Frequently, the set of imaginary movements that the user performs to establish brain-machine communication is not strongly related to the control panel of the system. For example, imaginary movements of the mouth, foot, left hand, and right hand are often mapped, respectively, to move forward, move backward, turn left, and turn right. This kind of mapping causes confusion and hinders user-system adaptation, since not only is MI skill acquisition necessary, but also the correlation between mental rehearsal and the control panel. The consistency between imaginary movements and control mechanisms is referred to as *transparent mapping*. Finally, the last step is to provide *sensory feedback* to obtain perceptual information about the environmental changes effected by the MI activity in use.
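The contrast between a confusing mapping and a transparent one can be made concrete with a small sketch; all task and command identifiers are hypothetical names chosen for illustration.

```python
# Sketch contrasting the two kinds of MI-task-to-command mapping discussed
# above. All identifiers are hypothetical names chosen for illustration.

# Frequently used, confusing mapping: arbitrary body parts drive navigation.
confusing_mapping = {
    "imagine_mouth": "move_forward",
    "imagine_foot": "move_backward",
    "imagine_left_hand": "turn_left",
    "imagine_right_hand": "turn_right",
}

# Transparent mapping: what is imagined is exactly what is executed.
transparent_mapping = {
    "imagine_slide_index_left_to_right": "slide_photo_left_to_right",
    "imagine_slide_index_right_to_left": "slide_photo_right_to_left",
}

def effectuate(mi_task, mapping):
    """Translate a decoded MI control task into a control-panel command."""
    return mapping.get(mi_task, "no_op")

print(effectuate("imagine_slide_index_left_to_right", transparent_mapping))
# prints: slide_photo_left_to_right
```

With the transparent mapping, the user only has to acquire the MI skill itself; there is no additional arbitrary correlation between mental rehearsal and control panel to memorize.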

By way of illustration of this MI training paradigm, the following scenario is constructed. If the working environment is a photo album on a mobile device, and the interaction task is to slide photos, the control task should be to imagine sliding the index finger from left to right. Sensory feedback can be given in three modalities: (1) auditory, playing a sweeping sound while the current photo is being replaced by the next one; (2) visual, sliding from one photo to another; and (3) tactile, producing a vibration in the hand of interest, similar to the one perceived from mobile devices. The MI training paradigm, along with this exemplification, is outlined in **Table 1**. The complete picture (forward model, MI process, neural oscillations, and sensory feedback) of this scenario is provided in **Figure 6**.
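The four steps of the paradigm, applied to this scenario, can be sketched as a toy interaction loop; the class, task, and event names are illustrative, and a real system would decode the control task online from EEG rather than receive it as a string.

```python
# Toy sketch of the photo-album scenario. Names are illustrative assumptions;
# in a real MI-based BMI the control task would be decoded from EEG online.

def give_sensory_feedback(event):
    """Return the three feedback modalities for one environmental change."""
    return [
        f"auditory: sweeping sound while '{event}' happens",
        f"visual: {event}",
        "tactile: vibration in the hand of interest",
    ]

class PhotoAlbum:
    """Working (ecological) environment: a photo album driven by one MI task."""

    def __init__(self, photos):
        self.photos = photos
        self.index = 0

    def on_control_task(self, task):
        # Transparent mapping: the imagined slide is effectuated as a slide.
        if task == "imagine_slide_index_left_to_right" and self.index < len(self.photos) - 1:
            self.index += 1
            return give_sensory_feedback("slide photo from left to right")
        return []  # unrecognized task or end of album: no environmental change

album = PhotoAlbum(["beach.jpg", "park.jpg", "city.jpg"])
for line in album.on_control_task("imagine_slide_index_left_to_right"):
    print(line)
print("current photo:", album.photos[album.index])  # current photo: park.jpg
```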


Characterizing Motor System to Improve Training Protocols Used in Brain-Machine Interfaces…

http://dx.doi.org/10.5772/intechopen.72667

73


It is worth noting that there are several neural oscillations related to the MI process (**Figure 4**); however, in **Figure 6**, only those previously estimated to improve BMI performance were considered. In [48, 49], it was found that pre-stimulus sensory-motor rhythms can predict user performance and can lead to better classifiable EEG patterns as well. In [50], it was demonstrated that the optimal features to differentiate MI tasks came from the post-MI period rather than the peri-MI period. Nevertheless, the signal analysis is not limited to this proposal.
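The idea of extracting features from a post-MI window rather than the peri-MI window can be sketched as follows; the sampling rate, window limits, and band edges are illustrative assumptions, not values taken from [48-50].

```python
import numpy as np

# Sketch of the feature choice discussed above: log band power of a
# sensory-motor rhythm computed over a post-MI window instead of the
# peri-MI window. All numeric choices are illustrative assumptions.

fs = 250  # Hz, assumed sampling rate

def log_band_power(epoch, fs, band=(8.0, 12.0), window=(4.0, 6.0)):
    """Log power of `band` (Hz) inside `window` (s, relative to the cue)
    for one EEG epoch shaped (n_channels, n_samples)."""
    start, stop = int(window[0] * fs), int(window[1] * fs)
    segment = epoch[:, start:stop]
    freqs = np.fft.rfftfreq(segment.shape[1], d=1.0 / fs)
    power = np.abs(np.fft.rfft(segment, axis=1)) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return np.log(power[:, in_band].mean(axis=1))

# One synthetic 3-channel, 6-second epoch: a 10 Hz mu rhythm plus noise.
rng = np.random.default_rng(1)
t = np.arange(6 * fs) / fs
epoch = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal((3, t.size))

post = log_band_power(epoch, fs, window=(4.0, 6.0))  # post-MI features
peri = log_band_power(epoch, fs, window=(1.0, 3.0))  # peri-MI features
print(post.shape, peri.shape)  # one feature per channel: (3,) (3,)
```

The same function serves both windows; only the `window` argument changes, which is what makes the post-MI versus peri-MI comparison straightforward to run.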

#### **5.2. Why should these paradigms increase MI-based BMI performance?**

These new MI training paradigms take advantage of the previous knowledge of users, since they supply meaningful contexts. These paradigms can facilitate the generation and maintenance of mental images due to the automatic development of sensory predictions and motor behavior patterns in the brain. Furthermore, the effectuation of MI as an actual movement will make users feel that their mental images are being executed and are changing their working environment. The external changes give sensory feedback to users, which allows the forward model to readjust the imaginary movement in course.

On the other hand, the proposed MI training paradigms can help to reduce *computer anxiety*, since users interact with commonly used devices inside a familiar context. They can also increase the *sense of agency*, since what users imagine is what is executed and, even more, they feel it. *Attention* could increase as well, since users are doing what they like to do. Moreover, if the ecological environment is personalized for each user, attention could be even higher. Finally, by selecting small-scale *spatial abilities* related to activities of daily living, the generation and maintenance of mental images can be facilitated.

| Step | Description | Exemplification |
|------|-------------|-----------------|
| ❶ Ecological environment | Create a favorable and familiar environment, which provides at first glance sufficient sensory information about which imaginary movements are needed to interact with such environment. | Photo album on mobile. |
| ❷ Control task | Identification of control tasks according to the ecological environment. | Slide index finger from left to right. |
| ❸ Transparent mapping | Consistency between imaginary movements and control mechanisms. | Slide photo from left to right. |
| ❹ Sensory feedback | Multisensory feedback in order to perceive environmental changes. | Sweeping sound, tactile sensation, virtual hand. |

**Table 1.** Motor imagery training paradigm based on motor prediction mechanisms (forward model of the motor system) and sensory feedback.

**Figure 6.** MI training paradigm based on the forward model of the motor system, MI process, neural oscillations, and sensory feedback.

#### **6. Conclusion**

The interest in MI-based BMIs has been growing exponentially. Although the idea of direct brain-machine communication is very attractive on its own, the use of BMIs as a tool in neuroscience to investigate sensorimotor transformations of the nervous system has magnified BMI research. Of particular interest is the neural mechanism behind the motor system, because movement is the only way human beings have of interacting with the world. When this system malfunctions, people gradually or suddenly lose their autonomy, which leads to several socio-economic pitfalls. In Mexico alone, around 15.9 million people have some kind of limitation, either mental or physical. This means that 6% of the total population of the country has a poor quality of life. According to the National Institute of Statistics and Geography (2014), mobility restrictions are the most recurrent disability, and they are typically associated with the aging process, traumatic injuries, or congenital conditions.

Unfortunately, MI-based BMIs are still laboratory prototypes, since not every user can control the system at all times. The system's functionality greatly depends on the modulation of EEG signals by means of MI-related tasks. MI as a control task in BMIs has been seen as a skill that must be acquired, but neither user conditions nor controlled learning conditions have been taken into account. In this chapter, new training protocols based on how humans learn, predict, and act have been proposed. Possibly, an optimal way to master MI tasks is to follow the same rules that humans follow when they interact with their environment. This can reduce computer anxiety, increase sense of agency and attention, and facilitate the acquisition of small-scale spatial abilities.

#### **Conflict of interest**

No conflicts of interest are declared by the authors.

#### **Author details**

Luz Maria Alonso-Valerdi<sup>1</sup>\* and Andrés Antonio González-Garrido<sup>2</sup>

\*Address all correspondence to: luzalonsoval@gmail.com

1 Escuela de Ingeniería y Ciencias, Tecnológico de Monterrey, Monterrey, N.L., México

2 Instituto de Neurociencias, Centro Universitario de Ciencias Biológicas y Agropecuarias, Universidad de Guadalajara, Guadalajara, Jal., México

#### **References**

[1] Cohen MX. Where does EEG come from and what does it mean? Trends in Neurosciences. 2017;**40**(4):208-218

[2] Kappenman ES, Luck SJ. ERP components: The ups and downs of brainwave recordings. In: Kappenman ES, Luck SJ, editors. The Oxford Handbook of Event-Related Potential Components. New York: Oxford University Press; 2012. pp. 1-30

[3] Bastiaansen M, Mazaheri A, Jensen O. Beyond ERPs: Oscillatory neuronal dynamics. In: The Oxford Handbook of Event-Related Potential Components. New York: Oxford University Press; 2011. pp. 1-21

[4] Sanchez JC, Principe JC. Brain-machine interface engineering. In: Introduction to Neural Interfaces, Synthesis Lectures on Biomedical Engineering. Morgan & Claypool; 2007. pp. 1-15

[5] Lotte F, Bougrain L, Clerc M. Electroencephalography (EEG)-based brain-computer interfaces. In: Encyclopedia of Electrical and Electronics Engineering. Wiley; 2015

[6] Alamdari N, Haider A, Arefin R, Verma AK, Tavakolian K, Fazel-Rezai R. A review of methods and applications of brain computer interface systems. In: IEEE International Conference on Electro Information Technology (EIT). Dakota; 2016

[7] Jeunet C, Jahanpour E, Lotte F. Why standard brain-computer interface (BCI) training protocols should be changed: An experimental study. Journal of Neural Engineering. 2016;**13**(3):036024

[8] Marchal-Crespo L, Zimmermann R, Lambercy O, Edelmann J, Fluet MC, Wolf M, Gassert R, Riener R. Motor execution detection based on autonomic nervous system responses. Physiological Measurement. 2013;**34**(1):35-51

[9] Pfurtscheller G, Solis-Escalante T, Barry RJ, Klobassa DS, Neuper C, Müller-Putz GR. Brisk heart rate and EEG changes during execution and withholding of cue-paced foot motor imagery. Frontiers in Human Neuroscience. 2013;**7**(379):1-9

[10] Pfurtscheller G, Leeb R, Slater M. Cardiac responses induced during thought-based control of a virtual environment. International Journal of Psychophysiology. 2006:134-140

[11] Jeunet C, Lotte F. Why and how to use intelligent tutoring systems to adapt MI-BCI training to each user. In: 6th International BCI Meeting, Graz, 2016

[12] Lotte F, Larrue F, Mühl C. Flaws in current human training protocols for spontaneous brain-computer interfaces: Lessons learned from instructional design. Frontiers in Human Neuroscience. 2013;**7**(568):9-19

[13] Jeunet C, N'Kaoua B, Lotte F. Towards a cognitive model of MI-BCI user training. Bordeaux: https://hal.archives-ouvertes.fr/hal-01519476; 2017

[14] Jeunet C, N'Kaoua B, Lotte F. Advances in user-training for mental-imagery-based BCI control: Psychological and cognitive factors and their neural correlates. Progress in Brain Research. 2016;**228**(1):3-35

[15] Zander TO, Kothe C, Jatzev S, Gaertner M. Enhancing human-computer interaction with input from active and passive brain-computer interfaces. In: Tan DS, Nijholt A, editors. Brain-Computer Interfaces: Applying our Minds to Human-Computer Interaction. London: Springer; 2010. pp. 181-200

[16] Mirabella G, Lebedev MA. Interfacing to the brain's motor decisions. Journal of Neurophysiology. 2017;**117**(3):1305-1319

[17] Ranganathan R, Scheidt RA. Organizing and reorganizing coordination patterns. In: Progress in Motor Control. Springer International Publishing. 2016;**1**:327-349

[18] Mannella F, Baldassarre G. Selection of cortical dynamics for motor behaviour by the basal ganglia. Biological Cybernetics. 2015;**109**(6):575-595

[19] Hallett M. Volitional control of movement: The physiology of free will. Clinical Neurophysiology. 2007;**118**(6):1179-1192

[20] Elliot D, Lyons J, Hayes SJ, Burkitt JJ, Roberts JW, Grierson LE, Hanson S, Bennett SJ. The multiple process model of goal-directed reaching revisited. Neuroscience & Biobehavioral Reviews. 2017;**72**(1):95-110

[21] Juravle G, Binsted G, Spence C. Tactile suppression in goal-directed movement. Psychonomic Bulletin & Review. 2017;**24**(4):1060-1076


[22] Numan R. A prefrontal-hippocampal comparator for goal-directed behavior: The intentional self and episodic memory. Frontiers in Behavioral Neuroscience. 2015;**9**(323):1-19

[23] Riley MR, Constantinidis C. Role of prefrontal persistent activity in working memory. Frontiers in Systems Neuroscience. 2016;**9**(181):1-14

[24] Caligiore D, Pezzulo G, Baldassarre G, Bostan AC, Strick PL, Doya K, Helmich A, Dirkx M, Houk J, Jörntell H, Lago-Rodriguez A. Consensus paper: Towards a systems-level view of cerebellar function: The interplay between cerebellum, basal ganglia, and cortex. The Cerebellum. 2017;**16**(1):203-229

[25] Eaves DL, Riach M, Holmes PS, Wright DJ. Motor imagery during action observation: A brief review of evidence, theory and future research opportunities. Frontiers in Neuroscience. 2016;**10**(514):1-10

[26] Guillot A, Di Rienzo F, Macintyre T, Moran A, Collet C. Imagining is not doing but involves specific motor commands: A review of experimental data related to motor inhibition. Frontiers in Human Neuroscience. 2012;**6**(247):1-22

[27] Bassolino M, Campanella M, Bove M, Pozzo T, Fadiga L. Training the motor cortex by observing the actions of others during immobilization. Cerebral Cortex. 2013;**24**(12):3268-3276

[28] Vogt S, Di Rienzo F, Collet C, Collins A, Guillot A. Multiple roles of motor imagery during action observation. Frontiers in Human Neuroscience. 2013;**7**(807):1-13

[29] Mulder T. Motor imagery and action observation: Cognitive tools for rehabilitation. Journal of Neural Transmission. 2007;**114**(10):1265-1278

[30] Lindsay S, Kingsnorth S, Mcdougall C, Keating H. A systematic review of self-management interventions for children and youth with physical disabilities. Disability and Rehabilitation. 2014;**36**(4):276-288

[31] Pineda JA. The functional significance of mu rhythms: Translating "seeing" and "hearing" into doing. Brain Research Reviews. 2005;**50**:57-68

[32] Arnal LH, Giraud AL. Cortical oscillations and sensory predictions. Trends in Cognitive Sciences. 2012;**16**(7):390-398

[33] Kropotov JD. Part I: EEG rhythms. In: Quantitative EEG, Event-Related Potentials and Neurotherapy. 1st ed. San Diego, California: Academic Press - Elsevier; 2009. pp. 1-180

[34] Kirmizi-Alsan E, Bayraktaroglu Z, Gurvit H, Keskin Y, Emre M, Demiralp T. Comparative analysis of event-related potentials during Go/NoGo and CPT: Decomposition of electrophysiological markers of response inhibition and sustained attention. Brain Research. 2006;**1104**(1):114-128

[35] Vuckovic A, Sepulveda F. Delta band contribution in cue based single trial classification of real and imaginary wrist movements. Medical & Biological Engineering & Computing. 2008;**46**(6):529-539

[36] Krause CM. Brain electric oscillations and cognitive processes. In: Hugdahl K, editor. Experimental Methods in Neuropsychology. vol. 21. Amsterdam: Springer; 2003. pp. 111-130

[37] Benedek M, Bergner S, Konen T, Fink A, Neubauer AC. EEG alpha synchronization is related to top-down processing in convergent and divergent thinking. Neuropsychologia. 2011;**49**(2):3505-3511

[38] Fink A, Grabner RH, Neuper C, Neubauer AC. EEG alpha band dissociation with increasing task demands. Cognitive Brain Research. 2005;**24**:252-259

[39] Neubauer AC, Fink A, Grabner RH. Sensitivity of alpha band ERD to individual differences in cognition. Progress in Brain Research. 2006;**159**:167-178

[40] Ward LM. Synchronous neural oscillations and cognitive processes. Trends in Cognitive Sciences. 2003;**7**(1):553-559

[41] Altenmüller EO, Münte TF, Gerloff C. Neurocognitive functions and the EEG. In: Niedermeyer E, Lopes da Silva F, editors. Electroencephalography: Basic Principles, Clinical Applications and Related Fields. 5th ed. vol. 31. Philadelphia: Lippincott Williams and Wilkins; 2005. pp. 661-683

[42] Pfurtscheller G, Neuper C. Motor imagery and direct brain-computer communication. Proceedings of the IEEE. 2001;**89**(7):1123-1134

[43] Neuper C, Wörtz M, Pfurtscheller G. ERD/ERS patterns reflecting sensorimotor activation and deactivation. Progress in Brain Research. 2006;**159**:211-222

[44] Szurhaj W, Derambure P. Intracerebral study of gamma oscillations in the human sensorimotor cortex. Progress in Brain Research. 2006;**159**:297-310

[45] Blankertz B, Sannelli C, Halder S, Hammer EM, Kubler A, Muller KR, Curio G, Dickhaus T. Neurophysiological predictor of SMR-based BCI performance. NeuroImage. 2010;**51**(4):1303-1309

[46] Grosse-Wentrup M, Schölkopf B. High gamma-power predicts performance in sensorimotor-rhythm brain-computer interfaces. Journal of Neural Engineering. 2012;**9**(4):046001

[47] Hammer EM, Halder S, Blankertz B, Sannelli C, Dickhaus T, Kleih S, Muller KR, Kubler A. Psychological predictors of SMR-BCI performance. Biological Psychology. 2012;**89**(1):80-86

[48] Bamdadian A, Guan C, Ang KK, Xu J. The predictive role of pre-cue EEG rhythms on MI-based BCI classification performance. Journal of Neuroscience Methods. 2014;**235**:138-144

[49] Maeder CL, Sannelli C, Haufe S, Blankertz B. Pre-stimulus sensorimotor rhythms influence brain-computer interface classification performance. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2012;**20**(5):653-662

[50] Thomas E, Fruitet J, Clerc M. Combining ERD and ERS features to create a system-paced BCI. Journal of Neuroscience Methods. 2013;**216**(2):96-103

[51] Haith AM, Krakauer JW. Model-based and model-free mechanisms of human motor learning. In: Progress in Motor Control. New York: Springer; 2013. pp. 1-21

[52] Wolpert DM, Ghahramani Z, Flanagan JR. Perspectives and problems in motor learning. Trends in Cognitive Sciences. 2001;**5**(11):487-494

**Chapter 6**

**Provisional chapter**

**Computational Models of Consciousness-Emotion**

**Computational Models of Consciousness-Emotion** 

Remigiusz Szczepanowski, Małgorzata Gakis,

Remigiusz Szczepanowski, Małgorzata Gakis,

Additional information is available at the end of the chapter

Additional information is available at the end of the chapter

Krzysztof Arent and Janusz Sobecki

Krzysztof Arent and Janusz Sobecki

http://dx.doi.org/10.5772/intechopen.72369


© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.




#### **Abstract**


There is little information on how to design a social robot that effectively executes consciousness-emotion (C-E) interaction in a socially acceptable manner. The development of such socially sophisticated interactions depends on models of human high-level cognition implemented in the robot's design. A fundamental research problem of social robotics in terms of effective C-E interaction processing is therefore to define a computational architecture of the robotic system in which cognitive-emotional integration occurs and to determine the cognitive mechanisms underlying consciousness, along with its subjective aspect, in detecting emotions. Our conceptual framework rests on the assumptions of a computational approach to consciousness, which holds that consciousness and its subjective aspect are specific functions of the human brain that can be implemented in the construction of an artificial social robot. This research framework for developing C-E interactions draws on the field of machine consciousness, which identifies important computational correlates of consciousness in such an artificial system and makes it possible to describe these mechanisms objectively with quantitative parameters based on signal-detection and threshold theories.

**Keywords:** social robot, consciousness-emotion interaction, machine consciousness, signal-detection theory

#### **1. Introduction**

It is widely acknowledged that a social robot should be an embodied agent that can communicate with people easily, using both verbal and nonverbal signals [1]. Such a robot needs a wide range of social and cognitive skills [2, 3] to understand human behavior and to be intuitively understood by people. However, there is currently a gap between the requirements placed on a social robot and their implementations, owing to imperfect technology and to theoretical deficiencies in areas ranging from psychology through computer science to classical robotics. Despite intense technological efforts over the last two decades to develop high-level cognition models for human-robot interaction (HRI), robot constructions have so far hardly been equipped with such competency. Here, we focus on issues concerning the development of consciousness-emotion interaction processing in a social robot.


#### **2. Designing human-robot interaction**

Designing efficient HRI is a basic research problem of modern social robotics [1, 4], mainly because of the technological challenge of constructing robots intended to share space with humans and support them in daily life in a socially acceptable manner. The joint efforts of modern research, including cognitive psychology, developmental psychology, philosophy of mind, and modern technologies such as artificial intelligence and machine learning, show that creating effective HRI depends on the implementation of human high-level cognition in a robot's system. For example, emotions in the context of social robots have attracted considerable attention over the last two decades [5]. It is expected that artificial emotions increase the plausibility of interactions, including the predictability of robot behavior. The well-known idea of a "theory of mind", describing our ability to mentalize others' internal states, was captured in the theoretical accounts of Baron-Cohen [6] and Leslie [7] and was eventually used, with the technology of the day, to construct the Cog humanoid robot [3]. Endowing a robot with a theory of mind [3] could allow it to detect, recognize, interpret, and react to human behavior and hence make interaction more human-like. There is a large body of work on emotions and on computational models of emotion in psychology and computer science, but no result to date has considerably improved social robot behavior. Attempts to implement and verify a computational model of emotions in the control system of a real robot have been undertaken systematically for many years. For example, the emotional system of Kismet, designed from scratch, is strongly inspired by various theories of human emotions [2]. The FearNot! affective mind architecture (FAtiMA) [8], an affective computational model of mind, was implemented in the robot FLASH [9]. The works in [10–12] are examples of systems that were verified using agent-based modeling software and show potential for implementation in robots. The experience gained to date points to three areas of challenges.

First, the sensory systems of today's robots are insufficient to detect social events such as human emotions, intentions, and points of attention. Second, clear and natural expression of emotions and other internal states by a robot requires advanced and expensive mechatronic solutions. Third, computational models of consciousness and emotions form compound components of the higher-level part of the robot control architecture; implementing such models therefore requires them to be formally complete and adequate, which current psychological research does not guarantee.

#### **3. Consciousness-emotion interaction as functions of the human brain**


Designing robotic systems inspired by human high-level cognition, including attentional and perceptual processes, is a common philosophy known as the biologically inspired approach (see [11–13]). Some studies have also indicated the possibility of implementing in a social robot a computational architecture inspired by the neurobiology of the brain [14]. For instance, there are well-developed robotic control systems of high-level cognition that implement the feature-integration theory of attention [15] or a model of saliency-based attentional search mechanisms [16], which have been intensively verified both behaviorally and computationally [13].

Contemporary brain research suggests that the interaction of cognition and emotion may be crucial for a social robot's design [14, 17]. For instance, Pessoa [14] argues that the fundamental problem is to determine an organization of the robotic system in which cognition and emotion are intertwined in a general information-processing architecture. Such an information-processing architecture should be viewed as a general theory that describes the important components of the system and the relations between them [18]. In this way, the adopted architecture can determine the organization of the cognitive system and the general principles of information processing in the robotic system. Goal-directed or conscious behavior of a social robot in terms of recognizing human affective states will therefore require understanding how complex cognitive and affective processing should be mapped into a robotic information-processing system that performs computational algorithms to integrate C-E interactions as effectively as the human brain does. In fact, brain research offers no decisive evidence about what kind of information-processing organization best mediates the C-E interaction. Many neuroscientists (see [19, 20]) indicate that there is a functional division in the brain between low-level processes of emotion regulation (for instance, linked with amygdala activation) and higher-order processes associated with the frontal and parietal cortical activity involved in conscious goal-directed behavior. In addition, according to modern neurobiological accounts (see [21]), the amygdala synchronizes and modulates access to affective stimuli in such a way that their representations are stronger (exert a stronger influence on behavior) than those of neutral stimuli.

Thus, the selection of a specific architecture can determine how a specialized C-E interaction system should be organized; it should also make it possible to define the specific components of such a system that are attributed to particular brain structures and to describe the computations behind the high-level cognitive processes involved in this interaction. Following this line of reasoning, the architecture of the C-E interaction in the social robot may be structured into blocks (a theoretical system that processes sequentially, in which knowledge is hierarchical, etc.), into modules (independent, autonomous, distributed modules handled by a central processor, e.g., [22]), or as some kind of non-modular organization in which information processing is inspired by neurocomputation, with simple interactions going on between processing units [23]. It is therefore important for social robots to set up theoretical criteria for analyzing the potential architecture of the C-E interaction in the brain with regard to the structural components and functionality of the social robot's system.
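To make the second of these organizational options concrete, the following toy sketch shows independent emotion and cognition modules whose outputs are combined by a central processor. All class names, the keyword-based appraisals, and the averaging fusion rule are hypothetical illustrations of the modular idea, not a model proposed in this chapter:

```python
# Toy sketch (our illustration) of a "modular" C-E organization:
# independent modules handled by a central processor.
from typing import Protocol


class Module(Protocol):
    def process(self, stimulus: str) -> float: ...


class EmotionModule:
    def process(self, stimulus: str) -> float:
        # Crude affective appraisal: hypothetical keyword salience.
        return 0.9 if "fear" in stimulus else 0.1


class CognitionModule:
    def process(self, stimulus: str) -> float:
        # Hypothetical task-relevance appraisal.
        return 0.6 if "face" in stimulus else 0.2


class CentralProcessor:
    def __init__(self, modules: list[Module]) -> None:
        self.modules = modules

    def integrate(self, stimulus: str) -> float:
        # Simple averaging as a stand-in for C-E integration.
        outputs = [m.process(stimulus) for m in self.modules]
        return sum(outputs) / len(outputs)


cp = CentralProcessor([EmotionModule(), CognitionModule()])
print(cp.integrate("fearful face"))
```

A non-modular organization would instead let the appraisals interact directly, without a central integrator; the contrast between the two designs is exactly the architectural choice discussed above.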

#### **4. Consciousness-emotion interaction and machine consciousness approach: establishing formal assumptions**


Besides a specific architecture of cognition for social robots, an essential problem in designing effective HRI is to analyze the conscious behavior of the robot by considering human conscious knowledge and thus the subjective experience that accompanies consciousness (the phenomenal aspects of consciousness; see [24, 25]). In our opinion, such a research problem should be embedded within the area of machine consciousness, which can identify the critical computational correlates of consciousness [26] needed to establish HRI. According to this computational approach, consciousness and its subjective experience can be explained by higher-level cognition that is grounded in neurocomputations in the brain [25]. This approach not only allows for the development of machine consciousness but also attempts to explain the so-called hard problem of consciousness, related to the inability to objectively measure the phenomenal aspects of consciousness (see [27]). In fact, theories of machine consciousness have been successfully implemented in artificial environments (e.g., the CLARION system; see [28]), and some attempts have been made to implement them in robotic systems [29].

Given such philosophical physicalism [30], we assume that the consciousness of the robot can be addressed within an information-processing framework in terms of behavior control, information integration, attention and access to information, and ways of expressing the robot's internal states. According to this framework, social robots are embodied, socially intelligent agents operating in the human environment [1, 11, 12]. Our conceptual framework attempts to solve the problem of modeling consciousness-emotion interactions using the machine consciousness approach. Below, we demonstrate that the feasibility of the hypothesized computational correlates of consciousness for the C-E interaction in a social robot system can be formally established within signal-detection theory (SDT) [31] and threshold theory [32, 33].

#### **5. Modeling consciousness-emotion interaction using a combination of signal-detection and threshold approaches**

According to Reggia and colleagues [25], the machine consciousness approach indicates that one possible computational correlate of consciousness is a representational property, defined as a way of encoding incoming information in the cognitive system. This account postulates that such representations may be patterns of neuronal activity encoded in the current states of the neuronal network [34]. For example, in a study on visual awareness with backward masking [35], patterns of conscious behavior are described as the human ability to detect emotion under a forced-choice condition within a series of signal trials (e.g., a fearful facial expression) and noise trials (e.g., a neutral facial expression) (see [36]). The assumption that consciousness is the ability to differentiate signal from noise on the basis of choice behavior has enabled researchers to use signal-detection theory (SDT) to quantify consciousness of emotion with objective sensitivity parameters [37]. Conscious behavior identified with SDT parameters can therefore be used as a computational correlate to determine objective representations of the C-E interactions in the social robot's construction.
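As an illustration of this SDT quantification, the following Python sketch computes the standard sensitivity parameter d′ from forced-choice detection counts. The trial counts and the log-linear correction are our own illustrative assumptions, not data from the chapter:

```python
# Minimal SDT sketch: sensitivity d' = z(hit rate) - z(false-alarm rate).
from statistics import NormalDist


def d_prime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    """Compute d' from signal-trial and noise-trial response counts."""
    z = NormalDist().inv_cdf
    # Log-linear correction keeps rates away from 0 and 1,
    # where the z-transform is undefined.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)


# Hypothetical data: fearful-face (signal) vs. neutral-face (noise) trials.
sensitivity = d_prime(hits=40, misses=10, false_alarms=12, correct_rejections=38)
print(round(sensitivity, 2))
```

A d′ reliably above zero would indicate that the observer (human or robotic) can objectively discriminate the emotional signal from noise, which is the low-level representational correlate discussed here.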

The computational approach to consciousness [25] also points out that an additional potential computational correlate of consciousness is represented by relational properties between the components of human knowledge. According to Reggia and colleagues [25], the assumptions of the higher-order theory (HOT) of consciousness [36] fit this computational aspect well. In particular, HOT postulates a computational correlate of consciousness in the relationship between a stimulus representation and the corresponding subjective knowledge of being conscious of that first-order representation (a metarepresentation) [25]. In fact, modeling studies of consciousness and emotion [33, 37] showed that an adequate computational approach considering the relation between consciousness and emotion can be described within the SDT framework. In his original proposal, Szczepanowski [33] showed that the SDT computational model can capture the fact that consciousness and emotions interact with one another. In addition, such a computational SDT model of consciousness allows for a hierarchy of the information processing associated with the conscious detection of emotion; that is, higher-order processing requires prior discriminations of emotion at the lower level. This suggests that the relations between the components of knowledge underlying the architecture of the C-E interaction could be crucial for a social robot's design.
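The two-level hierarchy described above can be sketched as follows: a higher-order "awareness" report operates on the outcome of a first-order emotion discrimination. The criterion values, function names, and the confidence-margin rule are hypothetical illustrations of the idea, not the chapter's actual model:

```python
# Hedged sketch of a HOT-style hierarchy: the higher-order report
# presupposes the lower-level (first-order) decision.

CRITERION = 0.5        # first-order decision criterion (hypothetical)
META_CRITERION = 0.2   # evidence margin needed for a conscious report (hypothetical)


def first_order_decision(evidence: float) -> bool:
    """Lower level: is the emotional signal present?"""
    return evidence > CRITERION


def higher_order_report(evidence: float) -> str:
    """Higher level: metarepresentation of the first-order decision."""
    seen = first_order_decision(evidence)
    confident = abs(evidence - CRITERION) > META_CRITERION
    if not confident:
        # A decision was made below, but it is not consciously reportable.
        return "unaware"
    return "aware: emotion" if seen else "aware: no emotion"


print(higher_order_report(0.85))
print(higher_order_report(0.55))
```

Note that `higher_order_report` cannot produce any output without first calling `first_order_decision`, mirroring the claim that higher-order processing requires prior lower-level discrimination.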


The machine consciousness framework also indicates that consciousness is characterized by a specific information-processing mode [25, 38]. Some theoretical accounts emphasize effectiveness of such conscious processing, and it has been argued that the information content in conscious state is processed globally [38]. For instance, Dehaene [39] who is an advocate of such line of reasoning has shown that global processing in the brain may be linked with activation of extensive long-distance neuronal connections that link several separate brain areas, including prefrontal areas that are not activated in another processing mode [38]. Indeed, such conscious processing mode may stand for a computational correlate of consciousness that explains the nature of conscious access that involves subject's disposition to action and mobilizes and integrates mental functions that operate independently and differ in terms of tasks under the unconscious condition [38]. In the context of conscious affective processing, it seems likely that activation of the global processing mode may operate on an "all-or-none" or discrete fashion when emotional stimuli enter consciousness [37, 40]. In fact, Szczepanowski [33] based on a Krantz threshold theory [32] demonstrated that preferences for affective representation to access consciousness may be the threshold processing. Thus, preferential conscious processing of emotion in the brain may arise from the fact that activation strength of affective stimuli to enter consciousness is characterized in the discrete manner [33, 37, 40]. This implies that in the case of affective information, the robotic system could be implemented with the global processing mode based on thresholds to be able for handling effective and natural HRI.

Thus, with the abovementioned assumptions, our conceptual framework shows that the computational organization underlying the C-E interaction in the robotic system should correspond to an architecture of affective computing in the brain [14, 41] and should be based on computational correlates of consciousness [25] by including (i) a low-level representation correlate which enables robot's objective conscious perception of emotion, (ii) a metacognitive correlate of robot's subjective knowledge of emotion, and (iii) a conscious processing mode based on global access to the emotion content. Here, we will explain in detail the idea of modeling computational correlates of C-E interactions with mathematical frameworks.

#### **6. Signal-detection theory to encode objective consciousness of emotion in a social robot**

SDT assumes that a human subject's ability to perceive a stimulus is described by the probability of deciding whether a signal or a noise stimulus was presented in a given trial [31]. Fluctuations of the stimulus presented within a series of trials, for example, manipulations of exposure time or stimulus visibility, are modeled by Gaussian probability density functions [31, 33, 42]. Because two stimulus types are presented under the forced-choice detection condition, a participant produces correct responses (a hit (*H*) and a correct rejection (*CR*)) and incorrect responses (a false alarm (*FA*) and a miss (*M*)). The ability to detect a stimulus is then described by the Type I sensitivity parameter *d'*, which conceptually corresponds to the difference between the means of the probability distributions for signal and noise. In addition to the sensitivity measure, detection theory also provides a Type I bias measure *c*, which captures the participant's tendency to favor either "yes" or "no" responses during the detection process. From these probability distributions, the receiver operating characteristic (ROC) curve is computed, whose shape reflects the participant's ability to detect stimuli. According to SDT, task performance above the chance level indicates conscious perception, as measured by a significantly nonzero sensitivity index (*d'* Type I > 0). Similar conclusions are drawn when the area under the ROC curve, characterized by the so-called *A'* Type I parameter, exceeds 0.5. In fact, according to Lau [42], the SDT sensitivity measure of consciousness in detection tasks is not sufficient: in terms of consciousness, it is important to determine the decision criteria for detecting a stimulus based on the *c* parameter rather than discrimination ability per se.
For instance, the SDT interpretation of the behavior of blindsight patients with visual cortex damage, who deny any visual sensation in the resulting visual field defect but can nonetheless detect visual emotion stimuli presented in that area [43], would yield a nonzero *d'* Type I value and, paradoxically, indicate conscious perception. Therefore, as a measure of consciousness, establishing and maintaining appropriate decision criteria when detecting stimuli is critical, rather than relying on *d'* Type I sensitivity values, which refer instead to the basic effectiveness of information processing [42].
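The Type I quantities above (*d'* and *c* computed from hit and false-alarm rates) follow directly from the standard SDT formulas. The following minimal sketch is illustrative rather than taken from the chapter; the function name and the log-linear correction are my own choices, made so that extreme rates of 0 or 1 do not produce infinite z-scores:

```python
from statistics import NormalDist

def sdt_type1(hits, misses, false_alarms, correct_rejections):
    """Type I SDT indices from a forced-choice detection count table.

    Returns (d_prime, c): sensitivity and response bias. A log-linear
    correction (adding 0.5 to each cell) keeps z-scores finite when a
    rate would otherwise be exactly 0 or 1.
    """
    z = NormalDist().inv_cdf
    h = (hits + 0.5) / (hits + misses + 1.0)                               # hit rate
    fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)  # false-alarm rate
    d_prime = z(h) - z(fa)     # distance between signal and noise distribution means
    c = -0.5 * (z(h) + z(fa))  # criterion: positive favors "no", negative favors "yes"
    return d_prime, c

# Hypothetical counts from 100 forced-choice trials:
d, c = sdt_type1(hits=40, misses=10, false_alarms=12, correct_rejections=38)
# d > 0 marks above-chance detection (the objective SDT criterion of conscious
# perception discussed above); c near 0 marks an unbiased observer.
```

Per Lau's [42] point in the text, a full consciousness measure would inspect the criterion *c* as well, not only *d'*.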


In terms of machine consciousness, it seems clear that the SDT approach, by estimating the sensitivity and bias of first-order detection of emotion, makes it possible to determine computational correlates of the social robot's objective knowledge about human affective states. A hypothetical robotic system (see **Figure 1**) with the functionality of objective consciousness of emotion may be equipped with emotion recognition algorithms that constantly analyze human expressions over sequences of affective stimuli within time events and feed online SDT computations that simulate objective consciousness of the recognized human affective state. In this way, the use of detection theory makes it possible to capture, in a possible robotic system, one of the key properties of conscious knowledge associated with choice behavior [44].


**Figure 1.** General idea of robotic system with measurement function of objective conscious perception of human emotions (inspired by [35]).
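The online SDT computation sketched in **Figure 1** could be prototyped as a running estimator over a stream of detection trials. This is a hypothetical sketch, not the chapter's implementation; the class and method names are my own:

```python
from statistics import NormalDist

class OnlineSDT:
    """Running Type I SDT estimate over a stream of detection trials.

    Each trial pairs the ground truth (was an emotional signal present?)
    with the robot classifier's "yes"/"no" response; d' is recomputed
    from the running counts with a log-linear correction.
    """
    def __init__(self):
        self.hits = self.misses = self.fas = self.crs = 0

    def update(self, signal_present: bool, said_yes: bool):
        if signal_present:
            if said_yes: self.hits += 1
            else:        self.misses += 1
        else:
            if said_yes: self.fas += 1
            else:        self.crs += 1

    def d_prime(self) -> float:
        z = NormalDist().inv_cdf
        h = (self.hits + 0.5) / (self.hits + self.misses + 1.0)
        fa = (self.fas + 0.5) / (self.fas + self.crs + 1.0)
        return z(h) - z(fa)

tracker = OnlineSDT()
# Feed each (ground truth, classifier response) pair as it arrives:
for truth, resp in [(True, True), (True, True), (False, False), (False, True)]:
    tracker.update(truth, resp)
# tracker.d_prime() now gives the robot's current objective sensitivity estimate.
```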

#### **7. Signal-detection approach to encode metacognitive consciousness of emotion in a social robot**

Objective theories of consciousness link consciousness to the ability to detect incoming external stimuli by choice [45]. On this view, consciousness is described as sensory processing, ignoring the first-person experience that underlies subjective knowledge (metacognition) of one's own representation of the incoming information. The relation between consciousness and metacognition has been viewed as a central topic in consciousness research [46] and fits well with the higher-order thought (HOT) approach [47]. This theory of consciousness is now considered a main framework explaining how people are aware of their conscious states [47]. On the one hand, HOT implies that consciousness depends on the presence of metacognition [48, 49]; on the other hand, there are opposing claims that metacognition is a prerequisite for the emergence of consciousness [50, 51]. According to this second assumption, consciousness is a first-person metarepresentation, that is, the ability to acquire knowledge about first-order mental states [52]. This second HOT view is well documented by studies on conscious learning with a neural network approach, in which the brain, via learning, processes information about the external world and creates its own re-representation of what it is like to be in a conscious state of the processed information [53, 54]. In fact, both the HOT theory and the connectionist model are consistent with the signal-detection framework (see [53]).

Following the HOT view on consciousness, our conceptual framework assumes that a computational correlate of consciousness is a relational property between the first-order representation of emotion and metacognition [25]. Adopting such an architecture of machine consciousness indicates that the metarepresentation is distinct from the first-order representation and may require separate neuronal structures in the brain. In fact, brain research provides empirical evidence for the feasibility of such an architecture of neurocomputations, showing that metarepresentation may be associated with activation of prefrontal and parietal regions [36], while the low-level representation may be responsible for fast emotion recognition, which depends on the amygdala [17, 55]. There is also convincing evidence for the independence of these two types of knowledge representation, both from behavioral measurements of the dissociation between correctness of performance in perceptual tasks and metacognitive awareness of that performance [33, 55] and from neuronal instances of such dissociations in the brain [17, 56]. Common-sense intuition about brain activity also supports this view: conscious knowledge about a stimulus does not relate to the physical qualities of the perceived stimulus but to internal representations of it, which in turn refer to the specific brain activation associated with stimulus perception [53]. It is worth mentioning that metacognition, as higher-level cognition including monitoring, control, and evaluation processes, is sequential by nature [18]. Several computer modeling studies, for example, of post-decision wagering procedures [57], demonstrate that metacognitive sequential strategies lead to consciousness of a stimulus. In the same vein, our brain study on metacognition with event-related potentials (ERPs) showed that metacognitive knowledge is crucial for conscious processing of emotion [58]. Similarly, a masking study with neural network simulations [54] shows that metacognitive knowledge may be underpinned by a specific computational basis for making conscious and unconscious decisions in emotion detection. Unquestionably, empirical studies on consciousness and metacognition, linked to the problem of the accuracy of metacognitive knowledge and its neurobiological and computational basis, suggest that HOT is a theory that can be empirically verified.

Here, it is important to note that Szczepanowski [33] has shown that the relation between consciousness and emotion predicted by HOT can be modeled numerically with signal-detection theory. In particular, SDT modeling has shown that, under the emotion detection condition, the subjective experience expressing the subjective feelings that accompany the first-order representation of affective stimuli can be embraced in the model by including the participant's confidence responses [33, 55]. On this combined SDT and HOT view, metacognition about task performance can be measured with a secondary sensitivity parameter *d'* Type II (see **Figure 2**). Evaluation of metacognition can also include the *c* Type II parameter, a second-order bias that identifies metacognitive strategies leading to either under- or overconfidence in evaluations of task performance [46]. In this way, second-order SDT measurements of consciousness provide objective information on the subjective feelings of perceived affective stimuli.

**Figure 2.** Measurements of consciousness and metacognition with second-order sensitivity and bias parameters based on SDT (source: [46]).

In fact, Szczepanowski [33] has demonstrated that the SDT model of consciousness can embrace a hierarchical organization of affective processing; that is, objective information about performance in the emotion detection task must be reflected at a hierarchically higher level of processing. In this computational model of consciousness, there is an objective sensitivity measure of the perceived affective information (e.g., parametric first-order sensitivity *d'* Type I > 0 or nonparametric *A'* Type I > 0.5) as well as an objective measure of metacognition (e.g., parametric second-order sensitivity *d'* Type II > 0 or nonparametric *A'* Type II > 0.5). The validity of this hierarchical SDT model was empirically confirmed in visual masking experiments with emotional faces (e.g., [35, 57]). In fact, the SDT modeling outcomes show that human consciousness, with its accompanying cognitive processes during the detection of affective states, can be a subject of empirical research. In addition, the interaction between consciousness and emotion can be related to decision-making processes, which may result from computational-cognitive processes in the brain [33] and can therefore potentially be implemented in the artificial environment of a social robot.
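The Type II quantities are computed the same way as Type I, but over correctness and confidence: a Type II "hit" is high confidence on a correct trial and a Type II "false alarm" is high confidence on an incorrect trial. The sketch below is illustrative and not from the chapter; the function name and the correction scheme are my own:

```python
from statistics import NormalDist

def sdt_type2(trials):
    """Second-order (Type II) SDT indices from (correct, high_confidence) pairs.

    Returns (d2, c2): metacognitive sensitivity (d' Type II) and the
    second-order bias (c Type II) capturing under-/overconfidence.
    """
    z = NormalDist().inv_cdf
    hit2  = sum(1 for ok, hc in trials if ok and hc)          # confident and correct
    miss2 = sum(1 for ok, hc in trials if ok and not hc)      # unconfident but correct
    fa2   = sum(1 for ok, hc in trials if not ok and hc)      # confident but wrong
    cr2   = sum(1 for ok, hc in trials if not ok and not hc)  # unconfident and wrong
    h = (hit2 + 0.5) / (hit2 + miss2 + 1.0)   # log-linear correction as for Type I
    fa = (fa2 + 0.5) / (fa2 + cr2 + 1.0)
    return z(h) - z(fa), -0.5 * (z(h) + z(fa))

# Hypothetical data where confidence tracks accuracy, so d' Type II comes out
# positive -- the hierarchical model's marker of metacognitive awareness:
trials = [(True, True)] * 30 + [(True, False)] * 10 + [(False, True)] * 5 + [(False, False)] * 15
d2, c2 = sdt_type2(trials)
```

In the hierarchical model described above, one would check both conditions together: *d'* Type I > 0 for objective perception and *d'* Type II > 0 for metacognition.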

The above premises suggest that the hierarchical SDT model of consciousness can be used to determine computational correlates of the robot's consciousness and its subjective experience of emotion. On this SDT view, the subjective conscious feelings of the robot may be related to the execution of second-order operations on internally generated information from previous processing, linked with the detection of incoming stimuli from the environment registered by the robot's sensory system. We therefore assume that such a conceptualization of machine consciousness within the robot is necessary to effectively regulate the robot's behavior, with metacognition participating in the conscious control of cognition in response to emerging affective information [18, 59].

#### **8. Threshold approach to establish access consciousness for encoding C-E interaction by social robot**

The third research domain for encoding C-E interactions in a social robot is to determine a computational correlate of the global processing mode for consciousness of emotion. We assume that an adequate implementation of global information processing of emotion in the robotic system can be enabled by a threshold theory [33, 37]. As many experimental studies have demonstrated, representations of affective information are given preferential access to conscious processing [60, 61]. For instance, in consciousness research, backward masking studies provide substantial evidence that visual awareness occurs in an "all-or-none" fashion [62]. In the context of the masking task, this indicates that during stimulus detection there is a sudden, step-like burst of activation due to an incoming stimulus that enables the transition between nonconscious and conscious states [63]. Some researchers suggest that such specific activation occurs in the brain as a threshold needed to activate access consciousness (see, for instance, [63]).
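The "all-or-none" account is distinguishable from Gaussian SDT by its ROC prediction: a high-threshold model yields a linear ROC, whereas the equal-variance Gaussian model yields a curved one. The following sketch of that linear prediction is a simplified illustration under my own assumptions (a single detect probability with guessing on non-detect trials), not the full three-state Krantz model:

```python
def threshold_roc(p_detect, far_grid):
    """Linear ROC predicted by a high-threshold ('all-or-none') model.

    With probability p_detect a signal trial lands in a detect state and
    always yields "yes"; otherwise the observer guesses at the same rate
    as on noise trials, so the hit rate is linear in the false-alarm rate:
    HR = p + (1 - p) * FAR.
    """
    return [p_detect + (1.0 - p_detect) * fa for fa in far_grid]

far = [0.0, 0.25, 0.5, 0.75, 1.0]
roc = threshold_roc(p_detect=0.4, far_grid=far)
# Hit rates rise linearly from p_detect at FAR = 0 up to 1.0 at FAR = 1.
```

Fitting such a line against masking data, versus the curved Gaussian ROC, is the kind of model comparison the studies cited in this section rely on.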

**Figure 3**), there are three mental states associated with perception, that is, the absence of ~D detection, D detection, and D\* superdetection, and two thresholds, that is, upper and lower ones [32]. Detection of a target stimulus (probabilities P1 and P2) leads to mental states of D and D \* (detection and superdetection), while detection of stimulus noise, described by the probability *q*, leads to a lack of detection ~D. The decision space described in the threshold detection theory is rectangular, and the ROC curve is linear as shown in [33]. It was demonstrated that participant who can consciously access to the stimulus content produces ideal observer behavior that can be estimated the P2 parameter [33]. Hence, the threshold model can predict situations in which the highest confidence is generated when there is conscious access to emotion content. Indeed, computational evidence for the threshold-like processing is an important discovery, since, so far, another view on perception has dominated in experimental psychology claiming that perception is continuous and should be described primarily by the Gaussian distribution [31]. Thus, in our conceptual framework of machine consciousness, we assume that conscious detection of emotion by the social robot engages global processing mode in the "all-or-none" fashion, and we propose to model these C-E interactions with the use of an innovative computational approach based on the Krantz's threshold theory [32].

Computational Models of Consciousness-Emotion Interactions in Social Robotics: Conceptual…

http://dx.doi.org/10.5772/intechopen.72369

89

As opposed to a typical application of industrial robots, a social robot needs to be considered as a social being with whom humans should be cooperating given a specific task structure. Therefore, the basic research aims of social robotics should be to determine computational models of the consciousness-emotion interaction designed to be implemented into a robotic platform. The request of preciseness in the context of computational models of emotions requires more research including related areas such as models of C-E interactions. This is a new research area in social robotics, and therefore it is potentially attractive from the perspective of development of computational models of emotion that are suitable for implementation in robots and contribute a new quality to the behavior of robots. It is believed that extending social robot competences and functionalities of HRI with C-E interactions will result in

It seems that the abovementioned modeling outcomes of the C-E interaction based on the signal and threshold approaches are original contributions not only in the field of cognitive psychology but are crucial in the area of social robotics in terms of the possibility to implement high-level cognition into a social robot that effectively processes HRI in social domain [3, 4, 41, 64, 65]. In our conceptual framework, consciousness of emotion is the ability to detect affective information in the forced-choice condition, regardless choice decisions are low-level representation (features of the stimulus) or metarepresentation (subjective knowledge). In this way, consciousness may be attributed to an extremely simple function that can be associated with detection of different types of signals in the mind [33] and simply implemented into a social robot's design. In fact, adoption of the computational approach to consciousness that are based on quantitative detection parameters indicates that consciousness along with its subjective aspect is a specific function of the human brain and can be implemented into an

increasing acceptability of the social robot by the end user.

artificial social robot's construction.

**9. Conclusions**

**Figure 3.** Linear ROC curve predicted by the Krantz's threshold model (source: [37]).

[63]). Indeed, Szczepanowski [33] has demonstrated that under a backward masking task, perception of fearful face happens in the "all-or-none" fashion and may be a factor explaining why this emotion information is preferable to conscious access. In particular, it appeared that in the visual masking experiments, several participants presented characteristic patterns of metacognition in terms of confidence in such a way that for the highest confidence, there are almost always hits without false alarms [37]. Because such observed response patterns followed ideal observer's behavior (hit responses without highest false alarms), the masking data have been successfully modeled with a Krantz's threshold detection theory [32, 33, 37]. This computational evidence that conscious perception of emotion is threshold-like processing implicates that under conditions in which stimulus strength is sufficiently large, the information content of the stimulus may be broadcasted in the system globally. This threshold-like information-processing approach to consciousness suggests that decision-making underlying emotion perception follows a discrete mental states' arrangement and its corresponding probabilities in terms of establishing conscious behavioral responses to affective information. Therefore, according to the outcomes from the threshold model, conscious processing in detecting emotion can activate global access to knowledge about emotion that manifests itself in ideal behavior of the observer.

The abovementioned outcomes suggest that global access to affective content in terms of metacognition (meta-knowledge) involves thresholds [33]. In other words, access consciousness may be activated for the highest confidence ratings on an "all-or-none" basis. In this way, conscious access to representations of emotions and metacognition can be quantified with the signal parameters predicted by the Krantz model [32]. In the three-state threshold model (see **Figure 3**), there are three mental states associated with perception, namely the absence of detection ~D, detection D, and superdetection D\*, separated by a lower and an upper threshold [32]. Detection of a target stimulus (with probabilities P1 and P2) leads to the mental states D and D\* (detection and superdetection), while detection of stimulus noise, described by the probability *q*, leads to the absence of detection ~D. The decision space described by the threshold detection theory is rectangular, and the ROC curve is linear, as shown in [33]. It was demonstrated that a participant who can consciously access the stimulus content produces ideal-observer behavior, which can be estimated with the P2 parameter [33]. Hence, the threshold model can predict situations in which the highest confidence is generated when there is conscious access to emotion content. Indeed, the computational evidence for threshold-like processing is an important discovery since, until now, a different view of perception has dominated experimental psychology, namely that perception is continuous and should be described primarily by the Gaussian distribution [31]. Thus, in our conceptual framework of machine consciousness, we assume that conscious detection of emotion by the social robot engages a global processing mode in the "all-or-none" fashion, and we propose to model these C-E interactions with an innovative computational approach based on Krantz's threshold theory [32].
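A minimal numerical sketch of the linear ROC implied by this three-state model follows. The parameter values and the response rule (always "yes" from D and D\*, guessing "yes" from ~D at rate g) are assumptions for illustration, not estimates from the chapter's data:

```python
import numpy as np

# Hypothetical three-state (Krantz-style) threshold parameters: a target
# produces superdetection D* with probability p2 or detection D with
# probability p1; noise produces no detection ~D with probability q.
p1, p2, q = 0.35, 0.40, 0.85

def roc_point(g):
    """(false-alarm rate, hit rate) when the observer says "yes" from the
    D and D* states, and guesses "yes" from the ~D state with rate g."""
    hit = p1 + p2 + (1.0 - p1 - p2) * g
    fa = (1.0 - q) + q * g
    return fa, hit

# Sweeping the guessing rate traces a straight ROC segment, unlike the
# curved ROC predicted by a Gaussian signal-detection model.
points = [roc_point(g) for g in np.linspace(0.0, 1.0, 5)]
(fa0, hit0), (fa1, hit1) = points[0], points[-1]
slope = (hit1 - hit0) / (fa1 - fa0)
print(points)
```

Since both coordinates are affine functions of g, every intermediate point falls exactly on the line through the endpoints; varying the state probabilities themselves (p1, p2, q) would move the segment rather than bend it.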

#### **9. Conclusions**


As opposed to a typical application of industrial robots, a social robot needs to be considered a social being with whom humans cooperate within a specific task structure. Therefore, a basic research aim of social robotics should be to determine computational models of the consciousness-emotion interaction designed to be implemented on a robotic platform. The demand for precision in computational models of emotion requires more research, including in related areas such as models of C-E interactions. This is a new research area in social robotics, and it is therefore attractive from the perspective of developing computational models of emotion that are suitable for implementation in robots and contribute a new quality to robot behavior. It is believed that extending the social robot's competences and the functionalities of HRI with C-E interactions will increase the acceptability of the social robot by the end user.

It seems that the abovementioned modeling outcomes of the C-E interaction, based on the signal and threshold approaches, are original contributions not only to the field of cognitive psychology but are also crucial to the area of social robotics, in terms of the possibility of implementing high-level cognition into a social robot that effectively handles HRI in the social domain [3, 4, 41, 64, 65]. In our conceptual framework, consciousness of emotion is the ability to detect affective information in a forced-choice condition, regardless of whether the choice decisions concern a low-level representation (features of the stimulus) or a metarepresentation (subjective knowledge). In this way, consciousness may be attributed to an extremely simple function that can be associated with the detection of different types of signals in the mind [33] and simply implemented into a social robot's design. In fact, the adoption of a computational approach to consciousness based on quantitative detection parameters indicates that consciousness, along with its subjective aspect, is a specific function of the human brain that can be implemented in an artificial social robot's construction.

We believe that the simplicity of these signal and threshold detection approaches, which allow consciousness and its accompanying perceptual and metacognitive processes to be studied quantitatively, will be optimal for implementing the C-E interactions into a social robot's system. Our attempts to operationalize the desired C-E interactions in a social robot within the signal-detection and threshold frameworks may provide valuable guidelines for implementing formal characteristics of conscious behavior into a social robot's construction and may subsequently be generalized to the much broader area of HRI. Finally, our understanding of the cognitive mechanisms underlying consciousness and its subjective aspects will serve as input to advance the cognitive sciences, including the philosophy of mind. In this way, our project will build a cross-disciplinary approach to designing effective HRI and machine consciousness that combines the cognitive sciences and social robotics.

Computational Models of Consciousness-Emotion Interactions in Social Robotics: Conceptual Framework
http://dx.doi.org/10.5772/intechopen.72369

#### **Acknowledgements**

We would like to thank our respective universities and departments for the financial support. This research was partially funded by the Institute of Psychology at the University of Zielona Góra (Poland) and Wrocław University of Science and Technology (Poland) under Grant No. 0401/0142/17 and Grant No. 0401/0098/17.

#### **Conflict of interest**

Remigiusz Szczepanowski declares that he has no conflict of interest. Małgorzata Gakis declares that she has no conflict of interest. Krzysztof Arent declares that he has no conflict of interest. Janusz Sobecki declares that he has no conflict of interest.

#### **Author details**

Remigiusz Szczepanowski<sup>1</sup>\*, Małgorzata Gakis<sup>2</sup>, Krzysztof Arent<sup>3</sup> and Janusz Sobecki<sup>4</sup>

\*Address all correspondence to: rszczepanowski@uz.zgora.pl

1 Institute of Psychology, Faculty of Education, Psychology and Sociology, University of Zielona Góra, Zielona Góra, Poland

2 Faculty of Psychology in Wroclaw, SWPS University of Social Sciences and Humanities, Wroclaw, Poland

3 Department of Cybernetics and Robotics, Electronics Faculty, Wroclaw University of Science and Technology, Wroclaw, Poland

4 Department of Computer Science, Faculty of Computer Science and Management, Wroclaw University of Science and Technology, Wrocław, Poland

#### **References**

[1] Breazeal C, Dautenhahn K, Kanda T. Social robotics. In: Siciliano B, Khatib O, editors. Springer Handbook of Robotics. 2nd ed. Cham: Springer International Publishing; 2016

[2] Breazeal C. Emotion and sociable humanoid robots. International Journal of Human-Computer Studies. 2003;**59**:119-155. DOI: 10.1016/S1071-5819(03)00018-1

[3] Scassellati B. Theory of mind for a humanoid robot. Autonomous Robots. 2002;**12**(1):13-24. DOI: 10.1023/A:1013298507114

[4] Lemaignan S, Warnier M, Sisbot EA, Clodic A, Alami R. Artificial cognition for social human–robot interaction: An implementation. Artificial Intelligence. 2017;**247**:45-69

[5] Marsella S, Gratch J, Petta P. Computational models of emotion. In: Scherer KR, Banziger T, Roesch E, editors. Blueprint for Affective Computing (Series in Affective Science). 1st ed. Oxford: Oxford University Press; 2010. pp. 21-46. ISBN: 9780199566709

[6] Baron-Cohen S. Mindblindness: An Essay on Autism and Theory of Mind. Cambridge: MIT Press; 1997

[7] Leslie AM. ToMM, ToBY, and Agency: Core architecture and domain specificity. In: Hirschfeld LA, Gelman SA, editors. Mapping the Mind: Domain Specificity in Cognition and Culture. Cambridge: Cambridge University Press; 1994. pp. 119-148

[8] Dias J, Mascarenhas S, Paiva A. FAtiMA modular: Towards an agent architecture with a generic appraisal framework. In: Emotion Modeling. Springer International Publishing; 2014. pp. 44-56

[9] Arent K, Tchoń K. Roboty społeczne [Social robots]. In: Postępy robotyki. Prace Naukowe Politechniki Warszawskiej, Elektronika. 2012;**182**(2):629-648 (in Polish)

[10] Bach J. Principles of Synthetic Intelligence PSI: An Architecture of Motivated Cognition. New York: Oxford University Press; 2009. Oxford Scholarship Online, 2009. DOI: 10.1093/acprof:oso/9780195370676.001.0001

[11] Marsella SC, Gratch J. EMA: A process model of appraisal dynamics. Cognitive Systems Research. 2009;**10**:70-90

[12] Fong T, Nourbakhsh I, Dautenhahn K. A survey of socially interactive robots. Robotics and Autonomous Systems. 2003;**42**(3):143-166

[13] Itti L, Koch C. Computational modelling of visual attention. Nature Reviews Neuroscience. 2001;**2**:194-203

[14] Pessoa L. Do intelligent robots need emotion? Trends in Cognitive Sciences. 2017;**21**(11):817-819

[15] Treisman AM, Gelade G. A feature-integration theory of attention. Cognitive Psychology. 1980;**12**:97-136
[16] Itti L, Koch C. A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research. 2000;**40**(10):1489-1506

[17] LeDoux JE, Brown RA. A higher-order theory of emotional consciousness. Proceedings of the National Academy of Sciences. 2017;**114**(10):E2016-E2025

[18] Nosal CS. Psychologiczne modele umysłu [Psychological Models of the Human Mind]. Warszawa: PWN; 1990 (in Polish)

[19] Tsuchiya N, Adolphs R. Emotion and consciousness. Trends in Cognitive Sciences. 2007;**11**(4):158-167

[20] Pessoa L. The Cognitive-Emotional Brain: From Interactions to Integration. Cambridge: MIT Press; 2013

[21] Mitchell DG, Greening SG. Conscious perception of emotional stimuli: Brain mechanisms. The Neuroscientist. 2012;**18**(4):386-398

[22] Fodor J. The Modularity of Mind. Cambridge, MA: MIT Press; 1983

[23] Rumelhart DE, McClelland JL. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Vol. 1: Foundations. Cambridge, MA, USA: MIT Press; 1986

[24] Block N. On a confusion about a function of consciousness. Behavioral and Brain Sciences. 1995;**18**:227-247

[25] Reggia JA, Huang DW, Katz G. Exploring the computational explanatory gap. Philosophies. 2017;**2**(1):5

[26] Reggia JA. Conscious machines: The AI perspective. In: AAAI Fall Symposium Series; September 2014; North America

[27] Chalmers DJ. The Conscious Mind: In Search of a Fundamental Theory. New York, Oxford: Oxford University Press; 1996

[28] Sun R, Franklin S. Computational models of consciousness. In: Zelazo P, Moscovitch M, editors. Cambridge Handbook of Consciousness. New York: Cambridge University Press; 2007. pp. 151-174

[29] Reggia JA. The rise of machine consciousness: Studying consciousness with computational models. Neural Networks. 2013;**44**:112-131

[30] Searle J. Mind: A Brief Introduction. New York: Oxford University Press; 2004

[31] Macmillan NA, Creelman CD. Detection Theory: A User's Guide. Mahwah, New Jersey: Lawrence Erlbaum Associates, Inc.; 2005

[32] Krantz DH. Threshold theories of signal detection. Psychological Review. 1969;**76**(3):308-324

[33] Szczepanowski R. Świadome i nieświadome przetwarzanie emocji w mózgu. Modelowanie w ramach teorii detekcji sygnałów [Conscious and Unconscious Processing of Emotion in the Brain: Modeling with the Signal Detection Approach]. Warsaw: PWN; 2014 (in Polish)

[34] Cleeremans A. Computational correlates of consciousness. Progress in Brain Research. 2005;**150**:81-98

[35] Szczepanowski R, Pessoa L. Fear perception: Can objective and subjective awareness measures be dissociated? Journal of Vision. 2007;**7**(4):10

[36] Lau HC, Rosenthal D. Empirical support for higher-order theories of conscious awareness. Trends in Cognitive Sciences. 2011;**15**(8):365-373

[37] Szczepanowski R. Conscious access to fear-relevant information is mediated by threshold. Polish Psychological Bulletin. 2011;**42**(2):56-64

[38] Baars BJ. The conscious access hypothesis: Origins and recent evidence. Trends in Cognitive Sciences. 2002;**6**(1):47-52

[39] Dehaene S. Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts. New York: Penguin; 2014. ISBN: 978-0-670-02543-5

[40] Szczepanowski R, Traczyk J, Fan Z, Harvey L Jr. Preferential access to emotion under attentional blink: Evidence for threshold phenomenon. Polish Psychological Bulletin. 2015;**46**(1):127-132

[41] Picard RW, Vyzas E, Healey J. Toward machine emotional intelligence: Analysis of affective physiological state. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2001;**23**(10):1175-1191

[42] Lau HC. A higher order Bayesian decision theory of consciousness. Progress in Brain Research. 2007;**168**:35-48

[43] de Gelder B, Pourtois G, van Raamsdonk M, Vroomen J, Weiskrantz L. Unseen stimuli modulate conscious visual experience: Evidence from inter-hemispheric summation. Neuroreport. 2001;**12**(2):385-391

[44] Seth AK, Dienes Z, Cleeremans A, Overgaard M, Pessoa L. Measuring consciousness: Relating behavioural and neurophysiological approaches. Trends in Cognitive Sciences. 2008;**12**(8):314-321

[45] Eriksen CW. Discrimination and learning without awareness: A methodological survey and evaluation. Psychological Review. 1960;**67**:279-300

[46] Fleming SM, Lau HC. How to measure metacognition. Frontiers in Human Neuroscience. 2014;**8**:443

[47] Rosenthal DM. Consciousness and Mind. Oxford: Clarendon Press; 2005

[48] Dennett DC. Are we explaining consciousness yet? Cognition. 2001;**79**:221-237

[49] Dennett DC. Sweet Dreams: Philosophical Obstacles to a Science of Consciousness. Cambridge: The MIT Press; 2005

[50] Koriat A. Metacognition and consciousness. In: Zelazo P, Moscovitch M, editors. Cambridge Handbook of Consciousness. New York: Cambridge University Press; 2007. pp. 289-325


[51] Karmiloff-Smith A. Beyond Modularity: A Developmental Perspective on Cognitive Science. Cambridge, MA: MIT Press; 1992

[52] Timmermans B, Schilbach L, Pasquali A, Cleeremans A. Higher-order thoughts in action: Consciousness as an unconscious re-description process. Philosophical Transactions of the Royal Society B: Biological Sciences. 2012;**367**:1412-1423

[53] Cleeremans A. The radical plasticity thesis: How the brain learns to be conscious. Frontiers in Psychology. 2011;**2**(86):1-12

[54] Szczepanowski R, Wierzchoń M, Szulżycki M. Neuronal network and awareness measures of post-decision wagering behavior in detecting masked emotional faces. Cognitive Computation. 2017;**9**(1):457-467

[55] Szczepanowski R. Signal detection approach in modeling consciousness-emotion interactions. Acta Neuropsychologica. 2017;**15**(1):89-96

[56] Lau HC. Are we studying consciousness yet? In: Weiskrantz L, David M, editors. Frontiers of Consciousness: Chichele Lectures. Oxford: Oxford University Press; 2008. pp. 245-258

[57] Szczepanowski R. Absence of advantageous wagering does not mean that awareness is fully abolished. Consciousness and Cognition. 2010;**19**(1):426-431

[58] Wierzchoń M, Wronka E, Paulewicz B, Szczepanowski R. Post-decision wagering affects metacognitive awareness of emotional stimuli: An event related potential study. PLoS One. 2016;**11**(8):e0159516

[59] Cichoń E, Szczepanowski R. Mechanizmy tłumienia niepożądanych odczuć i myśli w ujęciu metapoznawczym [Metacognitive approaches toward suppression mechanisms of unwanted thoughts and emotions]. Rocznik Kognitywistyczny. 2015;**8**:79-89 (in Polish)

[60] Milders M, Sahraie A, Logan S, Donnellon N. Awareness of faces is modulated by their emotional meaning. Emotion. 2006;**6**(1):10-17

[61] Yang E, Zald DH, Blake R. Fearful expressions gain preferential access to awareness during continuous flash suppression. Emotion. 2007;**7**(4):882

[62] Sergent C, Dehaene S. Is consciousness a gradual phenomenon? Evidence for an all-or-none bifurcation during the attentional blink. Psychological Science. 2004;**15**(11):720-728

[63] Dehaene S. Conscious and nonconscious processes: Distinct forms of evidence accumulation? In: Rivasseau V, editor. Biological Physics. Springer Basel; 2011. pp. 141-168. ISBN: 978-3-0346-0427-7

[64] Paiva A, Leite I, Ribeiro T. Emotion modeling for social robots. In: Calvo R, D'Mello S, Gratch J, Kappas A, editors. The Oxford Handbook of Affective Computing. New York: Oxford University Press; 2014. pp. 296-308

[65] Pereira A, Leite I, Mascarenhas S, Martinho C, Paiva A. Using empathy to improve human-robot relationships. In: Lamers MH, Verbeek FJ, editors. Human-Robot Personal Relationships: Third International Conference, HRPR. Berlin Heidelberg: Springer; 2010. pp. 130-138

## *Edited by Seyyed Abed Hosseini*

The book "Cognitive and Computational Neuroscience - Principles, Algorithms and Applications" addresses the following questions and statements:

- We know a lot about the brain!
- Key: models (should) make novel testable predictions on both neural and behavioral levels.
- Models are useful tools for guiding experiments.

The hope is that the information provided in this book will trigger new research that will help connect basic neuroscience to clinical medicine.

Published in London, UK © 2018 IntechOpen © liulolo / iStock
