Preface

Researchers have long sought to demystify the human brain through its signals, and today they are remarkably successful at it. In computer science, early attempts were largely algorithmic, focusing on artificial neural networks to solve complex software engineering problems. The possibilities offered by the Brain-Computer Interface (BCI) have since opened avenues for a plethora of research and development projects. Among the latest published advances is the much-discussed "thought-to-text" application, whereby an individual's thought can be transformed into textual information. One can readily imagine the potential of such an advance in medicine.

The mathematical toolkit for digital processing of brain signals, coupled with progress in sensors and electrodes, has prompted innovative products on the market. Devices that capture raw brain data, bundled with built-in analysis software, can now be offered at affordable prices. With greater computational power and better visualisation techniques, future generations of neuroscientists will undoubtedly be better informed, and the notion of a computer interfacing with the brain may well come to seem more natural than it does now.

Nonetheless, BCI remains an ocean to be explored, and this book aims to capture the activities ongoing globally. The chapters comprise deep research with solid mathematical foundations, supplemented with experimental results. Moreover, readers will find extensive lists of references pertaining to BCI alongside the authors' original contributions. The book is therefore a trusted primary source of information on BCI.

*New Frontiers in Brain Computer Interfaces* is a compilation of recent achievements spanning optical technologies, mental task-based BCI, speech enhancement, brain dynamics, and brain-computer modelling. Authors from across the globe have written the chapters to be accessible to students, lecturers, and researchers, as well as readers passionate about the topic. The book aims to disseminate the latest findings and research in BCI, offering avenues for future research by PhD students as well as food for thought about the next generation of BCI technologies.

We wish to thank all the authors for their efforts and goodwill in making this book project possible; without their contributions, recent knowledge about BCI would remain hidden. A vote of thanks also goes to the IntechOpen Author Service Manager Ms Dolores Kuzelj, the technical board, and the commissioning editor. Providing open access to knowledge is undoubtedly a noble cause.

> **Dr. Nawaz Mohamudally** Associate Professor, University of Technology, Port Louis, Mauritius

## **Manish Putteeraj**

School of Health Sciences, University of Technology, Port Louis, Mauritius

## **Seyyed Abed Hosseini**

Research Center of Biomedical Engineering, Mashhad Branch, Islamic Azad University, Mashhad, Iran

## **Chapter 1**

## Introductory Chapter: The "DNA Model" of Neurosciences and Computer Systems

*Manish Putteeraj and Shah Nawaz Ali Mohamudally*

## **1. Introduction**

Technological advances in recent years have moved beyond the conventional research-and-design framework that focused singularly on the problem at hand and resolved it with the best approaches from within a single field. Current systems build bridges across disciplines for problem solving, a process that creates multiple opportunities for sustainable, cutting-edge innovation. This inter-relational design across dimensions has served well in developing modern techniques such as the use of brain waves to understand human behavior, and in the rise of machine learning (ML) within artificial intelligence (AI). It has also led to popular and insightful methods such as brain-computer interfaces (BCI), which are gaining momentum especially in modern medicine. However, much remains to be done in neurosciences and computer systems to exploit each field's resources for the other's progression and to nurture the existing ecosystem. This chapter provides an overview of a "DNA model" concept that shows the interdependence of brain sciences and computer systems in research and points probing scientists toward unexplored areas.

## **1.1 Demystifying the brain**

The mammalian brain is arguably the most complex organ of the body, with over 100 billion neurons and glial cells distributed across the lobes for specific functions. The neurodevelopmental process consists of multiple stages, including migration, differentiation, maturation, synaptogenesis, pruning, and myelination, which together provide the basis for brain development [1]. ML and AI are overlapping constructs of neurocomputing that can be said to be founded on principles of synaptogenesis, the biological process underlying signal integration in the brain. In a simplistic overview, the mammalian body responds to the environment when sensory input, integrated at the neuronal level, triggers the relevant output. This is also reflected in decision-making, whereby the brain computes multiple scenarios for comparison, based on information processed in different brain regions, before initiating the action with the desired value [2]. Applying this kind of brain process has been made possible by modern noninvasive neuroimaging techniques such as electroencephalography (EEG), which enables the visualization and patterning of brain activity for informed decisions. Poli et al. [3] effectively demonstrated such applicability of BCI, using a neuroscience platform to enhance a noncommunicative group-based decision-making process that relied solely on supervised machine learning with an eightfold cross-validation approach. Although at an embryonic stage, this type of research could be exploited further in real-life settings to bypass argument and shorten the time taken for group decisions.
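The eightfold cross-validation scheme mentioned above can be sketched in a few lines. This is a minimal illustration, not the pipeline of Poli et al.: a nearest-centroid classifier stands in for their supervised learner, and the "EEG features" are synthetic.

```python
import numpy as np

def kfold_accuracy(X, y, k=8, seed=0):
    """Estimate classifier accuracy with k-fold cross-validation
    (k=8 mirrors the eightfold scheme described above)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))            # shuffle before splitting
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        # Train: one centroid per class in feature space
        centroids = {c: X[train][y[train] == c].mean(axis=0)
                     for c in np.unique(y[train])}
        # Test: assign each trial to the nearest class centroid
        preds = [min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
                 for x in X[test]]
        accs.append(np.mean(preds == y[test]))
    return float(np.mean(accs))

# Synthetic two-class "EEG feature" data (hypothetical, for illustration only)
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 1.0, (40, 4)),
               rng.normal(2.0, 1.0, (40, 4))])
y = np.array([0] * 40 + [1] * 40)
print(f"8-fold accuracy: {kfold_accuracy(X, y):.2f}")
```

Each of the eight folds serves once as the held-out test set, so every trial contributes to both training and evaluation without the two ever overlapping.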

## **1.2 Brain circuitry and artificial neural networks**

Brain circuitry, i.e., the synaptic connectivity of individual neurons within or across brain regions, is vital for potent signal integration and transmission. Neuronal interaction via synapses has likewise been shown to be critical for learning and memory recall, through recurrent signal generation at those points of connectivity [4, 5]. This is comparable to artificial neural networks (ANN), which use soft/fuzzy logic over multiple variables to autonomously generate tailored, adaptive solutions and which govern signal transmission via weights, a feature that to some extent mimics the synaptic plasticity underlying memory formation and recall in the brain [6, 7]. The crossbridge between neurosciences and computer systems is reflected not only in how such systems are set up and programmed but also in how information is fed in, in real time, to fine-tune outputs. Supervised learning algorithms such as back-propagation are needed to narrow the gap between expected and actual outputs and to make information parsing and prediction meaningful [8]. In physiological terms this is a feedback loop, enabling the correction of deviations at the systemic level or the normalization of neuromodulatory signals via neurofeedback systems. Interestingly, an ANN also recreates a biological neuronal system through "artificial firing at the nodal region," akin to action potentials, mediating the passage of signals downstream from a source to its effector region [9]. This feedforward process in artificial setups, analogous to saltatory conduction along axons in the brain, has been found efficient for the speed of signal transmission.
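The weight-adjustment loop described above, with the error gradient acting as the feedback signal, can be made concrete with a minimal back-propagation example on the XOR task. The architecture, seed, and learning rate below are arbitrary illustrative choices, not anything specified in the chapter.

```python
import numpy as np

# Minimal feedforward network trained with back-propagation on XOR.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)          # XOR targets

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)           # input -> hidden
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)           # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)                             # feedforward pass
    return h, sigmoid(h @ W2 + b2)

initial_error = float(np.mean((forward(X)[1] - t) ** 2))
for _ in range(10000):
    h, y = forward(X)
    dy = (y - t) * y * (1 - y)                           # output-layer gradient
    dh = (dy @ W2.T) * h * (1 - h)                       # back-propagated error
    W2 -= h.T @ dy;  b2 -= dy.sum(0)                     # weight updates close
    W1 -= X.T @ dh;  b1 -= dh.sum(0)                     # the feedback loop
final_error = float(np.mean((forward(X)[1] - t) ** 2))
print(f"MSE: {initial_error:.3f} -> {final_error:.3f}")
```

The two weight matrices play the role of adjustable "synapses": each update nudges them to shrink the gap between expected and actual outputs, which is exactly the feedback-loop behaviour the analogy above points to.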

## **1.3 Analysis of brain signals**

Innovations have demonstrated the use of brain waves, and of software modeled on the biological system, to enhance modern medicine with better diagnostic and treatment possibilities. As reviewed by Guggisberg et al. [10], probabilistic tractography algorithms can be used to determine the extent of damage to neuronal connectivity following a stroke, alongside rehabilitative techniques such as repetitive transcranial electric stimulation. Of particular interest, EEG-based BCI architectures have been a tremendous asset for patients suffering from neuromuscular disorders, facilitating the recovery of simple movement and locomotion aided by neuro-prosthetics. Such feats have been achieved with noninvasive signal acquisition; bio-signal amplification and filtering to increase the signal-to-noise ratio; exclusion of physiological artifacts; and EEG feature extraction and classification as the cue for output, using linear or nonlinear classifiers such as support vector machines (SVMs) or ANNs [11, 12]. While research on BCI-controlled prosthetics is at full steam, much has also been done on platforms that increase the accuracy of artificial limbs, as demonstrated by analyzing EEG signals with a quadratic time-frequency distribution (QTFD) coupled with a two-layer classification framework to distinguish individual finger movements within the same hand, thereby increasing the resolution and specificity of finger control [13]. Along the same lines, Lange et al. [14] processed EEG data using spectrally weighted common spatial patterns (spec-CSP) for feature extraction and correlated it with electromyogram (EMG) recordings for more potent data classification and refined movements. The applications of such technology on a neuroscience platform in modern medicine are inexhaustible.
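The acquisition-filtering-feature-extraction-classification pipeline just described commonly turns a filtered epoch into band-power features before classification. A minimal sketch on a synthetic epoch follows; the signal is simulated, not recorded EEG, and the simple periodogram stands in for a proper PSD estimator such as Welch's method.

```python
import numpy as np

def bandpower(signal, fs, band):
    """Average power of `signal` within the frequency `band` (Hz),
    computed from an FFT periodogram (simplified, no windowing)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return float(psd[mask].mean())

fs = 250                          # sampling rate (Hz), typical for EEG amplifiers
t = np.arange(0, 2, 1 / fs)       # one 2-second epoch
rng = np.random.default_rng(0)

# Synthetic epoch: a 10 Hz mu-band oscillation plus broadband noise.
epoch = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)

# Feature vector: power in conventional EEG bands.
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
features = {name: bandpower(epoch, fs, b) for name, b in bands.items()}
print({k: round(v, 3) for k, v in features.items()})
# Alpha-band power dominates, as expected for a 10 Hz oscillation.
```

A downstream SVM or ANN, as cited above, would then be trained on one such feature vector per epoch.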

*DOI: http://dx.doi.org/10.5772/intechopen.88713*

## **2. Conclusion**


Much progress has been made in the cardiovascular area on detecting coronary abnormalities, including arrhythmias and infarctions [15, 16], on resolving circadian patterns with respect to sleep [17] and fatigue detection [18], and on prosthetic vision [19] and deep brain stimulators [20], among others. Human-robot collaboration also forms an intricate and well-established research area for such applications, with commercialized products assisting production plants and surgeries well documented. However, as with all technological innovations, certain limitations have yet to be addressed. Using ML for diagnosis and treatment is always accompanied by dataset limitations: a shortage of features to feed the system can degrade ML performance, especially in disease diagnostics [21]. This is compounded by the system's vulnerability to its training data; erroneous or biased data will produce flawed outputs. When machine learning is used for psychological profiling, the fact that symptoms are shared across certain mental illnesses affects the accuracy of prediction, given the nuanced symptomatic classifications [22]. Beyond common methodological factors such as confounding variables and transboundary access to datasets for training algorithms, the major limitation remains that machine learning cannot yet incorporate sentient features; thus, in the context of robot-human collaboration or medical decision-making, implementation of such technology requires further research.

## **Author details**

Manish Putteeraj and Shah Nawaz Ali Mohamudally\* University of Technology, Mauritius

\*Address all correspondence to: alimohamudally@umail.utm.ac.mu

© 2020 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## **References**

[1] Gibb R, Kovalchuk A. Chapter 1 brain development. In: Gibb R, Kolb B, editors. The Neurobiology of Brain and Behavioral Development. Amsterdam, Netherlands: Elsevier, Academic Press; 2018. pp. 3-27

[2] Wunderlich K, Rangel A, Doherty JP. Neural computations underlying action-based decision making in the human brain. Proceedings of the National Academy of Sciences of the United States of America. 2009;**106**(40):17199

[3] Poli R, et al. Improving decision-making based on visual perception via a collaborative brain-computer interface. In: 2013 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA); 2013

[4] Hasselmo ME, Schnell E, Barkai E. Dynamics of learning and recall at excitatory recurrent synapses and cholinergic modulation in rat hippocampal region CA3. The Journal of Neuroscience. 1995;**15**(7):5249

[5] Silva AJ. Molecular and cellular cognitive studies of the role of synaptic plasticity in memory. Journal of Neurobiology. 2003;**54**(1):224-237

[6] Pagel JF, Kirshtein P. Chapter six neural networks: The hard and software logic. In: Pagel JF, Kirshtein P, editors. Machine Dreaming and Consciousness. San Diego: Academic Press; 2017. pp. 83-92

[7] Neves G, Cooke SF, Bliss TVP. Synaptic plasticity, memory and the hippocampus: A neural network approach to causality. Nature Reviews Neuroscience. 2008;**9**:65

[8] Chen Y-PP et al. 9.15 bioinformatics. In: Liu H-W, Mander L, editors. Comprehensive Natural Products II. Oxford: Elsevier; 2010. pp. 569-593

[9] Faber TL, Chen JI, Garcia EV. Chapter 4 - SPECT processing, quantification, and display. In: Zaret BL, Beller GA, editors. Clinical Nuclear Cardiology. Fourth ed. Philadelphia: Mosby; 2010. pp. 53-71

[10] Guggisberg AG et al. Brain networks and their relevance for stroke rehabilitation. Clinical Neurophysiology. 2019;**130**(7):1098-1124

[11] Bansal D, Mahajan R. Chapter 2 - EEG-based brain-computer interfacing (BCI). In: Bansal D, Mahajan R, editors. EEG-Based Brain-Computer Interfaces. Amsterdam, Netherlands: Elsevier, Academic Press; 2019. pp. 21-71

[12] Kumar DK et al. Prosthetic hand control: A multidisciplinary review to identify strengths, shortcomings, and the future. Biomedical Signal Processing and Control. 2019;**53**:101588

[13] Alazrai R, Alwanni H, Daoud MI. EEG-based BCI system for decoding finger movements within the same hand. Neuroscience Letters. 2019;**698**:113-120

[14] Lange G et al. Classification of electroencephalogram data from hand grasp and release movements for BCI controlled prosthesis. Procedia Technology. 2016;**26**:374-381

[15] Acharya UR et al. Application of deep convolutional neural network for automated detection of myocardial infarction using ECG signals. Information Sciences. 2017;**415-416**:190-198

[16] Majumdar A, Ward R. Robust greedy deep dictionary learning for ECG arrhythmia classification. In: 2017 International Joint Conference on Neural Networks (IJCNN); 2017


[17] Bin X, et al. Electrooculogram based sleep stage classification using deep belief network. In: 2015 International Joint Conference on Neural Networks (IJCNN); 2015

[18] Du L, et al. Detecting driving fatigue with multimodal deep learning. In: 2017 8th International IEEE/EMBS Conference on Neural Engineering (NER). 2017

[19] Ge C et al. A spiking neural network model for obstacle avoidance in simulated prosthetic vision. Information Sciences. 2017;**399**:30-42

[20] Benabid AL et al. Chapter 5 - deep brain stimulation: BCI at large, where are we going to? In: Schouenborg J, Garwicz M, Danielsen N, editors. Progress in Brain Research. Amsterdam, Netherlands: Elsevier, Academic Press; 2011. pp. 71-82

[21] Alizadehsani R et al. Machine learning-based coronary artery disease diagnosis: A comprehensive review. Computers in Biology and Medicine. 2019;**111**:103346

[22] Walter M et al. Translational machine learning for psychiatric neuroimaging. Progress in Neuro-Psychopharmacology and Biological Psychiatry. 2019;**91**:113-121



## Chapter 2

## Near-Infrared Optical Technologies in Brain-Computer Interface Systems

Korshakov Alexei Vyacheslavovich

## Abstract

This chapter presents a comprehensive review of near-infrared spectrometry (NIRS) and related methods as implemented in brain-computer interfaces (BCIs). The basic physical principles of such devices are described, and the review supplies readers with a summary of recent developments, the dynamics, and the perspectives of the field in question. Examples of NIRS usage in BCI systems are provided, and different experimental paradigms are described. The review deals mainly with noninvasive NIRS-BCIs, but also covers some uses of methods from neighboring fields (such as EEG), because of their importance in so-called hybrid BCI systems and in fundamental research, even where those methods are less relevant when applied separately. The phenomenon of fast optical signals (FOS) is described as potentially beneficial for NIRS-BCIs, and some research on connectivity, including NIRS-based studies, is covered. Some attention is also paid to the prospect of constructing future BCIs using optogenetics.

Keywords: brain computer interfaces (BCI), near-infrared spectrometry (NIRS), electroencephalography (EEG), magnetoencephalography (MEG), fMRI, optogenetics, human brain connectivity

## 1. Introduction

Most of us routinely manipulate objects in the real world on an everyday basis. Some people, however, for example those who have suffered severe nervous system damage, lack this basic and necessary ability. Moreover, we need not limit object manipulation to the hands: one can imagine an external actuator that is not part of the human body but is still controlled by one's will and suited to a specific manipulation task. Such a substitution for, or extension of, human abilities is certainly welcome in the modern world. Tempting as the idea is, its development was long held back by the difficulty of solving its key component: interpreting the commands born in the human mind as precisely as possible and transmitting them to an external device of any sort. This task can be accomplished, at some level, by brain-computer interface technology (BCI for short). The BCI problem was stated several decades ago; since then, a lot of ground has been covered, yet many discoveries still lie ahead.

At present, there are many "types" of BCI built on different principles. Among the many varieties, we emphasize one, based on the registration, interpretation, and classification of activity patterns of the human cerebral cortex recorded by one of the existing encephalographic methods, as the most popular. Other paradigms exist, but this is the one that allows BCI realizations based on most encephalographic methods, such as electroencephalography (EEG), magnetoencephalography (MEG), fMRI, and so on (see, for example, [1] for an MEG realization). Not all such realizations are of practical value, owing to their usually bulky design. In addition, we need to consider a family of newer encephalographic methods, fully developed relatively recently, based on the propagation of light and its interaction with living tissue. Light here must be understood in a wide sense, as radiation of different wavelengths: visible, infrared, and the range in between, the near infrared. Near-infrared spectrometry can thus be introduced as yet another way to achieve scientific and practical goals and to obtain additional fundamental results on the nervous system.


DOI: http://dx.doi.org/10.5772/intechopen.83345


## 2. Theoretical bases of near-infrared spectroscopy in biological measurements

Measuring a substance's characteristics and complex properties by observing radiation passed through it is not a new concept; X-ray imaging is common in medical practice nowadays. Similarly to imaging in the visible and plain infrared ranges of the electromagnetic (EM) spectrum, NIRS uses light of the near-infrared range to obtain information on tissue properties. "Optical" devices, mostly in the near infrared (2100–2400 nm wavelength range), are already widely used as pulsometers for heart-rate measurement and for glucose-level measurement [2–5]. Probing of "deeper" biological parameters is beginning to be routinely implemented, for example, measuring cerebral oxygenation during cardiac surgery, with BCIs as a next step.

In general, the infrared range of the EM spectrum can be conditionally divided into three subranges: near (λ = 740–2500 nm), middle (λ = 2500 nm–50 μm), and far (λ = 50–2000 μm). A distinctive feature of the near-infrared range is its relatively low absorption by the tissues of the human body. As a result, NIR light shows greater penetration depths, up to several centimeters (at most 3–5 cm) [6–10]; some types of biological tissue are, to a certain extent, virtually transparent in this spectral range. Nevertheless, elastic scattering during light propagation in tissue is very strong, which in turn limits the penetration depth. Under these conditions, beam attenuation is explained predominantly by isotropization, and measurements on adult humans, for example, are possible only in "reflected light." "Transmitted light" measurements are still possible in some cases: in infants, NIRS can be applied to diagnose birth brain damage, with the detectors of the NIRS equipment located on the side of the head opposite the light sources [11]. This is possible because of the higher optical transparency of infants' bones, skin, and skull coverings. One must admit, however, that the case just mentioned is certainly not a "BCI application," for several reasons.

All these factors, together with the frequently large optical density of biological matter, limit our ability to obtain sharp, high-contrast, precise images of small probing volumes (such as the millimeter-size voxels of fMRI). We must stress, however, that this is true only of today's techniques and technology.

Nevertheless, one can estimate the chromophore concentration distribution in somewhat larger volumes (usually several cubic centimeters) and thus obtain an image of the distribution of chemicals in the targeted tissue [10, 12]. That is sufficient for many applications, including various types of BCI.

The selection of chromophores (i.e., physiologically relevant substances) depends on the wavelengths of the light sources built into a specific piece of equipment. A given wavelength guarantees preferential interaction of the light with a chromophore that is structurally important for the specific biological tissue. From the physiological point of view, the most important chromophores include hemoglobin, glucose, myoglobin, and cytochrome-c-oxidase [13].

The overwhelming majority of NIRS medical devices, including those applicable to BCIs, are designed for measuring cerebral hemodynamics. These devices use light sources at 735–760 nm and 810–860 nm to target deoxygenated and oxygenated hemoglobin, respectively, and employ the so-called continuous-wave experimental paradigm: only the reduction in the power of the light beam at the detectors, after it passes through the highly scattering medium, is measured relative to the initial power generated by the sources [14]. Time delays and changes in phase and frequency parameters are neglected, although there are exceptions [9, 15]. A continuous-wave device allows one to build "images," i.e., distribution maps of changes in oxy- and deoxyhemoglobin concentration, and to measure the tissue oxygenation index (TOI) and the normalized total hemoglobin index (nTHI) [16].
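
As a minimal illustration of the continuous-wave principle just described, the quantity derived from a detector reading is the change in optical density, computed from the attenuated and the baseline beam power. The sketch below is ours; the function name and the choice of a base-10 logarithm are conventions, not prescriptions from the chapter:

```python
import math

def delta_od(detected_power, baseline_power):
    """Optical density change for one wavelength in the continuous-wave
    paradigm: only the attenuation of the beam is used; time delays, phase,
    and frequency changes are ignored, as described in the text."""
    return -math.log10(detected_power / baseline_power)
```

A tenfold attenuation of the beam corresponds to an optical density change of 1.0; an unattenuated beam gives 0.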

One must note how closely such observations of hemodynamic activity relate NIRS to fMRI BOLD: both methods measure blood oxygenation levels, each in its own way [6, 9].

A NIRS device itself usually consists of from one up to several tens of light sources and detectors. Each possible source-detector pair forms a "channel," which may or may not be informative: that is, it may or may not pass through a zone of intensive neural activity, and the light emitted by the source optode may or may not fade away entirely while passing through tissue. An analog-to-digital converter (ADC) reads the detectors' output; after filtering and preprocessing, usually a moving-average filter that suppresses heartbeat, slow respiratory waves, and other artifacts, information about the hemodynamics of the volume located "in between and slightly in depth" relative to the selected source-detector optode positions is ready for analysis. Inevitably, on the path from source to detector, a portion of the radiation is lost in tissue and never reaches the detector; the other portion, diffusely reflected from the target volume, is weakened, reaches the detector, and can be quantitatively estimated [17, 18].
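
The moving-average preprocessing step mentioned above can be sketched as follows; the simple boxcar form and the window handling are illustrative assumptions, since real devices differ in their filtering details:

```python
def moving_average(signal, window):
    """Boxcar smoother: each output sample is the mean of `window`
    consecutive input samples. In NIRS preprocessing such a filter is used
    to suppress heartbeat, respiratory oscillations, and similar artifacts."""
    if not 1 <= window <= len(signal):
        raise ValueError("window must be between 1 and len(signal)")
    return [sum(signal[i:i + window]) / window
            for i in range(len(signal) - window + 1)]
```

For example, a window of 2 over `[1, 2, 3, 4]` yields `[1.5, 2.5, 3.5]`.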

Laying out sources and detectors so that the distance between them on the scalp surface is approximately 3 cm ensures a probing depth of 3–5 cm, sufficient for detecting activation of human cortical areas, although this detection is indirect, via metabolic effects. Intensive neural activity (usually well distinguished from background activity) is a process accompanied by oxygen delivery, its absorption by neurons, and the evacuation of metabolic products [18–20]. The interpretation of such activity is, in practice, the basis of any BCI.

On the other hand, such a setup allows EEG measurements to be conducted in parallel with NIRS, in direct proximity to the NIRS channels. This is the key to building so-called hybrid BCIs. In principle, hybrid BCIs need not rely exclusively on EEG, but EEG is the most popular choice: it is simple and accessible, and technically it usually does not disturb the functioning of NIRS devices in any way, and vice versa.

Among the methods for describing light propagation in matter, one must note at least two. The first, the radiative transfer equation (RTE), with its various approximations such as the diffusion approximation, is precise but difficult to handle and almost impossible to solve for relatively complex problems [21]. The second is the phenomenological "modified Beer-Lambert law," specifically designed for a simplified description of the propagation of light, and of radiation in general, in turbid media [22, 23]. From the viewpoint of NIR light, the brain and its coverings can be considered strongly turbid. The law states that the attenuation of the initial beam depends on extinction coefficients, treated as constants specific to each chemical or chromophore, such as oxy- and deoxyhemoglobin [24]. Owing to the individual scattering events experienced by propagating photons, their path from the entry point to the exit point of the turbid medium is curvilinear; the so-called differential pathlength factor (DPF) describes this phenomenon in the modified Beer-Lambert law. The law must be used with caution, and some attention must be paid to the ranges within which the mentioned coefficients vary [25]. Neural tissue has extremely rich blood microcirculation [19], which makes the modified Beer-Lambert law look like a rather gross approximation at the scale of the blood flows one wishes to observe. In reality, the interaction of NIR light with hemoglobin molecules is much more complicated. For instance, hemoglobin itself has a complex molecular structure and thus influences light scattering and absorption (the form factor of the scattering particles). In addition, hemoglobin molecules are not free blood elements but are contained in dynamically changing structured particles, the erythrocytes, which move chaotically in the blood stream and have their own shape features [20, 22, 26]. All these microlevel factors play an important role in scattering and absorption, but at the scales observable with modern equipment (1–5 cm in linear dimensions), they can hardly be called significant. This makes the modified Beer-Lambert law applicable to the class of problems described here.
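
Under the modified Beer-Lambert law, the two-wavelength measurement described earlier reduces to a 2×2 linear system in the two hemoglobin concentration changes. The sketch below solves it by Cramer's rule; the function name, the coefficient layout, and the assumption of a well-conditioned (non-degenerate) system are our own illustrative choices:

```python
def mbll_concentration_changes(d_od_1, d_od_2, eps, distance_cm, dpf):
    """Solve the two-wavelength modified Beer-Lambert system
        dOD_lambda = (eps_HbO * dHbO + eps_HbR * dHbR) * d * DPF_lambda
    for (dHbO, dHbR). `eps` holds the extinction coefficients as
    ((eps_HbO_l1, eps_HbR_l1), (eps_HbO_l2, eps_HbR_l2)); `dpf` holds the
    differential pathlength factors (DPF_l1, DPF_l2). Assumes the two
    wavelengths give a non-degenerate system (nonzero determinant)."""
    (a, b), (c, e) = eps
    # Fold the effective path length (separation * DPF) into each row.
    a, b = a * distance_cm * dpf[0], b * distance_cm * dpf[0]
    c, e = c * distance_cm * dpf[1], e * distance_cm * dpf[1]
    det = a * e - b * c
    d_hbo = (d_od_1 * e - b * d_od_2) / det
    d_hbr = (a * d_od_2 - d_od_1 * c) / det
    return d_hbo, d_hbr
```

With synthetic coefficients, plugging in optical-density changes generated from known concentration changes recovers those changes exactly.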


Tomography of the cranium in the NIR spectrum is at present virtually not implemented in wide practice, especially in the context of the BCI applications considered here. Among the reasons are poor image quality and resolution, where the opposite is demanded, and some other general restrictions. Although the technology itself is promising and some optical tomography equipment exists, this state of affairs can be explained by the significant complexity, and as yet insufficient development, of the mathematical methods for processing tomographic images acquired at NIR wavelengths, as well as by some technical difficulties. Nevertheless, some approaches to the problem exist (see, for example, diffuse optical tomography, DOT [27–30]).

The ongoing improvement of scientific equipment and of mathematical and technical measurement methods now allows the detection of signal features that were impossible to observe before. In particular, some research is being conducted on "porting" to NIRS methods earlier developed for detecting evoked potentials, essentially by means of EEG. Among such studies, one may note reports [31–35] on the recording of so-called fast optical signals (FOS), which, in the authors' terms, are exactly the optical analogue of low-latency EEG evoked potentials. A reliable technique for registering this phenomenon would allow the creation of NIRS BCIs based on the evoked-potential paradigm.

It is also necessary to note fundamental research in the field. A good example is research on connectivity conducted with NIRS [36, 37]. It may provide additional data, always needed when constructing applied equipment for specific tasks, for instance in clinics for poststroke or neurotrauma patients, whose brain activity patterns undergo serious changes, as well as for healthy BCI users. In general, research on the connectivity of different cortical regions allows one not only to detect groups of NIRS channels registering activity specific to the BCI task, but also to treat channels with, say, relatively high noise levels as informative, thus yielding more information for identifying the activity pattern and hence increasing BCI system performance. This holds regardless of the registration modality: NIRS, EEG, and so on. At the development stage of a particular BCI system (testing and adaptation to a group of individuals or to a single user), information on connectivity can, for example, ease such a computationally difficult process as channel selection, that is, the inclusion in or exclusion from consideration of channels with more or less informative data output. Otherwise, the channel selection procedure requires the execution of some usually complex and often time-consuming search algorithm. Selecting relevant channels increases the classifiability of BCI target states, for it samples the most informative channels related to the task at hand [38].
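
A minimal sketch of data-driven channel selection, under the assumption that channels are scored by the absolute correlation of their signal with a task regressor; this is one of many possible criteria, and the chapter itself does not prescribe a specific algorithm:

```python
def rank_channels(channels, task):
    """Rank channels by |Pearson correlation| between their signal and a
    task regressor, most informative first. `channels` maps channel name
    to a sample list; `task` is a sample list of the same length."""
    def abs_corr(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sx = sum((v - mx) ** 2 for v in x) ** 0.5
        sy = sum((v - my) ** 2 for v in y) ** 0.5
        if sx == 0 or sy == 0:          # constant signal carries no task info
            return 0.0
        return abs(sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy))
    scores = {name: abs_corr(sig, task) for name, sig in channels.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

Channels whose output tracks the task regressor rank first; constant or unrelated channels fall to the bottom and can be excluded from further processing.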

Resting-state connectivity research is also beneficial, as it allows the development of approaches and methods needed for understanding the interrelations of cortical regions [37]. Similarly, functional connectivity research allows these interrelations to be understood during task performance. Registered activity patterns are not stable in nature; they undergo fluid and perpetual change. Understanding their spatial and temporal interrelations, i.e., their connectivity, will probably make it possible to predict, or at the very least delineate, their changing boundaries during mental task execution; such considerations may increase BCI functionality and stability in general. A well-established method for connectivity research is independent component analysis (ICA) [39, 40]. ICA can separate the whole data set into spatially and temporally independent components, thus separating useful, informative data from noise and artifacts (such as the oculographic artifacts produced by eye movements) and increasing BCI performance. It can be used to relate electrical or metabolic (i.e., NIRS-registered) activity more precisely to a specific cortical region, and hence to a mental state. For example, an ICA component localized mainly in the motor regions can, by association, be viewed as a reflection of some motor act or motor imagery [41], while ICA components localized in Broca's area signal acoustic perception [42]. Among other processing methods that may be useful for separating signals into spatially and temporally localized components, or into specific process-driven components, one should mention the empirical mode decomposition method and its modifications [43, 44].
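
As a sketch of the idea behind ICA-based separation: after whitening, two-channel ICA reduces to finding a single rotation, which the toy routine below locates by grid search over a kurtosis-based contrast. This is an illustration only, assuming a well-conditioned two-channel recording of one super-Gaussian and one sub-Gaussian source; practical pipelines use established algorithms such as FastICA or Infomax:

```python
import math

def _kurtosis(y):
    """Excess kurtosis of a zero-mean sequence."""
    n = len(y)
    m2 = sum(v * v for v in y) / n
    m4 = sum(v ** 4 for v in y) / n
    return m4 / (m2 * m2) - 3.0

def toy_ica_2ch(x1, x2, steps=360):
    """Toy two-channel ICA: de-mean, whiten via the 2x2 covariance
    eigendecomposition, then grid-search the rotation angle maximizing the
    summed |kurtosis| of the rotated components."""
    n = len(x1)
    m1, m2 = sum(x1) / n, sum(x2) / n
    a = [v - m1 for v in x1]
    b = [v - m2 for v in x2]
    # Covariance matrix entries.
    c11 = sum(v * v for v in a) / n
    c22 = sum(v * v for v in b) / n
    c12 = sum(u * v for u, v in zip(a, b)) / n
    # Eigendecomposition of the symmetric 2x2 covariance.
    t, d = (c11 + c22) / 2.0, c11 * c22 - c12 * c12
    gap = math.sqrt(max(t * t - d, 0.0))
    l1, l2 = t + gap, t - gap
    th = math.atan2(l1 - c11, c12)          # angle of first eigenvector
    ce, se = math.cos(th), math.sin(th)
    # Whitened signals: decorrelated, unit variance.
    z1 = [(ce * u + se * v) / math.sqrt(l1) for u, v in zip(a, b)]
    z2 = [(-se * u + ce * v) / math.sqrt(l2) for u, v in zip(a, b)]
    best, best_y = -1.0, (z1, z2)
    for k in range(steps):
        ang = math.pi * k / (2 * steps)     # rotations in [0, pi/2) suffice
        c_, s_ = math.cos(ang), math.sin(ang)
        y1 = [c_ * u + s_ * v for u, v in zip(z1, z2)]
        y2 = [-s_ * u + c_ * v for u, v in zip(z1, z2)]
        score = abs(_kurtosis(y1)) + abs(_kurtosis(y2))
        if score > best:
            best, best_y = score, (y1, y2)
    return best_y
```

Mixing a spiky (super-Gaussian) source with a sinusoidal (sub-Gaussian) one and feeding the mixtures to this routine recovers components that each correlate strongly with one original source, up to sign and order, which is exactly the ambiguity real ICA also carries.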

In relation to connectivity research, one must also note the use of repetitive transcranial magnetic stimulation as a more invasive measure [45].

Speaking of fundamental research and the role of optics in brain research, one must mention optogenetics, a quickly developing field that uses light (in some cases of the NIR range) to activate and/or observe genetically modified neural tissue [46, 47].

It would not be far from reality to say that, besides these interesting and rapidly developing fields, there is plenty of room for evolution in conventional NIRS-based and hybrid BCIs. NIRS-based BCI and imaging technology have many advantages, among them noninvasiveness, safety for users, portability, and, not least, a fairly low price compared, for example, with fMRI setups.

## 3. Material's analysis basics

In order to evaluate the present-day level of research in the field, one must somehow characterize the ongoing work, information on which reaches the scientific community through published papers. To accomplish this task, one must introduce some form of classification for these objects. Through a general analysis of the texts and of the keywords most often used in them, we arrived at a hierarchical classification system for the publications. This classification system forms the framework in which the subsequent information is organized (see Table 1). The system is "emergent," i.e., it was formed from precedents: whenever a publication appeared that did not fit any of the already existing categories in the hierarchy, a new category was reserved. The roots of the hierarchy tree are the
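
The "emergent" classification procedure described above can be sketched as follows; the category names and keyword markers here are purely illustrative, not those of Table 1:

```python
def classify_emergent(papers, taxonomy):
    """Assign each paper to the first category whose keyword markers match;
    reserve a new category when nothing matches, mirroring the
    'formed from precedents' procedure. `papers` is a list of
    (title, keyword_list) pairs; `taxonomy` maps category -> marker list
    and is extended in place."""
    assignments = {}
    for title, keywords in papers:
        hit = None
        for category, markers in taxonomy.items():
            if any(m in keywords for m in markers):
                hit = category
                break
        if hit is None:                 # unseen topic: reserve a new category
            hit = "new:" + (keywords[0] if keywords else "uncategorized")
            taxonomy[hit] = list(keywords)
        assignments[title] = hit
    return assignments
```

A paper whose keywords match no existing branch spawns a fresh category, so the taxonomy grows with the literature rather than being fixed in advance.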

description of light and radiation propagation in turbid media in general [22, 23]. From the NIR-light viewpoint, the brain and its coverings can be considered strongly turbid media. The law states that the attenuation of the initial beam depends on extinction coefficients, treated as constants specific to each chemical or chromophore, such as oxy- and deoxyhemoglobin [24]. Because of the individual scattering events experienced by propagating photons, their path from the entry point to the exit point of a turbid medium is curvilinear. The so-called differential pathlength factor (DPF) serves to describe this phenomenon in the modified Beer-Lambert law. The law must be used with caution, and some attention must be paid to the ranges within which the mentioned coefficients vary [25]. Neural tissue has extremely rich blood microcirculation [19], which makes the modified Beer-Lambert law look like a rather gross approximation at the scale of the blood flows one wishes to observe. In reality, the interaction of NIR light with hemoglobin molecules is much more complicated. For instance, hemoglobin itself has a complex molecular structure and thus influences light scattering and absorption (the form factor of the scattering particles). In addition, hemoglobin molecules are not free blood elements but are contained in dynamically changing structured particles, erythrocytes, which move chaotically in the bloodstream and in turn have their own shape features [20, 22, 26]. All these factors play an important role in scattering and absorption at the microlevel, but at the scales observable with modern equipment (1–5 cm in linear dimensions) they can hardly be called significant. This makes the modified Beer-Lambert law applicable to the described class of problems.

Tomography of the cranium in the NIR spectrum is at present virtually not implemented, especially in the context of the BCI applications considered here. Among the reasons for this state of affairs are poor image quality and resolution, where the opposite is demanded, and some other general restrictions. Although the technology itself is promising and some optical tomography equipment exists, this situation can be explained by the significant complexity of, and the still insufficiently developed mathematical methods for, processing tomographic images acquired at NIR wavelengths, as well as by some technical difficulties. Nevertheless, some approaches to the problem exist (for example, see diffuse optical tomography, or DOT [27–30]).

The ongoing improvement of scientific equipment and of mathematical and technical measurement methods now allows the detection of signal features that were impossible to observe before. In particular, there is research on, so to speak, the "portability" to NIRS of methods developed earlier for detecting evoked potentials, essentially by means of EEG. Among such studies, one may note reports [31–35] on the recording of so-called fast optical signals (FOS), which, in the authors' terms, are exactly the optical analogue of low-latency EEG evoked potentials. A reliable technique for registering this phenomenon would allow the creation of a NIRS BCI based on the evoked-potential paradigm.

It is also necessary to note fundamental research in the field. A good example is research on connectivity conducted with NIRS [36, 37]. Such work may provide additional data, always needed when constructing applied equipment for specific tasks, for instance in clinics for poststroke or neurotrauma patients, whose brain activity patterns undergo serious changes, as well as for healthy BCI users. Generally, research on the connectivity of different cortex regions allows one not only to detect groups of NIRS channels registering activity specific to the BCI task, but also to treat channels with, say, relatively high noise levels as informative, thus producing more information for identifying the activity pattern and hence increasing BCI system performance. This holds regardless of the registration modality: NIRS, EEG, etc. At the development stage of a particular BCI system (testing and adaptation to a group of individuals or to a single user), information on connectivity can, for example, ease such a computationally difficult process as channel selection.
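To make the modified Beer-Lambert law concrete: measuring attenuation changes at two NIR wavelengths yields a 2×2 linear system whose solution is the change in oxy- and deoxyhemoglobin concentration. The sketch below is illustrative only; the extinction-coefficient and DPF values are placeholders, not tabulated constants.

```python
import numpy as np

# Modified Beer-Lambert law:
#   dA(λ) = [ε_HbO2(λ)·Δ[HbO2] + ε_HbR(λ)·Δ[HbR]] · d · DPF(λ)
# With attenuation changes dA measured at two wavelengths, the two
# chromophore concentration changes follow from a 2x2 linear solve.

def mbll_concentration_changes(dA, eps, d, dpf):
    """dA: attenuation changes at 2 wavelengths;
    eps: 2x2 extinction coefficients, rows = wavelengths, cols = [HbO2, HbR];
    d: source-detector separation (cm); dpf: differential pathlength factor
    per wavelength (accounts for the curvilinear photon paths)."""
    L = d * np.asarray(dpf)                    # effective pathlength per wavelength
    A = np.asarray(eps) * L[:, None]           # scale each wavelength's equation
    return np.linalg.solve(A, np.asarray(dA))  # [Δ[HbO2], Δ[HbR]]

# Illustrative (not tabulated) coefficient values, for the sketch only:
eps = [[0.30, 1.50],   # 760 nm: [ε_HbO2, ε_HbR]
       [1.20, 0.80]]   # 850 nm
dHb = mbll_concentration_changes(dA=[0.01, 0.02], eps=eps, d=3.0, dpf=[6.0, 5.0])
print(dHb)  # [Δ[HbO2], Δ[HbR]] in the units implied by ε
```

The DPF enters only as a per-wavelength scaling of the pathlength, which is why the law breaks down when the coefficients leave the ranges over which they can be treated as constants.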




Near-Infrared Optical Technologies in Brain-Computer Interface Systems

DOI: http://dx.doi.org/10.5772/intechopen.83345














The basic types of publications include, for example: whether the article is a review; whether it describes real experiments in the ERS/ERD or EP paradigms (event-related synchronization/desynchronization or evoked potentials); or whether it presents a new, unique research method.


The second subtype was formed by the types of experiments described, such as real experiments on a functioning BCI model; descriptions of experiments "classical" for NIRS (for instance, various memory tasks, finger or palm tapping, mental arithmetic); and so on. The last category covered "nonclassical" experiments, i.e., mental tasks or problem solving that had rarely or never been published before. A separate category of this subtype was allocated to the registration of fast optical signals (FOS), or "optical evoked potentials," with or without signal averaging. Research on connectivity and on optogenetics was not included in the classification system and was treated separately, for it constitutes merely a supporting value in the BCI context.

The third subtype recorded whether the BCI considered in the article was a "hybrid." Along with separate EEG and NIRS measurements, there are papers on EEG + NIRS hybrids and on hybrids of other kinds, for example NIRS + fMRI.

The fourth criterion of classification was the brain area over which measurements were conducted: for example, motor areas, frontal and prefrontal areas of the human cerebral cortex, the temporal area, the occipital cortex, or something else (for instance, EEG measurements conducted over motor areas and NIRS measurements over frontal and prefrontal areas).

The fifth criterion was whether the goal or objective declared in the paper was achieved, i.e., whether the work can be treated (or was treated by the authors) as a success.

Finally, the sixth subtype of the hierarchical emergent classification was devoted to papers presenting some special characteristic or unique feature. Among such features were how many types of experiments were described and conducted in the reported work, or whether the BCI was used to control a robot or an external assistive actuator, etc.

Certainly, the given classification is neither complete nor a closed system; nor is it unique. However, its application is quite justified, since it is at least "to some extent" stable, i.e., robust to the addition of new information. One can come to this conclusion from the fact that the classification structure ceased to change after processing only about 25 randomly chosen publications from the considered pool. It should also be noted that some categories were, by their nature, not represented in the considered pool of publications during the analysis, but were nonetheless included in (or rather drawn into) the classification system: for example, a class of articles on the propagation of light through biological tissue and the physical properties of this process, or on FOS, which are only indirectly connected to BCI. The focus of such articles is displaced from practical BCI applications toward the general physical laws on whose basis any NIRS device is constructed, and toward fundamental research on the nervous system. Nevertheless, some processed papers could additionally be classified into this last category, bearing in mind their distant relation to the main subject (i.e., their distance from the root branches of the classification system described).
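The "emergent" procedure described above, where a category is reserved on its first precedent, can be sketched in a few lines; the subtype and category names here are hypothetical, for illustration only.

```python
from collections import defaultdict

# Sketch of an "emergent" classification: each paper receives one category
# per subtype; a category unseen so far is reserved (added) on first precedent.
taxonomy = defaultdict(set)          # subtype -> set of known categories

def classify(paper, subtype, category):
    taxonomy[subtype].add(category)  # no-op if the category already exists
    paper.setdefault("labels", {})[subtype] = category
    return paper

paper = {"title": "Example NIRS-BCI study"}
classify(paper, "general type", "real BCI experiment")
classify(paper, "hybrid type", "EEG + NIRS")        # new category -> reserved
print(sorted(taxonomy["hybrid type"]))  # ['EEG + NIRS']
```

The stability property noted above corresponds to `taxonomy` ceasing to grow once enough papers have been processed.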

## 4. Latest achievements in constructing BCI over NIRS and "hybrid" BCI

The research presented here provides a detailed analysis of over 100 articles with various publication dates. The total number of papers under consideration, including those possibly only indirectly connected with the BCI subject but connected to related fields, was 178. Some of these papers used NIRS equipment in BCI applications, for comparative analysis and reliability testing of NIRS data versus fMRI-BOLD, or for other fundamental or technical purposes. The earliest publication that has come into


| Subtype | Categories |
|---|---|
| General type (subtype #1) | Reviews; real BCI experiments description; real functioning prototype of BCI over ERD paradigm; real functioning prototype of BCI over EP paradigm; unique method description |
| Experiment type (subtype #2) | Classical experiments (finger tapping, memory tasks); nonclassical experiments; no experiments described in detail; FOS |
| Hybrid type (subtype #3) | EEG + NIRS hybrid; other hybrids (not an EEG + NIRS); EEG only, but some hybrid declared (+NIRS); NIRS only; EEG only |
| Observation area (subtype #4) | Motor area; frontal & prefrontal area; temporal area; occipital area; something else (for instance, EEG over motor area, NIRS over frontal area); just about everything in existence |
| Result type (subtype #5) | Success!; failed to achieve success; failed to achieve success, but there is a hope for the future; review's statements |
| Additional type (subtype #6) | Nothing special (reviews or methodical works); several types of experiments considered (motor area, Broca area, etc.); only one type of experiment described, with indirect fundamental conclusions drawn from the results; only one type of experiment described; robot control; choice of classifiers, usage of exotic classifiers, improvement of classifiers or of feature-selection algorithms; technical aspects of NIRS equipment design; modeling of neural activity/hemodynamics; experiment with a large deviation toward technical details of instruments and equipment |

Table 1.
Hierarchical emergent classification system.
the view was dated 1993, and the latest 2017. This affirms that research on the human brain and nervous system using NIRS is a new and fast-developing field of science. The data confirm this statement, as shown in Figure 1, where it is clearly visible that the number of publications over the past years shows general growth. The year 2017 and beyond is, of course, not yet indicative and can be analyzed only in retrospect. Nevertheless, given the increasing interest in NIRS, in BCI technology, and in related fields, the tendency will most certainly persist.

Among the types of experiments, the leading position (by number) is occupied by papers describing experiments in both the ERS/ERD and EP paradigms. Among them were a number of papers on classical-type experiments (i.e., those accepted and well known in cognitive research, for instance mental arithmetic). A smaller but nevertheless noticeable share consists of publications with no detailed descriptions of experiments, or in which experiments were not conducted at all. A still smaller portion describes "nonclassical" experiments. See [48] for experiments recording motor cortex activity during the movements of a person peeling apples. Paper [49] describes a BCI in which one of the detected states is the user's intention to carry out "speech activity," i.e., the intention to say something. This state was used as a sign of the examinee's wish to use the BCI system; recognition accuracy for this mental state reaches 73%, and segments of NIRS records in which the examinee pronounced words can be distinguished from the "rest state."

So far, imagination of movements and tapping correspond to the motor areas; cognitive tasks, to the frontal and prefrontal cortex areas; and visual recognition, naturally, to the occipital cortex. Experiments with the auditory system, for example detecting the audibility of a sound or pronouncing internal speech, demand registration over the temporal area and prefrontal cortex. All these registration schemes are traced in the articles of the considered pool. Hybrid registration also took place in such experiments: usually, EEG was recorded over the motor cortex and NIRS over the frontal and prefrontal areas. Most papers fell into this category, almost 44% of the general number of publications. Proceeding from the experience of NIRS registration, this is a superior strategy of BCI construction, since it combines simultaneous EEG registration of event-related desynchronization of brain rhythms, providing advantages in time resolution, with the reliability of NIRS cognitive responses. Preferences in registration areas, and hence in experiment types, are illustrated in Figure 3.


Figure 2. BCI hybrid type (paper types & count).

Figure 3. Registration areas (observation area type & paper count).

The smallest part was formed by works published on FOS, connectivity, and optogenetics, though the last two were viewed here more as supporting topics.

Despite the obvious advantages widely described in the scientific literature, hybrid BCIs are simply not overwhelmingly popular. BCIs over NIRS only lead by number of publications (at least in the considered pool, up to 2017); EEG + NIRS hybrids actually take only second place. Third place is occupied by BCIs over EEG only that still contain a considerable number of references to work with NIRS. The remainder is made up of articles describing BCIs over EEG in which future research on BCIs over NIRS was declared, or of works comparing the two technologies. Papers describing a hybrid BCI over NIRS and some modality other than EEG constitute the smallest pool. The distribution of publications by these properties is presented in Figure 2.

The areas over which NIRS optodes and/or EEG electrodes were usually situated correspond to the type of mental task planned for a particular experiment.

Figure 1. Number of publications on "NIRS" by years.




Figure 4. Special features of considered papers.

From this analysis, one can say that the largest number of papers report the results of only one "pilot" experiment with NIRS or with a hybrid modality. An equally big share consists of works without any outstanding properties (mainly reviews, and also works whose results are descriptive or methodical). Third place by frequency of occurrence is shared by works in which several types of experiments were conducted and by works in which some fundamental conclusions were drawn from the results. Articles about mathematical features of BCI applications or about classifiers of NIRS signals form another category; papers on the choice of feature-selection algorithms (including automatic feature selection) are also included here.

The remaining small part consists of articles devoted to technical aspects of constructing BCI-over-NIRS devices and systems, and to the control of assistive robots, external actuators, and artificial limbs. Hereby, NIRS begins to join the innovation process in the design of anthropomorphic mechanisms, in addition to medicine. One must also note articles studying hemodynamics. The situation can be summarized by the phrase: "NIRS is only starting to extend in those directions." The mentioned features of the considered pool of articles are illustrated in Figure 4 (for explanations of the section names, see Table 1).

## 5. Particular examples of BCI on NIRS and hybrid BCI

Let us now describe some interesting specific examples of the considered device type. See [50–52] for technical features and designs of NIRS devices and applications; the potential for commercial application of the considered technologies is discussed in [53]. Paper [54] presents a review of the most recent achievements in the use of BCIs for the rehabilitation of poststroke patients, with a stress on methodology, pointing out that such rehabilitation led to improvement of the patients' motor function. Some challenges and unresolved problems are also discussed. The efficiency of NIRS-BCI was


compared with respect to the implementation of visual feedback versus a placebo feedback; the first case showed substantial improvement of motility. In [55], NIRS research of the Broca area was conducted for the purpose of recognizing the internal pronunciation of Chinese-language vowels. The paper represents an attempt to introduce a measure of the presence of consciousness for paralyzed patients and patients with locked-in syndrome (LIS); the study was built around the concept of a "consciousness index" and attempted to introduce an objective numerical criterion of the presence of consciousness in the patient.

Work [56] was devoted to estimating the efficiency of using various types of mental tasks in NIRS-BCI. Until now, studies in the field of NIRS-BCI have focused on increasing the classification accuracy of various mental tasks. The given paper searches for the pairs of mental states that would be best from the point of view of BCI customization, i.e., best classifiable under the given conditions. The event-related hemodynamic response was recorded on an eight-channel NIRS system for various mental tasks in seven subjects. The beginning of each condition simulation was designated by a sound, followed by a 15-s pause to eliminate the "aftershock" in hemodynamics caused by the sound; the pause between trials varied from 10 to 15 s to eliminate habituation effects. The mental conditions or states under investigation were the following: LMI, left motor imagery (i.e., imagining motor activity of the left arm); RMI, right motor imagery; FMI, foot motor imagery; SING, mental singing; SUB, sequential subtraction of small numbers; MUL, mental multiplication; ROT, mental rotation of a given three-dimensional (3-D) geometric figure; and WRT, mental character writing. Based on this approach, the set of "recommended" mental tasks with rather high classification accuracy comprises the following list: mental multiplication, mental rotation, and motor imagery of the right hand. Pairs formed of any two of the three mental states listed above show the highest mean classification accuracy using linear discriminant analysis (LDA), the method most used in BCI applications, as the authors state. The authors expect their results to be useful for reducing the time spent in research, technical, and methodical work on determining the best individually specific mental conditions and their combinations. LDA, as the classification algorithm, used three features: oxy-, deoxy-, and total hemoglobin. Naturally, the choice of the best mental state was made based on the best distinguishability by the LDA algorithm. It was also stated that the selection was influenced by a search over channel position dependencies, i.e., a search for the most informative channel.
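To make the role of LDA concrete, here is a minimal two-class discriminant on synthetic "hemodynamic" features (oxy-, deoxy-, and total hemoglobin per trial). All numbers are invented for illustration and do not reproduce the data of [56].

```python
import numpy as np

# Minimal two-class LDA, sketching how a mental-state pair is scored:
# each trial yields three features (Δoxy-, Δdeoxy-, Δtotal-hemoglobin).
rng = np.random.default_rng(0)

def lda_fit(X0, X1):
    m0, m1 = X0.mean(0), X1.mean(0)
    # pooled within-class covariance (LDA assumes a shared covariance)
    S = (np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)) \
        / (len(X0) + len(X1) - 2)
    w = np.linalg.solve(S, m1 - m0)      # discriminant direction
    b = -w @ (m0 + m1) / 2               # threshold midway between class means
    return w, b

def lda_predict(X, w, b):
    return (X @ w + b > 0).astype(int)   # 0 = state A, 1 = state B

# Synthetic feature clouds for two mental states (illustrative only)
X0 = rng.normal([0.0, 0.0, 0.0], 0.3, size=(40, 3))   # e.g., mental rotation
X1 = rng.normal([1.0, -0.5, 0.5], 0.3, size=(40, 3))  # e.g., mental multiplication
w, b = lda_fit(X0, X1)
labels = np.r_[np.zeros(40), np.ones(40)]
acc = (lda_predict(np.vstack([X0, X1]), w, b) == labels).mean()
print(f"classification accuracy: {acc:.2f}")
```

Ranking all state pairs by such an accuracy score is, in essence, how a "recommended" pair of mental tasks can be chosen for a given user.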

Often, the most natural task for the examinee was imagining the movement of their own hands, and this is applied in many BCI systems based on the ERD/ERS paradigm. Work [57] presents research on, and proof of, the fact that imagination of right- and left-hand movements produces distinguishable patterns of hemodynamic activity, which can be classified with a linear classifier and are thus applicable in BCI systems. The experimental conditions follow a rather standard scheme nowadays: 10 healthy examinees, kinesthetic imagination, flexions of the left and right hands, and command indications presented on a computer screen. Signals from the right- and left-lateral motor cortex were registered simultaneously using a multichannel continuous-wave NIRS system. Linear discriminant analysis was used as the classifier, which allowed achieving average classification accuracies over the examinees of 73.35 and 83.0% for the right and left hands, respectively.

In works [58, 59], classification of multichannel "NIRS patterns" of motor-imagery states is considered in order to construct a BCI for people with limited abilities. Work [59] mentions previous studies by other authors in which other paradigms were also considered, for example imagination of movements, recall of specific emotions, concentration, and (reaction to) electric stimulation. The general scheme of classical experiments, information gathering,

By this analysis, one can say that the most number of papers represents publications of work results upon only one—"pilot" experiment with NIRS or with hybrid modality. Equivalently, a big share constitutes the works, which do not have any outstanding properties (mainly its reviews, and also the works, results of which have descriptive or methodical properties). Third place by frequency of occurrence is divided by works in which different types of experiments were conducted and by works in which some fundamental conclusions were drawn from those results. Articles about mathematical features of BCI application or about classifiers of NIRS signals form another category. Here, papers on choice of signals' features selection

The remaining small part is made by articles devoted to technical aspects of BCI over NIRS devices and systems construction, control of assistive robots and external actuators, and artificial limbs. Hereby, NIRS begins to connect with innovation process in area of anthropomorphic mechanisms designing in addition to medicine. One also must note articles on hemodynamics studying. Such situation can be generalized by phrase: "NIRS only starts to extend to those directions." Mentioned features of a considered pool of articles are illustrated in Figure 4 (for explanations

Let us now introduce description of some interesting specific examples on considered device type. One should see [50–52] to know about technical features and designs of NIRS devices and applications. It is possible to read about potential of commercial application of considered technologies in [53]. Paper [54] represents the review of the most recent achievements in the field of BCI uses in rehabilitation of poststroke patients with a stress on methodology, which pointed out that results of such rehabilitation led to improvement of the patients' motor act. Some challenges and unresolved problems are also discussed. Efficiency of NIRS-BCI was

algorithms are also included (including automatic feature selection).

5. Particular examples of BCI on NIRS and hybrid BCI

of section names, see Table 1).

Figure 4.

16

Special features of considered papers.

New Frontiers in Brain-Computer Interfaces

signal reception, choice and extraction of features, and a command/classification estimation is described. The detailed description of NIRS signals processing is resulted; trends, typical for NIRS signal, and ways to obviate them by means of averaging were also mentioned, and two types of averaging were applied. Some theoretical aspects based on NIRS were given, such as explanation of nature of changes in hemodynamics, accompanied by neurons activity. Problems of influence of respiratory cycles, Mayer's waves, and heart responses described were also considered. As reliable methods of a noise reduction in NIRS, general linear model (GLM) and ICA were offered and used. Main objective of this paper was to find a difference between an "active" (i.e., "key-") and "passive" mental states using NIRS.
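Several of the works above rely on two-class LDA over oxy-, deoxy-, and total-hemoglobin features, and [56] in particular ranks task pairs by their LDA separability. The pair-ranking idea can be sketched as follows. The data here are synthetic stand-ins: the task names match [56], but the feature values, the diagonal-covariance simplification of LDA, and the train-on-test evaluation are illustrative assumptions, not the published setup.

```python
import random
from itertools import combinations

random.seed(0)

def make_trials(mean, n=30, noise=0.5):
    """Synthetic trials; each is an [HbO, HbR, HbT] feature vector."""
    return [[m + random.gauss(0.0, noise) for m in mean] for _ in range(n)]

# Hypothetical per-task mean features (oxy-, deoxy-, total hemoglobin).
tasks = {
    "MUL": make_trials([1.0, -0.6, 0.4]),
    "ROT": make_trials([0.2, 0.5, 0.9]),
    "RMI": make_trials([-0.8, 0.1, -0.7]),
}

def lda_accuracy(xs, ys):
    """Fit a diagonal-covariance two-class LDA and score it on the same
    trials (a real study would cross-validate instead)."""
    d = len(xs[0])
    mx = [sum(t[j] for t in xs) / len(xs) for j in range(d)]
    my = [sum(t[j] for t in ys) / len(ys) for j in range(d)]
    # Pooled per-feature variance, with a small floor for stability.
    var = [
        (sum((t[j] - mx[j]) ** 2 for t in xs)
         + sum((t[j] - my[j]) ** 2 for t in ys)) / (len(xs) + len(ys) - 2) + 1e-9
        for j in range(d)
    ]
    w = [(my[j] - mx[j]) / var[j] for j in range(d)]         # discriminant direction
    thr = sum(w[j] * (mx[j] + my[j]) / 2 for j in range(d))  # midpoint threshold
    score = lambda t: sum(w[j] * t[j] for j in range(d))
    hits = sum(score(t) <= thr for t in xs) + sum(score(t) > thr for t in ys)
    return hits / (len(xs) + len(ys))

# Rank every task pair by LDA separability, as in the pair-selection idea of [56].
ranking = sorted(
    ((lda_accuracy(tasks[a], tasks[b]), a, b) for a, b in combinations(tasks, 2)),
    reverse=True,
)
for acc, a, b in ranking:
    print(f"{a} vs {b}: {acc:.2f}")
```

The "recommended pair" is then simply the top entry of the ranking; with per-subject recordings, the same loop yields individually customized task pairs.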

It is also worth noting work [60], which describes the development of hand-movement-direction-detection methods based on the NIRS hemodynamic response to motor acts performed by the subject, registered over the motor area. The paper aims at improving assistive-technology applications. In total, 64 specially distributed optodes were used in the experimental setup. Subjects had to produce "free" (i.e., unconstrained) hand movements in orthogonal directions (the x- and y-directions in a horizontal plane). As features, the changes in oxy- and deoxyhemoglobin concentrations, their sum, and their difference were considered. Total oxygen delivery and removal were calculated for the local neural populations in the motor cortex underlying the optode positions. Analysis of these signals showed that such movements can be distinguished in space and time depending on the direction of movement; thus, by analyzing the observed profiles of brain activation, it is possible to identify the direction of a hand movement in real time. This work can be considered a precursor of BCIs whose states change slowly or "gradually"; such a device can be termed a gradual BCI.

The idea of rehabilitation using robotized artificial limbs was introduced more than 10 years ago, and since then their clinical reliability and efficiency have been reported in a considerable number of works (see [61–63]).

As one can see, robotics-related research is a rapidly developing direction within the main stream of BCI research. In [64], the idea of using NIRS for robot control, and the possibility of its technical realization, is considered; in essence, the paper reviews the hardware and software complexes realizing this function.

Probably the principal cause of the frequent use of evoked potentials is the simplicity both of their registration and of the mathematical methods for processing them; this is not the case with other paradigms. As an example of more elaborate signal-processing methods, consider [65], where methods of chaotic dynamics were used to analyze the fNIRS signal in a BCI application. The aim was to recognize left- and right-hand motor imagery and to investigate, by Lyapunov analysis, the chaotic properties of hemoglobin changes in the blood within the motor cortex. The authors also used principal component analysis to exclude high-frequency noise and a mutual-information criterion applied to selected signal windows. The stated result is that NIRS signals do have chaotic properties.

In [66], noninvasive BCIs for use in neuroprostheses are described. Work in the EEG field indicates that high accuracy and stability are essential for such BCIs, and the paper discusses whether the NIRS method is capable of improving an EEG-based BCI. Both methods were applied simultaneously to record the sensorimotor rhythm, in experiments with both real and imagined movements. The results show that simultaneous EEG and NIRS recording can substantially improve the classification of motor imagery, up to about 90% accuracy, an average improvement of 5% (p < 0.01) over EEG alone. Thus the concept of the hybrid BCI was introduced. Nevertheless, the long delay of the hemodynamic response can limit the improvement in overall classification accuracy. Moreover, the authors found that NIRS and EEG complement each other in terms of information capacity; applied together, they form a reliable multimodal imaging technique. A 24-channel NIRS system was used, with readings averaged in time, and an LDA classifier was applied to the resulting features. The classification results for EEG and NIRS turned out to correlate, but with a time lag connected with the nature of NIRS.

The low temporal resolution of NIRS is an essential shortcoming, underlined in a number of works, including those oriented toward practical BCI applications. In [67], it is noted that any NIRS-based BCI operates with a delay of several seconds, which limits its practical application in the real world. Here, a support vector machine (SVM) was used to classify the true mental state (finger tapping versus rest). The efficiency of various spatiotemporal features was estimated: signal history, history of the signal gradient, and the spatial distribution of oxy- and deoxyhemoglobin. It was shown that the delay in decoding a change of behavioral state can be halved (from 4.8 down to 2.4 s), which substantially improves the performance of NIRS-based BCI. Classification accuracies for the different feature sets are reported; the maximum achieved accuracy was 87%.

Work [68], in which NIRS was used for fundamental purposes, should also be noted. The quality of evoked responses in the visual cortex (area V1) was studied as a function of the subject's age, in two groups aged about 21 and about 71 years. Both groups showed an increase in oxygenated-hemoglobin concentration and a reduction in deoxygenated hemoglobin during stimulation with a visual checkerboard pattern. However, subjects in their 70s gave, on average, a more variable hemodynamic response and often had hemoglobin concentrations during stimulation comparable to the resting baseline. The younger subjects had substantially higher concentrations of oxygenated and deoxygenated hemoglobin in every trial, independent of the type of stimulation (p < 0.05). The average variability associated with age was 88% for oxygenated and 91% for deoxygenated hemoglobin. The experiments with visual stimuli showed a rapid decline of the cortical hemodynamic response with age, independent of stimulus parameters; the authors therefore conclude that the hemodynamic response can be treated as an age characteristic. The results were analyzed with ANOVA.

Works [69–73] were devoted to cognitive research, namely the "memory" group; in addition, [74] used methods of psychology to improve BCI performance. In [70], the build-up of hemodynamic activity patterns during the perception of numbers (a "mental arithmetic" task) was investigated. The researchers used NIRS with a high optode density on the scalp (348 channels), and standard mathematical methods of signal preprocessing were applied. For feature selection, algorithms were used that choose the best channels or the best feature set; the set of classification features over the channel array was thus reduced and varied for better BCI performance. This can be described as a "greedy" algorithm; a combined algorithm that prunes superfluous features is also considered. The effective number of channels used for classification was thereby reduced. Classification was performed with an SVM; the accuracy of distinguishing "difficult mental arithmetic tasks" from the rest condition reached about 100%, and likewise about 100% for "easy mental arithmetic tasks." A reliability check confirmed the result.
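The channel-reduction step described for [70] is essentially forward ("greedy") feature selection: repeatedly add the channel that most improves classification, then stop. A minimal sketch follows; the nearest-class-mean scorer stands in for the SVM actually used in [70], and the four-channel data are made up.

```python
def nearest_mean_accuracy(trials_a, trials_b, chans):
    """Training accuracy of a nearest-class-mean rule restricted to `chans`."""
    ma = [sum(t[c] for t in trials_a) / len(trials_a) for c in chans]
    mb = [sum(t[c] for t in trials_b) / len(trials_b) for c in chans]
    def dist(t, m):
        return sum((t[chans[k]] - m[k]) ** 2 for k in range(len(chans)))
    hits = sum(dist(t, ma) < dist(t, mb) for t in trials_a)
    hits += sum(dist(t, mb) < dist(t, ma) for t in trials_b)
    return hits / (len(trials_a) + len(trials_b))

def greedy_channels(trials_a, trials_b, n_chans, k):
    """Forward ('greedy') selection: at each step add the channel that most
    improves accuracy, stopping after k channels."""
    chosen = []
    for _ in range(k):
        best = max(
            (c for c in range(n_chans) if c not in chosen),
            key=lambda c: nearest_mean_accuracy(trials_a, trials_b, chosen + [c]),
        )
        chosen.append(best)
    return chosen

# Toy data: 4 "channels"; only channel 2 actually separates the conditions.
task = [[0.0, 0.1, 1.0, 0.0], [0.1, 0.0, 0.9, 0.1], [0.0, 0.0, 1.1, 0.0]]
rest = [[0.1, 0.0, 0.0, 0.1], [0.0, 0.1, 0.1, 0.0], [0.1, 0.1, -0.1, 0.1]]
picked = greedy_channels(task, rest, n_chans=4, k=2)
print(picked)  # channel 2 is picked first
```

With hundreds of optode channels, as in [70], the same loop cuts the feature set down to the few informative channels, which is what makes the subsequent SVM both faster and less prone to overfitting.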

## Near-Infrared Optical Technologies in Brain-Computer Interface Systems
DOI: http://dx.doi.org/10.5772/intechopen.83345
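The delay-reduction approach attributed to [67] augments the slow NIRS amplitude with its recent history and gradient, so that a classifier can react to the rise of the hemodynamic response rather than to its peak. A minimal sketch of such a feature vector (the window length and the toy trace are assumptions, not the parameters of [67]):

```python
def history_gradient_features(signal, t, span=5):
    """Feature vector from the last `span` samples ending at index t:
    the raw signal history plus its first differences (a crude stand-in
    for the 'signal history' and 'gradient history' features of [67])."""
    window = signal[t - span + 1 : t + 1]
    gradient = [b - a for a, b in zip(window, window[1:])]
    return window + gradient

# Toy oxyhemoglobin trace: flat rest, then a ramp mimicking the slow
# hemodynamic rise after a change of behavioral state.
trace = [0.0] * 10 + [0.1 * k for k in range(1, 11)]

rest_feats = history_gradient_features(trace, 9)    # window fully in rest
onset_feats = history_gradient_features(trace, 14)  # window spans the rise

print(rest_feats)
print(onset_feats)
```

At the onset the gradient components are already large while the absolute amplitudes are still small, which is why history-plus-gradient features can shorten the decoding delay.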


Ref. [75] is a typical example of papers concerned with the methodological support of NIRS experiments. The research focuses on developing a method for choosing effective training data sets for BCI training; in particular, a BCI for determining the subject's concentration during the experiment was considered. The research was also devoted to the integration of EEG and NIRS for joint use in "cognitive-level" BCI applications, a term designating the level of the subject's engagement in the experimental process. The interface was built on a paradigm based on the P300 evoked potential. Two experiments are presented: in the first, a mathematical task was performed with NIRS-only measurements; in the second, hybrid EEG + NIRS was used under the EP (P300) paradigm, and a BCI system of the "ON-OFF machine" type was constructed on its basis. The experiments showed that with NIRS it is possible to differentiate states of concentration (mental activity) and to distinguish them from a state of mental rest. The second experiment, however, revealed an essential and statistically significant result only in the EEG.

As for the struggle against artifacts, Ref. [76] takes it to a new level. The purpose of that work was to develop and implement methods for cutting out artifacts. Not only artifacts of a truly "internal" physiological nature were considered (such as the relatively stable heartbeat waves appearing on the HbR and HbO concentration-change curves), but also physiological artifacts caused by external factors, for example, signal changes conditioned by a sudden attraction of attention to external objects or events. A number of external distracting factors, in particular sharp sounds and distracting noise, were considered. The overall purpose was to compensate for distracting factors so that NIRS-BCI can meet practical requirements. The article describes a filtering system for the mentioned artifact types based on hidden Markov models (HMM). In [72], by contrast, artifacts were fought by pharmacological means: local application of a vasodilator drug.

One technique directed at improving classifier performance within a BCI is the use of so-called hybrid BCIs, in which signals about the subject's state are received by at least two devices of different modalities [51, 66, 76–81]. In [81], the concept of the hybrid BCI in general was introduced: hybrid not only in the sense of different modalities, but also of different recording paradigms (ERD/EP). The basic criteria for such a device with reference to practice were given, and the necessary quality standards were discussed. Various types of BCI, for example "the brain switch," were considered, with a basic focus on an artificial hand prosthesis operated by means of EPs.

One can consider [82] a good "head first" introduction to techniques for studying nervous processes. It notes that NIRS has a low temporal resolution, which can even interfere with transitions between states; to overcome this shortcoming, it is recommended to use NIRS together with EEG. In general, the article states that NIRS-based BCIs are inexpensive but do not show high-quality results. Various preprocessing methods (CAR, SL, ICA, CSP, PCA, SVD, CSSP, Freq-Norm, LAT, LKF, and CSSD) are reviewed and evaluated; the most often used are ICA, CAR, LS, PCA, CSP, and adaptive filtering. Hybrid BCIs based on NIRS and MEG are also mentioned, with the remark that they are disproportionately expensive given the achievable results.

FOS responses [31–34] do not directly relate to BCI but have been considered as a possible basis for an optical EP BCI. Using FOS can shorten the effective "reaction time" compared with hemodynamic responses and hence improve NIRS-FOS BCI performance, but FOS registration demands extreme temporal resolution from the NIRS equipment. Analogously, studying correlations with fMRI is difficult in the sense that the fMRI BOLD signal, with a "reception" time of about 2.5 s, has insufficient temporal resolution to capture such "fast" reactions.

Nevertheless, it is a very interesting problem, which can be solved only with a multichannel, high-frequency NIRS system with superb parameters.

In [33], the authors demonstrate that optical methods can be used to detect rapid changes in functional connectivity during cognitive processes.

In the experiments described in [31], photos of animals were presented to the subject, who had to press a button when a picture was "well known" (the so-called Go/No-Go paradigm). After the experiment, a map of correlations between EEG and FOS was built; signal averaging was applied to isolate the FOS responses. Ref. [31] thus establishes that the NIRS method is sensitive both to brain hemodynamics (termed "slow signals" in the article) and to fast responses of neural activity, the so-called fast optical signals, or FOS for short. FOS registration is difficult by its very nature and presupposes a low signal-to-noise ratio in the experimental setup; nevertheless, the authors managed to register FOS reliably in 11 subjects, simultaneously with EEG registration.

At present, the need for "finer" research on the features of the hemodynamic response registered in the NIR modality, so as to describe them in more physiological detail, is obvious. In [83], experimental confirmation is presented that there is a time difference between the hemodynamic responses in oxy- and deoxyhemoglobin. The authors revealed that the time profiles of the hemoglobin-concentration-change curves are desynchronized; i.e., their levels do not fall, and their sum and difference do not rise, simultaneously. Such measurements can reveal differences in brain-activity patterns; hand movements in the left-right and forward-back directions were considered in the experiments described in the paper.

"Connectivity" research can be treated as another example of theoretical work beneficial to the improvement of BCI systems; NIRS data-acquisition systems were introduced here only recently. For basic information about connectivity, see [84, 85].

For information about connectivity modeling and the construction of a NIRS/EEG phantom for testing connectivity models, see [86]. The authors offer a "testbed," i.e., a phantom for simultaneous NIRS/EEG recordings, for rapid plausibility testing of models, although the biological interpretation of connectivity is simplified. The NIRS processing software "NAVI" was used as part of the testbed design [87, 88].

For a comprehensive evaluation of the use of NIRS in connectivity research, resting-state and functional alike, and especially of their dynamic characteristics, see [89]. The authors also point to a disadvantage of NIRS in this type of study: its "low" penetration depth limits research to the cerebral cortex. On the other hand, this may allow the lowest-level nervous functions to be excluded from consideration.

Ref. [90] is devoted to research on the functional connectivity of the neuronal mechanisms underlying reactions in the Go/No-Go paradigm, in relation to the problem of human development. The paper compares the reactions of child and adult subjects and, in particular, concludes that motor-related activation did not differ between the age groups. This means that, at least within the limits of the mentioned paradigm, there is virtually no difference in movement realization or in the corresponding control patterns of cortical activity, which positions such patterns as a more universal detectable target, independent of the subject's age.

This can be summarized as an ongoing process of initial accumulation of facts and potentially useful information, whose implementation in BCI technology may lead to productive results.

Finally, let us point the reader to optogenetics research [46]. Papers on the subject do not relate directly to BCI technology, but rather to

## Near-Infrared Optical Technologies in Brain-Computer Interface Systems DOI: http://dx.doi.org/10.5772/intechopen.83345

Nevertheless, it is a very interesting problem, which can be solved only with the use of a multichannel and high-frequency NIRS with superb parameters.

In [33], authors demonstrate that optical methods can be used to detect rapid changes in functional connectivity during cognitive processes.

In experiments, described in [31], animal photos were presented to examinee, who needs to press button, when a picture was "well known." That is so-called Go/no-Go paradigm. After the experiment was carried out, the map of correlations between EEG and FOS was built. In this paper, averaging of a signal was applied to allocate FOS responses. Thus, Ref. [31] stated that NIRS method is sensitive to hemodynamics of a brain (in the article, the term "slow signals" was used), and as well to fast responses of neural activity, or so-called fast optical signals or FOS for short. Registration of the FOS is difficult due to their nature and assumes low level of signal/noise ratio of experimental setup. Authors managed to register authentically FOS for 11 examinees, simultaneously with EEG registration.

At present, the necessity for more "fine" research on hemodynamic response's features registered in the NIR modality in order to obtain more detailed description of its features from the point of view of physiology is obvious. In [83], experimental confirmation presented that there is a time difference between hemodynamic answers in oxy- and deoxyhemoglobin. Authors revealed that a time profile of hemoglobin-concentration-change curve is desynchronized; i.e., their levels do not fall, and their sum and the difference do not rise simultaneously. Such measurements can reveal a difference in brain activity patterns, while hand movements in directions left-right and forward-back considered in experiments were described in mentioned paper.

"Connectivity" research can be treated as another example of theoretical research, beneficial to BCI system improvement. NIRS data acquisition systems were introduced here just recently. For basic information about connectivity, see [84, 85].

For information about connectivity modeling and NIRS/EEG phantom construction for testing connectivity models, see [86]. Authors offer "testbed," i.e., phantom for simultaneous NIRS/EEG recordings for rapid model testing for plausibility, although biological interpretation of connectivity is simplified. NIRS processing software "NAVI" as a part of testbed design was used [87, 88].

For comprehensive evaluation of utilizing NIRS in connectivity research, resting state, and functional as well and especially their dynamic characteristics, one should see [89]. Authors also point to disadvantage of NIRS usage in this type of studies —"low" penetration depth limits research only to cerebral cortex. On the other hand, this may allow excluding off the most low-level nervous functions from the consideration.

Ref. [90] is devoted to the research of functional connectivity of neuronal mechanisms underlying the reactions occurring during experiments in Go/No-Go paradigm, in relation to human development problem. The paper compares such reactions of children and adults examinees, and in particular, comes to the conclusion that motor-related activation did not differ between age groups. This means that at least within the limits of mentioned paradigm, there is virtually no difference in movement realizations and in their control patterns of cortical activity. This statement brings such patterns in position of more universal detectable target, independent from age of examinee.

This can be summarized as ongoing process of initial accumulation of facts and potentially useful information. Their implementation in BCI technology may lead to productive results.

Ref. [75] is a typical example of papers concerning the methodical maintenance of NIRS experiments. The research focuses on developing a method for choosing effective training data sets for BCI training; in particular, a BCI was considered for determining the concentration of examinees during the experiment. The research was also devoted to the integration of EEG and NIRS for joint use in "cognitive level" BCI applications, where this term designates the level of interest of the examinee in the experimental process. For the construction of such an interface, a paradigm based on the P300 evoked potential was used. Two experiments are presented. In the first, a mathematical task with NIRS-only measurements took place; it showed that, with the use of NIRS, it is possible to differentiate concentration conditions (mental activity) and to distinguish them from a level of mental rest. In the second, hybrid EEG + NIRS recording was used under the EP (P300) paradigm, and a BCI system of the "ON-OFF machine" type was constructed on its basis; this experiment, however, revealed an essential and statistically significant result only in the EEG.

As for the struggle against artifacts, Ref. [76] brings it to a new level. The work sets as its purpose the development and implementation of artifact-removal methods. Not only artifacts of true, "internal" physiological nature were considered (such as the relatively stable heartbeat waves appearing on the HbR and HbO concentration curves), but also physiological artifacts caused by external factors: for example, changes in the signal conditioned by a sudden attraction of attention to external objects or conditions. A number of external distracting factors, in particular sharp sounds and distracting noise, are considered. The overall purpose of this work was to compensate for distracting factors so that NIRS-BCI could satisfy practical conditions. In the article, a system for filtration of the mentioned types of artifacts, based on hidden Markov models (HMM), is considered. In [72], the struggle against artifacts was carried out by medicamentous means, namely the local application of a vasodilator drug.

One of the techniques directed at improving the functioning of classifiers within BCI is the use of so-called hybrid BCI, in which signals about the condition of the examinee are received by means of at least two devices of various modalities [51, 66, 76–81]. In [81], the concept of hybrid BCI in general was introduced: not only in the sense of various modalities, but also in the sense of various recording paradigms (ERD/EP). The basic criteria for such a device with reference to practice were given, and the necessary quality standards were discussed. Various types of BCI, for example "the brain switch," were considered, basically focusing on an artificial hand limb which operates by means of EP.

One can consider [82] a good "head first" text about techniques for studying nervous processes. Here, it is noted that NIRS possesses low time resolution and can even interfere with transitions between conditions; to remedy this shortcoming, it was recommended to use it together with EEG. In general, the article affirms that BCI based on NIRS are inexpensive but do not show high-quality results. Various methods of preprocessing (CAR, SL, ICA, CSP, PCA, SVD, CSSP, Freq-Norm, LAT, LKF, and CSSD) are considered and estimated; most often used are ICA, CAR, LS, PCA, CSP, and adaptive filtering. In this paper, hybrid BCI on the basis of NIRS and MEG were also mentioned, and it is stated that they are disproportionately expensive considering the achievable results.

FOS responses [31–34] do not directly relate to BCI but were nevertheless considered for the possibility of building an optical EP BCI based on such phenomena. FOS usage can improve the "reaction time" relative to hemodynamic responses and hence can improve NIRS-FOS BCI performance, although FOS registration demands extreme temporal resolution of the NIRS equipment. Analogically, research on correlation with fMRI is also difficult, in the sense that the fMRI-BOLD signal, with a "reception" time of about 2.5 s, has no sufficient time resolution to obtain information on "fast" reactions.

Finally, let us point the reader to optogenetics research [46]. Papers on the subject do not relate directly to BCI technology, but rather to neurobiology as a whole. In the context of the problem at hand, it makes sense to concentrate on papers that aim to use some kind of NIRS or other optical equipment.
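The "internal" heartbeat artifact that Ref. [76] mentions is quasi-periodic and confined to a narrow frequency band, so even a crude spectral notch illustrates why it is separable from the much slower hemodynamic response (the HMM-based filtration of [76] is, of course, far more sophisticated). A minimal sketch with invented signal parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 25.0                      # Hz, assumed NIRS sampling rate
t = np.arange(0, 60, 1 / fs)

hemo = np.sin(2 * np.pi * 0.1 * t)            # slow hemodynamic response
cardiac = 0.5 * np.sin(2 * np.pi * 1.2 * t)   # ~72 bpm heartbeat artifact
hbo = hemo + cardiac + 0.05 * rng.standard_normal(t.size)

# FFT-based band-stop: zero out 0.8-1.6 Hz, where the cardiac peak lives.
spectrum = np.fft.rfft(hbo)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
spectrum[(freqs > 0.8) & (freqs < 1.6)] = 0.0
cleaned = np.fft.irfft(spectrum, n=t.size)

resid = np.sqrt(np.mean((cleaned - hemo) ** 2))
print(f"RMS error vs. true hemodynamics: {resid:.3f}")
```

In practice the cardiac frequency drifts, so fixed notches are usually replaced by adaptive filters or model-based approaches such as the HMM filtration discussed for [76].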
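Among the preprocessing methods listed for Ref. [82], common average referencing (CAR) is the easiest to show concretely: every channel has the instantaneous across-channel mean subtracted, suppressing components that appear identically on the whole montage. A minimal sketch on synthetic data (channel count and artifact scale are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
n_channels, n_samples = 8, 1000

# Synthetic recording: per-channel signals plus one global disturbance
# (e.g. a systemic component seen identically on every channel).
local = rng.standard_normal((n_channels, n_samples))
global_artifact = 5.0 * rng.standard_normal(n_samples)
data = local + global_artifact            # broadcast onto every channel

# Common average reference: subtract the across-channel mean per sample.
car = data - data.mean(axis=0, keepdims=True)

print("variance before CAR:", data.var().round(2))
print("variance after  CAR:", car.var().round(2))
```

CAR assumes the disturbance really is common to all channels; a localized artifact is instead smeared across the montage, which is one reason methods such as ICA or PCA from the same list are often preferred.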

Near-Infrared Optical Technologies in Brain-Computer Interface Systems. DOI: http://dx.doi.org/10.5772/intechopen.83345


In this way, Refs. [91, 92] describe methods which utilize NIR radiation in a confocal microscope for the correction and control of neural cell growth. Further developments in this direction can be found in [93], where laser microsurgery of the cellular membrane is described.

Ref. [47] was devoted to optogenetic methods of neural cell activation with high spatial precision, making cells produce a reaction when irradiated with light. This is achieved by transfection of the cell genome with gene constructs corresponding to the channelrhodopsin-2 protein; activation of the transfected cells develops upon exposure to light of the NIR range (860–1028 nm). Such manipulations may provide a researcher with a method of external activation of nerve cells situated deep in the experimental animal's brain, with high spatial resolution. Experiments in the field also feature the ability to "read" neurons' activity patterns induced on purpose, which can be achieved by varying the wavelength and intensity of the excitatory radiation. This direction is interesting because it represents a fully optical method for monitoring, excitation, and detection of neural activity at the cellular level. One may hypothesize that, in a distant enough perspective, such research may lead to new and more precise data on brain structure, functioning, and connections. Also, a new generation of genetically coded voltage-sensitive dyes can yield state-of-the-art methods and techniques for acquiring new data on functioning at the level of neuron populations [94]. Alas, in their present state, optogenetic methods are very suitable and useful for fundamental research (usually not even in vivo), but they have not yet risen to the practical demands of the problem discussed here.

At last, it is necessary to note that despite essential successes in the construction of BCI and machine interfaces (research that generates thousands of publications every year), progress in the creation of devices that would allow patients with full paralysis and "locked-in" syndrome to interact with the external world can, for now, be characterized only as moderate [95].

## 6. Conclusion

Objectively, the quantity of works has grown over more than 10 years, and the considered area of research is extending. The subjects of papers in the field are also treated in more detail, growing with new data.

The fact that a considerable part of the papers in the field are reviews attracts attention. It can testify to the large number of scientific personnel who have begun, or are at this moment beginning, work on the considered problem; from this number, it is also possible to draw indirect conclusions about the quantity of scientific personnel involved.

The system of classification of publications offered here was given in the hope that it will be useful to researchers choosing a direction in the considered area, and in order to aid more effective development of this direction as a whole.

There are a large number of articles in which only near-infrared spectrometry is used (without EEG or other means of neuroimaging); in these, the ERD/ERS experimental paradigm is the most frequent, and a limited enough circle of cognitive effects was measured.

Research in which hybrid interfaces were used is scarce in comparison with the aforementioned. Even fewer papers concern fast optical signals, fundamental physiological questions studied with the use of NIRS, or the connection of nervous activity with the optical properties of living tissue and with the physical features of the distribution of near-infrared radiation in biological substances, which possess a complex structure. Hybrid interfaces are focused mainly on conducting NIRS cognitive experiments (in which some cognitive effects or conditions are exploited and the responses are acquired with NIRS) in synchrony or in parallel with the registration of motor activity using EEG. There are exceptionally few publications concerning the technical details and the improvement of the characteristics of the spectrometer equipment; technical features are not addressed in detail even in papers that used devices of "own manufacture." The field of the technical features of NIRS devices is, obviously, assigned exclusively to the manufacturers of the equipment. An insignificant part of the papers reviewed here was devoted to the use of NIRS/EEG BCI in one setup with assistive robotic devices, artificial limbs, and exoskeletons. Due to the pressing nature of this matter, and on condition of extraordinary results occurring in this field in the near future, the share of such publications will obviously only grow, probably drawing into the pool papers on the technical details of such installations as a whole, including NIRS and robotics.

## Author details


Korshakov Alexei Vyacheslavovich1,2,3

1 Pirogov Russian National Research Medical University (RNRMU), Moscow, Russian Federation

2 National Research Center "Kurchatov Institute", Moscow, Russian Federation

3 Institute of Higher Nervous Activity and Neurophysiology of RAS (IHNA&NPh RAS), Moscow, Russian Federation

\*Address all correspondence to: korshakov\_av@mail.ru

© 2019 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## References

[1] Lal TL, Schröder M, Hill NJ, Preissl H, Hinterberger T, Mellinger J, et al. A brain computer interface with online feedback based on magnetoencephalography. In: Proceedings of the 22nd International Conference on Machine Learning (ICML 2005); 07-11 August 2005; Bonn, Germany. New York: ACM; 2005. p. 465–472. DOI: 10.1145/1102351.1102410

[2] Tamada JA, Lesho M, Tierney MJ. Keeping watch on glucose. IEEE Spectrum. 2002;39(4):52-57. DOI: 10.1109/6.993789

[3] Ishizawa H, Muro A, Takano T, Honda K, Kanai H. Non-invasive blood glucose measurement based on ATR infrared spectroscopy. In: SICE Annual Conference (SICE 2008); 20–22 August 2008; Tokyo, Japan. Tokyo: IEEE; 2008. p. 321-324. DOI: 10.1109/ SICE.2008.4654672

[4] Smith JL. The Pursuit of Noninvasive Glucose. 5th ed. 2017 [Internet]. 2017. Available from: https://www. researchgate.net/publication/ 317267760\_The\_Pursuit\_of\_ Noninvasive\_Glucose\_5th\_Edition.pdf [Accessed: 2018-12-29]

[5] Khalil OS. Spectroscopic and clinical aspects of noninvasive glucose measurements. Clinical Chemistry. 1999;45:165-177

[6] León-Carrión J, León-Domínguez U. Functional Near-Infrared Spectroscopy (fNIRS): Principles and Neuroscientific Applications. In: Bright P, editor. Neuroimaging. Rijeka: IntechOpen; 2012. p. 47-74. DOI: 10.5772/23146

[7] Song S, Kobayashi Y, Fujie MG. Monte-Carlo simulation of light propagation considering characteristic of near-infrared LED and evaluation on tissue phantom. In: 1st CIRP Conference on BioManufacturing (BioM 2013); 3–5 March 2013; Tokyo, Japan: Elsevier;

2013. p. 25-30. DOI: 10.1016/j. procir.2013.01.005

[8] Delpy DT, Cope M, van der Zee P, Arridge SR, Wray S, Wyatt JS. Estimation of optical pathlength through tissue from direct time of flight measurement. Physics in Medicine and Biology. 1988;33(12):1433-1442. DOI: 10.1088/0031-9155/33/12/008

[14] Tuchin VV. Handbook of Optical Biomedical Diagnostics. Chichester, Bellingham: SPIE Press; 2002. PM107

DOI: http://dx.doi.org/10.5772/intechopen.83345

Near-Infrared Optical Technologies in Brain-Computer Interface Systems

Technical and Theoretical Literature Publishing House; 1951. 288 p

[23] IUPAC. Compendium of Chemical Terminology. 2nd ed. (the "Gold Book"); 1997. Online corrected version: (2006–) "Beer–Lambert law". Available from: http://goldbook.iupac.org/

[24] Takatani S, Graham MD. Theoretical analysis of diffuse reflectance from a two-layer tissue model. IEEE Transactions on Biomedical Engineering. 1979;26(12):656-664

[25] Kim JG, Liu H. Variation of

haemoglobin extinction coefficients can cause errors in the determination of haemoglobin concentration measured by near-infrared spectroscopy. Physics in Medicine and Biology. 2007;52: 6295-6322. DOI: 10.1088/0031-9155/52/

[26] Jonson C, Gay A. The influence of non-ionizing electro-magnitic radiation on biological media and systems. TIIER.

[27] Harry LG, Xu Y, Pei Y, Barbour RL. Spatial deconvolution technique to improve the accuracy of reconstructed three-dimensional diffuse optical tomographic images. Applied Optics. 2005;44(6):941-953. DOI: 10.1364/

[28] Schmitz CH, Klemer DP, Hardin R, Katz MS, Pei Y, Graber HL, et al. Design and implementation of dynamic nearinfrared optical tomographic imaging instrumentation for simultaneous dualbreast measurements. Applied Optics. 2005;44(11):2140-2153. DOI: 10.1364/

[29] Xu Y, Graber HL, Barbour RL. Image correction algorithm for functional three-dimensional diffuse optical tomography brain imaging. Applied Optics. 2007;46(10):1693-1704.

DOI: 10.1364/AO.46.001693

B00626.html

20/014

1972;T60(6):49-79

AO.44.000941

AO.44.002140

[15] Torricelli A, Contini D, Pifferi A, Caffini M, Re R, Zucchelli L, et al. Time domain functional NIRS imaging for human brain mapping. NeuroImage. 2014;85:28-50. DOI: 10.1016./j. neuroimage.2013.05.105

[16] Naulaers G, Morren G, Van Huffel S, Casaer P, Devlieger H. Cerebral tissue oxygenation index in very premature infants. Archives of Disease in

Childhood. Fetal and Neonatal Edition. 2002;87:F189-F192. DOI: 10.1136/

[17] Cui X, Bray S, Bryant DM, Glover

comparison of NIRS and fMRI across multiple cognitive tasks. NeuroImage. 2011;54(4):2808-2821. DOI: 10.1016/j.

[18] Ward TE. Hybrid optical–electrical brain computer interfaces, practices and possibilities. In: Allison B, Dunne S, Leeb R, Del R, Millan J, Nijholt A, editors. Towards Practical Brain-Computer Interfaces. Biological and

Springer; 2012. pp. 17-40. DOI: 10.1007/

[19] Kolosovskii BN. Circulation of blood in brain. In: Kolosovskii BN, editor. Moscow: Medgiz; 1951. 371 p

[20] Schmidt RF, Thews G, editors. Human Physiology. Berlin: Springer-

[21] Chandrasekhar S. Radiative

Transfer. New York: Dover Publications

[22] Shifrin KS. Scattering of Light in Turbid Media. Leningrad: State

GH, Reiss AL. A quantitative

neuroimage.2010.10.069

Medical Physics. Biomedical Engineering. Berlin, Heidelberg:

978-3-642-29746-5\_2

Verlag; 1989

Inc; 1960. 393 p

25

1110 p

fn.87.3.F189

[9] Bakker A, Smith B, Ainslie P, Smith K. Near-infrared spectroscopy. In: Ainslie P, editor. Applied Aspects of Ultrasonography in Humans. Rijeka: IntechOpen; 2012. p. 65-88. DOI: 10.5772/324939

[10] Gagnon L, Yucel MA, Dehaes M, Cooper RJ, Perdue KL, Selb J, et al. Quantification of the cortical contribution to the NIRS signal over the motor cortex using concurrent NIRSfMRI measurements. NeuroImage. 2012; 59(4):3933-3940. DOI: 10.1016/j. neuroimage.2011.10.054

[11] Cope M, Delpy D. System for longterm measurement of cerebral blood and tissue oxygenation on newborn infants by near infra-red transillumination. Medical and Biological Engineering and Computing. 1988;26(3):289-294

[12] Durduran T, Choe R, Baker WB, Yodh AG. Diffuse optics for tissue monitoring and tomography. Reports on Progress in Physics. 2010;73(076701): 43. DOI: 10.1088/0034-4885/73/7/ 076701

[13] Arifler D, Zhu T, Madaan S, Tachtsidis I. Optimal wavelength combinations for near-infrared spectroscopic monitoring of changes in brain tissue hemoglobin and cytochrome c oxidase concentrations. Biomedical Optics Express. 2015;6(3): 933-947. DOI: 10.1364/BOE.6.000933

Near-Infrared Optical Technologies in Brain-Computer Interface Systems DOI: http://dx.doi.org/10.5772/intechopen.83345

[14] Tuchin VV. Handbook of Optical Biomedical Diagnostics. Chichester, Bellingham: SPIE Press; 2002. PM107 1110 p

References

[1] Lal TL, Schröder M, Hill NJ, Preissl H, Hinterberger T, Mellinger J, et al. A brain computer interface with

Proceedings of the 22nd International Conference on Machine Learning (ICML 2005); 07-11 August 2005; Bonn, Germany. New York: ACM; 2005. p. 465–472. DOI: 10.1145/1102351.1102410

New Frontiers in Brain-Computer Interfaces

2013. p. 25-30. DOI: 10.1016/j.

10.1088/0031-9155/33/12/008

[8] Delpy DT, Cope M, van der Zee P, Arridge SR, Wray S, Wyatt JS. Estimation of optical pathlength

through tissue from direct time of flight measurement. Physics in Medicine and Biology. 1988;33(12):1433-1442. DOI:

[9] Bakker A, Smith B, Ainslie P, Smith K. Near-infrared spectroscopy. In: Ainslie P, editor. Applied Aspects of Ultrasonography in Humans. Rijeka: IntechOpen; 2012. p. 65-88. DOI:

[10] Gagnon L, Yucel MA, Dehaes M, Cooper RJ, Perdue KL, Selb J, et al. Quantification of the cortical

contribution to the NIRS signal over the motor cortex using concurrent NIRSfMRI measurements. NeuroImage. 2012; 59(4):3933-3940. DOI: 10.1016/j.

[11] Cope M, Delpy D. System for longterm measurement of cerebral blood and tissue oxygenation on newborn

Biological Engineering and Computing.

[12] Durduran T, Choe R, Baker WB, Yodh AG. Diffuse optics for tissue monitoring and tomography. Reports on Progress in Physics. 2010;73(076701): 43. DOI: 10.1088/0034-4885/73/7/

[13] Arifler D, Zhu T, Madaan S, Tachtsidis I. Optimal wavelength combinations for near-infrared

brain tissue hemoglobin and

spectroscopic monitoring of changes in

cytochrome c oxidase concentrations. Biomedical Optics Express. 2015;6(3): 933-947. DOI: 10.1364/BOE.6.000933

procir.2013.01.005

10.5772/324939

neuroimage.2011.10.054

infants by near infra-red transillumination. Medical and

1988;26(3):289-294

076701

[2] Tamada JA, Lesho M, Tierney MJ. Keeping watch on glucose. IEEE Spectrum. 2002;39(4):52-57. DOI:

[3] Ishizawa H, Muro A, Takano T, Honda K, Kanai H. Non-invasive blood glucose measurement based on ATR infrared spectroscopy. In: SICE Annual Conference (SICE 2008); 20–22 August 2008; Tokyo, Japan. Tokyo: IEEE; 2008.

[4] Smith JL. The Pursuit of Noninvasive Glucose. 5th ed. 2017 [Internet]. 2017.

Noninvasive\_Glucose\_5th\_Edition.pdf

[5] Khalil OS. Spectroscopic and clinical

[6] León-Carrión J, León-Domínguez U. Functional Near-Infrared Spectroscopy (fNIRS): Principles and Neuroscientific Applications. In: Bright P, editor. Neuroimaging. Rijeka: IntechOpen; 2012. p. 47-74. DOI: 10.5772/23146

[7] Song S, Kobayashi Y, Fujie MG. Monte-Carlo simulation of light propagation considering characteristic of near-infrared LED and evaluation on tissue phantom. In: 1st CIRP Conference on BioManufacturing (BioM 2013); 3–5 March 2013; Tokyo, Japan: Elsevier;

p. 321-324. DOI: 10.1109/ SICE.2008.4654672

Available from: https://www. researchgate.net/publication/ 317267760\_The\_Pursuit\_of\_

aspects of noninvasive glucose measurements. Clinical Chemistry.

[Accessed: 2018-12-29]

1999;45:165-177

24

online feedback based on magnetoencephalography. In:

10.1109/6.993789

[15] Torricelli A, Contini D, Pifferi A, Caffini M, Re R, Zucchelli L, et al. Time domain functional NIRS imaging for human brain mapping. NeuroImage. 2014;85:28-50. DOI: 10.1016./j. neuroimage.2013.05.105

[16] Naulaers G, Morren G, Van Huffel S, Casaer P, Devlieger H. Cerebral tissue oxygenation index in very premature infants. Archives of Disease in Childhood. Fetal and Neonatal Edition. 2002;87:F189-F192. DOI: 10.1136/ fn.87.3.F189

[17] Cui X, Bray S, Bryant DM, Glover GH, Reiss AL. A quantitative comparison of NIRS and fMRI across multiple cognitive tasks. NeuroImage. 2011;54(4):2808-2821. DOI: 10.1016/j. neuroimage.2010.10.069

[18] Ward TE. Hybrid optical–electrical brain computer interfaces, practices and possibilities. In: Allison B, Dunne S, Leeb R, Del R, Millan J, Nijholt A, editors. Towards Practical Brain-Computer Interfaces. Biological and Medical Physics. Biomedical Engineering. Berlin, Heidelberg: Springer; 2012. pp. 17-40. DOI: 10.1007/ 978-3-642-29746-5\_2

[19] Kolosovskii BN. Circulation of blood in brain. In: Kolosovskii BN, editor. Moscow: Medgiz; 1951. 371 p

[20] Schmidt RF, Thews G, editors. Human Physiology. Berlin: Springer-Verlag; 1989

[21] Chandrasekhar S. Radiative Transfer. New York: Dover Publications Inc; 1960. 393 p

[22] Shifrin KS. Scattering of Light in Turbid Media. Leningrad: State

Technical and Theoretical Literature Publishing House; 1951. 288 p

[23] IUPAC. Compendium of Chemical Terminology. 2nd ed. (the "Gold Book"); 1997. Online corrected version: (2006–) "Beer–Lambert law". Available from: http://goldbook.iupac.org/ B00626.html

[24] Takatani S, Graham MD. Theoretical analysis of diffuse reflectance from a two-layer tissue model. IEEE Transactions on Biomedical Engineering. 1979;26(12):656-664

[25] Kim JG, Liu H. Variation of haemoglobin extinction coefficients can cause errors in the determination of haemoglobin concentration measured by near-infrared spectroscopy. Physics in Medicine and Biology. 2007;52: 6295-6322. DOI: 10.1088/0031-9155/52/ 20/014

[26] Jonson C, Gay A. The influence of non-ionizing electro-magnitic radiation on biological media and systems. TIIER. 1972;T60(6):49-79

[27] Harry LG, Xu Y, Pei Y, Barbour RL. Spatial deconvolution technique to improve the accuracy of reconstructed three-dimensional diffuse optical tomographic images. Applied Optics. 2005;44(6):941-953. DOI: 10.1364/ AO.44.000941

[28] Schmitz CH, Klemer DP, Hardin R, Katz MS, Pei Y, Graber HL, et al. Design and implementation of dynamic nearinfrared optical tomographic imaging instrumentation for simultaneous dualbreast measurements. Applied Optics. 2005;44(11):2140-2153. DOI: 10.1364/ AO.44.002140

[29] Xu Y, Graber HL, Barbour RL. Image correction algorithm for functional three-dimensional diffuse optical tomography brain imaging. Applied Optics. 2007;46(10):1693-1704. DOI: 10.1364/AO.46.001693

[30] Habermehl C, Schmitz CH, Steinbrink J. Contrast enhanced highresolution diffuse optical tomography of the human brain using ICG. Optics Express. 2011;19(19):18636-18644. DOI: 10.1364/OE.19.018636

[31] Medvedev AV, Borisov SV, Gandjbakhche AH, VanMeter J, Kainerstorfer JM. Seeing electroencephalogram through the skull: Imaging prefrontal cortex with fast optical signal. Biomedical Optics. 2010; 15(6):061702. DOI: 10.1117/1.3505007

[32] Huang J, Wang S, Jia S, Mo D, Chen H-C. Cortical dynamics of semantic processing during sentence comprehension: Evidence from eventrelated optical signals. PLoS One. 2013;8 (8):e70671. DOI: 10.1371/journal. pone.0070671

[33] Medvedev AV, Kainerstorfer JM, Borisov SV, VanMetera J. Functional connectivity in the prefrontal cortex measured by near-infrared spectroscopy during ultrarapid object recognition. Journal of Biomedical Optics. 2011;16 (1):016008. DOI: 10.1117/1.3533266

[34] Chiarelli AM, Romani GL, Merla A. Fast optical signals in the sensorimotor cortex: General linear convolution model applied to multiple source– detector distance-based data. NeuroImage. 2014;85:245-254. DOI: 10.1016/j.neuroimage.2013.07.021

[35] Medvedev AV, Kainerstorfer J, Borisov SV, Barbour RL, VanMeter J. Event-related fast optical signal in a rapid object recognition task: Improving detection by the independent component analysis. Brain Research. 2008;1236:145-158. DOI: 10.1016/j. brainres.2008.07.122

[36] Wang J, Dong Q, Niu H. The minimum resting-state fNIRS imaging duration for accurate and stable mapping of brain connectivity network in children. Scientific Reports. 2017;

7(1):6461. DOI: 10.1038/s41598-017- 06340-7

Bioinformatics. 2014;9(1):296-308.

DOI: http://dx.doi.org/10.5772/intechopen.83345

Near-Infrared Optical Technologies in Brain-Computer Interface Systems

[48] Okamoto M, Dan H, Shimizu K, Takeo K, Amita T, Oda I, et al. Multimodal assessment of cortical activation during apple peeling by NIRS and fMRI. NeuroImage. 2004;21: 1275-1288. DOI: 10.1016/j. neuroimage.2003.12.003

[49] Herff C, Heger D, Putze F, Guan C, Schultz T. Self-paced BCI with NIRS

Proceedings of the Fifth International Brain-Computer Interface Meeting 2013 (International BCI Meeting 2013); 3–7 June 2013; Asilomar Conference Center, Pacific Grove, California. Graz: Graz University of Technology Publishing House; 2013. Article ID: 111 DOI: 10.3217/978-3-85125-260-6-111

[50] Xu G, Xu LX, Li D, Liu X. A DAQdevice-based continuous wave nearinfrared spectroscopy system for measuring human functional brain activity. Computational and

Mathematical Methods in Medicine. 2014;2014:107320. 9 p. DOI: 10.1155/

[51] Almajidy RK, Le KS, Hofmanna UG. Novel near infrared sensors for hybrid



## Chapter 3

## Speech Enhancement Using an Iterative Posterior NMF

Sunnydayal Vanambathina

## Abstract

Over the years, miscellaneous methods for speech enhancement have been proposed, such as spectral subtraction (SS) and minimum mean square error (MMSE) estimators. These methods require neither prior knowledge about the speech and noise signals nor a training stage beforehand, so they are highly flexible and can be implemented in various situations. However, these algorithms usually assume that the noise is stationary and are thus poor at dealing with nonstationary noise types, especially under low signal-to-noise ratio (SNR) conditions. To overcome these drawbacks, nonnegative matrix factorization (NMF) is introduced; the NMF approach is more robust to nonstationary noise. In this chapter, we are interested in the application of NMF to speech enhancement. A speech enhancement method based on regularized nonnegative matrix factorization (NMF) for nonstationary Gaussian noise is proposed. The spectral components of speech and noise are modeled as Gamma and Rayleigh distributions, respectively. We propose to adaptively estimate the sufficient statistics of these distributions to obtain a natural regularization of the NMF criterion.

Keywords: nonnegative matrix factorization (NMF), speech enhancement, signal-to-noise ratio (SNR), expectation maximization (EM) algorithms, posterior regularization (PR)

## 1. Introduction

Over the past several decades, there has been a large research interest in the problem of single-channel sound source separation. Such work focuses on the task of separating a single mixture recording into its respective sources and is motivated by the fact that real-world sounds are inherently constructed by many individual sounds (e.g., human speakers, musical instruments, background noise, etc.). While source separation is difficult, the topic is highly motivated by many outstanding problems in audio signal processing and machine learning, including the following:

1. Speech denoising and enhancement—the task of removing background noise (e.g., wind, babble, etc.) from recorded speech and improving speech intelligibility for human listeners and/or automatic speech recognizers

2. Content-based analysis and processing—the task of extracting and/or processing audio based on semantic properties of the recording such as tempo, rhythm, and/or pitch

3. Music transcription—the task of notating an audio recording into a musical representation such as a musical score, guitar tablature, or other symbolic notations

4. Audio-based forensics—the task of examining, comparing, and evaluating audio recordings for scientific and/or legal matters

5. Audio restoration—the task of removing imperfections such as noise, hiss, pops, and crackles from (typically old) audio recordings

6. Music remixing and content creation—the task of creating a new musical work by manipulating the content of one or more previously existing recordings


## 2. Nonnegative matrix factorization

## 2.1 NMF model

Nonnegative matrix factorization is a process that approximates a single nonnegative matrix as the product of two nonnegative matrices. It is defined by

$$V \approx \text{WH} \tag{1}$$


Speech Enhancement Using an Iterative Posterior NMF DOI: http://dx.doi.org/10.5772/intechopen.84976


where $\mathbf{V} \in \mathbb{R}_+^{N_f \times N_t}$ is a nonnegative input matrix, $\mathbf{W} \in \mathbb{R}_+^{N_f \times N_z}$ is a matrix of basis vectors, basis functions, or dictionary elements, and $\mathbf{H} \in \mathbb{R}_+^{N_z \times N_t}$ is a matrix of corresponding activations, weights, or gains. $N_f$ is the number of rows of the input matrix, $N_t$ is the number of columns of the input matrix, and $N_z$ is the number of basis vectors [1].

$\mathbf{V} \in \mathbb{R}_+^{N_f \times N_t}$ — original nonnegative input data matrix.

• Each column is an $N_f$-dimensional data sample.

• Each row represents a data feature.

$\mathbf{W} \in \mathbb{R}_+^{N_f \times N_z}$ — matrix of basis vectors, basis functions, or dictionary elements.

• A column represents a basis vector, basis function, or dictionary element.

• Each column is not orthonormal, but commonly normalized to one.

$\mathbf{H} \in \mathbb{R}_+^{N_z \times N_t}$ — matrix of activations, weights, encodings, or gains.

• A row represents the gain of a corresponding basis vector.

• Each row is not orthonormal, but sometimes normalized to one.

When used for audio applications, NMF is typically used to model spectrogram data or the magnitude of STFT data [2]. That is, we take a single-channel recording, transform it into the time-frequency domain using the STFT, take the magnitude or power V, and then approximate the result as V ≈WH. In doing so, NMF approximates spectrogram data as a linear combination of prototypical frequencies or spectra (i.e., basis vectors) over time.
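As a concrete illustration of this pipeline, the magnitude STFT that serves as V can be computed in a few lines. The following is a minimal sketch assuming NumPy; the frame length, hop size, sample rate, and toy two-tone signal are illustrative choices, not values from the chapter:

```python
import numpy as np

def magnitude_spectrogram(x, n_fft=512, hop=256):
    """Magnitude STFT |X| of a mono signal, shape (Nf, Nt) — the V in V ≈ WH."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)], axis=1)
    # Real FFT along the frame axis gives Nf = n_fft // 2 + 1 frequency bins.
    return np.abs(np.fft.rfft(frames, axis=0))

# A 1-second toy "recording": two sinusoids, i.e., two prototypical spectra.
sr = 8000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)
V = magnitude_spectrogram(x)
print(V.shape)  # (Nf, Nt) = (257, 30)
```

Each column of this V is one short-time magnitude spectrum; NMF then explains the columns as nonnegative combinations of a few basis spectra.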


This process can be seen in Figure 1 [3], where a two-measure piano passage of "Mary Had a Little Lamb" is shown alongside a spectrogram and an NMF factorization. Notice how W captures the harmonic content of the three pitches of the passage and H captures the time onsets and gains of the individual notes. Also note that Nz is typically chosen manually or using a model selection procedure such as cross-validation and Nf and Nt are a function of the overall recording length and STFT parameters (transform length, zero-padding size, and hop size).

This leads to two related interpretations of how NMF models spectrogram data. The first interpretation is that the columns of V (i.e., short-time segments of the mixture signal) are approximated as a weighted sum of basis vectors as shown in Figure 2 and Eq. (2):

$$\mathbf{V} \approx \begin{bmatrix} | & | & | & & | \\ \mathbf{V}_1 & \mathbf{V}_2 & \mathbf{V}_3 & \dots & \mathbf{V}_{N_t} \\ | & | & | & & | \end{bmatrix} \approx \begin{bmatrix} \sum_{j=1}^{N_z} H_{j1}\mathbf{W}_j & \sum_{j=1}^{N_z} H_{j2}\mathbf{W}_j & \dots & \sum_{j=1}^{N_z} H_{jN_t}\mathbf{W}_j \end{bmatrix} \tag{2}$$

The second interpretation is that the entire matrix V is approximated as a sum of matrix "layers," as shown in Figure 3 and Eq. (3).

### Figure 1.


NMF of a piano performing "Mary had a little lamb" for two measures with Nz = 3. Notice how matrix W captures the harmonic content of the three pitches of the passage and matrix H captures the time onsets and gains of the individual notes [3].

### Figure 2.

NMF interpretation I. The columns of V (i.e., short-time segments of the mixture signal) are approximated as a weighted sum or mixture of basis vectors W [3].

### Figure 3.

NMF interpretation II. The matrix V (i.e., the mixture signal) is approximated as a sum of matrix "layers" [3].

$$\mathbf{V} \approx \begin{bmatrix} | & | & | & & | \\ \mathbf{V}_1 & \mathbf{V}_2 & \mathbf{V}_3 & \dots & \mathbf{V}_{N_t} \\ | & | & | & & | \end{bmatrix} \approx \begin{bmatrix} | & | & | & & | \\ \mathbf{W}_1 & \mathbf{W}_2 & \mathbf{W}_3 & \dots & \mathbf{W}_{N_z} \\ | & | & | & & | \end{bmatrix} \begin{bmatrix} h_1^T \\ h_2^T \\ h_3^T \\ \vdots \\ h_{N_z}^T \end{bmatrix}$$

$$\mathbf{V} \approx \mathbf{W}_1 h_1^T + \mathbf{W}_2 h_2^T + \mathbf{W}_3 h_3^T + \dots + \mathbf{W}_{N_z} h_{N_z}^T \tag{3}$$

The application of NMF on noisy speech can be seen in Figure 4.
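The two interpretations above can be checked numerically. The sketch below (assuming NumPy; the small hand-picked matrices are illustrative, not from the chapter) builds V = WH and verifies both the column-wise mixture view of Eq. (2) and the rank-one "layers" view of Eq. (3):

```python
import numpy as np

# V (Nf x Nt) is modeled as W (Nf x Nz) times H (Nz x Nt).
Nf, Nz, Nt = 4, 2, 3
W = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0],
              [1.0, 1.0]])          # columns = basis vectors (spectra)
H = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 2.0]])     # rows = activations (gains over time)
V = W @ H

# Interpretation I: each column of V is a weighted sum of the basis vectors.
col2 = sum(H[j, 1] * W[:, j] for j in range(Nz))
assert np.allclose(col2, V[:, 1])

# Interpretation II: V is a sum of Nz rank-one layers W_j h_j^T.
layers = sum(np.outer(W[:, j], H[j, :]) for j in range(Nz))
assert np.allclose(layers, V)
```

Both assertions pass because matrix multiplication is exactly the sum of these rank-one outer products.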

## 2.2 Optimization formulation

To estimate the basis matrix W and the activation matrix H for a given input data matrix V, the NMF algorithm is formulated as an optimization problem. This is written as:

$$\underset{\mathbf{W},\mathbf{H}}{\operatorname{argmin}} \; D(\mathbf{V}\,|\,\mathbf{W}\mathbf{H})$$

subject to

$$\mathbf{W} \ge \mathbf{0}, \; \mathbf{H} \ge \mathbf{0} \tag{4}$$


where $D(\mathbf{V}\,|\,\mathbf{WH})$ is an appropriately defined cost function between $\mathbf{V}$ and $\mathbf{WH}$ and the inequalities ≥ are element-wise. It is also common to add additional equality constraints to require the columns of W to sum to one, which we enforce. When $D(\mathbf{V}\,|\,\mathbf{WH})$ is additively separable, the cost function can be reduced to

$$D(\mathbf{V}\,|\,\mathbf{W}\mathbf{H}) = \sum_{f=1}^{N_f} \sum_{t=1}^{N_t} d\left(\mathbf{V}_{ft} \,\middle|\, [\mathbf{W}\mathbf{H}]_{ft}\right) \tag{5}$$

### Figure 4.

Applying NMF on noisy speech.


where $[\,\cdot\,]_{ft}$ indicates its argument at row $f$ and column $t$, and $D(\mathbf{V}\,|\,\mathbf{WH})$ is a scalar cost function measured between $\mathbf{V}$ and $\mathbf{WH}$.

Popular cost functions include the Euclidean distance metric, Kullback-Leibler (KL) divergence, and Itakura-Saito (IS) divergence. Both the KL and IS divergences have been found to be well suited for audio purposes. In this work, we focus on the case where $d(q\,|\,p)$ is the generalized (non-normalized) KL divergence:

$$d\_{\rm KL}(q|p) = q \ln \frac{q}{p} - q + p \tag{6}$$

where $d(q\,|\,p)$ is a scalar cost function measured between $q$ and $p$.

This results in the following optimization formulation:

$$\underset{\mathbf{W},\mathbf{H}}{\operatorname{argmin}} \; \sum_{f=1}^{N_f} \sum_{t=1}^{N_t} -\mathbf{V}_{ft} \ln\,[\mathbf{W}\mathbf{H}]_{ft} + [\mathbf{W}\mathbf{H}]_{ft} + \text{const}$$

subject to

$$\mathbf{W} \ge \mathbf{0}, \; \mathbf{H} \ge \mathbf{0} \tag{7}$$

Given this formulation, we notice that the problem is not convex in W and H, limiting our ability to find a globally optimal solution to Eq. (7). It is, however, biconvex or independently convex in W for a fixed value of H and convex in H for a fixed value of W, motivating the use of iterative numerical methods to estimate locally optimal values of W and H.

## 2.3 Parameter estimation

To solve Eq. (7), we must use an iterative numerical optimization technique and hope to find a locally optimal solution. Gradient descent methods are the most common and straightforward for this purpose but typically are slow to converge. Other methods such as Newton's method, interior-point methods, conjugate gradient methods, and similar [4] can converge faster but are typically much more complicated to implement, motivating alternative approaches.

The most popular alternative that has been proposed is by Lee and Seung [1, 5] and consists of a fast, simple, and efficient multiplicative gradient descent-based optimization procedure. The method works by breaking down the larger optimization problem into two subproblems and iteratively optimizes over W and then H, back and forth, given an initial feasible solution. The approach monotonically decreases the optimization objective for both the KL divergence and Euclidean cost functions and converges to a local stationary point.

The approach is justified using the machinery of majorization-minimization (MM) algorithms [6]. MM algorithms are closely related to expectation maximization (EM) algorithms. In general, MM algorithms operate by approximating an optimization objective with a lower bound auxiliary function. The lower bound is then maximized instead of the original function, which is usually more difficult to optimize.

Algorithm 1 shows the complete iterative numerical optimization procedure applied to Eq. (7) with the KL divergence, where the division is element-wise, ⊗ is an element-wise multiplication, and 1 is a vector or matrix of ones with appropriately defined dimensions [5].

## Algorithm 1 KL-NMF parameter estimation

Procedure KL-NMF($\mathbf{V} \in \mathbb{R}_+^{N_f \times N_t}$ // input data matrix, $N_z$ // number of basis vectors)

Initialize: $\mathbf{W} \in \mathbb{R}_+^{N_f \times N_z}$, $\mathbf{H} \in \mathbb{R}_+^{N_z \times N_t}$

repeat

Optimize over W:

$$\mathbf{W} \leftarrow \mathbf{W} \otimes \frac{\left(\frac{\mathbf{V}}{\mathbf{W}\mathbf{H}}\right)\mathbf{H}^T}{\mathbf{1}\,\mathbf{H}^T} \tag{8}$$

Optimize over H:

$$\mathbf{H} \leftarrow \mathbf{H} \otimes \frac{\mathbf{W}^T\left(\frac{\mathbf{V}}{\mathbf{W}\mathbf{H}}\right)}{\mathbf{W}^T\,\mathbf{1}} \tag{9}$$

until convergence

return: $\mathbf{W}$ and $\mathbf{H}$
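A direct NumPy transcription of these multiplicative updates is shown below. This is a sketch, not the author's code: the small epsilon guard, the random test matrix, and the iteration count are illustrative choices.

```python
import numpy as np

def kl_nmf(V, Nz, n_iter=200, eps=1e-12, seed=0):
    """KL-NMF via the Lee-Seung multiplicative updates, Eqs. (8)-(9)."""
    rng = np.random.default_rng(seed)
    Nf, Nt = V.shape
    W = rng.random((Nf, Nz)) + eps
    H = rng.random((Nz, Nt)) + eps
    ones = np.ones_like(V)          # the matrix of ones, "1"
    for _ in range(n_iter):
        # Element-wise multiply/divide keeps W and H nonnegative.
        W *= ((V / (W @ H + eps)) @ H.T) / (ones @ H.T + eps)
        H *= (W.T @ (V / (W @ H + eps))) / (W.T @ ones + eps)
    return W, H

def kl_divergence(V, WH, eps=1e-12):
    """Generalized KL divergence of Eq. (6), summed over all entries."""
    return np.sum(V * np.log((V + eps) / (WH + eps)) - V + WH)

V = np.abs(np.random.default_rng(1).random((20, 30)))
W, H = kl_nmf(V, Nz=5)
# The objective decreases monotonically toward a local stationary point.
assert kl_divergence(V, W @ H) < kl_divergence(V, np.ones_like(V))
```

Because the updates are purely multiplicative, a nonnegative initialization stays nonnegative, which is exactly why no explicit projection onto the constraint set is needed.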


NMF is an optimization technique using the EM algorithm in terms of matrices, whereas probabilistic latent component analysis (PLCA) is an optimization technique using the EM algorithm in terms of probabilities. In PLCA, we incorporate probabilities of time and frequency. In the next section, the development of a PLCA-based algorithm to incorporate time-frequency constraints is discussed.

## 3. A probabilistic latent variable model with time-frequency constraints

Considering this approach, we now develop a new PLCA-based algorithm to incorporate the time-frequency user-annotations. For clarity, we restate the form of the symmetric two-dimensional PLCA model we use:

$$p(f,t) = \sum_{z} p(z)\,p(f|z)\,p(t|z) \tag{10}$$
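To make the NMF-PLCA correspondence concrete, nonnegative factors W and H can be renormalized into the probabilistic form of Eq. (10). The following sketch assumes NumPy; the random factors are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.random((6, 3))   # nonnegative basis vectors (Nf x Nz)
H = rng.random((3, 8))   # nonnegative activations   (Nz x Nt)

# Renormalize the NMF factors into the PLCA factors of Eq. (10):
wz = W.sum(axis=0)              # total energy of each basis vector
hz = H.sum(axis=1)              # total energy of each activation row
p_f_given_z = W / wz            # columns sum to one: p(f|z)
p_t_given_z = (H.T / hz).T      # rows sum to one:    p(t|z)
p_z = (wz * hz) / (wz * hz).sum()   # latent component prior p(z)

# p(f,t) = sum_z p(z) p(f|z) p(t|z) reproduces WH up to normalization.
p_ft = np.einsum('z,fz,zt->ft', p_z, p_f_given_z, p_t_given_z)
assert np.allclose(p_ft, (W @ H) / (W @ H).sum())
assert np.isclose(p_ft.sum(), 1.0)
```

This is why constraints phrased on the PLCA probabilities carry over directly to the NMF factors, and vice versa.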

Compared to a modified NMF formulation, incorporating optimization constraints as a function of time, frequency, and sound source into the factorized PLCA model is particularly interesting and motivating to our focus.

Incorporating prior information into this model, and PLCA in general, can be done in several ways. The most commonly used methods are by direct observations (i.e., setting probabilities to zero, one, etc.) or by incorporating Bayesian prior probabilities on model parameters. Direct observations do not give us enough control, so we consider incorporating Bayesian prior probabilities. For the case of Eq. (10), this would result in independently modifying the factor terms $p(f|z)$, $p(t|z)$, or $p(z)$. Common prior probability distributions used for this purpose include Dirichlet priors [7], gamma priors [8], and others.

Given that we would like to incorporate the user-annotations as a function of time, frequency, and sound source, however, we notice that this is not easily accomplished using standard priors. This is because the model is factorized, and each factor is only a function of one variable and (possibly) conditioned on another, making it difficult to construct a set of prior probabilities that, when jointly applied


to $p(f|z)$, $p(t|z)$, and/or $p(z)$, would encourage or discourage one source or another to explain a given time-frequency point. We can see this more clearly when we consider PLCA to be the following simplified estimation problem:

$$X(f,t) \approx \varphi(z)\,\varphi(f,z)\,\varphi(t,z) \tag{11}$$

where $X(f,t)$ is the observed data that we model as the product of three distinct functions or factors $\varphi(z)$, $\varphi(f,z)$, and $\varphi(t,z)$. Note, each factor has different input arguments and each factor has different parameters that we wish to estimate via EM. Also, forget for the moment that the factors must be normalized probabilities.

Given this model, if we wish to incorporate additional information, we could independently modify:

• $\varphi(z)$ to incorporate past knowledge of the variable z

• $\varphi(f,z)$ to incorporate past knowledge of the variables f and z

• $\varphi(t,z)$ to incorporate past knowledge of the variables t and z
This way of manipulation allows us to maintain our factorized form and can be thought of as prior-based regularization. If we would like to incorporate additional information/regularization that is a function of all three variables z, f, and t, then we must do something else. The first option would be to try to simultaneously modify all factors together to impose regularization that is a function of all three variables. This is unfortunately very difficult—both conceptually difficult to construct and practically difficult to algorithmically solve.

This motivates the use of posterior regularization (PR). PR provides us with an algorithmic mechanism via EM to incorporate constraints that are complementary to prior-based regularization. Instead of modifying the individual factors of our model as we saw before, we directly modify the posterior distribution of our model. The posterior distribution of our model, very loosely speaking, is a function of all random variables of our model. It is natively computed within each E step of EM and is required to iteratively improve the estimates of our model parameters. In this example, the posterior distribution would be akin to $\varphi(z,f,t)$, which is a function of t, f, and z, as required. We now formally discuss PR below, beginning with a general discussion and concluding with the specific form of PR we employ within our approach.

## 3.1 Posterior regularization

The framework of posterior regularization, first introduced by Graca, Ganchev, and Taskar [9, 10], is a relatively new mechanism for injecting rich, typically data-dependent constraints into latent variable models using the EM algorithm. In contrast to standard Bayesian prior-based regularization, which applies constraints to the model parameters of a latent variable model in the maximization step of EM, posterior regularization applies constraints to the posterior distribution (distribution over the latent variables, conditioned on everything else) computation in the expectation step of EM. The method has found success in many natural language processing tasks, such as statistical word alignment, part-of-speech tagging, and similar tasks that involve latent variable models.

In this case, what we do is constrain the distribution q in some way when we maximize the auxiliary bound $F(q, \Theta)$ with respect to q in the expectation step of an EM algorithm, resulting in

$$q^{n+1} = \underset{q}{\operatorname{argmin}} \; KL(q\,\|\,p) + \Omega(q) \tag{12}$$


Speech Enhancement Using an Iterative Posterior NMF DOI: http://dx.doi.org/10.5772/intechopen.84976


where $\Omega(q)$ constrains the possible space of $q$.

Note, the only difference between Eq. (12) and our past discussion on EM is the added term $\Omega(q)$. If $\Omega(q)$ is set to zero, we recover the original formulation and solve the optimization trivially by setting q = p, without any computation (except computing the posterior p). Also note that, to denote the use of constraints in this context, the term "weakly supervised" was introduced by Graca [11] and is similarly adopted here.

This method of regularization is in contrast to prior-based regularization, where the modified maximization step would be

$$\Theta^{n+1} = \underset{\Theta}{\operatorname{argmax}} \, F(q^{n+1}, \Theta) + \Omega(\Theta) \tag{13}$$

where $\Omega(\Theta)$ constrains the model parameters $\Theta$.

## 3.2 Linear grouping expectation constraints

Given the general framework of posterior regularization, we need to define a meaningful penalty $\Omega(q)$ onto which we map our annotations. We do this by mapping the annotation matrices to linear grouping constraints on the latent variable $z$. To do so, we first notice that Eq. (12) decouples for each time-frequency point for our specific model. Because of this, we can independently solve Eq. (12) for each time-frequency point, making the optimization much simpler. When we rewrite our E step optimization using vector notation, we get

$$\underset{q}{\operatorname{argmin}} \; -q_{ft}^{T}\ln p_{ft} + q_{ft}^{T}\ln q_{ft}$$

subject to

$$q_{ft}^{T}\mathbf{1} = 1, \quad q_{ft} \ge \mathbf{0} \tag{14}$$

where $q$ and $p(z|f,t)$ for a given value of $f$ and $t$ are written as $q_{ft}$ and $p_{ft}$, respectively; we note $q$ is optimal when equal to $p(z|f,t)$, as before.

We then apply our linear grouping constraints independently for each time-frequency point:

$$\underset{q}{\operatorname{argmin}} \; -q_{ft}^{T}\ln p_{ft} + q_{ft}^{T}\ln q_{ft} + q_{ft}^{T}\lambda_{ft}$$

subject to

$$q_{ft}^{T}\mathbf{1} = 1, \quad q_{ft} \ge \mathbf{0} \tag{15}$$

where we define $\lambda_{ft} = [\Lambda_{ft1}, \ldots, \Lambda_{ft1}, \Lambda_{ft2}, \ldots, \Lambda_{ft2}, \ldots]^{T} \in \mathbb{R}^{N_z}$ as the vector of user-defined penalty weights, $T$ denotes matrix transpose, $\ge$ is element-wise greater than or equal to, and $\mathbf{1}$ is a column vector of ones. In this case, positive-valued penalties are used to decrease the probability of a given source, while negative-valued coefficients are used to increase the probability of a given source. Note the penalty weights imposed on the group of values of $z$ that correspond to a given source $s$ are identical, linear with respect to the $z$ variables, and applied in the E step of EM, hence the name "linear grouping expectation constraints."
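As a concrete illustration, the grouped penalty vector at one time-frequency point can be assembled by repeating one scalar weight per source across that source's latent components. This is a minimal NumPy sketch; the function name, the equal split of basis vectors across sources, and the example weights are illustrative assumptions, not part of the original algorithm.

```python
import numpy as np

def grouped_penalties(source_weights, nz_per_source):
    """Assemble the lambda_ft vector of Eq. (15) at one time-frequency
    point: each source's scalar penalty is repeated over all latent
    components z assigned to that source, so the weights within a
    group are identical (the "linear grouping" constraint)."""
    return np.repeat(np.asarray(source_weights, dtype=float), nz_per_source)

# Two sources with 3 basis vectors each: a positive weight suppresses
# the first source, a negative weight boosts the second
lam_ft = grouped_penalties([0.5, -0.5], 3)
```

With these hypothetical weights, `lam_ft` has six entries, identical within each source's group of latent components.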


New Frontiers in Brain-Computer Interfaces


To solve the above optimization problem for a given time-frequency point, we form the Lagrangian

$$L\left(q_{ft}, \gamma\right) = -q_{ft}^{T} \ln p_{ft} + q_{ft}^{T} \ln q_{ft} + q_{ft}^{T} \lambda_{ft} + \gamma\left(1 - q_{ft}^{T}\mathbf{1}\right) \tag{16}$$

with $\gamma$ being a Lagrange multiplier. We then take the gradient with respect to $q_{ft}$ and $\gamma$:

$$\nabla_{q_{ft}} L\left(q_{ft}, \gamma\right) = -\ln p_{ft} + \mathbf{1} + \ln q_{ft} + \lambda_{ft} - \gamma\mathbf{1} = \mathbf{0} \tag{17}$$

$$\nabla_{\gamma} L\left(q_{ft}, \gamma\right) = 1 - q_{ft}^{T}\mathbf{1} = 0 \tag{18}$$

We set Eqs. (17) and (18) equal to zero and solve for $q_{ft}$, resulting in

$$q_{ft} = \frac{p_{ft} \otimes \exp\left\{-\lambda_{ft}\right\}}{p_{ft}^{T} \exp\left\{-\lambda_{ft}\right\}} \tag{19}$$

where exp{} is an element-wise exponential function.

Notice the result is computed in closed form and does not require any iterative optimization scheme as may be required in the general posterior regularization framework [9], minimizing the computational cost when incorporating the constraints. Also note, however, that this optimization must be solved for each time-frequency point of our spectrogram data at each E step iteration of our final EM parameter estimation algorithm.
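The closed-form E step of Eq. (19) vectorizes naturally over all time-frequency points at once. Below is a minimal NumPy sketch; the array shapes and function name are assumptions made for illustration, not part of the original text.

```python
import numpy as np

def regularized_e_step(p, lam):
    """Posterior-regularized E step, Eq. (19), vectorized over all
    time-frequency points.

    p   -- unregularized posterior p(z|f,t), shape (Nf, Nt, Nz),
           normalized over the last axis
    lam -- penalty weights lambda_ft, same shape
    Returns the regularized q(z|f,t), renormalized over z.
    """
    w = p * np.exp(-lam)                      # element-wise reweighting
    return w / w.sum(axis=-1, keepdims=True)  # closed-form normalization

# With zero penalties the regularizer is inactive and q equals p
rng = np.random.default_rng(0)
p = rng.random((4, 5, 3))
p /= p.sum(axis=-1, keepdims=True)
q = regularized_e_step(p, np.zeros((4, 5, 3)))
assert np.allclose(q, p)
```

A positive penalty on a group of $z$ values lowers its posterior mass, exactly as described for Eq. (15).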

## 3.3 Parameter estimation

Now knowing the posterior-regularized expectation step optimization, we can derive a complete EM algorithm for a posterior-regularized two-dimensional PLCA model (PR-PLCA):

$$p(z|f,t) \leftarrow \frac{p(z)p(f|z)p(t|z)\overline{\Lambda}_{ftz}}{\sum_{z'} p(z')p(f|z')p(t|z')\overline{\Lambda}_{ftz'}} \tag{20}$$

where $\overline{\Lambda} = \exp\{-\Lambda\}$. The entire algorithm is outlined in Algorithm 2. Notice we continue to maintain closed-form E and M steps, allowing us to optimize further and draw connections to multiplicative nonnegative matrix factorization algorithms.

Algorithm 2 PR-PLCA with linear grouping expectation constraints

Procedure PLCA (
$V \in \mathbb{R}_{+}^{N_f \times N_t}$ // observed normalized data
$N_z$ // number of basis vectors
$N_s$ // number of sources
$\Lambda \in \mathbb{R}^{N_f \times N_t \times N_z}$ // penalties
)
Initialize: feasible $p(z)$, $p(f|z)$ and $p(t|z)$


$$\text{Precompute:} \qquad \overline{\Lambda} \leftarrow \exp\left\{-\Lambda\right\} \tag{21}$$

repeat

Expectation step
for all $z, f, t$ do

$$p(\boldsymbol{z}|\boldsymbol{f},t) \leftarrow \frac{p(\boldsymbol{z})p(\boldsymbol{f}|\boldsymbol{z})p(\boldsymbol{t}|\boldsymbol{z})\overline{\Lambda}\_{\boldsymbol{f}\boldsymbol{t}\boldsymbol{z}}}{\sum\_{\boldsymbol{z}'}p(\boldsymbol{z}')p(\boldsymbol{f}|\boldsymbol{z}')p(\boldsymbol{t}|\boldsymbol{z}')\overline{\Lambda}\_{\boldsymbol{f}\boldsymbol{t}\boldsymbol{z}'}}\tag{22}$$


end for

Maximization step
for all $z, f, t$ do

$$p(f|z) = \frac{\sum_{t} V_{ft}\, p(z|f,t)}{\sum_{f'} \sum_{t'} V_{f't'}\, p(z|f',t')} \tag{23}$$

$$p(t|z) = \frac{\sum_{f} V_{ft}\, p(z|f,t)}{\sum_{f'} \sum_{t'} V_{f't'}\, p(z|f',t')} \tag{24}$$

$$p(z) = \frac{\sum_{f} \sum_{t} V_{ft}\, p(z|f,t)}{\sum_{z'} \sum_{f'} \sum_{t'} V_{f't'}\, p(z'|f',t')} \tag{25}$$

end for

until convergence
return: $p(f|z)$, $p(t|z)$, $p(z)$ and $p(z|f,t)$

• Multiplicative Update Equations

We can rearrange the expressions in Algorithm 2 and convert to a multiplicative form following similar methodology to Smaragdis and Raj [12].

Rearranging the expectation and maximization steps, in conjunction with Bayes' rule, and defining

$$Z(f,t) = \sum_{z} p(z)p(f|z)p(t|z)\overline{\Lambda}_{ftz}$$

we get

$$p(z|f,t) = \frac{p(f|z)\, p(t,z)\, \overline{\Lambda}_{ftz}}{Z(f,t)} \tag{26}$$

$$p(t,z) = \sum_{f} V_{ft}\, q(z|f,t) \tag{27}$$

$$p(f|z) = \frac{\sum_{t} V_{ft}\, q(z|f,t)}{\sum_{t} p(t,z)} \tag{28}$$

$$p(z) = \sum_{t} p(t,z) \tag{29}$$

Rearranging further, we get

$$p(f|z) = \frac{p(f|z) \sum_{t} \frac{V_{ft}\overline{\Lambda}_{ftz}}{Z(f,t)}\, p(t,z)}{\sum_{t} p(t,z)} \tag{30}$$

$$p(t,z) = p(t,z) \sum_{f} p(f|z)\, \frac{V_{ft}\overline{\Lambda}_{ftz}}{Z(f,t)} \tag{31}$$

which fully specifies the iterative updates. By putting Eqs. (30) and (31) in matrix notation, we specify the multiplicative form of the proposed method in Algorithm 3.

Algorithm 3. PR-PLCA with linear grouping expectation constraints in matrix notation

Procedure PLCA (
$V \in \mathbb{R}_{+}^{N_f \times N_t}$ // observed normalized data
$N_z$ // number of basis vectors
$N_s$ // number of sources
$\Lambda_{s} \in \mathbb{R}^{N_f \times N_t}, \forall s \in \{1, \ldots, N_s\}$ // penalties
)
Initialize: $W \in \mathbb{R}_{+}^{N_f \times N_z}$, $H \in \mathbb{R}_{+}^{N_z \times N_t}$
Precompute:
For all s do

$$\overline{\Lambda}_{s} \leftarrow \exp\left\{-\Lambda_{s}\right\} \tag{32}$$

$$X_{s} \leftarrow V \otimes \overline{\Lambda}_{s} \tag{33}$$

End for

Repeat


$$\Gamma \leftarrow \sum_{s} \left(W_{s} H_{s}\right) \otimes \overline{\Lambda}_{s} \tag{34}$$

For all s do

$$Z\_s \leftarrow \frac{X\_s}{\Gamma} \tag{35}$$

$$\boldsymbol{W}\_{(s)} \leftarrow \boldsymbol{W}\_{s} \otimes \frac{\boldsymbol{Z}\_{s}\boldsymbol{H}\_{s}^{\boldsymbol{T}}}{\mathbf{1}\boldsymbol{H}\_{s}^{\boldsymbol{T}}} \tag{36}$$

$$H_{(s)} \leftarrow H_{s} \otimes \left(W_{s}^{T} Z_{s}\right) \tag{37}$$

End for
until convergence
return: $W$ and $H$
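A compact NumPy sketch of the multiplicative loop of Algorithm 3 might look as follows. This is an illustrative transcription under stated assumptions: random initialization, a small epsilon guarding the divisions, and an equal number of basis vectors per source; any normalization or convergence test used in the published implementation is omitted here.

```python
import numpy as np

def pr_plca(V, Lam, nz_per_source, n_iter=100, eps=1e-12):
    """Sketch of Algorithm 3 (PR-PLCA, multiplicative matrix form).

    V   -- normalized magnitude spectrogram, shape (Nf, Nt)
    Lam -- list of penalty matrices, one (Nf, Nt) array per source
    Returns lists of W_s and H_s factors, one pair per source.
    """
    rng = np.random.default_rng(0)
    Nf, Nt = V.shape
    Ns = len(Lam)
    LamBar = [np.exp(-L) for L in Lam]        # Eq. (32)
    X = [V * Lb for Lb in LamBar]             # Eq. (33)
    W = [rng.random((Nf, nz_per_source)) for _ in range(Ns)]
    H = [rng.random((nz_per_source, Nt)) for _ in range(Ns)]
    for _ in range(n_iter):
        # Eq. (34): shared reconstruction weighted by the penalties
        Gamma = sum((W[s] @ H[s]) * LamBar[s] for s in range(Ns)) + eps
        for s in range(Ns):
            Z = X[s] / Gamma                  # Eq. (35)
            # Eq. (36): multiplicative update of the basis vectors
            W[s] *= (Z @ H[s].T) / (np.ones_like(Z) @ H[s].T + eps)
            # Eq. (37): multiplicative update of the activations
            H[s] *= W[s].T @ Z
    return W, H
```

A call such as `pr_plca(V, [Lam_speech, Lam_noise], nz_per_source=2)` (names hypothetical) returns the per-source factors used for separation.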

## 4. An iterative posterior NMF method for speech enhancement in the presence of additive Gaussian noise (proposed algorithm)

Over the past several years, research has been carried out on single-channel sound source separation methods. This problem is motivated by speech denoising, speech enhancement [13], music transcription [14], audio-based forensics, and music remixing. One of the most effective approaches is nonnegative matrix factorization (NMF) [5]. The user annotations can be used to obtain the PR terms [15]. When the number of sources is large, it becomes difficult to identify individual sources in the spectrogram; in such cases, user interaction-based constraint approaches are inefficient.

To avoid this problem, the proposed method introduces an automatic iterative procedure. The spectral components of speech and noise are modeled as Gamma and Rayleigh distributions, respectively [16].

## 4.1 Notation and basic concepts

Let the noisy speech signal x[n] be the sum of the clean speech s[n] and the noise d[n], and let their corresponding magnitude spectrograms be represented as

$$|\mathbf{X}(f,\mathbf{t})| = |\mathbf{S}(f,\mathbf{t})| + |\mathbf{D}(f,\mathbf{t})|\tag{38}$$


where $f$ represents the frequency bin and $t$ the frame number. The observed magnitudes in time-frequency are arranged in a matrix $X \in \mathbb{R}_{+}^{f \times t}$ of nonnegative elements. The source separation algorithms based on NMF pursue the factorization of $X$ as a product of two nonnegative matrices: $W = [w_1, w_2, \ldots, w_R] \in \mathbb{R}_{+}^{f \times R}$, whose columns collect the basis vectors, and $H = [h_1^T, h_2^T, \ldots, h_R^T]^T \in \mathbb{R}_{+}^{R \times t}$, which collects their respective weights, i.e.,

$$X = WH = \sum_{z=1}^{R} W_{z} H_{z} \tag{39}$$

where R denotes the number of latent components.
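For readers new to NMF, the factorization of Eq. (39) can be illustrated with the classic multiplicative updates for the Euclidean cost (the EUC-NMF family cited above [5]). This toy sketch is not the proposed algorithm; the initialization, iteration count, and epsilon guard are arbitrary illustrative choices.

```python
import numpy as np

def nmf(X, R, n_iter=500, eps=1e-12):
    """Illustrative multiplicative NMF (Euclidean cost): factor a
    nonnegative matrix X (f x t) into W (f x R) and H (R x t), Eq. (39)."""
    rng = np.random.default_rng(0)
    f, t = X.shape
    W = rng.random((f, R)) + eps
    H = rng.random((R, t)) + eps
    for _ in range(n_iter):
        # Lee-Seung style multiplicative updates preserve nonnegativity
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Usage: a toy rank-2 nonnegative matrix is recovered closely
rng = np.random.default_rng(1)
X = rng.random((6, 2)) @ rng.random((2, 9))
W, H = nmf(X, R=2)
```

Because both factors stay nonnegative, each column of `W` can be read as a spectral basis vector and each row of `H` as its activation over time.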

## 4.2 Proposed regularization

There are several ways to incorporate user annotations into latent variable models, for instance, by using suitable regularization functions. For expectation-maximization (EM) algorithms, posterior regularization was introduced by [9, 11]. This data-dependent method provides a rich way to impose constraints on the posterior distributions of latent variable models. It has been applied to many natural language processing tasks, such as statistical word alignment and part-of-speech tagging. The main idea is to constrain the posterior distribution when computing the expectation step of the EM algorithm.

The prior distribution for the magnitude of the noise spectral components is modeled as a Rayleigh probability density function (PDF) with scale parameter σ, which is fitted to the observations by a maximum likelihood procedure [16, 17], i.e.,

$$f(\mathbf{x}; \sigma) = \frac{\mathbf{x}}{\sigma^2} e^{-\mathbf{x}^2 / 2\sigma^2} \text{ for } \mathbf{x} \ge \mathbf{0} \text{ with } \sigma^2 = \frac{1}{2N} \sum\_{i=1}^N \mathbf{x}\_i^2 \tag{40}$$

The above equation can be written as

$$f(\mathbf{x}; \sigma) = e^{\log\left(\frac{\mathbf{x}}{\sigma^2}\right)} e^{-\mathbf{x}^2/2\sigma^2} = e^{\log\left(\frac{\mathbf{x}}{\sigma^2}\right) - \frac{\mathbf{x}^2}{2\sigma^2}} \tag{41}$$

By applying the negative logarithm on both sides of (41), we get

$$-\log\left(f(\mathbf{x};\sigma)\right) = -\log\left(e^{\log\left(\frac{\mathbf{x}}{\sigma^2}\right) - \frac{\mathbf{x}^2}{2\sigma^2}}\right) = \frac{\mathbf{x}^2}{2\sigma^2} - \log\left(\frac{\mathbf{x}}{\sigma^2}\right) \tag{42}$$

Then, the regularization term for the noise is defined as

$$\Lambda_{N} \equiv \Lambda_{S_1} = -\log f(x;\sigma) = \frac{x^2}{2\sigma^2} - \log\frac{x}{\sigma^2} \tag{43}$$
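The noise penalty of Eqs. (40)-(43) is direct to compute: estimate σ² by maximum likelihood and evaluate the negative log-density element-wise. A minimal NumPy sketch, assuming the function name is illustrative and the magnitudes in X are strictly positive:

```python
import numpy as np

def rayleigh_penalty(X):
    """Noise regularization term, Eqs. (40)-(43): fit the Rayleigh scale
    parameter by maximum likelihood, then return the negative
    log-density evaluated element-wise on the magnitudes X (> 0)."""
    sigma2 = np.mean(X**2) / 2.0                       # Eq. (40): ML estimate of sigma^2
    return X**2 / (2.0 * sigma2) - np.log(X / sigma2)  # Eq. (43)
```

By construction the result equals $-\log$ of the fitted Rayleigh density at each time-frequency point.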

The spectral components of speech are modeled as a Gamma probability density function [16, 18]:


$$f(x;k,\theta) = \frac{x^{k-1} e^{-x/\theta}}{\theta^k \Gamma(k)} \tag{44}$$

with shape parameter $k > 0$ and scale parameter $\theta > 0$:

$$\theta = \frac{1}{kN} \sum\_{i=1}^{N} \mathbf{x}\_i \text{ and } k \approx \frac{3 - s + \sqrt{\left(s - 3\right)^2 + 24s}}{12s} \tag{45}$$

where the auxiliary variable $s$ is defined as $s = \ln\left(\frac{1}{N}\sum_{i=1}^{N} x_i\right) - \frac{1}{N}\sum_{i=1}^{N} \ln(x_i)$.
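The closed-form Gamma fit of Eq. (45) can be sketched directly; the shape $k$ comes from the auxiliary statistic $s$ defined above and the scale $\theta$ from the sample mean. The function name and input handling are illustrative.

```python
import numpy as np

def gamma_params(x):
    """Closed-form Gamma fit of Eq. (45): the shape k is obtained from
    the auxiliary statistic s = ln(mean(x)) - mean(ln(x)), then the
    scale theta from theta = mean(x) / k."""
    x = np.asarray(x, dtype=float)
    s = np.log(x.mean()) - np.log(x).mean()   # auxiliary variable s
    k = (3.0 - s + np.sqrt((s - 3.0)**2 + 24.0 * s)) / (12.0 * s)
    theta = x.mean() / k                      # equivalent to (1/(kN)) * sum(x_i)
    return k, theta
```

On samples actually drawn from a Gamma distribution, the recovered parameters are close to the true shape and scale.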

The regularization term for the speech samples is defined as (by applying the negative logarithm on both sides of (44))

$$\Lambda_{SP} \equiv \Lambda_{S_2} = -\log f(x;k,\theta) = \frac{x}{\theta} - \log\left(\frac{x^{k-1}}{\theta^k \Gamma(k)}\right), \quad x \ge 0 \tag{46}$$

Special case: When we fix k = 1, the Gamma density simplifies to the exponential density and

$$f(x;1,\theta) = \frac{1}{\theta} e^{-x/\theta}, \quad \Lambda_{SP} \equiv \Lambda_{S_2} = \frac{x}{\theta}, \quad x \ge 0 \tag{47}$$

The proposed multiplicative nonnegative matrix factorization method is summarized in Algorithm 4 [16]. In general, as in the specific case of Algorithm 4, one can only guarantee the monotonic descent of the iteration through a majorization-minimization approach [19], or convergence to a stationary point [20].

The subscript $(s)$ in parentheses represents the columns or rows of the matrix assigned to a given source, $\mathbf{1}$ is an appropriately sized matrix of ones, and $\otimes$ represents element-wise multiplication.

Algorithm 4: Gamma-Rayleigh regularized PLCA method (GR-NMF)

Procedure

$X \in \mathbb{R}_{+}^{f \times t}$ % Observed normalized data
$\Lambda_{S} \in \mathbb{R}_{+}^{f \times t}$, $s \in \{1, \ldots, N_S\}$ % $\Lambda_S$ — penalties, $N_S$ — number of sources

$$\Lambda_{s(NEW)} = 0$$

$$\Lambda_{S_1} = \Lambda_{N(OLD)} = \frac{X^2}{2\sigma^2} - \log\frac{X}{\sigma^2} \quad \text{and} \quad \Lambda_{S_2} = \Lambda_{SP(OLD)} = \frac{X}{\theta} - \log\left(\frac{X^{k-1}}{\theta^k \Gamma(k)}\right) \tag{48}$$

$$\overline{\Lambda}_{s(OLD)} \leftarrow \exp\left\{-\Lambda_{s}\right\} \tag{49}$$

Repeat

For all s do

$$\tilde{\Lambda}_{s} = (1-\mu)\Lambda_{s(OLD)} + \mu\Lambda_{s(NEW)} \quad \% \text{ Update penalties using LMS} \tag{50}$$

$$\Lambda_{s(OLD)} = \Lambda_{s(NEW)} \tag{51}$$

$$X_{s} \leftarrow X \otimes \tilde{\Lambda}_{s} \tag{52}$$

5. Experimental results

Speech Enhancement Using an Iterative Posterior NMF DOI: http://dx.doi.org/10.5772/intechopen.84976

6. Conclusion

PESQ and SDR for street noise.

Table 2.

47

Table 1.

PESQ and SDR for babble noise.

The speech and noise audio samples were taken from NOIZEUS [21]. Sampling frequency is 8 KHz. The algorithm is iterated until convergence [16]. The proposed method was compared with Euclidean NMF (EUC-NMF) [5], Itakura-Saito NMF (IS-NMF) [22], posterior regularization NMF (PR-NMF) [15], Wiener filtering

implemented by considering nonstationary noise, babble noise and street noise. The performance of proposed method was evaluated by using perceptual evaluation of speech quality (PESQ) [25] and source-to-distortion ratio (SDR) [26]. SDR gives the average quality of separation on dB scale and considers signal distortion as well as noise distortion. For PESQ and SDR, the higher value indicates the better performance. Tables 1 and 2 show the PESQ and SDR values of different NMF algorithms evaluated. The experimental results show that proposed method performs better

A novel speech enhancement method based on an iterative and regularized NMF algorithm for single-channel source separation is proposed. The clean speech and noise magnitude spectra are modeled as Gamma and Rayleigh distributions, respectively. The corresponding log-likelihood functions are used as penalties to

[23], and constrained version of NMF (CNMF)[24]. These methods are

than other existing methods in terms of the PESQ and SDR indices.

End for

$$
\Gamma \leftarrow \sum\_{\mathfrak{s}} \left( \mathcal{W}\_{\mathfrak{s}\mathfrak{l}} H\_{\mathfrak{k}\mathfrak{l}} \right) \otimes \widetilde{\Lambda}\_{\mathfrak{s}} \tag{53}
$$

$$Z\_s \leftarrow \frac{X\_s}{\Gamma} \tag{54}$$

For all s do

$$\mathbf{W}\_{(\circ)} \leftarrow \mathbf{W}\_{(\circ)} \otimes \frac{\mathbf{Z}\_{\circ} \mathbf{H}\_{(\circ)}^{T}}{\mathbf{1} \mathbf{H}\_{(\circ)}^{T}} \tag{55}$$

$$H\_{(\circ)} \leftarrow H\_{(\circ)} \otimes \left(W^T\_{(\circ)} Z\_{\circ}\right) \tag{56}$$

## End for Reconstruction For all s do

$$M\_{\circ} \leftarrow \frac{W\_{(s)}H\_{(s)}}{\text{WH}} \quad \text{@ Compute Filter} \tag{57}$$

$$
\hat{X}\_s \gets M\_s \otimes X \lll \text{Filter Mixture} \tag{58}
$$

$$\mathbf{x}\_{\mathbf{s}} \leftarrow \text{ISTFT}\left(\hat{\mathbf{X}}\_{\mathbf{s}}, \mathcal{L}\mathbf{X}, \boldsymbol{P}\right) \; \forall \; \mathbf{P}-\text{ STFT}\; parameters \tag{59}$$

if update k % Gamma model

$$\mathcal{L} = \ln\left(\frac{1}{N}\sum \hat{X}\_s\right) - \frac{1}{N}\sum \ln\left(\hat{X}\_s\right),\\k \approx \frac{3-s+\sqrt{(s-3)^2+24s}}{12s} \tag{60}$$

else % Exponential model

k = 1,

end

$$\theta = \frac{1}{kN} \sum \hat{X}\_s \tag{61}$$

$$A\_{S\_1} = A\_{N(OLD)} = \frac{\hat{X}\_{s\_1}^2}{2\sigma^2} - \log\frac{\hat{X}\_{s\_1}}{\sigma^2}$$

$$A\_{S\_2} = A\_{SP(OLD)} = \frac{\hat{X}\_{s\_2}}{\theta} - \log\left(\frac{\hat{X}\_{s\_2}^{k-1}}{\theta^k \Gamma(k)}\right) \tag{62}$$

<sup>Λ</sup>s NEW ð Þ <sup>¼</sup> exp �Λs OLD ð Þ � � % <sup>Λ</sup>s OLD ð Þ represents both <sup>Λ</sup>SP and <sup>Λ</sup><sup>N</sup> (63)

End for Until Convergence Return: Time domain signals xs

## 5. Experimental results


The speech and noise audio samples were taken from NOIZEUS [21]; the sampling frequency is 8 kHz. The algorithm is iterated until convergence [16]. The proposed method was compared with Euclidean NMF (EUC-NMF) [5], Itakura-Saito NMF (IS-NMF) [22], posterior regularization NMF (PR-NMF) [15], Wiener filtering [23], and a constrained version of NMF (CNMF) [24]. These methods were evaluated under nonstationary noise conditions, namely babble noise and street noise. The performance of the proposed method was evaluated using the perceptual evaluation of speech quality (PESQ) [25] and the source-to-distortion ratio (SDR) [26]. SDR gives the average quality of separation on a dB scale and accounts for both signal distortion and noise distortion. For PESQ and SDR, higher values indicate better performance. Tables 1 and 2 show the PESQ and SDR values of the different NMF algorithms evaluated. The experimental results show that the proposed method performs better than the other existing methods in terms of the PESQ and SDR indices.


Table 1. PESQ and SDR for babble noise.


Table 2. PESQ and SDR for street noise.
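As a rough illustration of the SDR metric, the snippet below computes a simplified, single-projection SDR in dB. The BSS Eval measure of [26] additionally decomposes the error into interference and artifact terms; this sketch simply treats everything outside the scaled reference as distortion, so it is an approximation for illustration only.

```python
import numpy as np

def sdr_db(reference, estimate):
    """Simplified source-to-distortion ratio in dB (illustrative, not
    the full BSS Eval decomposition of [26])."""
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    # Project the estimate onto the reference to allow for gain differences.
    scale = np.dot(estimate, reference) / np.dot(reference, reference)
    target = scale * reference
    distortion = estimate - target
    return 10.0 * np.log10(np.sum(target ** 2) / np.sum(distortion ** 2))
```

For example, an estimate that equals the reference plus a small additive tone yields an SDR of roughly 40 dB when the tone is 40 dB below the signal power.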

## 6. Conclusion

A novel speech enhancement method based on an iterative and regularized NMF algorithm for single-channel source separation is proposed. The clean speech and noise magnitude spectra are modeled as Gamma and Rayleigh distributions, respectively. The corresponding log-likelihood functions are used as penalties to regularize the cost function of the NMF. The basis and excitation matrices are estimated using the proposed regularized multiplicative update rules. The experiments reveal that the proposed speech enhancement method outperforms other existing benchmark methods in terms of SDR and PESQ values.


## Author details

Sunnydayal Vanambathina

Department of Electronics and Communication Engineering, VIT-AP University, Amaravati, India

\*Address all correspondence to: sunny.conference@gmail.com

© 2019 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Speech Enhancement Using an Iterative Posterior NMF DOI: http://dx.doi.org/10.5772/intechopen.84976

## References


[1] Lee DD, Seung HS. Learning the parts of objects by non-negative matrix factorization. Nature. 1999;401:788-791

[2] Smaragdis P. Non-negative matrix factorization for polyphonic music transcription. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics; 19–22 October 2003. Mohonk Mountain; 2003. pp. 177-180

[3] Bryan NJ, Mysore GJ. An efficient posterior regularized latent variable model for interactive sound source separation. In: International Conference on Machine Learning (ICML); June 2013

[4] Boyd S, Vandenberghe L. Convex Optimization. New York, NY, USA: Cambridge University Press; 2004

[5] Lee DD, Seung HS. Algorithms for Non-negative Matrix Factorization. NIPS Proceedings. 2001

[6] Hunter DR, Lange K. A tutorial on MM algorithms. The American Statistician. 2004;58:30-37

[7] Paltz N. Separation by 'Humming': user-guided sound extraction from monophonic mixtures. In: IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA); 2009. pp. 69–72

[8] Fitzgerald D. User assisted separation using tensor factorisations. In: European Signal Processing Conference (EUSIPCO). 2012. pp. 2412–2416

[9] Graca J, Ganchev K, Taskar B. Expectation maximization and posterior constraints. Advances in Neural Information Processing Systems. 2008; 20:1-8

[10] Ganchev K, Gillenwater J. Posterior regularization for structured latent variable models. Journal of Machine Learning Research. 2010;11:2001-2049

[11] Graça J, Ganchev K, Taskar B, Pereira F. Posterior vs. parameter sparsity in latent variable models. NIPS–Advances in Neural Information Processing Systems. 2009:664-672

[12] Smaragdis P, Raj B. Shift-invariant probabilistic latent component analysis. Journal of Machine Learning Research. Technical Report TR2007009, MERL; December, 2007:5

[13] Mysore GJ, Smaragdis P. A non-negative approach to semi-supervised separation of speech from noise with the use of temporal dynamics. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2011); 2011. pp. 17-20

[14] Bertin N, Badeau R, Vincent E. Enforcing harmonicity and smoothness in Bayesian non-negative matrix factorization applied to polyphonic music transcription. IEEE Transactions on Audio, Speech and Language Processing. 2010;18:538-549

[15] Bryan NJ, Mysore GJ. An efficient posterior regularized latent variable model for interactive sound source separation. In: ICML; 2013

[16] Sunnydayal V, Cruces-Alvarez SA. An iterative posterior NMF method for speech enhancement in the presence of additive Gaussian noise. Neurocomputing. 2017;230:312-315

[17] Cruces-Alvarez SA, Cichocki A, Amari S-i. From blind signal extraction to blind instantaneous signal separation: Criteria, algorithms, and stability. IEEE Transactions on Neural Networks. 2004;15:859-873

[18] Erkelens JS, Hendriks RC, Heusdens R, Jensen J. Minimum mean-square error estimation of discrete Fourier coefficients with generalized gamma priors. IEEE Transactions on Audio, Speech and Language Processing. 2007;15(6):1741-1752


[19] Cichocki A, Cruces S, Amari S-i. Generalized alpha-beta divergences and their application to robust nonnegative matrix factorization. Entropy. 2011;13:134-170

[20] Lin C-J. On the convergence of multiplicative update for nonnegative matrix factorization. IEEE Transactions on Neural Networks. 2007;18:1589-1596

[21] https://ecs.utdallas.edu/loizou/speech/noizeus/ [Online]

[22] Févotte C, Bertin N, Durrieu J-L. Nonnegative matrix factorization with the Itakura-Saito divergence: With application to music analysis. Neural Computation. 2009;21:793-830

[23] Ephraim Y, Malah D. Speech enhancement using a minimum mean-square error short-time spectral amplitude estimator. IEEE Transactions on Acoustics, Speech, and Signal Processing. 1984;32:1109-1121

[24] Berry MW, Browne M, Langville AN, Pauca VP, Plemmons RJ. Algorithms and applications for approximate nonnegative matrix factorization. Computational Statistics and Data Analysis. 2007;52(1):155-173

[25] Hu Y, Loizou PC. Evaluation of objective quality measures for speech enhancement. IEEE Transactions on Acoustics, Speech, and Signal Processing. 2008;16(1):229-238

[26] Vincent E, Gribonval R, Fevotte C. Performance measurement in blind audio source separation. IEEE Transactions on Audio, Speech and Language Processing. 2006;14: 1462-1469

## Chapter 4


## A Self-Paced Two-State Mental Task-Based Brain-Computer Interface with Few EEG Channels

DOI: http://dx.doi.org/10.5772/intechopen.83425

Farhad Faradji, Rabab K. Ward and Gary E. Birch

## Abstract

A self-paced brain-computer interface (BCI) system that is activated by mental tasks is introduced. The BCI's output has two operational states, the active state and the inactive state, and is activated by designated mental tasks performed by the user. The BCI could be operated using several EEG brain electrodes (channels) or only a few (i.e., five or seven channels) at a small loss in performance. The performance is evaluated on a dataset we have collected from four subjects while they performed one of four different mental tasks. The dataset contains the signals of 29 EEG electrodes distributed over the scalp. The five and seven highly discriminatory channels are selected using two different methods proposed in the paper. The signal processing structure of the interface is computationally simple. The features used are the scalar autoregressive coefficients. Classification is based on quadratic discriminant analysis. Model selection and testing procedures are accomplished via cross-validation. The results are highly promising in terms of the rates of false and true positives. The false-positive rates reach zero, while the true-positive rates are sufficiently high, i.e., 54.60 and 59.98% for the 5-channel and 7-channel systems, respectively.

Keywords: brain-computer interface, mental task, self-paced, autoregressive modeling, quadratic discriminant analysis

## 1. Introduction

Brain-computer interfaces (BCIs) aim at providing an alternative means of communication for motor-disabled people suffering from diseases such as brain injury, brainstem stroke, high-level spinal cord injury (SCI), amyotrophic lateral sclerosis (ALS, also known as Maladie de Charcot or Lou Gehrig's disease), muscular dystrophies, multiple sclerosis (MS), cerebral palsy (CP), or locked-in syndrome (sometimes called ventral pontine syndrome, cerebromedullospinal disconnection, pseudocoma, and de-efferented state). Well-developed BCI systems are used by motor-disabled people to control their environment. They can also be used by healthy individuals for entertainment purposes such as playing computer games.

Existing BCI systems are categorized in two major classes: system-paced (or synchronous) and self-paced (or asynchronous). In system-paced BCIs, the user can only control the BCI during specific time intervals that are predefined by the system and not by the user. A self-paced BCI, on the other hand, can be available for control by the user at all times. It is clear that the second class is better and more efficient in terms of practicality and applicability to real-life applications.


Two types of states (or modes) are usually assumed for the output of a self-paced BCI: the no-control (NC) state and the intentional control (IC) state. This type of BCI is in the NC mode most of the time. However, when the user issues a mental command that would lead, for example, to switching the light on, moving the computer cursor to the right, etc., the system changes its state from the NC mode to the IC mode. After that, the BCI returns to the NC state.

Two measures that can properly evaluate the performance of a self-paced BCI are the true-positive rate (TPR) and the false-positive rate (FPR). A true-positive outcome results from correctly classifying a command as an IC state, and a false-positive outcome results from the BCI misclassifying a no control as an IC state. The ratios of true positives and false positives to the total number of classifications yield the TPR and FPR, respectively. Further details on BCIs can be found in [1–5].
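In code, the two rates reduce to simple ratios over the number of classifications (a sketch with illustrative names, not code from this chapter):

```python
def tpr_fpr(n_true_positives, n_false_positives, n_classifications):
    """TPR and FPR as the ratios of true and false positives to the
    total number of classifications, as defined above."""
    return (n_true_positives / n_classifications,
            n_false_positives / n_classifications)

# e.g., 55 true positives and 1 false positive over 100 outputs:
tpr, fpr = tpr_fpr(55, 1, 100)  # tpr = 0.55, fpr = 0.01
```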

Due to the high false activation rates, BCI systems are deemed unsuccessful for use in real-life applications. This is because false activations are a major cause of user frustration. To further illustrate this point, suppose that the output rate of a self-paced BCI is 5 Hz (i.e., five outputs/s, as in the BCI designed in this paper) and the FPR value is 1%. This FPR of 1% means one false positive in every 100 outputs of the BCI. As the BCI generates 100 outputs in 20 s, there would be three false activations in every minute, which is too high for practical purposes. Considering the fact that a self-paced BCI is in the no-control mode for most of the time, even a low FPR would greatly annoy and frustrate any user. This is why for BCI systems, lowering the FPR is of extreme importance.
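The arithmetic of this example can be checked directly:

```python
output_rate_hz = 5  # five BCI outputs per second, as in the design above
fpr = 0.01          # one false positive per 100 outputs

false_activations_per_minute = output_rate_hz * 60 * fpr
print(false_activations_per_minute)  # 3.0, i.e., three false activations per minute
```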

Mental tasks are a class of neurological phenomena that can be exploited in BCI systems. They generally refer to intentional cognitive tasks that are done by the brain. Mental tasks can be mental mathematical calculations (such as multiplication and counting), mental rotation of a two- or three-dimensional object, motor imagery, visualization, etc.

Motor imagery is a task that has been investigated by a large majority of BCI studies [6–64]. The use of other types of mental tasks in BCI studies has received little attention in the literature. The papers that have studied non-motor imagery mental tasks along with the motor imagery tasks include [65–84]. Studies [85–96] have only considered non-motor imagery mental tasks.

The mental tasks investigated in [65] are motor imagery of opening and closing of the users' hand(s) and serially subtracting seven from a large number. In [66], four motor imagery tasks of (the users) left hand movement, right hand movement, foot movement, and tongue movement along with a simple calculation task (i.e., repeated subtraction of a constant number from a randomly chosen number) are considered as the mental tasks. The mental tasks used in [67] are the imagination of left and right hand movements, cube rotation, and subtraction. Four imagery tasks, i.e., spatial navigation around a familiar environment, auditory imagery of a familiar tune, and right and left motor imageries of opening and closing the hand, are investigated in [68]. In [69, 70], the imagination of left and right hand (or arm) movements, cube rotation, subtraction, and word association are studied. In [71, 72, 75–77, 79, 81], the imagination of repetitive self-paced left or right hand movement and the generation of words beginning with the same random letter are investigated. The EEG data used in these studies are those provided by the IDIAP Research Institute in Switzerland [70]. The studies in [77, 81] consider the imagination of the left or right hand movement as well using the data collected by the BCI laboratory at Graz University of Technology in Austria [16]. In [73], the mental tasks are auditory recall, mental navigation, sensorimotor attention of the left hand, sensorimotor attention of the right hand, mental calculation, imaginary movement of the left hand, and imaginary movement of the right hand.

The mental tasks used in [74] are the exact calculation of repetitive additions, imagination of left finger movement, mental rotation of a cube, and evocation of a nonverbal audio signal. In [78], the right and left hand extension motor imageries, subtraction, navigation imagery, auditory imagery, phone imagery, and idle task are investigated. The mental tasks considered in [80] include the right and left hand flexion motor imageries, subtraction, navigation imagery, auditory imagery, phone imagery, and idle task. The mental tasks considered in [82] are subtraction, navigation imagery, auditory recall, phone imagery, and motor imageries of the left and right hands. Hand movements and word imagination are the mental tasks used in [83]. Imagination of left and right hand movements, mental rotation of a 3D geometric figure, and mental subtraction of a two-digit number from a three-digit number are considered in [84].

The old and small datasets of Keirn and Aunon [85] that contain non-motor imagery mental tasks are employed in [86–90, 92, 94–96]. Vowel speech imagery (i.e., imaginary speech of the two English vowels /a/ and /u/) is proposed as a control scheme for the BCI system in [91]. In [93], the mental arithmetic and spatial imageries are investigated. Real fist rotation and imagined reverse counting are investigated in [97].

From all BCI systems designed in these studies, only the systems in [19, 24, 32, 39, 42, 46, 57, 61, 67, 69, 70] and [76, 78, 80, 82] are self-paced. The FPR values are not reported in [57, 69, 70, 76, 78, 80, 82]. Even though the number of FPs and TPs is mentioned in [32, 46], the rates of FPs and TPs are not given.

The FPRs are given in [19, 24, 42, 61]. In [19], the reported FPR values are in the 10–77% range. In [24], the BCI system was evaluated in terms of FPs during only one 3-minute interval. No FPs were generated during this interval; however, since the designed BCI is too slow, it is deemed impractical for real-life applications: the minimum time period between two subsequent active states of the system is 4 s. In [42], the FPRs of the BCI systems are between 3.8 and 32.5%. In [61], the reported false activation rate is in the range of 0–3.25 activations/minute.

In [39], the specificity rates (i.e., 100 − FPR%) are given. Based on the specificity rates, the FPR values are between 0.38 and 14.38%. Based on the confusion matrices given in [67], the FPRs are in the range of 0–9%.

The first and ultimate goal of this study is to develop a self-paced two-state mental task-based BCI with a zero or near-zero false activation rate using EEG signals. The mental tasks investigated are the visualization of some words as they are written, multiplication, mentally rotating a 3D object, and motor imagery. We collected the EEG signals of these four tasks as they were being performed mentally and also during the baseline state, i.e., when the subjects were not performing any of the four mental tasks, as will be explained in Section 2. The number of EEG channels used was 29. The details on each mental task and the dataset are provided in the next section, where the experimental protocol is described.

The second goal is to design a BCI with few channels. Such a BCI has few electrodes to collect the EEG signals and would be significantly more efficient computationally, leading to BCIs that operate in real time. Thus, for practical applications, the number of EEG channels should be small. In Section 3.1, we discuss how we choose five or seven channels that would yield acceptable performance.

For each subject, four different BCIs are developed. Each BCI is based on one of the four mental tasks mentioned above, i.e., in each BCI, one mental task is considered as the IC task (i.e., the user is indeed issuing a command). The other three mental tasks are considered as NC tasks. The BCI system should remain in the NC mode during the NC tasks and the baseline.

control by the user at all times. It is clear that the second class is better and more efficient in terms of practicality and applicability to real-life applications.

Two types of states (or modes) are usually assumed for the output of a selfpaced BCI: the no-control (NC) state and the intentional control (IC) state. This type of BCIs is in the NC mode most of the time. However, when the user issues a mental command that would lead, for example, to switching the light on, moving the computer cursor to the right, etc., the system changes its state from the NC

Two measures that can properly evaluate the performance of a self-paced BCI are the true-positive rate (TPR) and the false-positive rate (FPR). A true-positive outcome results from correctly classifying a command as an IC state, and a false-positive outcome results from the BCI misclassifying a no-control state as an IC state. The ratios of true positives and false positives to the total number of classifications yield the TPR and FPR, respectively. Further details on BCIs can be found in [1–5].

BCI systems with high false activation rates are deemed unsuitable for real-life applications, because false activations are a major cause of user frustration. To illustrate this point, suppose that the output rate of a self-paced BCI is 5 Hz (i.e., five outputs/s, as in the BCI designed in this paper) and the FPR value is 1%. This FPR of 1% means one false positive in every 100 outputs of the BCI. As the BCI generates 100 outputs in 20 s, there would be three false activations every minute, which is too high for practical purposes. Considering that a self-paced BCI is in the no-control mode most of the time, even a low FPR would greatly annoy and frustrate any user. This is why, for BCI systems, lowering the FPR is of extreme importance.
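The arithmetic behind this example can be checked directly; the figures below (5 Hz output rate, 1% FPR) are the ones quoted above, and the variable names are ours.

```python
# Back-of-the-envelope check of the false-activation arithmetic above,
# assuming, as in the text, that the FPR is measured per BCI output.
output_rate_hz = 5                    # five outputs per second
fpr = 0.01                            # one false positive per 100 outputs
outputs_per_minute = output_rate_hz * 60
false_activations_per_minute = outputs_per_minute * fpr
print(false_activations_per_minute)   # 3.0 false activations per minute
```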

Mental tasks are a class of neurological phenomena that can be exploited in BCI systems. They generally refer to intentional cognitive tasks performed by the brain. Mental tasks can be mental mathematical calculations (such as multiplication and counting), mental rotation of a two- or three-dimensional object, motor imagery, visualization, etc.

Motor imagery is the task that has been investigated by the large majority of BCI studies [6–64]. The use of other types of mental tasks in BCI studies has received little attention in the literature. The papers that have studied non-motor imagery mental tasks along with motor imagery tasks include [65–84]. Studies [85–96] have only considered non-motor imagery mental tasks.

The mental tasks investigated in [65] are motor imagery of opening and closing the users' hand(s) and serially subtracting seven from a large number. In [66], four motor imagery tasks (the users' left hand, right hand, foot, and tongue movements) along with a simple calculation task (i.e., repeated subtraction of a constant number from a randomly chosen number) are considered as the mental tasks. The mental tasks used in [67] are the imagination of left and right hand movements, cube rotation, and subtraction. Four imagery tasks, i.e., spatial navigation around a familiar environment, auditory imagery of a familiar tune, and right and left motor imageries of opening and closing the hand, are investigated in [68]. In [69, 70], the imagination of left and right hand (or arm) movements, cube rotation, subtraction, and word association are studied. In [71, 72, 75–77, 79, 81], the imagination of repetitive self-paced left or right hand movement and the generation of words beginning with the same random letter are investigated. The EEG data used in these studies are those provided by the IDIAP Research Institute in Switzerland [70]. The studies in [77, 81] also consider the imagination of the left or right hand movement using the data collected by the BCI laboratory at Graz University of Technology in Austria [16]. In [73], the mental tasks are auditory recall, mental navigation, sensorimotor attention of the left hand, sensorimotor attention of the right hand, mental calculation, imaginary movement


Even though the system performance is evaluated off-line, the EEG signals are analyzed in a self-paced manner. A signal trial is divided into overlapping segments. Each segment is labeled as either IC or NC, depending on whether or not it belongs to the IC task. The performance of the system is then evaluated in terms of TPR and FPR.


A Self-Paced Two-State Mental Task-Based Brain-Computer Interface with Few EEG Channels


DOI: http://dx.doi.org/10.5772/intechopen.83425


Classification is based on quadratic discriminant analysis due to its simplicity and accuracy. The features to be classified are the scalar autoregressive (AR) coefficients of the EEG signals. The feature extraction and classification methods employed are efficient in terms of computational complexity. A cross-validation process is performed to obtain the optimal AR model order as well as the best EEG channels for every mental task of every subject.

## 2. Dataset

The EEG signals of four subjects were collected while they were seated in a chair approximately 75 cm in front of an LCD monitor in a 4 × 4 m<sup>2</sup> room. The subjects were asked to keep their eyes open during the recordings. Using an electrode cap, the signals were captured from 29 channels located at Fpz, AF3, AF4, F7, F3, Fz, F4, F8, FC5, FC1, FC2, FC6, T7, C3, Cz, C4, T8, CP5, CP1, CP2, CP6, P7, P3, Pz, P4, P8, PO3, PO4, and Oz, according to the 10–10 system [98, 99]. Electrodes were properly distributed over the scalp to study the signals of the different brain regions. Refer to Figure 1 to see the electrode positions. The earlobes were electrically linked together and used as the reference. The EEG signals were amplified and digitized using a 12-bit analog-to-digital converter. The sampling rate was 500 Hz.

Figure 1.
EEG signals were recorded from 29 electrodes distributed over the scalp according to the 10–10 system.

Every subject attended three recording sessions on 3 different days. They were asked to perform four mental tasks. Each session started with the preparation and setup and consisted of six recording runs. Approximately 12 minutes of EEG signals were recorded in each run. The subjects were instructed to complete the six runs one after the other at their own pace. Each run consisted of the signals of 20 epochs (i.e., five epochs for each of the four mental tasks). An epoch was 32.5 ± 2.5 s long. To avoid possible adaptation, the order of epochs belonging to different mental tasks was changed randomly from one run to the other.


The timing of the epochs was as follows. At the beginning of each epoch, there was a break with a length of 15 ± 2.5 s. The break had a variable length in different epochs so as to avoid possible adaptation. After the break, a "Start" cue was displayed on the screen to prompt the subject to perform a specific mental task; the task to be performed was shown on the screen. The length of the Start cue was 4 s. The subjects had been instructed to start performing the mental task approximately 1 s after the disappearance of the Start cue and to keep performing the task until a "Stop" cue appeared on the screen. The 1-s delay was used to avoid any possible effects of visual evoked potentials. The time interval between the Start cue and the Stop cue was 10 s. The Stop cue lasted for 2.5 s. After the Stop cue, the break of the next epoch started. Figure 2 illustrates the timing of each epoch.

The background of the screen was always black. During the break interval of each epoch, "Break" was written on the screen in white and in a size which could be easily read from a distance of 75 cm. The name of a specific mental task written in white was the Start cue. The size of the Start cue was the same as Break. The word "STOP" written in a green circle was the Stop cue.

The mental tasks were:

1. Visualizing some words being written on a board: the subjects were told to imagine a board on which they were writing their full names.

2. Non-trivial multiplication: the subjects performed the multiplication of two two-digit numbers. The numbers to be multiplied were given to them as the Start cue.

3. Mentally rotating a 3D object: the subjects imagined that they were rotating a laptop.

4. Motor imagery: the subjects imagined extending their right hand.


The subjects were asked to be in the baseline state during the break interval of each epoch, i.e., they should not be performing any of the four mental tasks of the experiment and were supposed to remain looking at the screen and not move. They should attain the same physical condition as that assumed when they performed the mental tasks.

Figure 2.
Epoch timing. A "Start" cue was displayed on the screen for 4 s after a break of length 15 ± 2.5 s. The subject was told to wait about 1 s after the cue disappeared before performing a mental task for about 10 s. The "Stop" cue was displayed on the screen for 2.5 s, informing the subject of the end of the 10-s interval. The next epoch then started.

For each of the four mental tasks, 300 s (thirty 10-s epochs) of EEG signals were recorded in each session, along with about 1800 s (120 epochs) of baseline signals. Therefore, at the end of the last session, we had 90 epochs of each mental task and 360 epochs of the baseline for each subject.
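These counts follow directly from the protocol; a quick sanity check (variable names are ours, all numbers from the text):

```python
# Sanity check of the session bookkeeping described above.
runs_per_session, sessions = 6, 3
epochs_per_task_per_run = 5                # 20 epochs per run, 4 tasks
epochs_per_task_per_session = runs_per_session * epochs_per_task_per_run
task_seconds_per_session = epochs_per_task_per_session * 10        # 10-s epochs
baseline_epochs_per_session = epochs_per_task_per_session * 4      # one break per epoch
total_task_epochs = sessions * epochs_per_task_per_session
total_baseline_epochs = sessions * baseline_epochs_per_session
print(task_seconds_per_session, total_task_epochs, total_baseline_epochs)  # 300 90 360
```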


The experimental protocol had been approved by the Behavioral Research Ethics Board of the University of British Columbia, and all subjects signed the required consent form.

## 3. Methodology

## 3.1 EEG channel selection

The dataset is formed by the signals from 29 electrodes. However, a BCI system with 29 channels is impractical for use in real-life applications. For practical applications, the number of channels should be as small as possible. We thus select a smaller set of the channels that together yield the best performance for the final design of our system.

Suppose that we have a BCI system with n channels and we need to select the BCI that has m channels (m < n) and yields the best performance. The ideal but not always computationally practical way is to consider all the possible m-channel combinations of the n channels and select the combination that yields the best system performance. For instance, if we want to decrease the number of channels from 29 to 7, the system performance needs to be evaluated and compared for all 1,560,780 different 7-channel combinations. Moreover, in order to make the results more robust, the performance evaluation of the system is usually carried out for different training and testing sets via a cross-validation process. If we assume five evaluation runs in the cross-validation, the number above (1,560,780) should be multiplied by 5. This amounts to a prohibitively large number of computations, as the processing would take several days; it is thus impractical to implement.
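The combination count quoted above is the binomial coefficient C(29, 7), which can be verified directly:

```python
# Number of 7-channel subsets of 29 channels, as quoted in the text.
from math import comb

n_combinations = comb(29, 7)
print(n_combinations)        # 1560780
print(n_combinations * 5)    # evaluations needed with five cross-validation runs
```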

In this study, two approaches are used to select the best system with m channels. The first approach repeatedly deletes, from the 29-channel system, the channel whose removal results in the least reduction in the performance of the remaining system; the channels are deleted one by one. The second approach builds a new system by adding channels one by one; the channel added at each step is the one, among those not yet selected, that maximizes the performance of the new system. These two approaches result in two methods that we denote MDelete and MForm. Even though these methods are not optimal like the exhaustive method mentioned above (i.e., considering all possible combinations), they still reach the goal to a certain extent.

Channel selection method one: in channel selection method one (MDelete), all 29 channels are first considered. The resultant FPR and TPR values after deleting each channel are obtained. The BCI system with 28 channels that yields the best performance is selected. That is, the channel whose removal results in the best 28-channel system is detected and deleted from the list. This task is repeated on the remaining list (i.e., on 28 channels), and the best 27-channel system is found. This is repeated again until all but m channels are omitted.
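As an illustrative sketch (not the authors' code), MDelete is a greedy backward elimination. Here `evaluate` is a hypothetical callback that trains and validates the BCI on a given channel subset and returns a scalar score (higher is better), e.g., one combining TPR and FPR:

```python
# Minimal sketch of MDelete (greedy backward elimination).
# `evaluate` is a hypothetical scoring callback: subset of channels -> score.
def mdelete(channels, m, evaluate):
    selected = list(channels)                 # start from all channels
    while len(selected) > m:
        # delete the channel whose removal hurts performance the least,
        # i.e., keep the best-scoring (len-1)-channel subset
        selected = max(
            ([c for c in selected if c != cand] for cand in selected),
            key=evaluate,
        )
    return selected
```

With a toy score such as `lambda subset: -sum(subset)`, `mdelete(range(5), 2, ...)` repeatedly drops the largest index and returns `[0, 1]`.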

Channel selection method two: channel selection method two (MForm) is similar to the first one except that it is carried out in the reverse direction and the selection of the channels differs, as explained below. In the first iteration, the BCI system is assumed to have one channel only. The channel with the best performance among the 29 existing channels is detected; this forms the best BCI that has one channel only. In the second iteration, the BCI system is assumed to have two channels only. One of these two channels is the one already selected in iteration 1. Thus, the performance of each channel among the remaining 28 channels, together with the channel selected in the first iteration, is obtained. Among these 28 possibilities, the channel that (together with the already selected one) yields the best performance is added to the list of the best channels. In the third iteration, three channels are considered: each channel from the remaining list (i.e., 27 channels) is combined with the two channels already selected in the previous iterations, and the channel that yields the 3-channel BCI system with the best performance is added to the list. This procedure is repeated until m channels are added to the list of the best channels.
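MForm is the complementary greedy forward selection. The sketch below mirrors the one given for MDelete, with the same hypothetical `evaluate` callback:

```python
# Minimal sketch of MForm (greedy forward selection).
# `evaluate` is a hypothetical scoring callback: subset of channels -> score.
def mform(channels, m, evaluate):
    remaining, selected = list(channels), []
    while len(selected) < m:
        # add the channel that, together with those already chosen,
        # yields the best-performing system
        best = max(remaining, key=lambda c: evaluate(selected + [c]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

Because each step is only locally optimal, MForm and MDelete may return different channel sets for the same data, as the results in Tables 1 and 2 show.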

## 3.2 Procedure


The length of each mental task epoch is 10 s. The baseline epochs have a variable length in the range of 15 ± 2.5 s. Since the sampling frequency is 500 samples/s, each mental task epoch consists of 5000 samples, and the number of samples in a baseline epoch varies between 6250 and 8750.

To process the data, every epoch is divided into overlapping segments. Each segment is of length 1 s (i.e., 500 samples) and overlaps with the previous segment by 400 samples. In other words, the BCI system generates an output every 100 samples using the last 500 samples of the signals. Since 100 samples are equivalent to 0.2 s, the output rate of the BCI is 5 Hz.
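The segmentation just described can be sketched as a sliding window; the shapes and step sizes are those given above, while the function name is illustrative:

```python
# Sliding-window segmentation: 500-sample (1-s) windows with 400-sample
# overlap, i.e., a new segment every 100 samples (a 5 Hz output rate).
import numpy as np

def segment_epoch(epoch, win=500, step=100):
    # epoch: (n_samples, n_channels) array
    starts = range(0, len(epoch) - win + 1, step)
    return np.stack([epoch[s:s + win] for s in starts])

epoch = np.zeros((5000, 29))      # one 10-s mental task epoch at 500 Hz
segments = segment_epoch(epoch)
print(segments.shape)             # (46, 500, 29)
```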

Feature selection and classification: autoregressive (AR) modeling is used to obtain the features from the segments. Based on the results in [100, 101], 1 s of the EEG signal is sufficiently long for the AR model estimation. The feature vector is formed by concatenating the AR coefficients (estimated from the segments of the selected channels) into a single vector. This vector is then fed to the classifier for classification purposes. Classification is performed using quadratic discriminant analysis. The AR modeling and quadratic discriminant analysis are briefly explained in Appendices A and B of this paper.
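As a rough illustration of this feature extraction (the chapter's exact AR estimator is described in its Appendix A; a simple least-squares fit is used here as a stand-in):

```python
# Illustrative stand-in, not the chapter's exact estimator: per-channel scalar
# AR coefficients fitted by least squares, concatenated into one feature vector.
import numpy as np

def ar_coeffs(x, order):
    # Fit x[t] ~ a[0]*x[t-1] + ... + a[order-1]*x[t-order] by least squares.
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return a

def feature_vector(segment, order):
    # segment: (n_samples, n_channels) array for one 1-s window
    return np.concatenate([ar_coeffs(segment[:, ch], order)
                           for ch in range(segment.shape[1])])
```

For a 1-s segment of c channels and AR order p, the feature vector has c × p entries; this vector is what the quadratic discriminant classifier receives.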

Custom designing: custom designing the system for every subject yields improvements in the overall BCI performance [102, 103]. In this study, the BCI system is customized for each subject and for each mental task by selecting the channels and AR orders during cross-validation.

Cross-validation: to perform the cross-validation, we randomly divide the whole set of segments into five equal-sized sections. Four of the data sections are used to train and validate the system. Testing is carried out on the remaining section. The four data sections assigned to training and validation are further divided randomly into five data partitions of equal size. Four partitions are used for training and one is used for validation.
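The nested split can be sketched with NumPy; the number of segments and the seed below are illustrative:

```python
# Illustrative nested split: five outer sections (one held out for testing),
# with the remaining four re-split into five partitions (one for validation).
import numpy as np

rng = np.random.default_rng(0)             # seed is arbitrary
idx = rng.permutation(np.arange(100))      # indices of all segments
outer = np.array_split(idx, 5)

test_section = outer[0]                    # held-out test section
train_val = np.concatenate(outer[1:])      # remaining four sections
inner = np.array_split(rng.permutation(train_val), 5)
validation = inner[0]                      # one partition for validation
training = np.concatenate(inner[1:])       # four partitions for training
print(len(test_section), len(validation), len(training))  # 20 16 64
```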

Selecting the best five and seven channels: we select the top five and also the top seven best-performing channels for further processing using MDelete and MForm with an AR model order of 40. We then compare the results of the 5-channel cases with those of the 7-channel cases to decide the final design of the BCI system. Channel selection is accomplished separately for different subjects and different mental tasks. Tables 1 and 2 list the best channels selected using MDelete and MForm, respectively. For each subject and each mental task, each channel selected by both MDelete and MForm is shown in bold in the tables.

MDelete and MForm give BCI systems with different channel combinations. This is because each of these methods obtains a channel set which is only locally optimum. The globally optimum set can be obtained by the exhaustive method mentioned earlier, in which the best combination is selected by considering all possible combinations of channels. Finding the global optimum is computationally impossible for our application. We hence have to be satisfied with the local optima.


| Subject | Mental task | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|---|
| 1 | Sentence visualization | **T8** | **F8** | F3 | AF3 | **Oz** | CP6 | **P7** |
| 1 | Non-trivial multiplication | FC5 | **FC6** | PO3 | T7 | C4 | CP6 | C3 |
| 1 | 3D object rotation | **FC6** | T8 | F3 | AF3 | CP6 | AF4 | F8 |
| 1 | Motor imagery | **CP6** | **T8** | **P8** | AF3 | F7 | FC2 | T7 |
| 2 | Sentence visualization | **P8** | **FC6** | **T7** | F3 | PO4 | P7 | P3 |
| 2 | Non-trivial multiplication | Oz | **Fpz** | Fz | T7 | **FC6** | P3 | C3 |
| 2 | 3D object rotation | **Oz** | T7 | P3 | FC5 | Fz | **C4** | Cz |
| 2 | Motor imagery | CP2 | **Oz** | F7 | Fpz | P7 | CP6 | F4 |
| 3 | Sentence visualization | **PO3** | **P8** | P4 | AF4 | T7 | **P7** | C4 |
| 3 | Non-trivial multiplication | **Oz** | **Fpz** | Pz | T7 | CP1 | PO4 | CP5 |
| 3 | 3D object rotation | **FC2** | CP1 | PO4 | P8 | Cz | F4 | FC6 |
| 3 | Motor imagery | T7 | **P7** | CP2 | Fpz | CP6 | F7 | FC6 |
| 4 | Sentence visualization | **P8** | Cz | PO3 | **CP5** | **T7** | PO4 | AF3 |
| 4 | Non-trivial multiplication | F4 | CP5 | **T8** | **P7** | **P8** | **Oz** | F3 |
| 4 | 3D object rotation | AF3 | **FC5** | **AF4** | P7 | **C3** | T8 | PO4 |
| 4 | Motor imagery | Cz | **P8** | P7 | FC6 | T7 | **F3** | CP5 |

Table 1.
Channels selected for different subjects and mental tasks using channel selection method one (MDelete). Channels are listed from best (1) to worst (7); channels selected by both MDelete and MForm are shown in bold.

| Subject | Mental task | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|---|
| 1 | Sentence visualization | **T8** | FC5 | **F8** | P4 | T7 | **P7** | **Oz** |
| 1 | Non-trivial multiplication | F7 | Oz | **FC6** | T8 | F8 | P7 | CP1 |
| 1 | 3D object rotation | T7 | **FC6** | P8 | CP2 | P7 | Oz | F4 |
| 1 | Motor imagery | Oz | P7 | C3 | **CP6** | **T8** | **P8** | FC6 |
| 2 | Sentence visualization | **P8** | F7 | **FC6** | **T7** | FC5 | AF3 | T8 |
| 2 | Non-trivial multiplication | CP6 | P4 | T8 | **Fpz** | C4 | F7 | **FC6** |
| 2 | 3D object rotation | P7 | P4 | **C4** | **Oz** | FC6 | AF4 | F7 |
| 2 | Motor imagery | CP5 | Fz | FC6 | T7 | FC1 | **Oz** | C4 |
| 3 | Sentence visualization | **PO3** | Oz | Fpz | **P8** | **P7** | Pz | CP1 |
| 3 | Non-trivial multiplication | **Oz** | P8 | **Fpz** | P7 | Cz | T8 | AF3 |
| 3 | 3D object rotation | CP5 | **FC2** | P7 | P4 | T7 | Oz | Fz |
| 3 | Motor imagery | T8 | Oz | P3 | PO3 | **P7** | AF4 | AF3 |
| 4 | Sentence visualization | CP6 | **P8** | FC5 | AF4 | **T7** | **CP5** | Oz |
| 4 | Non-trivial multiplication | AF4 | F7 | **T8** | **P8** | **P7** | T7 | **Oz** |
| 4 | 3D object rotation | F3 | F7 | **AF4** | PO3 | **FC5** | Oz | **C3** |
| 4 | Motor imagery | CP2 | P3 | AF4 | F8 | **P8** | **F3** | F7 |

Table 2.
Channels selected for different subjects and mental tasks using channel selection method two (MForm). Channels are listed from best (1) to worst (7); channels selected by both MDelete and MForm are shown in bold.

Finding the optimum AR model order: after selecting the best five and the best seven channels for each subject and each mental task, we find the optimum AR model order for each of these two cases in the cross-validation process. The initial AR model order is 41. If the FPR for an order reaches zero, that order is selected as the optimum order; if not, the order is increased by one, and the FPR of the new order is then calculated. Increasing the AR order is terminated once the FPR is zero or a maximum order of 136 is reached. In the latter case, the order corresponding to the minimum FPR is chosen as the optimum AR order.

## 4. Experimental results

For each subject and each mental task, there are two sets of five best channels (one obtained using MDelete and the other using MForm). The performance of the 5-channel set that yielded the better performance is summarized in Table 3. For each subject and each mental task, the table shows whether the channels obtained using MDelete (1) or MForm (2) were selected; this is indicated in the channel selection method (CSM) column. It also shows the mean values of the TPR and FPR obtained from the cross-validation and testing processes in two separate rows. The optimum AR model order is also included in the table. In Table 4, the difference in performance between the 5-channel BCIs using MDelete and MForm is given. Table 5 shows the results of the t-test between any two of the four mental tasks for each subject in the 5-channel BCIs.

The performance of the 7-channel BCIs using the two channel selection methods are compared with each other, and the performance of the better method is given in Table 6 for each subject and each mental task. The performance difference between the two channel selection methods is shown in Table 7.

In Tables 3 and 6, the values related to the highest performance are shown in bold, while those related to the lowest performance are underlined.

The Welch's t-test [104], as a statistical significance test, is performed on the TPR values to measure the performance difference between MDelete and MForm (for each subject and each mental task) and between every pair of the four mental tasks (for each subject). The null hypothesis is that there is no difference between the TPR values of the two groups (the two groups can be the two channel selection methods or any two mental tasks). We assume a 5% significance level. The null hypothesis is rejected if the resultant p-value is less than 0.05, meaning that the difference between the two groups is statistically significant.

Subject 4

DOI: http://dx.doi.org/10.5772/intechopen.83425

Table 2.

59

A Self-Paced Two-State Mental Task-Based Brain-Computer Interface with Few EEG Channels
DOI: http://dx.doi.org/10.5772/intechopen.83425

## Table 1.

| Subject | Mental task | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|---|
| 1 | Sentence visualization | T8 | F8 | F3 | AF3 | Oz | CP6 | P7 |
| 1 | Non-trivial multiplication | FC5 | FC6 | PO3 | T7 | C4 | CP6 | C3 |
| 1 | 3D object rotation | FC6 | T8 | F3 | AF3 | CP6 | AF4 | F8 |
| 1 | Motor imagery | CP6 | T8 | P8 | AF3 | F7 | FC2 | T7 |
| 2 | Sentence visualization | P8 | FC6 | T7 | F3 | PO4 | P7 | P3 |
| 2 | Non-trivial multiplication | Oz | Fpz | Fz | T7 | FC6 | P3 | C3 |
| 2 | 3D object rotation | Oz | T7 | P3 | FC5 | Fz | C4 | Cz |
| 2 | Motor imagery | CP2 | Oz | F7 | Fpz | P7 | CP6 | F4 |
| 3 | Sentence visualization | PO3 | P8 | P4 | AF4 | T7 | P7 | C4 |
| 3 | Non-trivial multiplication | Oz | Fpz | Pz | T7 | CP1 | PO4 | CP5 |
| 3 | 3D object rotation | FC2 | CP1 | PO4 | P8 | Cz | F4 | FC6 |
| 3 | Motor imagery | T7 | P7 | CP2 | Fpz | CP6 | F7 | FC6 |
| 4 | Sentence visualization | P8 | Cz | PO3 | CP5 | T7 | PO4 | AF3 |
| 4 | Non-trivial multiplication | F4 | CP5 | T8 | P7 | P8 | Oz | F3 |
| 4 | 3D object rotation | AF3 | FC5 | AF4 | P7 | C3 | T8 | PO4 |
| 4 | Motor imagery | Cz | P8 | P7 | FC6 | T7 | F3 | CP5 |

Channels selected for different subjects and mental tasks using channel selection method one (MDelete). Channels are listed from best (1) to worst (7).

## Table 2.

| Subject | Mental task | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|---|
| 1 | Sentence visualization | T8 | FC5 | F8 | P4 | T7 | P7 | Oz |
| 1 | Non-trivial multiplication | F7 | Oz | FC6 | T8 | F8 | P7 | CP1 |
| 1 | 3D object rotation | T7 | FC6 | P8 | CP2 | P7 | Oz | F4 |
| 1 | Motor imagery | Oz | P7 | C3 | CP6 | T8 | P8 | FC6 |
| 2 | Sentence visualization | P8 | F7 | FC6 | T7 | FC5 | AF3 | T8 |
| 2 | Non-trivial multiplication | CP6 | P4 | T8 | Fpz | C4 | F7 | FC6 |
| 2 | 3D object rotation | P7 | P4 | C4 | Oz | FC6 | AF4 | F7 |
| 2 | Motor imagery | CP5 | Fz | FC6 | T7 | FC1 | Oz | C4 |
| 3 | Sentence visualization | PO3 | Oz | Fpz | P8 | P7 | Pz | CP1 |
| 3 | Non-trivial multiplication | Oz | P8 | Fpz | P7 | Cz | T8 | AF3 |
| 3 | 3D object rotation | CP5 | FC2 | P7 | P4 | T7 | Oz | Fz |
| 3 | Motor imagery | T8 | Oz | P3 | PO3 | P7 | AF4 | AF3 |

Channels selected for different subjects and mental tasks using channel selection method two (MForm). Channels are listed from best (1) to worst (7).

mentioned earlier, in which the best combination is selected by considering all possible combinations of channels. Finding the global optimum is computationally infeasible for our application, so we have to be satisfied with the local optima.

Finding the optimum AR model order: after selecting the best five and the best seven channels for each subject and each mental task, we find the optimum AR model order for each of these two cases in the cross-validation process. The initial AR model order is 41. If the FPR for an order reaches zero, that order is selected as the optimum order; if not, the order is increased by one, and the FPR of the new order is calculated. The search terminates once the FPR reaches zero or a maximum order of 136 is reached; in the latter case, the order corresponding to the minimum FPR is chosen as the optimum AR order.
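A minimal sketch of this order search, with `evaluate_fpr` as a hypothetical stand-in for the cross-validation routine that returns the mean FPR (in %) for a given AR model order:

```python
def find_optimum_ar_order(evaluate_fpr, start=41, max_order=136):
    """Search AR orders from `start` upward, stopping at the first zero-FPR order."""
    best_order, best_fpr = start, float("inf")
    for order in range(start, max_order + 1):
        fpr = evaluate_fpr(order)
        if fpr == 0.0:              # zero false positives: select this order
            return order
        if fpr < best_fpr:          # otherwise track the minimum-FPR order
            best_order, best_fpr = order, fpr
    return best_order               # no zero-FPR order found up to max_order
```

For example, `find_optimum_ar_order(lambda order: max(0.0, 100.0 - order))` stops at order 100, the first order with zero FPR.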

## 4. Experimental results

For each subject and each mental task, there are two sets of five best channels (one is obtained using MDelete and the other is obtained using MForm). The performance of the 5-channel set that yielded the better performance is summarized in Table 3. For each subject and each mental task, the table shows whether the channels obtained using MDelete (1) or MForm (2) are selected. This is indicated under the channel selection method (CSM) column. It also shows the mean values of the TPR and FPR obtained from the cross-validation and testing processes. The optimum AR model order is also included in the table. In Table 4, the difference in the performance between the 5-channel BCIs using MDelete and MForm is given. Table 5 shows the results of the t-test between any two of the four mental tasks for each subject in the 5-channel BCIs.

The performances of the 7-channel BCIs using the two channel selection methods are compared with each other, and the performance of the better method is given in Table 6 for each subject and each mental task. The performance difference between the two channel selection methods is shown in Table 7.

In Tables 3 and 6, the values related to the highest performance are shown in bold, while those related to the lowest performance are underlined.

The Welch's t-test [104], as a statistical significance test, is performed on the TPR values for measuring the performance difference between MDelete and MForm (for each subject and each mental task) and between every pair of the four mental tasks (for each subject). The null hypothesis is that there is no difference between the TPR values of the two groups (these two groups can be the two channel selection methods or any two mental tasks). We assume a 5% significance level. The null hypothesis is rejected if the resultant p-value is less than 0.05. This means that the TPR values of the two test groups are significantly different. If p ≥ 0.05, the null hypothesis cannot be rejected at the 5% significance level. This implies that there is no significant difference between the TPR values of the test groups.

## Table 3.

| Subject | Mental task | CSM | AR | TPR (validation) | TPR (testing) | FPR (validation) | FPR (testing) |
|---|---|---|---|---|---|---|---|
| 1 | Sentence visualization | 1 | 87 | 59.82 | 57.78 | 0.00 | 0.00 |
| 1 | Non-trivial multiplication | 2 | 98 | 61.52 | 61.24 | 0.00 | 0.00 |
| 1 | 3D object rotation | 2 | 89 | 59.82 | 59.71 | 0.00 | 0.00 |
| 1 | Motor imagery | 2 | 93 | 56.56 | 55.30 | 0.00 | 0.00 |
| 2 | Sentence visualization | 1 | 91 | 57.37 | 58.56 | 0.00 | 0.00 |
| 2 | Non-trivial multiplication | 1 | 90 | 58.29 | 57.93 | 0.00 | 0.00 |
| 2 | 3D object rotation | 2 | 84 | 57.23 | 56.00 | 0.00 | 0.00 |
| 2 | Motor imagery | 1 | 90 | 54.68 | 54.38 | 0.00 | 0.00 |
| 3 | Sentence visualization | 2 | 81 | 61.98 | 62.68 | 0.00 | 0.00 |
| 3 | Non-trivial multiplication | 2 | 82 | 63.25 | **63.72** | 0.00 | 0.00 |
| 3 | 3D object rotation | 2 | 95 | 57.23 | 55.37 | 0.00 | 0.00 |
| 3 | Motor imagery | 1 | 90 | 57.25 | 58.51 | 0.00 | 0.00 |
| 4 | Sentence visualization | 1 | 101 | 55.21 | 53.89 | 0.00 | 0.00 |
| 4 | Non-trivial multiplication | 1 | 129 | 42.89 | 42.61 | 0.00 | 0.01 |
| 4 | 3D object rotation | 1 | 136 | 33.03 | <u>33.50</u> | 0.20 | 0.18 |
| 4 | Motor imagery | 1 | 134 | 41.62 | 42.37 | 0.01 | 0.00 |

Cross-validation and testing results of the better channel selection method for 5-channel systems. The table shows the mean values of the TPR and FPR measures (in %) from the cross-validation and testing processes, together with the selected CSM and the optimum AR order, for each subject and each mental task.

## Table 4.

The difference in performance between MDelete and MForm for the 5-channel systems.
†dV = VMDelete − VMForm, V ∈ {AR, TPR, FPR}.
‡p < 0.05 means "there is a significant difference." These cases are shown in bold.

## Table 5.

| Subject | Mental task | Multiplication | Object rotation | Motor imagery |
|---|---|---|---|---|
| 1 | Visualization | **0.0146** | 0.1759 | 0.1427 |
| 1 | Multiplication | | 0.2472 | **0.0053** |
| 1 | Object rotation | | | **0.0251** |
| 2 | Visualization | 0.7203 | 0.1695 | **0.0330** |
| 2 | Multiplication | | 0.2499 | **0.0408** |
| 2 | Object rotation | | | 0.2949 |
| 3 | Visualization | 0.3236 | **0.0049** | **0.0040** |
| 3 | Multiplication | | **0.0034** | **0.0001** |
| 3 | Object rotation | | | 0.1062 |
| 4 | Visualization | **0.0000** | **0.0000** | **0.0000** |
| 4 | Multiplication | | **0.0000** | 0.8060 |
| 4 | Object rotation | | | **0.0000** |

p-values calculated using Welch's t-test between the four mental tasks for each subject (5-channel systems). p < 0.05 means "there is a significant difference." These cases are shown in bold.

The p-value between MDelete and MForm for each mental task of each subject is given in Tables 4 and 7 under the p-value column. Table 8 shows the results of the t-test between any two of the four mental tasks for each subject in the 7-channel BCIs.
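For illustration, Welch's statistic and its approximate degrees of freedom can be computed directly from two groups of per-fold TPR values (the numbers below are made up, not taken from the tables):

```python
import math

def welch_t(x, y):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)  # unbiased sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    se2 = vx / nx + vy / ny        # squared standard error of the mean difference
    t = (mx - my) / math.sqrt(se2)
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

# hypothetical per-fold TPR values (%) for the two channel selection methods
t, df = welch_t([61.5, 60.8, 62.1, 61.0, 61.9], [58.9, 59.4, 58.1, 59.8, 58.6])
```

The p-value is then obtained from the t-distribution with `df` degrees of freedom (e.g., via `scipy.stats`); if it falls below 0.05, the null hypothesis is rejected.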

## 5. Summary and discussion of results

Table 4 shows that the performance of the 5-channel BCI systems obtained using MDelete is close to that obtained using MForm for the majority of the cases. The same is true for the 7-channel systems (Table 7). Tables 3 and 6 show that MDelete yields better results in nine out of the 16 cases of the 5-channel BCIs and in seven out of the 16 cases of the 7-channel BCIs.

## 5.1 System performance of 5-channel BCIs

From Tables 3 and 5, we find the following:

1. The FPR values are zero for all 5-channel BCIs of Subjects 1, 2, and 3 during the cross-validation and testing processes, irrespective of the task type. For Subject 4, this holds only for the sentence visualization task-based BCI: the BCI based on the multiplication task has a 0.01% FPR in the testing process, the BCI based on the motor imagery task has a 0.01% FPR in the cross-validation process, and the BCI based on the object rotation task has FPR values of 0.20 and 0.18% for cross-validation and testing, respectively.

2. Among all 5-channel BCIs, the highest performance (TPR = 63.72%) is reached by the multiplication task of Subject 3. The object rotation task-based BCI of Subject 4 has the lowest performance with TPR and FPR values of 33.50 and 0.18%, respectively.

3. For Subject 1: multiplication is the best in performance (TPR = 61.24%) although it is not significantly different from object rotation. Motor imagery is the poorest in performance (TPR = 55.30%) although it is not significantly different from sentence visualization.

4. For Subject 2: sentence visualization, multiplication, and object rotation have statistically similar performance with TPRs in the range of 56.00–58.56%. Motor imagery has the poorest performance (TPR = 54.38%) although it is not significantly different from object rotation.

5. For Subject 3: multiplication and sentence visualization have similar and the highest performance (TPRs = 63.72 and 62.68%). Object rotation and motor imagery have similar and the lowest performance (TPRs = 55.37 and 58.51%).

6. For Subject 4: sentence visualization has the best performance (TPR = 53.89%), and object rotation has the poorest performance (TPR = 33.50% and FPR = 0.18%).

## Table 6.

| Subject | Mental task | CSM | AR | TPR (validation) | TPR (testing) | FPR (validation) | FPR (testing) |
|---|---|---|---|---|---|---|---|
| 1 | Sentence visualization | 2 | 63 | 63.23 | 63.70 | 0.00 | 0.00 |
| 1 | Non-trivial multiplication | 2 | 73 | 65.91 | 64.51 | 0.00 | 0.00 |
| 1 | 3D object rotation | 1 | 64 | 65.45 | 66.11 | 0.00 | 0.00 |
| 1 | Motor imagery | 1 | 69 | 63.35 | 62.51 | 0.00 | 0.00 |
| 2 | Sentence visualization | 2 | 62 | 64.59 | 64.77 | 0.00 | 0.00 |
| 2 | Non-trivial multiplication | 2 | 61 | 63.07 | 62.73 | 0.00 | 0.00 |
| 2 | 3D object rotation | 2 | 60 | 61.55 | 61.97 | 0.00 | 0.00 |
| 2 | Motor imagery | 1 | 64 | 59.39 | 60.68 | 0.00 | 0.00 |
| 3 | Sentence visualization | 1 | 64 | 63.32 | 64.90 | 0.00 | 0.00 |
| 3 | Non-trivial multiplication | 2 | 56 | 69.94 | **70.51** | 0.00 | 0.00 |
| 3 | 3D object rotation | 1 | 65 | 66.33 | 64.73 | 0.00 | 0.00 |
| 3 | Motor imagery | 1 | 65 | 64.57 | 65.32 | 0.00 | 0.00 |
| 4 | Sentence visualization | 2 | 79 | 61.56 | 59.71 | 0.00 | 0.00 |
| 4 | Non-trivial multiplication | 2 | 95 | 51.90 | 50.21 | 0.00 | 0.00 |
| 4 | 3D object rotation | 2 | 129 | 27.58 | <u>27.36</u> | 0.01 | 0.01 |
| 4 | Motor imagery | 1 | 97 | 49.30 | 50.00 | 0.00 | 0.00 |

Cross-validation and testing results of the better channel selection method for 7-channel systems. The table shows the mean values of the TPR and FPR measures (in %) from the cross-validation and testing processes, together with the selected CSM and the optimum AR order, for each subject and each mental task.

## Table 8.

| Subject | Mental task | Multiplication | Object rotation | Motor imagery |
|---|---|---|---|---|
| 1 | Visualization | 0.4889 | 0.0519 | 0.3890 |
| 1 | Multiplication | | 0.2553 | 0.2243 |
| 1 | Object rotation | | | **0.0405** |
| 2 | Visualization | 0.0809 | 0.1582 | **0.0035** |
| 2 | Multiplication | | 0.6683 | **0.0328** |
| 2 | Object rotation | | | 0.4601 |
| 3 | Visualization | **0.0007** | 0.9199 | 0.6681 |
| 3 | Multiplication | | **0.0135** | **0.0013** |
| 3 | Object rotation | | | 0.7321 |
| 4 | Visualization | **0.0001** | **0.0000** | **0.0002** |
| 4 | Multiplication | | **0.0000** | 0.8619 |
| 4 | Object rotation | | | **0.0000** |

p-values calculated using Welch's t-test between the four mental tasks for each subject (7-channel systems). p < 0.05 means "there is a significant difference." These cases are shown in bold.


## 5.2 System performance of 7-channel BCIs

From Tables 6 and 8, the following are found:

1. The FPR values reach zero for 15 out of the 16 cases. That is for all cases except for the object rotation task of Subject 4, which has 0.01% FPR for each of the cross-validation and testing processes.

## Table 7.

The difference in performance between MDelete and MForm for the 7-channel systems.
†dV = VMDelete − VMForm, V ∈ {AR, TPR, FPR}.
‡p < 0.05 means "there is a significant difference." These cases are shown in bold.


2. The results as to which BCIs yield the best and worst performance are exactly as those in the 5-channel systems. Among all the 7-channel BCIs, the best performance with a TPR of 70.51% is reached by the BCI based on the multiplication task of Subject 3 (which is better than the TPR = 63.72% of the corresponding 5-channel BCI). The object rotation task-based BCI of Subject 4 has the worst performance with TPR and FPR values of 27.36% and 0.01%, respectively (which is also better than the FPR = 0.18% of the corresponding 5-channel BCI).

3. For Subject 1: motor imagery is significantly different from object rotation. All other cases are statistically similar to each other. The TPR varies in the range of 62.51–66.11% for the different mental tasks.

4. For Subject 2: sentence visualization, multiplication, and object rotation have statistically similar performance with TPRs in the range of 61.97–64.77%. Motor imagery has the least performance (TPR = 60.68%) although it is not significantly different from that of object rotation.

5. For Subject 3: multiplication has the highest performance (TPR = 70.51%). Sentence visualization, object rotation, and motor imagery have similar performance with TPRs in the range of 64.73–65.32%.

6. For Subject 4: sentence visualization has the best performance (TPR = 59.71%), and object rotation has the poorest performance (TPR = 27.36% and FPR = 0.01%).

## 5.3 Discussion of results

Comparing Tables 3 and 6, it can be noticed that increasing the number of channels from five to seven enhances the performance of every BCI by an increase of 5.38% in TPR on average (see Table 9). Therefore, there is a trade-off between using fewer channels and having better system performance. The choice should be made depending on the application, the situations in which the BCI system is used, and the computational power available.

In terms of the AR model orders, the 5-channel BCIs need higher orders than the 7-channel BCIs. According to Table 9, the overall mean of the AR model order is 98 and 73 for the 5-channel and 7-channel BCI systems, respectively.

Studies [105–109] have also used high AR model orders in their analyses. The AR model order depends on the sampling frequency [106]. Since our sampling frequency is 500 Hz (which is high), the AR order can also be high. In other words, if the AR order for a sampling frequency of 100 Hz is 25, then the AR order for a sampling frequency of 500 Hz is 125. This is because:

1. The AR order is the number of previous samples of the signal that represent the current sample of the signal. Please refer to Eq. (1) in Appendix A.

2. An AR order of 25 at a sampling frequency of 100 Hz means that the last 0.25 s of the signal is used to represent the current sample. At 500 Hz, we still need the last 0.25 s of the signal, and thus the AR order would be 0.25 × 500 = 125.

## Table 9.

| | AR (5-ch) | TPR (5-ch) | FPR (5-ch) | AR (7-ch) | TPR (7-ch) | FPR (7-ch) | AR (29-ch†) | TPR (29-ch†) | FPR (29-ch†) |
|---|---|---|---|---|---|---|---|---|---|
| Subject 1 | 92* | 58.51 | 0.00 | 67* | 64.21 | 0.00 | 24 | 68.54 | 0.00 |
| Subject 2 | 89* | 56.72 | 0.00 | 62* | 62.54 | 0.00 | 19* | 70.38 | 0.00 |
| Subject 3 | 87 | 60.07 | 0.00 | 63* | 66.37 | 0.00 | 20* | 70.80 | 0.00 |
| Subject 4 | 125 | 43.09 | 0.05 | 100 | 46.82 | 0.00 | 29 | 59.31 | 0.00 |
| Visualization | 90 | 58.23 | 0.00 | 67 | 63.27 | 0.00 | 21* | 68.95 | 0.00 |
| Multiplication | 100* | 56.38 | 0.00 | 71* | 61.99 | 0.00 | 24 | 67.38 | 0.00 |
| Object rotation | 101 | 51.15 | 0.05 | 80* | 55.04 | 0.00 | 24* | 65.71 | 0.00 |
| Motor imagery | 102* | 52.64 | 0.00 | 74* | 59.63 | 0.00 | 23* | 67.00 | 0.00 |
| Total average | 98* | 54.60 | 0.01 | 73* | 59.98 | 0.00 | 23 | 67.26 | 0.00 |

Average performance of the BCI systems. Subject rows are averages over the mental tasks; task rows are averages over the subjects. †Results of study [110]. *The value is rounded to the nearest integer.

From Table 9, it can be easily recognized that the system performance depends on the subject and the mental task. Among all subjects, Subject 3 has the BCIs with the highest average performance with average TPRs of 60.07 and 66.37% for the 5-channel and 7-channel systems, respectively. Subject 4 has the least average performance with an average TPR of 43.09% for the 5-channel systems and 46.82% for the 7-channel systems.

Among the mental tasks, sentence visualization and multiplication are the best tasks overall. Object rotation and motor imagery yield the least performance on average.

Table 9 also shows the average performance of the 29-channel systems over the subjects and mental tasks for comparison purposes [110]. It can be seen that by decreasing the number of EEG channels from 29 to 7 and 5, the system performance degrades by a decrease of 7.28 and 12.66%, respectively, in TPR on average. This degradation in the system performance is the trade-off one has to make to have a simpler system that can be easily set up and requires less computational power.
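The quoted degradation figures follow directly from the total-average TPRs in Table 9:

```python
# Total-average TPR values (%) from Table 9
tpr = {29: 67.26, 7: 59.98, 5: 54.60}

drop_to_7 = round(tpr[29] - tpr[7], 2)  # decrease when going from 29 to 7 channels
drop_to_5 = round(tpr[29] - tpr[5], 2)  # decrease when going from 29 to 5 channels
```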

## 6. Conclusion

This study shows that it is feasible to design self-paced mental task-based BCIs with a zero false activation rate using very few (i.e., five or seven) EEG channels. The system performance was evaluated on a dataset we collected from four subjects. Although the evaluation was carried out off-line, the methodology can be used in real-time self-paced systems after slight modifications. This is because the feature extraction and the classification processes are not computationally demanding, and there is no need for the timing information of the EEG signals after training the classifier with the training data.

The best performance of the BCI systems described above has zero FPRs and sufficiently high TPRs (i.e., 53.89–63.72% for the 5-channel systems and 59.71–70.51% for the 7-channel systems). Hence, in terms of the system performance, they are acceptable for use in real-life applications. As reported in study [110], the best performance of the BCI systems with 29 channels has zero FPRs and 65.06–72.65% TPRs.

The frequency-domain analysis of the recorded EEG signals is left for future work. By finding the frequency bands that carry the most information, we may be able to decrease the sampling frequency from its current value of 500 Hz.

In this study, the EEG trials are divided into 1-s segments for classification, and the BCI system gives an output every 0.2 s. Finding the optimum values for the segment length and the system output rate is another direction for future work; using such values might result in a more accurate system.
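The current segmentation scheme can be sketched as a sliding window over the recorded samples, assuming the 500 Hz sampling rate used in this study:

```python
def sliding_windows(n_samples, fs=500, win_s=1.0, step_s=0.2):
    """Yield (start, end) sample indices of win_s-second windows advancing every step_s seconds."""
    win, step = int(win_s * fs), int(step_s * fs)
    start = 0
    while start + win <= n_samples:
        yield start, start + win
        start += step

# a 2-s trial at 500 Hz yields six overlapping 1-s segments, one every 0.2 s
segments = list(sliding_windows(1000))
```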

Running online experiments is the most important step that needs to be taken in the future in order to evaluate the performance of our BCI system in real time.

## A. Autoregressive modeling

The autoregressive (AR) model of the signal s[t] is defined as

$$s[t] = \sum\_{j=1}^{F} b\_j s[t - j] + n[t] \tag{1}$$


where F is the order of the model, bj is the jth coefficient, and n[t] is the noise (or error) signal. The noise signal is assumed to be a random process with a zero mean and a finite variance.

In this study, the AR model is estimated using the Burg algorithm [111] which is a popular and widely used algorithm in the field.

There are some methods based on the reflection coefficient [112], the final prediction error criterion [113], the autoregressive transfer function criterion [113], and the Akaike information criterion [113] to find the order of the AR model; however, none of them are efficient in BCI applications [100]. In our previous study [100], it is suggested that instead of using the existing methods, it is better to select the model order by cross-validation based on the system performance. The same approach is taken in this study.
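The chapter estimates the AR coefficients with the Burg algorithm [111]; purely for illustration, the model in Eq. (1) can also be fitted by ordinary least squares on synthetic data (a simple stand-in, not the Burg method itself):

```python
import numpy as np

def fit_ar_least_squares(s, F):
    """Least-squares fit of s[t] = sum_{j=1}^F b_j * s[t-j] + n[t]."""
    s = np.asarray(s, dtype=float)
    # the row for target s[t] holds the lagged samples s[t-1], ..., s[t-F]
    X = np.column_stack([s[F - j:len(s) - j] for j in range(1, F + 1)])
    y = s[F:]
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

# recover the coefficients of a synthetic AR(2) process
rng = np.random.default_rng(0)
s = np.zeros(2000)
for t in range(2, len(s)):
    s[t] = 0.6 * s[t - 1] - 0.3 * s[t - 2] + rng.normal(scale=0.1)
b_hat = fit_ar_least_squares(s, 2)  # close to [0.6, -0.3]
```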

## B. Quadratic discriminant analysis

Quadratic discriminant analysis (QDA) [114] is the quadratic version of linear discriminant analysis (LDA). Like LDA, normal distributions are assumed for the classes. The only difference between QDA and LDA relates to the covariance matrices of the classes. In QDA, unlike LDA, the covariance matrices of the classes are not assumed to be the same. Therefore, QDA is more general than LDA.
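The difference can be seen in a minimal two-class sketch on hypothetical data (equal priors and costs): each class gets its own covariance estimate, and a point is assigned to the class with the larger Gaussian log-density.

```python
import numpy as np

def fit_gaussian(samples):
    # per-class mean and covariance; LDA would instead pool the covariances
    return samples.mean(axis=0), np.cov(samples, rowvar=False)

def log_density(x, mu, sigma):
    # log of the multivariate normal density
    k = len(mu)
    diff = x - mu
    return -0.5 * (k * np.log(2 * np.pi) + np.log(np.linalg.det(sigma))
                   + diff @ np.linalg.inv(sigma) @ diff)

rng = np.random.default_rng(1)
class1 = rng.normal([0.0, 0.0], [1.0, 1.0], size=(200, 2))  # isotropic class
class2 = rng.normal([3.0, 3.0], [0.5, 2.0], size=(200, 2))  # anisotropic class
params = [fit_gaussian(c) for c in (class1, class2)]

x = np.array([2.8, 3.1])
label = max((0, 1), key=lambda i: log_density(x, *params[i]))  # assigns x to class index 1
```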

Suppose we have k-dimensional vectors x to be classified into one of the M classes with the normal distributions

$$
\Omega\_m \sim N\_k(\mu\_m, \Sigma\_m) \tag{2}
$$

where m ∈ {1, 2, …, M}, μm is the k-dimensional mean vector and Σm is the k × k covariance matrix of Class Ωm.

The probability density function of Class Ωm can be expressed as


$$f_{m,X}(x) = \frac{1}{(2\pi)^{k/2} |\Sigma_m|^{1/2}} \exp\left(-\frac{1}{2} (x - \mu_m)^T \Sigma_m^{-1} (x - \mu_m)\right) \tag{3}$$

Based on the Bayes discriminant rule, the input vector x is classified into Class Ωi if

$$C_i \pi_i f_{i,X}(x) = \max_{j} \left( C_j \pi_j f_{j,X}(x) \right), \quad j \in \{1, 2, \ldots, M\} \tag{4}$$

where πi is the a priori probability of Class Ωi and Ci is the total cost of misclassifying a member of Class Ωi to the other classes. Note that

$$\sum\_{j=1}^{M} \pi\_j = \mathbf{1}.\tag{5}$$

The decision rule can be simplified as follows if only two classes exist:

$$\mathfrak{x} \in \begin{cases} \Omega\_1 : \operatorname{C}\_1 \pi\_1 f\_{1,X}(\mathfrak{x}) \ge \operatorname{C}\_2 \pi\_2 f\_{2,X}(\mathfrak{x}) \\ \Omega\_2 : \operatorname{C}\_1 \pi\_1 f\_{1,X}(\mathfrak{x}) < \operatorname{C}\_2 \pi\_2 f\_{2,X}(\mathfrak{x}) \end{cases} \tag{6}$$

This is equivalent to

The best performance of the BCI systems described above has zero FPRs and sufficiently high TPRs (i.e., 53.89–63.72% for the 5-channel systems and 59.71– 70.51% for the 7-channel systems). Hence, in terms of the system performance, they are acceptable for use in real-life applications. As reported in the study [110], the best performance of the BCI systems with 29 channels has zero FPRs and

The frequency domain analysis of the recorded EEG signals is left for future work. By finding the frequency bands with the high amounts of information, we may be able to decrease the sampling frequency from its current value of 500 Hz. In this study, the EEG trials are divided into 1-s segments for classification, and the BCI system gives an output every 0.2 s. Finding the optimum values for the segment length and the system output rate is the other directions for future work.

Running online experiments is the most important step that needs to be taken in

where F is the order of the model, bj is the jth coefficient, and n[t] is the noise (or error) signal. The noise signal is assumed to be a random process with a zero

In this study, the AR model is estimated using the Burg algorithm [111] which is

There are some methods based on the reflection coefficient [112], the final prediction error criterion [113], the autoregressive transfer function criterion [113], and the Akaike information criterion [113] to find the order of the AR model; however, none of them are efficient in BCI applications [100]. In our previous study [100], it is suggested that instead of using the existing methods, it is better to select the model order by cross-validation based on the system performance. The

Quadratic discriminant analysis (QDA) [114] is the quadratic version of linear discriminant analysis (LDA). Like LDA, normal distributions are assumed for the classes. The only difference between QDA and LDA relates to the covariance matrices of the classes. In QDA, unlike LDA, the covariance matrices of the classes are not assumed to be the same. Therefore, QDA is more general than LDA.

Suppose we have k-dimensional vectors of x to be classified into one of the M

where m ∈ f g 1; 2; …; M , μ<sup>m</sup> is the k-dimensional mean vector and Σ<sup>m</sup> is the k � k

The probability density function of Class Ω<sup>m</sup> can be expressed as

Ω<sup>m</sup> � Nkð Þ μm; Σ<sup>m</sup> (2)

bjs t½ �þ � j n t½ � (1)

the future in order to evaluate the performance of our BCI system in real time.

The use of these values might result in a more accurate system.

The autoregressive (AR) model of the signal s[t] is defined as

s t½�¼ ∑ F j¼1

65.06–72.65% TPRs.

New Frontiers in Brain-Computer Interfaces

A. Autoregressive modeling

mean and a finite variance.

a popular and widely used algorithm in the field.

same approach is taken in this study.

B. Quadratic discriminant analysis

classes with the normal distributions

covariance matrix of Class Ωm.

68

$$\mathbf{x} \in \begin{cases} \Omega\_1 : \ln f\_{1,X}(\mathbf{x}) - \ln f\_{2,X}(\mathbf{x}) \ge \ln \left( \frac{\pi\_2}{\pi\_1} \cdot \frac{C\_2}{C\_1} \right) \\\\ \Omega\_2 : \ln f\_{1,X}(\mathbf{x}) - \ln f\_{2,X}(\mathbf{x}) < \ln \left( \frac{\pi\_2}{\pi\_1} \cdot \frac{C\_2}{C\_1} \right) \end{cases} \tag{7}$$

Using (3) in (7), the discriminant rule becomes

$$\mathbf{x} \in \begin{cases} \Omega\_1 : F\_{qd}(\mathbf{x}) \ge 0 \\ \Omega\_2 : F\_{qd}(\mathbf{x}) < 0 \end{cases} \tag{8}$$

where the discriminant function, $F\_{qd}(\mathbf{x})$, is defined as

$$
\begin{split}
F\_{qd}(\mathbf{x}) = &-\frac{1}{2}\mathbf{x}^T \left(\Sigma\_1^{-1} - \Sigma\_2^{-1}\right)\mathbf{x} + \left(\mu\_1^T \Sigma\_1^{-1} - \mu\_2^T \Sigma\_2^{-1}\right)\mathbf{x} \\
&-\frac{1}{2}\ln\left(\frac{|\Sigma\_1|}{|\Sigma\_2|}\right) - \frac{1}{2}\left(\mu\_1^T \Sigma\_1^{-1}\mu\_1 - \mu\_2^T \Sigma\_2^{-1}\mu\_2\right) \\
&-\ln\left(\frac{\pi\_2}{\pi\_1}\cdot\frac{C\_2}{C\_1}\right)
\end{split} \tag{9}
$$

The mean vectors (i.e., μ<sub>1</sub> and μ<sub>2</sub>) and the covariance matrices (i.e., Σ<sub>1</sub> and Σ<sub>2</sub>) are estimated from the data samples. In this study, the a priori probabilities π<sub>1</sub> and π<sub>2</sub> are assumed to be equal, as are the cost parameters C<sub>1</sub> and C<sub>2</sub>.
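As a concrete illustration of these two appendices, the sketch below estimates AR coefficients with Burg's recursion (Eq. (1)) and classifies feature vectors with the two-class QDA discriminant of Eq. (9). This is a minimal numerical sketch on synthetic data, not the chapter's actual pipeline: the signal length, model order, and class distributions are illustrative assumptions, and the equal-priors/equal-costs case (so the last term of Eq. (9) vanishes) follows the assumption stated above.

```python
import numpy as np

def burg_ar(x, order):
    """Burg's method: AR coefficients b_j of s[t] = sum_j b_j s[t-j] + n[t] (Eq. 1)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    a = np.array([1.0])                 # prediction-error polynomial, a[0] = 1
    ef = x.copy()                       # forward prediction errors
    eb = x.copy()                       # backward prediction errors
    for m in range(1, order + 1):
        f = ef[m:].copy()               # valid forward errors at this stage
        b = eb[m - 1 : n - 1].copy()    # valid backward errors, lagged by one
        k = -2.0 * (f @ b) / ((f @ f) + (b @ b))   # reflection coefficient
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]             # Levinson-type update of the polynomial
        ef[m:] = f + k * b              # update error sequences
        eb[m:] = b + k * f
    return -a[1:]                       # b_j = -a_j in the convention of Eq. (1)

def qda_fit(X1, X2):
    """Estimate the class means, inverse covariances, and log-determinants (Eq. 9)."""
    stats = []
    for X in (X1, X2):
        mu = X.mean(axis=0)
        S = np.cov(X, rowvar=False)
        stats.append((mu, np.linalg.inv(S), np.linalg.slogdet(S)[1]))
    return stats

def qda_discriminant(x, stats):
    """F_qd(x) of Eq. (9) with pi_1 = pi_2 and C_1 = C_2 (last term is zero)."""
    (mu1, P1, ld1), (mu2, P2, ld2) = stats
    return (-0.5 * x @ (P1 - P2) @ x
            + (mu1 @ P1 - mu2 @ P2) @ x
            - 0.5 * (ld1 - ld2)
            - 0.5 * (mu1 @ P1 @ mu1 - mu2 @ P2 @ mu2))
```

By Eq. (8), a feature vector x is assigned to Class Ω<sub>1</sub> when `qda_discriminant(x, stats) >= 0` and to Class Ω<sub>2</sub> otherwise; in a cross-validated setup, the AR order passed to `burg_ar` would be the one maximizing this classifier's performance, as described in Appendix A.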

## Acknowledgements

The authors would like to thank the subjects who devoted their time to attending the EEG recording sessions.



## Author details

Farhad Faradji<sup>1</sup>\*, Rabab K. Ward<sup>1</sup> and Gary E. Birch<sup>1,2</sup>

1 Electrical and Computer Engineering Department, University of British Columbia, Vancouver, BC, Canada

2 Neil Squire Society, Burnaby, BC, Canada

\*Address all correspondence to: farhadf@ece.ubc.ca

© 2019 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A Self-Paced Two-State Mental Task-Based Brain-Computer Interface with Few EEG Channels DOI: http://dx.doi.org/10.5772/intechopen.83425

## References

[1] Mak JN, Wolpaw JR. Clinical applications of brain–computer interfaces: Current state and future prospects. IEEE Reviews in Biomedical Engineering. 2009;2:187-199

[2] Fatourechi M, Ward RK, Birch GE. A self-paced brain–computer interface system with a low false positive rate. Journal of Neural Engineering. 2008; 5(1):9-23

[3] Bashashati A, Fatourechi M, Ward RK, Birch GE. A survey of signal processing algorithms in brain–computer interfaces based on electrical brain signals. Journal of Neural Engineering. 2007;4(2):R32-R57

[4] Mason SG, Bashashati A, Fatourechi M, Navarro KF, Birch GE. A comprehensive survey of brain interface technology designs. Annals of Biomedical Engineering. 2007;35(2): 137-169

[5] Vaughan TM. Guest editorial brain–computer interface technology: A review of the second international meeting. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2003;11(2):94-109

[6] Millán J d R, Mouriño J, Cincotti F, Varsta M, Heikkonen J, Topani F, et al. EEG patterns associated to spontaneous execution of mental tasks. NeuroImage. 2000;11(5, Supplement 1): S78-S78

[7] Babiloni F, Cincotti F, Lazzarini L, Millán J, Mouriño J, Varsta M, et al. Linear classification of low-resolution EEG patterns produced by imagined hand movements. IEEE Transactions on Rehabilitation Engineering. 2000;8(2): 186-188

[8] Costa EJX, Cabral EF Jr. EEG-based discrimination between imagination of left and right hand movements using adaptive Gaussian representation. Medical Engineering & Physics. 2000;22(5):345-348

[9] Ramoser H, Müller-Gerking J, Pfurtscheller G. Optimal spatial filtering of single trial EEG during imagined hand movement. IEEE Transactions on Rehabilitation Engineering. 2000;8(4): 441-446

[10] Guger C, Schlögl A, Neuper C, Walterspacher D, Strein T, Pfurtscheller G. Rapid prototyping of an EEG-based brain–computer interface (BCI). IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2001;9(1): 49-58

[11] Obermaier B, Guger C, Neuper C, Pfurtscheller G. Hidden Markov models for online classification of single trial EEG data. Pattern Recognition Letters. 2001;22(12):1299-1309

[12] Babiloni F, Cincotti F, Bianchi L, Pirri G, Millán J d R, Mouriño J, et al. Recognition of imagined hand movements with low resolution surface Laplacian and linear classifiers. Medical Engineering & Physics. 2001;23(5): 323-328

[13] Pfurtscheller G, Neuper C. Motor imagery and direct brain–computer communication. Proceedings of the IEEE. 2001;89(7):1123-1134

[14] Millán J d R, Franzé M, Mouriño J, Cincotti F, Babiloni F. Relevant EEG features for the classification of spontaneous motor-related tasks. Biological Cybernetics. 2002;86(2): 89-95

[15] Parra L, Alvino C, Tang A, Yeung BPN, Osman A, Sajda P. Linear spatial integration for single-trial detection in encephalography. NeuroImage. 2002; 17(1):223-230


[16] Pfurtscheller G, Neuper C, Müller GR, Obermaier B, Krausz G, Schlögl A, et al. Graz-BCI: State of the art and clinical applications. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2003;11(2):177-180

[17] Neuper C, Müller GR, Kübler A, Birbaumer N, Pfurtscheller G. Clinical application of an EEG-based brain– computer interface: A case study in a patient with severe motor impairment. Clinical Neurophysiology. 2003;114(3): 399-409

[18] Cincotti F, Mattia D, Babiloni C, Carducci F, Salinari S, Bianchi L, et al. The use of EEG modifications due to motor imagery for brain–computer interfaces. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2003;11(2):131-133

[19] Townsend G, Graimann B, Pfurtscheller G. Continuous EEG classification during motor imagery: Simulation of an asynchronous BCI. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2004;12(2):258-265

[20] Spiegler A, Graimann B, Pfurtscheller G. Phase coupling between different motor areas during tonguemovement imagery. Neuroscience Letters. 2004;369(1):50-54

[21] Gysels E, Celka P. Phase synchronization for the recognition of mental tasks in a brain–computer interface. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2004;12(4):406-415

[22] Wang T, Deng J, He B. Classifying EEG-based motor imagery tasks by means of time-frequency synthesized spatial patterns. Clinical Neurophysiology. 2004;115(12): 2744-2753

[23] Lal TN, Schröder M, Hinterberger T, Weston J, Bogdan M, Birbaumer N, et al. Support vector channel selection in BCI. IEEE Transactions on Biomedical Engineering. 2004;51(6):1003-1010


[32] Tsui CSL, Gan JQ. Asynchronous BCI control of a robot simulator with supervised online training. In: Yin H, Tino P, Corchado E, Byrne W, Yao X, editors. Intelligent Data Engineering and Automated Learning—IDEAL 2007. Lecture Notes in Computer Science. Vol. 4881. Berlin/Heidelberg: Springer; 2007. pp. 125-134


[24] Müller-Putz GR, Scherer R, Pfurtscheller G, Rupp R. EEG-based neuroprosthesis control: A step towards clinical practice. Neuroscience Letters. 2005;382(1–2):169-174

[25] Gysels E, Renevey P, Celka P. SVMbased recursive feature elimination to compare phase synchronization computed from broadband and narrowband EEG signals in brain– computer interfaces. Signal Processing. 2005;85(11):2178-2189

[26] Pfurtscheller G, Brunner C, Schlögl A, da Silva FHL. Mu rhythm (de) synchronization and EEG single-trial classification of different motor imagery tasks. NeuroImage. 2006;31(1):153-159

[27] Pfurtscheller G, Leeb R, Keinrath C, Friedman D, Neuper C, Guger C, et al. Walking from thought. Brain Research. 2006;1071(1):145-152

[28] Hua Yang B, Zheng Yan G, Guo Yan R, Wu T. Adaptive subject-based feature extraction in brain–computer interfaces using wavelet packet best basis decomposition. Medical Engineering & Physics. 2007;29(1): 48-53

[29] Ince NF, Tewfik AH, Arica S. Extraction subject-specific motor imagery time-frequency patterns for single trial EEG classification. Computers in Biology and Medicine. 2007;37(4):499-508

[30] Krepki R, Curio G, Blankertz B, Müller K-R. Berlin brain–computer interface—The HCI communication channel for discovery. International Journal of Human-Computer Studies. 2007;65(5):460-477

[31] Hua Yang B, Zheng Yan G, Wu T, Guo Yan R. Subject-based feature extraction using fuzzy wavelet packet in brain–computer interfaces. Signal Processing. 2007;87(7):1569-1574





[33] Leeb R, Lee F, Keinrath C, Scherer R, Bischof H, Pfurtscheller G. Brain–computer communication: Motivation, aim, and impact of exploring a virtual apartment. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2007;15(4):473-482

[34] Blankertz B, Dornhege G, Krauledat M, Müller K-R, Curio G. The non-invasive Berlin brain–computer interface: Fast acquisition of effective performance in untrained subjects. NeuroImage. 2007;37(2):539-550

[35] Geng T, Gan JQ, Dyson M, Tsui CS, Sepulveda F. A novel design of 4-class BCI using two binary classifiers and parallel mental tasks. Computational Intelligence and Neuroscience. 2008; 2008:5. Article ID: 437306

[36] Cincotti F, Mattia D, Aloise F, Bufalari S, Astolfi L, Fallani FDV, et al. High-resolution EEG techniques for brain–computer interface applications. Journal of Neuroscience Methods. 2008; 167(1):31-42

[37] Müller K-R, Tangermann M, Dornhege G, Krauledat M, Curio G, Blankertz B. Machine learning for realtime single-trial EEG-analysis: From brain–computer interfacing to mental state monitoring. Journal of Neuroscience Methods. 2008;167(1): 82-90

[38] Vuckovic A, Sepulveda F. Quantification and visualisation of differences between two motor tasks based on energy density maps for brain– computer interface applications. Clinical Neurophysiology. 2008;119(2):446-458

[39] Boye AT, Kristiansen UQ, Billinger M, do Nascimento OF, Farina D. Identification of movement-related cortical potentials with optimized spatial filtering and principal component analysis. Biomedical Signal Processing and Control. 2008;3(4): 300-304

[40] Zhou S-M, Gan JQ, Sepulveda F. Classifying mental tasks based on features of higher-order statistics from EEG signals in brain–computer interface. Information Sciences. 2008; 178(6):1629-1640

[41] Townsend G, Feng Y. Using phase information to reveal the nature of event-related desynchronization. Biomedical Signal Processing and Control. 2008;3(3):192-202

[42] Scherer R, Lee F, Schlögl A, Leeb R, Bischof H, Pfurtscheller G. Toward self-paced brain–computer communication: Navigation through virtual worlds. IEEE Transactions on Biomedical Engineering. 2008;55(2):675-682

[43] Nasihatkon B, Boostani R, Jahromi MZ. An efficient hybrid linear and kernel CSP approach for EEG feature extraction. Neurocomputing. 2009;73 (1–3):432-437

[44] Fazli S, Popescu F, Danóczy M, Blankertz B, Müller K-R, Grozea C. Subject-independent mental state classification in single trials. Neural Networks. 2009;22(9):1305-1312

[45] Nazarpour K, Praamstra P, Miall RC, Sanei S. Steady-state movement related potentials for brain–computer interfacing. IEEE Transactions on Biomedical Engineering. 2009;56(8): 2104-2113

[46] Tsui CSL, Gan JQ, Roberts SJ. A self-paced brain–computer interface for controlling a robot simulator: An online event labelling paradigm and an extended Kalman filter based algorithm for online training. Medical and Biological Engineering and Computing. 2009;47:257-265

[47] Ron-Angevin R, Díaz-Estrella A. Brain–computer interface: Changes in performance using virtual reality techniques. Neuroscience Letters. 2009; 449(2):123-127

[48] Xu Q, Zhou H, Wang Y, Huang J. Fuzzy support vector machine for classification of EEG signals using wavelet-based features. Medical Engineering & Physics. 2009;31(7): 858-865

[49] Sannelli C, Braun M, Müller K-R. Improving BCI performance by taskrelated trial pruning. Neural Networks. 2009;22(9):1295-1304

[50] Hsu W-Y, Sun Y-N. EEG-based motor imagery analysis using weighted wavelet transform features. Journal of Neuroscience Methods. 2009;176(2): 310-318

[51] Vidaurre C, Krämer N, Blankertz B, Schlögl A. Time domain parameters as a feature for EEG-based brain–computer interfaces. Neural Networks. 2009; 22(9):1313-1319

[52] Ince NF, Goksu F, Tewfik AH, Arica S. Adapting subject specific motor imagery EEG patterns in space-timefrequency for a brain computer interface. Biomedical Signal Processing and Control. 2009;4(3):236-246

[53] Lei X, Yang P, Yao D. An empirical Bayesian framework for brain– computer interfaces. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2009;17(6):521-529

[54] Hazrati MK, Erfanian A. An online EEG-based brain–computer interface for controlling hand grasp using an adaptive probabilistic neural network. Medical Engineering & Physics. 2010;32(7):730-739

[55] Liu G, Huang G, Meng J, Zhu X. A frequency-weighted method combined with Common Spatial Patterns for electroencephalogram classification in brain–computer interface. Biomedical Signal Processing and Control. 2010;5(2):174-180

[56] Brunner C, Allison BZ, Krusienski DJ, Kaiser V, Müller-Putz GR, Pfurtscheller G, et al. Improved signal processing approaches in an offline simulation of a hybrid brain–computer interface. Journal of Neuroscience Methods. 2010;188(1):165-173

[57] Iáñez E, Azorín JM, Úbeda A, Ferrández JM, Fernández E. Mental tasks-based brain–robot interface. Robotics and Autonomous Systems. 2010;58(12):1238-1245

[58] Hsu W-Y. EEG-based motor imagery classification using neuro-fuzzy prediction and wavelet fractal features. Journal of Neuroscience Methods. 2010;189(2):295-302

[59] Qian K, Nikolov P, Huang D, Fei D-Y, Chen X, Bai O. A motor imagery-based online interactive brain-controlled switch: Paradigm development and preliminary test. Clinical Neurophysiology. 2010;121(8):1304-1313

[60] Li Y, Long J, Yu T, Yu Z, Wang C, Zhang H, et al. An EEG-based BCI system for 2-D cursor control by combining Mu/Beta rhythm and P300 potential. IEEE Transactions on Biomedical Engineering. 2010;57(10):2495-2505

[61] Pfurtscheller G, Solis-Escalante T, Ortner R, Linortner P, Müller-Putz G. Self-paced operation of an SSVEP-based orthosis with and without an imagery-based "brain switch:" A feasibility study towards a hybrid BCI. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2010;18(4):409-414

[62] Coyle D, McGinnity TM, Prasad G. Improving the separability of multiple EEG features for a BCI by neural-time-series-prediction-preprocessing. Biomedical Signal Processing and Control. 2010;5(3):196-204

[63] Xie X, Yu ZL, Gu Z, Zhang J, Cen L, Li Y. Bilinear regularized locality preserving learning on Riemannian graph for motor imagery BCI. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2018;26(3):698-708

[64] Mohamed EA, Yusoff MZ, Malik AS, Bahloul MR, Adam DM, Adam IK. Comparison of EEG signal decomposition methods in classification of motor-imagery BCI. Multimedia Tools and Applications. 2018;77(16):21305-21327

[65] Penny WD, Roberts SJ, Curran EA, Stokes MJ. EEG-based communication: A pattern recognition approach. IEEE Transactions on Rehabilitation Engineering. 2000;8(2):214-215

[66] Obermaier B, Neuper C, Guger C, Pfurtscheller G. Information transfer rate in a five-classes brain–computer interface. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2001;9(3):283-288

[67] Millán J d R, Mouriño J, Franzé M, Cincotti F, Varsta M, Heikkonen J, et al. A local neural classifier for the recognition of EEG patterns associated to mental tasks. IEEE Transactions on Neural Networks. 2002;13(3):678-686

[68] Curran E, Sykacek P, Stokes M, Roberts SJ, Penny W, Johnsrude I, et al. Cognitive tasks for driving a brain–computer interfacing system: A pilot study. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2003;12(1):48-54

[69] Millán J d R, Mouriño J. Asynchronous BCI and local neural classifiers: An overview of the adaptive brain interface project. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2003;11(2):159-161

[70] Millán J d R, Renkens F, Mouriño J, Gerstner W. Brain-actuated interaction. Artificial Intelligence. 2004;159(1–2):241-259

[71] Chiappa S, Barber D. EEG classification using generative independent component analysis. Neurocomputing. 2006;69(7–9):769-777

[72] Sun S, Zhang C, Zhang D. An experimental evaluation of ensemble methods for EEG signal classification. Pattern Recognition Letters. 2007;28(15):2157-2163

[73] Sepulveda F, Dyson M, Gan JQ, Tsui CSL. A comparison of mental task combinations for asynchronous EEG-based BCIs. In: Proceedings of the 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. 2007. pp. 5055-5058

[74] Kronegg J, Chanel G, Voloshynovskiy S, Pun T. EEG-based synchronized brain–computer interfaces: A model for optimizing the number of mental tasks. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2007;15(1):50-58

[75] Sun S, Zhang C, Lu Y. The random electrode selection ensemble for EEG signal classification. Pattern Recognition. 2008;41(5):1663-1675

[76] Galán F, Nuttin M, Lew E, Ferrez P, Vanacker G, Philips J, et al. A brain-actuated wheelchair: Asynchronous and non-invasive brain–computer interfaces for continuous control of robots.

[52] Ince NF, Goksu F, Tewfik AH, Arica S. Adapting subject specific motor imagery EEG patterns in space-timefrequency for a brain computer

interface. Biomedical Signal Processing and Control. 2009;4(3):236-246

[53] Lei X, Yang P, Yao D. An empirical

computer interfaces. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2009;17(6):521-529

[54] Hazrati MK, Erfanian A. An online EEG-based brain–computer interface

Bayesian framework for brain–

2009;22(9):1295-1304

event labelling paradigm and an

for online training. Medical and

2009;47:257-265

449(2):123-127

858-865

310-318

74

22(9):1313-1319

[62] Coyle D, McGinnity TM, Prasad G. Improving the separability of multiple EEG features for a BCI by neuraltime-series-prediction-preprocessing. Biomedical Signal Processing and Control. 2010;5(3):196-204

[63] Xie X, Yu ZL, Gu Z, Zhang J, Cen L, Li Y. Bilinear regularized locality preserving learning on Riemannian graph for motor imagery BCI. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2018;26(3): 698-708

[64] Mohamed EA, Yusoff MZ, Malik AS, Bahloul MR, Adam DM, Adam IK. Comparison of EEG signal decomposition methods in classification of motor-imagery BCI. Multimedia Tools and Applications. 2018;77(16): 21305-21327

[65] Penny WD, Roberts SJ, Curran EA, Stokes MJ. EEG-based communication: A pattern recognition approach. IEEE Transactions on Rehabilitation Engineering. 2000;8(2):214-215

[66] Obermaier B, Neuper C, Guger C, Pfurtscheller G. Information transfer rate in a five-classes brain–computer interface. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2001;9(3):283-288

[67] Millán J d R, Mouriño J, Franzé M, Cincotti F, Varsta M, Heikkonen J, et al. A local neural classifier for the recognition of EEG patterns associated to mental tasks. IEEE Transactions on Neural Networks. 2002;13(3):678-686

[68] Curran E, Sykacek P, Stokes M, Roberts SJ, Penny W, Johnsrude I, et al. Cognitive tasks for driving a braincomputer interfacing system: A pilot study. IEEE Transactions on Neural

Systems and Rehabilitation Engineering. 2003;12(1):48-54

[69] Millán J d R, Mouriño J. Asynchronous BCI and local neural classifiers: An overview of the adaptive brain interface project. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2003;11(2): 159-161

[70] Millán J d R, Renkens F, Mouriño J, Gerstner W. Brain-actuated interaction. Artificial Intelligence. 2004;159(1–2): 241-259

[71] Chiappa S, Barber D. EEG classification using generative independent component analysis. Neurocomputing. 2006;69(7–9):769-777

[72] Sun S, Zhang C, Zhang D. An experimental evaluation of ensemble methods for EEG signal classification. Pattern Recognition Letters. 2007; 28(15):2157-2163

[73] Sepulveda F, Dyson M, Gan JQ, Tsui CSL. A comparison of mental task combinations for asynchronous EEGbased BCIs. In: Proceedings of the 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. 2007. pp. 5055-5058

[74] Kronegg J, Chanel G, Voloshynovskiy S, Pun T. EEG-based synchronized brain–computer interfaces: A model for optimizing the number of mental tasks. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2007;15(1): 50-58

[75] Sun S, Zhang C, Lu Y. The random electrode selection ensemble for EEG signal classification. Pattern Recognition. 2008;41(5):1663-1675

[76] Galán F, Nuttin M, Lew E, Ferrez P, Vanacker G, Philips J, et al. A brainactuated wheelchair: Asynchronous and non-invasive brain–computer interfaces for continuous control of robots.

Clinical Neurophysiology. 2008;119(9): 2159-2169

[77] Cichocki A, Lee H, Kim Y-D, Choi S. Non-negative matrix factorization with α-divergence. Pattern Recognition Letters. 2008;29(9):1433-1440

[78] Dyson M, Sepulveda F, Gan JQ. Mental task classification against the idle state: A preliminary investigation. In: Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. 2008. pp. 4473-4477

[79] Lin C-J, Hsieh M-H. Classification of mental task from EEG data using neural networks based on particle swarm optimization. Neurocomputing. 2009;72 (4-6):1121-1130

[80] Dyson M, Sepulveda F, Gan JQ, Roberts SJ. Sequential classification of mental tasks vs. idle state for EEG based BCIs. In: Proceedings of the 4th International IEEE EMBS Conference on Neural Engineering. 2009. pp. 351-354

[81] Lee H, Cichocki A, Choi S. Kernel nonnegative matrix factorization for spectral EEG feature extraction. Neurocomputing. 2009;72(13-15): 3182-3190

[82] Dyson M, Sepulveda F, Gan JQ. Localisation of cognitive tasks used in EEG-based BCIs. Clinical Neurophysiology. 2010;121(9): 1481-1493

[83] Szajerman D, Smagur A, Opalka S, Wojciechowski A. Effective bci mental tasks classification with adaptively solved convolutional neural networks. In: 18th International Symposium on Electromagnetic Fields in Mechatronics, Electrical and Electronic Engineering (ISEF) Book of Abstracts. 2017. pp. 1-2

[84] Lotte F, Jeunet C. Defining and quantifying users' mental imagerybased BCI skills: A first step. Journal of Neural Engineering. 2018;15(4):046030

mental task classification utilizing reflection coefficients obtained from autocorrelation function of EEG signal. Brain Informatics. 2017;5(1):1-12

DOI: http://dx.doi.org/10.5772/intechopen.83425

[99] Oostenveld R, Praamstra P. The five percent electrode system for highresolution EEG and ERP measurements. Clinical Neurophysiology. 2001;112(4):

[100] Faradji F, Ward RK, Birch GE. Plausibility assessment of a 2-state selfpaced mental task-based BCI using the no-control performance analysis. Journal of Neuroscience Methods. 2009;

[101] Birch GE, Lawrence PD, Lind JC, Hare RD. Application of prewhitening to AR spectral estimation of EEG. IEEE

[102] Blanchard G, Blankertz B. BCI competition 2003-data set IIa: Spatial patterns of self-controlled brain rhythm modulations. IEEE Transactions on Biomedical Engineering. 2004;51(6):

[103] Bashashati A, Fatourechi M, Ward RK, Birch GE. User customization of the feature generator of an asynchronous brain interface. Annals of Biomedical Engineering. 2006;34(6):1051-1060

[104] Welch BL. The generalization of 'student's' problem when several different population variances are involved. Biometrika. 1947;34(1-2):

[105] Aydin S. Determination of

[106] Krusienski DJ, McFarland DJ, Wolpaw JR. An evaluation of autoregressive spectral estimation model order for brain–computer interface applications. In: 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS). 2006. pp. 1323-1326

autoregressive model orders for seizure detection. Turkish Journal of Electrical Engineering & Computer Sciences.

Transactions on Biomedical Engineering. 1988;35(8):640-645

713-719

A Self-Paced Two-State Mental Task-Based Brain-Computer Interface with Few EEG Channels

180(2):330-339

1062-1066

28-35

2010;18:23-30

[93] Sun H, Zhang Y, Gluckman B, Zhong X, Zhang X. Optimal-channel selection algorithms in mental tasks based brain-computer interface. In:

International Conference on Bioscience, Biochemistry and Bioinformatics, ICBBB 2018. Association for Computing

[94] Gupta A, Kumar D, Chakraborti A. Hurst exponent as a new ingredient to parametric feature set for mental task classification. In: Satapathy SC, Tavares JMRS, Bhateja V, Mohanty JR, editors. Information and Decision Sciences. Singapore: Springer Singapore; 2018.

Proceedings of the 2018 8th

Machinery. 2018. pp. 118-123

[95] Dutta S, Singh M, Kumar A. Automated classification of non-motor mental task in electroencephalogram based brain-computer interface using multivariate autoregressive model in the

intrinsic mode function domain. Biomedical Signal Processing and

[96] Kuremoto T, Baba Y, Obayashi M, Mabu S, Kobayashi K. Enhancing EEG signals recognition using roc curve. Journal of Robotics, Networking and Artificial Life. 2018;4:283-286

[97] Goel P, Joshi R, Sur M, Murthy HA. A common spatial pattern approach for classification of mental counting and motor execution eeg. In: Tiwary US, editor. Intelligent Human Computer

[98] Jurcak V, Tsuzuki D, Dan I. 10/20, 10/10, and 10/5 systems revisited: Their validity as relative head-surface-based positioning systems. NeuroImage. 2007;

Control. 2018;43:174-182

Interaction. Cham: Springer International Publishing; 2018.

pp. 26-35

77

34(4):1600-1611

pp. 129-137

[85] Keirn ZA, Aunon JI. A new mode of communication between man and his surroundings. IEEE Transactions on Biomedical Engineering. 1990;37(12): 1209-1214

[86] Keirn ZA, Aunon JI. Man-machine communications through brain-wave processing. IEEE Engineering in Medicine and Biology Magazine. 1990; 9(1):55-57

[87] Palaniappan R, Paramesran R, Nishida S, Saiwaki N. A new brain– computer interface design using fuzzy ARTMAP. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2002;10(3):140-148

[88] Anderson CW, Knight JN, O'Connor T, Kirby MJ, Sokolov A. Geometric subspace methods and timedelay embedding for EEG artifact removal and classification. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2006;14(2): 142-146

[89] Palaniappan R. Utilizing gamma band to improve mental task based brain-computer interface design. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2006;14(3): 299-303

[90] Guo L, Wu Y, Zhao L, Cao T, Yan W, Shen X. Classification of mental task from EEG signals using immune feature weighted support vector machines. IEEE Transactions on Magnetics. 2011; 47(5):866-869

[91] DaSalla CS, Kambara H, Sato M, Koike Y. Single-trial classification of vowel speech imagery using common spatial patterns. Neural Networks. 2009; 22(9):1334-1339

[92] Rahman MM, Chowdhury MA, Fattah SA. An efficient scheme for

A Self-Paced Two-State Mental Task-Based Brain-Computer Interface with Few EEG Channels DOI: http://dx.doi.org/10.5772/intechopen.83425

mental task classification utilizing reflection coefficients obtained from autocorrelation function of EEG signal. Brain Informatics. 2017;5(1):1-12

Clinical Neurophysiology. 2008;119(9):

New Frontiers in Brain-Computer Interfaces

based BCI skills: A first step. Journal of Neural Engineering. 2018;15(4):046030

[85] Keirn ZA, Aunon JI. A new mode of communication between man and his surroundings. IEEE Transactions on Biomedical Engineering. 1990;37(12):

[86] Keirn ZA, Aunon JI. Man-machine communications through brain-wave processing. IEEE Engineering in Medicine and Biology Magazine. 1990;

[87] Palaniappan R, Paramesran R, Nishida S, Saiwaki N. A new brain– computer interface design using fuzzy ARTMAP. IEEE Transactions on Neural Systems and Rehabilitation Engineering.

[88] Anderson CW, Knight JN, O'Connor T, Kirby MJ, Sokolov A. Geometric subspace methods and timedelay embedding for EEG artifact removal and classification. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2006;14(2):

[89] Palaniappan R. Utilizing gamma band to improve mental task based brain-computer interface design. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2006;14(3):

[90] Guo L, Wu Y, Zhao L, Cao T, Yan W, Shen X. Classification of mental task from EEG signals using immune feature weighted support vector machines. IEEE Transactions on Magnetics. 2011;

[91] DaSalla CS, Kambara H, Sato M, Koike Y. Single-trial classification of vowel speech imagery using common spatial patterns. Neural Networks. 2009;

[92] Rahman MM, Chowdhury MA, Fattah SA. An efficient scheme for

1209-1214

9(1):55-57

142-146

299-303

47(5):866-869

22(9):1334-1339

2002;10(3):140-148

[77] Cichocki A, Lee H, Kim Y-D, Choi S. Non-negative matrix factorization with α-divergence. Pattern Recognition Letters. 2008;29(9):1433-1440

[78] Dyson M, Sepulveda F, Gan JQ. Mental task classification against the idle state: A preliminary investigation. In: Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology

[79] Lin C-J, Hsieh M-H. Classification of mental task from EEG data using neural networks based on particle swarm optimization. Neurocomputing. 2009;72

[80] Dyson M, Sepulveda F, Gan JQ, Roberts SJ. Sequential classification of mental tasks vs. idle state for EEG based

International IEEE EMBS Conference on Neural Engineering. 2009. pp. 351-354

[81] Lee H, Cichocki A, Choi S. Kernel nonnegative matrix factorization for spectral EEG feature extraction. Neurocomputing. 2009;72(13-15):

[82] Dyson M, Sepulveda F, Gan JQ. Localisation of cognitive tasks used in

[83] Szajerman D, Smagur A, Opalka S, Wojciechowski A. Effective bci mental tasks classification with adaptively solved convolutional neural networks. In: 18th International Symposium on Electromagnetic Fields in Mechatronics, Electrical and Electronic Engineering (ISEF) Book of Abstracts. 2017. pp. 1-2

[84] Lotte F, Jeunet C. Defining and quantifying users' mental imagery-

EEG-based BCIs. Clinical Neurophysiology. 2010;121(9):

BCIs. In: Proceedings of the 4th

Society. 2008. pp. 4473-4477

(4-6):1121-1130

3182-3190

1481-1493

76

2159-2169

[93] Sun H, Zhang Y, Gluckman B, Zhong X, Zhang X. Optimal-channel selection algorithms in mental tasks based brain-computer interface. In: Proceedings of the 2018 8th International Conference on Bioscience, Biochemistry and Bioinformatics, ICBBB 2018. Association for Computing Machinery. 2018. pp. 118-123

[94] Gupta A, Kumar D, Chakraborti A. Hurst exponent as a new ingredient to parametric feature set for mental task classification. In: Satapathy SC, Tavares JMRS, Bhateja V, Mohanty JR, editors. Information and Decision Sciences. Singapore: Springer Singapore; 2018. pp. 129-137

[95] Dutta S, Singh M, Kumar A. Automated classification of non-motor mental task in electroencephalogram based brain-computer interface using multivariate autoregressive model in the intrinsic mode function domain. Biomedical Signal Processing and Control. 2018;43:174-182

[96] Kuremoto T, Baba Y, Obayashi M, Mabu S, Kobayashi K. Enhancing EEG signals recognition using roc curve. Journal of Robotics, Networking and Artificial Life. 2018;4:283-286

[97] Goel P, Joshi R, Sur M, Murthy HA. A common spatial pattern approach for classification of mental counting and motor execution eeg. In: Tiwary US, editor. Intelligent Human Computer Interaction. Cham: Springer International Publishing; 2018. pp. 26-35

[98] Jurcak V, Tsuzuki D, Dan I. 10/20, 10/10, and 10/5 systems revisited: Their validity as relative head-surface-based positioning systems. NeuroImage. 2007; 34(4):1600-1611

[99] Oostenveld R, Praamstra P. The five percent electrode system for highresolution EEG and ERP measurements. Clinical Neurophysiology. 2001;112(4): 713-719

[100] Faradji F, Ward RK, Birch GE. Plausibility assessment of a 2-state selfpaced mental task-based BCI using the no-control performance analysis. Journal of Neuroscience Methods. 2009; 180(2):330-339

[101] Birch GE, Lawrence PD, Lind JC, Hare RD. Application of prewhitening to AR spectral estimation of EEG. IEEE Transactions on Biomedical Engineering. 1988;35(8):640-645

[102] Blanchard G, Blankertz B. BCI competition 2003-data set IIa: Spatial patterns of self-controlled brain rhythm modulations. IEEE Transactions on Biomedical Engineering. 2004;51(6): 1062-1066

[103] Bashashati A, Fatourechi M, Ward RK, Birch GE. User customization of the feature generator of an asynchronous brain interface. Annals of Biomedical Engineering. 2006;34(6):1051-1060

[104] Welch BL. The generalization of 'student's' problem when several different population variances are involved. Biometrika. 1947;34(1-2): 28-35

[105] Aydin S. Determination of autoregressive model orders for seizure detection. Turkish Journal of Electrical Engineering & Computer Sciences. 2010;18:23-30

[106] Krusienski DJ, McFarland DJ, Wolpaw JR. An evaluation of autoregressive spectral estimation model order for brain–computer interface applications. In: 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS). 2006. pp. 1323-1326

[107] Simpson DM, Infantosi AFC, Junior JFC, Peixoto AJ, Abrantes LMdS. On the selection of autoregressive order for electroencephalographic (EEG) signals. In: 38th Midwest Symposium on Circuits and Systems. Vol. 2. 1995. pp. 1353-1356

[108] Anderson NR, Wisneski K, Eisenman L, Moran DW, Leuthardt EC, Krusienski DJ. An offline evaluation of the autoregressive spectrum for electrocorticography. IEEE Transactions on Biomedical Engineering. 2009;56(3): 913-916

[109] Oliveira LF, Simpson DM, Nadal J. Autoregressive spectral analysis of stabilometric signals. In: 16th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Engineering Advances: New Opportunities for Biomedical Engineers. 1994. pp. 1300-1301

[110] Faradji F, Ward RK, Birch GE. Toward development of a two-state brain–computer interface based on mental tasks. Journal of Neural Engineering. 2011;8(4):046014

[111] Burg JP. A new analysis technique for time series data. In: NATO Adv. Study Inst. on Signal Processing with Emphasis on Underwater Acoustics; Enschede, The Netherlands. 1968 (reprinted in Modern Spectrum Analysis, D.G. Childers, ed., IEEE Press, pp. 42-48, 1978)

[112] Palaniappan R, Raveendran P, Nishida S, Saiwaki N. Autoregressive spectral analysis and model order selection criteria for EEG signals. In: Proceedings of the IEEE Region 10 Technical Conference (TENCON 2000). Vol. 2. 2000. pp. 126-129

[113] Franaszczuk PJ, Blinowska KJ, Kowalczyk M. The application of parametric multichannel spectral estimates in the study of electrical brain activity. Biological Cybernetics. January 1985;51(4):239-247

[114] Atkinson AC, Riani M, Cerioli A. Exploring Multivariate Data with the Forward Search. New York: Springer; 2004

**79**

**Chapter 5**

*and Reeha Raza*

**Abstract**

**1. Introduction**

the macro scale [10, 11].

manner remains unidentifiable.

Neural Signaling and

*Syeda Huma Jabeen, Nadeem Ahmed,* 

topology and dynamic models of a brain network.

computational models in neuroscience [1, 2].

*Muhammad Ejaz Sandhu, Nauman Riaz Chaudhry* 

To understand the complex nature of the human brain, network science approaches have played an important role. Neural signaling and communication form the basis for studying the dynamics of brain activity and functions. The neuroscientific community is interested in the network architecture of the human brain its simulation and for prediction of emergent network states. In this chapter we focus on how neurosignaling and communication is playing its part in medical psychology, furthermore, we have also reviewed how the interaction of network

**Keywords:** cognitive science, brain imaging, neural networks, network topology

Network science makes advances for modeling and analyzing the variations in communications. Network science has proved useful for functional brain connectivity assumptions and predicting incipient network states. Research in neural network science or neural information processing has been proven fruitful in the past, providing useful methods both for practical problems in computer science and

The human brain shows a discrete spatiotemporal organization that aids brain function and can be imitated via local brain simulation [3]. Such disturbances to local cortical dynamics are globally merged by discrete neural systems [4, 5]. Brain function depends upon complicated active interactions between distinct brain regions that define neural systems. This system-level architecture and dynamics can be discovered across different scales, from microscopic groups of cells to macroscopically defined brain areas [6–9]. Late neuroimaging works have been subservient in mapping the structural and functional architectures of the human brain at

Recent developments in the field of cognitive neuroscience require computational and theoretical apprehension of neural information processing models which accomplish standards and constraints from cognitive psychology, neuroscience, and computational efficiency [12, 13]. Nowadays, mapping structural connections and recording temporal dependencies among local time series have become feasible, yet signal transferring across the network in flexible and adaptive computational

Communication

## **Chapter 5**


## Neural Signaling and Communication

*Syeda Huma Jabeen, Nadeem Ahmed, Muhammad Ejaz Sandhu, Nauman Riaz Chaudhry and Reeha Raza*

## **Abstract**

To understand the complex nature of the human brain, network science approaches have played an important role. Neural signaling and communication form the basis for studying the dynamics of brain activity and function. The neuroscientific community is interested in the network architecture of the human brain, its simulation, and the prediction of emergent network states. In this chapter we focus on how neural signaling and communication play their part in medical psychology; furthermore, we also review the interaction between the network topology and the dynamic models of a brain network.

**Keywords:** cognitive science, brain imaging, neural networks, network topology

## **1. Introduction**

Network science has advanced the modeling and analysis of variation in communication. It has proved useful for testing assumptions about functional brain connectivity and for predicting incipient network states. Research in neural network science, or neural information processing, has proven fruitful in the past, providing useful methods both for practical problems in computer science and for computational models in neuroscience [1, 2].

The human brain shows a discrete spatiotemporal organization that supports brain function and can be perturbed via local brain stimulation [3]. Such disturbances to local cortical dynamics are globally integrated by discrete neural systems [4, 5]. Brain function depends upon complicated dynamic interactions between the distinct brain regions that define neural systems. This system-level architecture and its dynamics can be explored across different scales, from microscopic groups of cells to macroscopically defined brain areas [6–9]. Recent neuroimaging work has been instrumental in mapping the structural and functional architectures of the human brain at the macro scale [10, 11].

Recent developments in the field of cognitive neuroscience call for a computational and theoretical understanding of neural information processing models that satisfy standards and constraints from cognitive psychology, neuroscience, and computational efficiency [12, 13]. Nowadays, mapping structural connections and recording temporal dependencies among local time series have become feasible, yet how signals are transferred across the network in a flexible and adaptive computational manner remains unidentified.

## **1.1 Psychiatric disorders and network science**

Psychiatric disorders involve disturbances of processes mediated by the brain. Emerging evidence suggests that precise biomarkers for psychiatric disorders might benefit from integrating information about several brain regions and their connections with one another, instead of considering only local disturbances in brain structure and function [14]. Recent progress in applied mathematics generally, and in network science specifically, offers a language for capturing the complexity of interacting brain regions, and the use of this language for essential inquiries in neuroscience forms the incipient field of network neuroscience. This chapter provides an outline of the application and usefulness of network neuroscience in psychiatry and of how network science may contribute to it.

Many of the approaches in general use for animal or human studies are, however, invasive. The term invasive is defined in this framework as a procedure that requires an incision in the skin or the insertion of an apparatus into the body [15].

## **2. Use of scientific and clinical tools to record and perturb brain activity**

Non-invasive brain stimulation, such as transcranial magnetic stimulation, and invasive brain stimulation, such as deep brain stimulation, are employed as scientific and clinical tools to perturb brain activity. There are rare conditions in which procedures classically reserved for animal experiments, for instance invasive electrophysiology, can be ethically employed in humans. For example, during certain types of human brain surgery, clinical decisions are guided by invasive electrophysiological measurements (e.g., ECoG). Alongside their medical use, these recordings also provide valuable information for scientific studies.

Equally, applying the tools and models of human neuroscience to animal neuroscience has helped expose the fundamental mechanisms underlying these methods. A major illustration is the research isolating the cause of the functional MRI (fMRI) signal by combining invasive electrophysiological measurement with fMRI in animal experiments.

## **3. Psychiatry experiments on living beings**

Typically, experiments on animals use invasive electrophysiology, in which electrodes are implanted directly in the brain to record action potentials and local field potentials (LFP and EEG). Furthermore, recording neuronal activity by optical means, mostly calcium and voltage imaging, has become a compelling observational strategy that complements electrophysiology in animals, although it too is invasive [15]. Optogenetic manipulations (optical measurements and perturbations) can use light to perturb neuronal activity.

In humans, MRI (magnetic resonance imaging) enables detailed, non-invasive visualization of brain anatomy. The same technology can also measure variation in blood oxygenation as a surrogate for neuronal activity. On the other hand, electroencephalography can noninvasively measure the electric fields generated by neuronal activity. Magnetoencephalography and electrocorticography (ECoG) are two less commonly used but significant methods for measuring brain signals. Electric and magnetic fields can thus be recorded from the brain, and stimulation with such fields can also modulate neuronal activity.
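These recording techniques differ sharply in temporal resolution (see Section 3.1). A minimal sketch of that difference, assuming an invented toy signal and idealized sampling rates (not real recordings): an fMRI-like 1 Hz record cannot capture sub-second structure that an EEG-like 1000 Hz record resolves.

```python
import math

def sample(signal, duration_s, rate_hz):
    """Sample a continuous-time signal (a function of t in seconds)
    at a fixed rate, returning the list of sampled values."""
    n = int(duration_s * rate_hz)
    return [signal(k / rate_hz) for k in range(n)]

# A toy "brain signal": a slow 0.2 Hz drift plus a fast 10 Hz
# oscillation (roughly alpha-band speed). Purely illustrative.
def toy_signal(t):
    return math.sin(2 * math.pi * 0.2 * t) + 0.5 * math.sin(2 * math.pi * 10 * t)

slow = sample(toy_signal, duration_s=5, rate_hz=1)     # fMRI-like: 1 sample/s
fast = sample(toy_signal, duration_s=5, rate_hz=1000)  # EEG-like: 1000 samples/s

# The 1 Hz record has only 5 points, so the 10 Hz component is
# unrecoverable (the Nyquist limit at 1 Hz is 0.5 Hz); the 1000 Hz
# record (Nyquist limit 500 Hz) resolves it easily.
print(len(slow), len(fast))  # prints: 5 5000
```

The design point is simply that the sampling rate caps the fastest fluctuation a method can represent, which is why EEG and fMRI occupy opposite ends of the temporal-resolution spectrum.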

Moreover, classifying the measurement methods discussed as either noninvasive or invasive draws attention to the *spatial or temporal resolution* these methods provide.

## **3.1 Temporal resolution**

Temporal resolution refers to how much time a measurement requires to complete, and hence determines the fastest fluctuations in the signal of interest that can be captured accurately [16]. The sampling rate, that is, the frequency of measurement, defines the temporal resolution. For example, about a second is required to obtain a single image of human brain activity with fMRI (a sampling rate of 1 Hz, i.e., 1/s). Thus, signals that fluctuate appreciably on a sub-second time scale (i.e., within any given 1 s interval) are not captured properly. This contrasts markedly with the typical sampling rate of an EEG device (1000 Hz and greater). The millisecond timescale of action potential firing is the fastest temporal scale [17].

## **3.2 Spatial resolution**

Spatial resolution refers to the smallest events in space that a given measurement strategy can resolve. As an illustration, fMRI commonly measures brain activity at cubic-millimeter resolution. In contrast, EEG signals show poor spatial resolution, as they originate from indeterminate square centimeters of brain tissue [18, 19]. Here we turn our attention to spatial scales ranging from the whole brain down to individual neurons [18]. Invasive methods exist that improve both spatial and temporal resolution: for instance, the spatial resolution of electrophysiological recordings can be as fine as roughly 100 μm when electrodes are surgically implanted into the brain. Invasive recordings thus offer excellent spatial resolution along with the high temporal resolution of electrophysiological measurements. It is notable that, with some exceptions, noninvasive approaches with high temporal resolution (e.g., EEG) suffer from poor spatial resolution, and vice versa (e.g., fMRI). Consequently, merging complementary approaches has remained one of the most prevalent and exciting methodological strategies in network neuroscience [19].

## **4. Network neuroscience framework**

Network neuroscience conceives of brain function as emerging from the collective action of various system elements and their mutual interconnections, as shown in **Figure 1**.

**Figure 1.**
*Basic steps of neuroscience framework.*

Equally, human neuroscience application tools and modeling in animal neurology has aided to expose a fundamental mechanism of these systematic ways. A major illustration is the research separating, cause of practical MRI (fMRI) signal by joining invasive electrophysiological measurement with fMRI in animal

Naturally, experimenting on animal use invasive electrophysiology, in which electrodes are entrenched straight in the brain for recording action potency and local field potential (LFP and EEG). Furthermore, recording neuronal action by ocular means, calcium and voltage imaging mostly, have become a coercive observational strategy that embellishes electrophysiology in animals although it is well invasive [15]. Optogenetic manipulations (optical measurements and perturba-

In humans, MRI (magnetic resonance imaging) empowers detailed, non-invasive visualization of brain anatomy. Measurement in blood oxygenation variation as a substitution for neuronal activity can also employ corresponding expertise. On the other hand, electroencephalograph can noninvasively measure electric fields generated by the neuronal activity. Magnetoencephalography and electrocorticography (ECoG) are two less commonly used but significant methods that measure brain signaling. Electric and magnetic fields stimulations record from the brain

ence in psychiatry and how network science may cooperate with it.

**80**

Temporal resolution discusses how much time does a measurement requires to be completed, and hence states the quickest fluctuations in the signal of attention that can be seized exactly [16]. The sampling rate defines the temporal resolution. The frequency of measurement is referred to by the sampling rate. For example, only a second is required to obtain a single image of human fMRI activity scan (sample rate is 1 Hz i.e., 1/s). Thus, signals that expressively fluctuate in a given sub-second time scale (i.e., any given 1 s interval) are not captured properly. This contrasts noticeably to the typical sampling rate of an EEG device (1000 Hz and greater). A millisecond timescale of action potential fire is the fastest temporal scale [17].
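To make the role of sampling rate concrete, here is a minimal sketch in plain Python (illustrative numbers only, no real recording hardware) comparing a 1 Hz "fMRI-like" sampling rate with a 1000 Hz "EEG-like" one on a signal that fluctuates ten times per second:

```python
import math

def sample(signal_hz, rate_hz, duration_s=5.0):
    """Sample a sine wave of frequency signal_hz at rate_hz for duration_s seconds."""
    n = int(duration_s * rate_hz)
    return [math.sin(2 * math.pi * signal_hz * i / rate_hz) for i in range(n)]

# A 10 Hz oscillation, i.e. one that fluctuates on a sub-second time scale.
slow_scan = sample(10, rate_hz=1)     # fMRI-like: 1 sample per second
fast_scan = sample(10, rate_hz=1000)  # EEG-like: 1000 samples per second

# At 1 Hz every sample lands on the same phase of the 10 Hz wave, so the
# recorded trace is essentially flat and the fluctuation is invisible;
# at 1000 Hz the full oscillation is captured.
print(max(abs(x) for x in slow_scan), max(abs(x) for x in fast_scan))
```

This is the familiar Nyquist criterion at work: a fluctuation can only be reconstructed faithfully when it is sampled at more than twice its frequency.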

## **3.2 Spatial resolution**

Spatial resolution refers to the smallest events in space that a given measurement strategy can resolve. As an illustration, fMRI commonly measures brain activity at a resolution of about one cubic millimeter. In contrast, scalp-recorded EEG signals show poor spatial resolution, as they originate from undefined areas spanning several square centimeters of brain tissue [18, 19]. The spatial scales of interest range from the full brain down to individual neurons [18]. Invasive methods exist that improve both spatial and temporal resolution: for instance, the spatial resolution of electrophysiological recordings can be as fine as roughly 100 μm when electrodes are surgically implanted into the brain. Such invasive recordings combine high spatial resolution with the high temporal resolution of electrophysiological measurement. Notably, with some exceptions, noninvasive approaches with high temporal resolution (e.g., EEG) suffer from poor spatial resolution, and vice versa (e.g., fMRI). Consequently, combining complementary approaches has remained one of the most prevalent and exciting methodological strategies in network neuroscience [19].

## **4. Network neuroscience framework**

Network neuroscience conceives brain function as emerging from the collective activity of many system elements and their mutual interconnections, as shown in **Figure 1**.

**Figure 1.** *Basic steps of neuroscience framework.*

Currently, large-scale efforts to map neuronal connectivity in various species, including the nematode worm, fruit fly, mouse, macaque, and human, have led to a burst of data acquired using a diverse array of measurement methods, at scales ranging from single cells to large brain areas. In parallel, rapid developments in the physics of complex networks have led to a new understanding of the organization and dynamics of systems of interacting elements, nervous systems being but one example. The confluence of these approaches lies at the core of network neuroscience, which is concerned with understanding how nervous systems function as integrated systems [20]. Network neuroscience offers one of the rare unified frameworks for interpreting different kinds of brain imaging data, acquired in different species, at various scales, and with various measurement methods, by representing all nervous systems in their most abstract form: as assemblies of nodes connected by edges.
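The "nodes connected by edges" abstraction can be made concrete in a few lines; the region names and edge weights below are invented purely for illustration:

```python
# A toy "connectome": brain regions as nodes, connections as weighted edges.
# Region names and weights are hypothetical.
nodes = ["V1", "V2", "PFC", "M1"]
edges = {("V1", "V2"): 1.0, ("V2", "PFC"): 0.5, ("PFC", "M1"): 0.8}

# The same network as an adjacency matrix -- the common starting point
# for network-neuroscience analyses, regardless of species or modality.
index = {n: i for i, n in enumerate(nodes)}
A = [[0.0] * len(nodes) for _ in nodes]
for (u, v), w in edges.items():
    A[index[u]][index[v]] = w
    A[index[v]][index[u]] = w  # undirected: connections are symmetric

# Node degree: the number of connections each region participates in.
degree = {n: sum(1 for w in A[index[n]] if w > 0) for n in nodes}
print(degree)  # V2 and PFC are the better-connected nodes here
```

The same adjacency-matrix representation applies whether the nodes are single cells and the edges synapses, or the nodes are brain areas and the edges fiber tracts.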

The network neuroscience approach has already produced many novel insights into brain organization, for example, that nervous systems across scales and species display a hierarchical, modular, and small-world organization, that they contain highly connected hubs, and that they are economically wired. As the field develops, tools and methods established in other areas of network science are progressively being refined and adapted to the neuroscience context [9, 20].

Understanding how brain function emerges from a large number of interacting neuronal elements stands as one of the most enduring challenges of modern neuroscience. This complex-systems approach to understanding the brain parallels other disciplines that blend concepts from network science, statistical physics, and dynamical systems: the study of social networks, the propagation of epidemics, rumors, or computer viruses, the effects of perturbations or attacks on electrical grids or the World Wide Web, or the behavior of gene regulatory or metabolic networks [21].

## **5. Brain network topology and communication**

The brain network has a topology with prominent attributes that describe the system as a whole: heterogeneous degree and strength distributions, high clustering and short path lengths, a multi-scale modular organization, and a densely connected core of high-degree nodes are among the network characteristics shared across species and scales.

As in any young field, the best ways of constructing and analyzing brain networks are still under development. Among recent advances is the recognition that brain networks are fundamentally multi-scale entities.

The term "scale" can have different meanings depending on context; here we focus on three definitions relevant to the study of brain networks. First, a network's spatial scale refers to the granularity at which nodes and edges are defined, ranging from individual cells and synapses to large brain areas [18]. Second, networks can be characterized over temporal scales, with precision ranging from sub-millisecond to the entire lifespan, and on to evolutionary changes across species. Third, networks can be analyzed at different topological scales, ranging from individual nodes to the network as a whole [17, 21]. Together, these scales define the axes of a three-dimensional space in which any analysis of brain network data resides. Most brain network analyses exist as points in this space, i.e., they focus on networks defined at a single spatial, temporal, and topological scale. We contend that, while such studies have proven illuminating, in order to better understand the brain's truly multi-modal, multi-scale nature, network analyses must begin to form bridges that link different scales to each other [1, 21].


*Neural Signaling and Communication*


*DOI: http://dx.doi.org/10.5772/intechopen.86318*


## **5.1 Routing communication**


Routing communication concerns the control of the paths that data take across a network. Given that physical networks have fixed constraints on links and memory, the main task of routing is to assign paths so that one or more communication goals are achieved (e.g., cost, fidelity, fault-tolerance, speed). Routing is especially important for the brain's network communication: interpreting sensory data, accessing memories, making decisions, and several other essential brain functions require that messages be flexibly directed to, and received by, nodes at widely separated positions in the network, in response to fluctuating demands [19].

However, owing to recent scientific advances, we now know the brain is more dynamic than previously supposed. The human brain continuously creates new neurons and forms new neural pathways throughout the life cycle. Neurons are thus dynamic cells that regularly adapt to changing conditions. If some event damages an individual's brain (such as an injury or stroke), the neurons can create a new communication route around the injured area. This capability is known as *neuronal plasticity*.
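This rerouting idea can be illustrated with a toy graph search. The four-node network below is hypothetical, and breadth-first search stands in, loosely, for whatever rerouting real neural tissue performs:

```python
from collections import deque

def shortest_path(adj, src, dst, blocked=frozenset()):
    """Breadth-first search for a shortest path, skipping 'blocked' (damaged) nodes."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in adj[path[-1]]:
            if nxt not in seen and nxt not in blocked:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route survives the damage

# Toy network: A reaches D directly through B, with a backup route via C.
adj = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}

print(shortest_path(adj, "A", "D"))                 # ['A', 'B', 'D']
print(shortest_path(adj, "A", "D", blocked={"B"}))  # ['A', 'C', 'D']
```

When the intermediate node B is "damaged", the signal still reaches its destination over the alternative route through C; only when both intermediaries are lost does communication fail.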

In communication networks, the shortest path between two nodes has a special role: the length of the shortest path is taken as the topological distance between the nodes. Hence, shortest path length is used as an indicator of the ease with which signals can be transmitted between nodes [19, 20].

## **5.2 Information routing and functional integration**

Any single cognitive function may involve numerous specialized areas whose cooperation is mediated by the functional integration between them. Such integration relies on information exchange between brain areas along routes that can change on timescales much faster than those on which the structure is fixed, providing flexible effective connectivity despite the rigidity of the infrastructure on long timescales [9, 11]. The dynamic nature of information routing and flexible effective connectivity rests on the multistability of the collective dynamics of brain networks. In addition, a single structural connectivity can support numerous degenerate dynamical states, each transferring information along a distinct pattern, as can be seen in **Figure 2**.

Modeling the spatiotemporal dynamics underlying integration and segregation can reveal causal mechanistic insights into neuropsychiatric disorders. To exploit the full potential of whole-brain computational modeling, it is necessary for *in silico* neural dynamics to capture the temporal evolution of the brain's functional network organization along with time-averaged representations of functional connectivity (FC), which are strongly constrained by the structural connectivity (SC). (As one hardware example, a neuromorphic analog chip has been presented that can implement massively parallel neural computations while retaining the programmability of digital systems.)

Collective neuronal oscillations are assumed to offer such a basis for dynamic communication among brain regions. Numerous lines of experimental evidence and theoretical considerations indicate that the phase relations among the oscillations of various brain regions can modify effective connectivity by modulating the degree of mutual influence between them.
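As an illustration of how coupling shapes phase relations, here is a minimal sketch using the Kuramoto model, a standard abstraction for coupled oscillators (chosen here for illustration; it is not the specific model of the studies discussed above):

```python
import math

def kuramoto_pair(w1, w2, K, dt=0.01, steps=5000):
    """Two coupled phase oscillators (Euler integration of the Kuramoto model)."""
    th1, th2 = 0.0, 0.0
    for _ in range(steps):
        d1 = w1 + K * math.sin(th2 - th1)
        d2 = w2 + K * math.sin(th1 - th2)
        th1, th2 = th1 + d1 * dt, th2 + d2 * dt
    return (th2 - th1) % (2 * math.pi)  # final phase difference

# With coupling, the two "regions" phase-lock at a small, stable phase lag
# (near asin(0.2 / 2) ~ 0.10 rad); without coupling, their phases drift apart.
locked = kuramoto_pair(1.0, 1.2, K=1.0)
drift = kuramoto_pair(1.0, 1.2, K=0.0)
print(round(locked, 2), round(drift, 2))
```

A stable, reproducible phase lag of this kind is exactly the sort of relation that can gate how strongly one oscillating population influences another, whereas freely drifting phases provide no consistent window for interaction.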

A more complete account of the integrative capacity of neuronal systems requires an important distinction between the concept of "communication efficiency" and the commonly employed graph-theoretic measure called "global efficiency." Global efficiency is defined as the mean of the inverse shortest path length over all pairs of nodes, thereby capturing the global capacity of the network to transfer information in a parallel fashion [1, 9, 11, 19, 20].
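Global efficiency as just defined can be computed directly. The sketch below applies the Floyd-Warshall all-pairs shortest-path algorithm to a small unweighted toy graph:

```python
import itertools

def global_efficiency(adj):
    """Mean inverse shortest path length over all node pairs (unweighted graph)."""
    nodes = sorted(adj)
    INF = float("inf")
    # Initialize pairwise distances: 0 to self, 1 to neighbors, infinity otherwise.
    d = {(u, v): 0 if u == v else (1 if v in adj[u] else INF)
         for u in nodes for v in nodes}
    # Floyd-Warshall: relax every pair through every intermediate node k.
    for k, u, v in itertools.product(nodes, repeat=3):
        d[u, v] = min(d[u, v], d[u, k] + d[k, v])
    pairs = [(u, v) for u in nodes for v in nodes if u != v]
    return sum(1.0 / d[p] for p in pairs) / len(pairs)

# A 4-node path graph A-B-C-D: topological distances of 1, 2, and 3 all occur.
path = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(round(global_efficiency(path), 3))
```

For a fully connected network every distance is 1 and the efficiency is exactly 1; longer detours shrink the inverse-distance terms, which is why the path graph scores lower.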

**Figure 2.** *Information routing and functional integration.*

## **5.3 Network dynamics and communication**

Synchronized whole-brain neural dynamics are important for the appropriate control of functionality in different brain systems, effective integration of complex and multimodal information, and even adaptation to transient environmental circumstances. Given such roles of macroscopic brain dynamics in mental and neural information processing, it is reasonable to assume that disruption of large-scale neural dynamics is a main biological mechanism underlying autism spectrum disorder (ASD), which has been described as a weakening of global information processing.

How structural topology constrains signal propagation is a key dynamical aspect of the communication process. One of the main distinctions between dynamical and topological analyses of brain communication is the quantity of information (in the statistical sense) that is required to sustain the communication process.

## **5.4 Network computation and communication**

The crucial role of communication dynamics in neural computation has attracted a great amount of interest. Two important and distinct features of brain network communication shed light on the mechanisms by which brain networks carry out computation:

1. Communication dynamics as effective connectivity.
2. Computation by networks.


## **6. Multi-scale community structure**

The attributes of local and global networks are straightforward to compute, since the units of analysis, individual nodes and the whole network, are immediately evident and no additional search is required. Multi-scale, mesoscale structures, however, are not always apparent. The presence or absence of multi-scale structure depends on the arrangement of edges between the nodes of a network, known as the network topology. Real-life networks consist of numerous nodes and edges organized in complicated patterns that can obscure structural regularities. Because of this complexity, anyone who wants to uncover mesoscale structure in a network must search for it algorithmically.

In the case of community structure, there is no shortage of algorithms for doing so. They differ in how they define communities and in their computational complexity. Whether this range of methods is viewed as a weakness or a strength, community detection remains an active and steadily growing subfield of network analysis. Although each community detection method offers its own unique perspective on how communities in networks can be identified, the technique that is most widely used, and arguably most often useful, is modularity maximization. Modularity maximization divides a network's nodes into communities so as to maximize an objective function known as modularity (or simply "Q"). The modularity function compares the observed pattern of connections in a network against the pattern that would be expected under a stated null model of network connectivity: the weight of each observed edge is compared directly to the weight of the same edge had connections been formed under the null model. Roughly, the observed connections will either be unlikely to exist under the null model, or be stronger than the null model expects.

Maximizing modularity attempts to locate communities whose internal connections are stronger than predicted. More precisely, if the weights of the observed and expected connections between nodes *i* and *j* are denoted *Aij* and *Pij*, respectively, and *σ<sup>i</sup>* ϵ [1, …, *K*] denotes the community among *K* to which *i* is assigned, then modularity can be calculated as:

$$Q = \sum\_{i,j} \left[A\_{ij} - P\_{ij}\right] \delta(\sigma\_i, \sigma\_j). \tag{1}$$

Here the Kronecker delta function, denoted δ, equals 1 only if its arguments are the same, and 0 otherwise. Various methods are used in practice to maximize *Q*, but all ultimately result in an estimate of the network's community structure, a partition into communities. Unfortunately, the partition with the largest *Q* does not always reflect the number and size of the communities actually present in the network. Modularity and other similar quality functions exhibit a "resolution limit" that bounds the size of detectable communities: smaller communities are essentially undetectable. As one way to detect communities of all sizes, modularity has in recent years been extended to include a resolution parameter, ɣ, which can be used to expose communities of different sizes. The augmented modularity equation then reads:

$$Q(\gamma) = \sum\_{i,j} \left[A\_{ij} - \gamma P\_{ij}\right] \delta(\sigma\_i, \sigma\_j). \tag{2}$$

The resolution parameter was originally introduced as a method for avoiding the resolution limit. Incidentally, it has also given the modularity measure extra flexibility. The resolution parameter acts as a tuning knob, making it possible to recover small communities at one setting and larger communities at another: when ɣ is large or small, maximizing modularity yields correspondingly small or large communities. By smoothly tuning the resolution parameter from one extreme to the other, one can efficiently obtain estimates of a network's community structure from the finest scale, where nodes form singleton communities, to the coarsest scale, where all nodes fall into a single community. Multi-scale community detection can thus be realized by varying the resolution parameter to reveal communities of various sizes. It should be noted that there exist alternative definitions of modularity-like functions that do not suffer from resolution limits in the first place.
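Equation (2) can be evaluated directly for a toy network. In this sketch the community labels are fixed by hand rather than found by an optimizer, and the common Newman-Girvan null model *Pij* = *kikj*/2*m* is assumed:

```python
def modularity(A, communities, gamma=1.0):
    """Q(gamma) = sum_ij [A_ij - gamma * P_ij] * delta(c_i, c_j), as in Eq. (2),
    with the Newman-Girvan null model P_ij = k_i * k_j / (2m).
    (Standard treatments divide by 2m; that rescaling does not change which
    partition maximizes Q.)"""
    n = len(A)
    k = [sum(row) for row in A]  # node degrees
    two_m = sum(k)               # total edge weight, counted twice
    return sum(A[i][j] - gamma * k[i] * k[j] / two_m
               for i in range(n) for j in range(n)
               if communities[i] == communities[j])

# Two triangles joined by a single edge -- a textbook two-community network.
A = [[0, 1, 1, 0, 0, 0],
     [1, 0, 1, 0, 0, 0],
     [1, 1, 0, 1, 0, 0],
     [0, 0, 1, 0, 1, 1],
     [0, 0, 0, 1, 0, 1],
     [0, 0, 0, 1, 1, 0]]

grouped = modularity(A, [0, 0, 0, 1, 1, 1])  # the two triangles as communities
lumped = modularity(A, [0, 0, 0, 0, 0, 0])   # everything in one community
print(grouped > lumped)  # True: the two-triangle split scores higher
```

Raising ɣ penalizes large communities: on this same network, a sufficiently large ɣ makes even the singleton partition score better than the two-triangle split, which is how the resolution parameter exposes structure at different scales.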

## *Neural Signaling and Communication DOI: http://dx.doi.org/10.5772/intechopen.86318*

*New Frontiers in Brain-Computer Interfaces*

**5.3 Network dynamics and communication**

*Information routing and functional integration.*

**5.4 Network computation and communication**

1.Communication dynamics are effective connectivity.

tion processing.

**Figure 2.**

the communication process.

computation are as follows:

2.Computation by networks.

**6. Multi-scale community structure**

Synchronized complete brain neural dynamics are important for appropriate control of functionality in different brain systems, effectual integration of composite and multimodal information, and even adaption to transient regular circumstances. Specified such roles of macroscopic, brain dynamics in our mental and neural information processing, it is sensible to assume that the irrationality of large-scale neural dynamics is a main biological mechanism fundamental autism spectrum disorder (ASD), which is described as the weakening of global informa-

Structural topology is constricting signal extension is a very important dynamical point of view the communication process. One of the eminent differentiations between dynamical and topological analyses of brain communication is the quantity of information (in the statistical sense) which is required for the extension of

The crucial role of communication dynamics in neural computation attracts a great amount of curiosity. Some important and distinct features of brain network communication that enlightens mechanisms by which the brain network carries out

The attributes of local and global networks are unambiguous to calculate as the unit analysis of individual nodes and the entire network are closely manifest and no extra search is required. Multi-scale structure, though are not always apparent. The occurrence or nonappearance of multi-scale structures is contingent upon the formation of edges between the nodes of a network known as network topology. Real-life networks consist of numerous nodes and edges organized in complicated

**84**

outlines which can be ambiguous structural symmetries. Because of such complication, if anyone wants to see a mesoscale structured network, he will have a necessity to algorithmically quest for it. In community structure circumstances, there is no lack of algorithms to do so. They depend on how they describe communities along with computational complexity. The range of the method is observed as a deficiency or a benefit, the originality in community detection and repeatedly increasing subfields of network analysis are desirably settled. Although respectively community detection methods suggest its own possessive exceptional perception on how we can classify communities in networks, the technique that is furthermost extensively employed and debatably the often useful is modularity expansion. Modularity expansion divides a network's nodes into communities so as to make the most of a verifiable function acknowledged as modularity (or just "Q"). The comparison of the practical pattern of connections in a network contrary to the perceptual structure that would be likely below a stated null model of network connectivity is performed by the modularity function. The weight of a respective existent edge is compared directly to the weight of the similar edge if, connections were to be shaped under the null model. Nearly, the ascertained connections will be improbable to subsist under the null model or might it be worthier in comparison to the null model expectations.

The efforts to locate several conceivable robust than predictable connections inside communities is done in the maximization of modularity. Much clearly, if the weight of the ascertained and anticipated connections between the nodes *i* and *j* are specified as *Aij* and *Pij*, separately, and *σ<sup>i</sup>* ϵ [1, …, *K*] shows the communities of *K* in which *i* can be allotted, then modularity can be calculated as:

$$Q = \sum\_{i,4} [A\_{ij} - P\_{ij}] \mathfrak{S}(\sigma\_i \sigma\_j). \tag{1}$$

Where Kronecker delta function is denoted by δ and is equal to 1 only if arguments are not 0 and are the same. Various methods used in fact to maximize *Q*, however in conclusion the entire outcome in the estimation of community network structure, a separation into communities. Unfortunately, in the partition, the number and size with the biggest *Q* represent communities not always demonstrated in the network. Other alike quality functions and modularity show a "resolution limit" which bounds detectable communities' size. Smaller size communities are rather undetectable. In one way to determine all size communities, modularity prolonged in current years to comprise ɣ, a resolution parameter, which can be used for exposing different sized communities. Augmented modularity equation then shows like this:

$$Q(\gamma) = \sum\_{i,j} [A\_{ij} - \gamma P\_{ij}] \delta(\sigma\_i, \sigma\_j). \tag{2}$$

The resolution parameter was originally introduced as a way of avoiding the resolution limit; incidentally, it has also made the modularity measure more flexible. The resolution parameter acts as a tuning knob, making it possible to recover small communities at one setting and larger communities at another: when ɣ is large or small, maximizing modularity yields correspondingly small or large communities. By smoothly tuning the resolution parameter from one extreme to the other, one can obtain estimates of a network's community structure at every scale, from the finest scale, where nodes form singleton communities, to the coarsest scale, where all nodes fall into a single community. Varying the resolution parameter to reveal communities of different sizes is known as multi-scale community detection. It should be noted that alternative definitions of the modularity function exist that do not suffer from resolution limits in the first place.
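To make Eqs. (1) and (2) concrete, the following sketch computes modularity for a toy network. It assumes the common Newman-Girvan null model *Pij* = *ki kj*/2*m* (the chapter does not fix a particular null model) and includes the conventional 1/2*m* normalization, which does not affect which partition scores higher; the network and partitions are purely illustrative.

```python
import numpy as np

def modularity(A, labels, gamma=1.0):
    """Q(gamma) = (1/2m) * sum_ij [A_ij - gamma * k_i*k_j/(2m)] * delta(s_i, s_j)."""
    k = A.sum(axis=1)                          # node strengths (degrees)
    two_m = A.sum()                            # total weight, each edge counted twice
    P = np.outer(k, k) / two_m                 # Newman-Girvan null-model weights
    same = labels[:, None] == labels[None, :]  # Kronecker delta as a boolean matrix
    return ((A - gamma * P) * same).sum() / two_m

# Toy network: two triangles (nodes 0-2 and 3-5) joined by a single edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

fine = np.array([0, 0, 0, 1, 1, 1])  # the two triangles
coarse = np.zeros(6, dtype=int)      # all nodes in one community

# At gamma = 1 (Eq. 1), the two-triangle partition scores higher.
assert modularity(A, fine) > modularity(A, coarse)

# Eq. 2's "tuning knob": a small gamma favors the coarse, whole-network
# community, while gamma = 1 already favors the finer split.
assert modularity(A, coarse, gamma=0.1) > modularity(A, fine, gamma=0.1)
assert modularity(A, fine, gamma=1.0) > modularity(A, coarse, gamma=1.0)
```

Scoring candidate partitions this way is only feasible for tiny examples; in practice, heuristic optimizers such as the Louvain algorithm search the space of partitions for the maximizing one.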

## **7. Conclusion**

In the recent past, network cognitive science has revealed, to a great extent, how network topology and dynamics shape the flow of neural signals underlying brain function, but many gaps remain in our understanding due to the limits of available data and recording tools. For example, the limited availability of observational tools limits empirical access to communication dynamics. It is now possible to map structural connections and to record temporal dependencies among local time series, but the mechanisms by which signals are transferred across the network in the varied ways that allow flexible and adaptive computation remain elusive. In spite of these limitations, there is a wide range of opportunities to learn how brain networks function. The nature of communication may vary; for example, the dynamic network model is a theoretical framework that helps advance our understanding of behavior and cognition, including patterns of change with aging and development. Furthermore, it can become an important tool for predicting the effects and outcomes of perturbations, including lesions and focal stimulation. Building on topology and dynamics, the confluence of empirical and theoretical studies is poised to add significant new insights into the network basis of brain function.

## **Author details**

Syeda Huma Jabeen<sup>1</sup>\*, Nadeem Ahmed<sup>2</sup>, Muhammad Ejaz Sandhu<sup>1</sup>, Nauman Riaz Chaudhry<sup>3</sup> and Reeha Raza<sup>1</sup>

1 Department of Computer Science, The Institute of Management Sciences, Lahore, Pakistan

2 Department of Psychology, University of Management Sciences Lahore, Pakistan

3 Department of Computer Science, University of Gujrat, Gujrat, Pakistan

\*Address all correspondence to: humajabeen@pakaims.edu.pk

© 2019 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

*New Frontiers in Brain-Computer Interfaces*
*Neural Signaling and Communication*
*DOI: http://dx.doi.org/10.5772/intechopen.86318*

## **References**

[1] Laughlin SB, Sejnowski TJ. Communication in neuronal networks. Science. 2003;**301**:1870-1874

[2] Sporns O. Contributions and challenges for network models in cognitive neuroscience. Nature Neuroscience. 2014;**17**:652-660

[3] Leong ATL et al. Long-range projections coordinate distributed brain-wide neural activity with a specific spatiotemporal profile. Proceedings of the National Academy of Sciences of the United States of America. 2016;**113**:E8306-E8315

[4] Roland PE, Hilgetag CC, Deco G. Cortico-cortical communication dynamics. Frontiers in Systems Neuroscience. 2014;**8**(9)

[5] Kaiser M, Martin R, Andras P, Young MP. Simulation of robustness against lesions of cortical networks. The European Journal of Neuroscience. 2007;**25**:3185-3192

[6] Hermundstad AM, Bassett DS, Brown KS, Aminoff EM, Clewett D, Freeman S, et al. Structural foundations of resting-state and task-based functional connectivity in the human brain. Proceedings of the National Academy of Sciences of the United States of America. 2013;**110**(15):6169-6174

[7] Hagmann P, Cammoun L, Gigandet X, Meuli R, Honey CJ, Wedeen VJ, et al. Mapping the structural core of human cerebral cortex. PLoS Biology. 2008;**6**(7):e159

[8] Betzel RF, Medaglia JD, Papadopoulos L, Baum G, Gur R, Gur R, et al. The modular organization of human anatomical brain networks: Accounting for the cost of wiring. 2016. arXiv:1608.01161

[9] Li Y, Liu Y, Li J, Qin W, Li K, Yu C, et al. Brain anatomical network and intelligence. PLoS Computational Biology. 2009;**5**(5):e1000395

[10] Gollo LL, Roberts JA, Cocchi L. Mapping how local perturbations influence systems-level brain dynamics. NeuroImage. 2017;**160**:97-112

[11] Dayan E, Censor N, Buch ER, Sandrini M, Cohen LG. Noninvasive brain stimulation: From physiology to network dynamics and back. Nature Neuroscience. 2013;**16**:838-844

[12] Kaiser M. Mechanisms of connectome development. Trends in Cognitive Sciences. 2017;**21**:703-717

[13] Voytek B, Knight RT. Dynamic network communication as a unifying neural basis for cognition, development, aging, and disease. Biological Psychiatry. 2015;**77**:1089-1097

[14] Vashistha R et al. Artificial intelligence integration for neurodegenerative disorders. In: Leveraging Biomedical and Healthcare Data. Academic Press; 2019. pp. 77-89

[15] Xu J, Nguyen AT, Yang Z. Advances in neural recording and stimulation devices. In: Engineering in Medicine. Academic Press; 2019. pp. 335-363

[16] Laufs H et al. EEG-correlated fMRI of human alpha activity. NeuroImage. 2003;**19**(4):1463-1476

[17] Varela F et al. The brainweb: Phase synchronization and large-scale integration. Nature Reviews Neuroscience. 2001;**2**(4):229

[18] Snyder AZ. Intrinsic brain activity and resting state networks. In: Neuroscience in the 21st Century. 2016. pp. 1-52

[19] Avena-Koenigsberger A, Misic B, Sporns O. Communication dynamics in complex brain networks. Nature Reviews Neuroscience. 2017;**19**:17-33

[20] Bassett DS, Sporns O. Network neuroscience. Nature Neuroscience. 2017;**20**(3):353

[21] Rubinov M, Sporns O. Complex network measures of brain connectivity: Uses and interpretations. NeuroImage. 2010;**52**(3):1059-1069

## **Chapter 6**

## Integration of Spiking Neural Networks for Understanding Interval Timing

*DOI: http://dx.doi.org/10.5772/intechopen.89781*

*Nicholas A. Lusk*

## **Abstract**

The ability to perceive the passage of time in the seconds-to-minutes range is a vital and ubiquitous characteristic of life. This ability allows organisms to make behavioral changes based on the temporal contingencies between stimuli and the potential rewards they predict. While the psychophysical manifestations of time perception have been well-characterized, many aspects of its underlying biology are still poorly understood. A major contributor to this is limitations of current *in vivo* techniques that do not allow for proper assessment of signaling over micro-, meso-, and macroscopic spatial scales. Alternatively, the integration of biologically inspired artificial neural networks (ANNs) based on the dynamics and cyto-architecture of brain regions associated with time perception can help mitigate these limitations and, in conjunction, provide a powerful tool for progressing research in the field. To this end, this chapter aims to: (1) provide insight into the biological complexity of interval timing, (2) outline limitations in our ability to accurately assess these neural mechanisms *in vivo,* and (3) demonstrate potential application of ANNs for better understanding the biological underpinnings of temporal processing.

**Keywords:** interval timing, time perception, neural oscillators, dopamine, basal ganglia, artificial neural networks, spiking neural networks

## **Highlights**

• Examine neural signatures, circuitry dynamics, and neuromodulator pathways related to interval timing behavior

• Address limitations in current *in vivo* techniques in studying timing

• Discuss the use of artificial neural networks for understanding neural dynamics and timing

• Identify properties of thalamocortical dynamics that may be integral to time perception
## **1. Introduction**

When it comes to understanding the neural underpinnings of time perception, the devil is in the details. As all events inevitably unfold in time, there is no shortage of potential "timing" signals. However, behavioral tasks often possess an inherent relationship between time, spatial location, and external signals, making it difficult to isolate activity dedicated to timing per se. As a result, timing correlates have been observed across nearly all regions of the cortex [1–4] as well as sub-cortical areas and the cerebellum [5, 6]. Reflecting this anatomical diversity, the number of theoretical models dedicated to timing is also vast, and they utilize a diverse range of firing dynamics such as oscillatory [7], ramping [8], or synfire chains [9]. Yet, which of these various timing motifs contribute to a unified perception of time, and to what degree, remains unclear [10].

A contributing factor to the multitude of timing theories is the limitations of current *in vivo* techniques, which can be spatially restrictive, produce ambiguous information, and contain representation biases. Though remarkable strides have been made in expanding the scope of techniques used to record, image, and modulate neural activity, the capacity to selectively manipulate and/or effectively observe the propagation of activity from a large population of neurons within a particular brain region, or across multiple regions, is limited. With theories of temporal processing spanning the microscopic level of intrinsic cellular [11, 12] and network dynamics [13–15] to the macroscopic interplay between multiple brain regions [16], understanding how animals track the passage of time has been an arduous task through reliance on *in vivo* techniques alone.

A promising avenue to help circumvent the aforementioned limitations is the integration of biologically inspired ANNs. While neural networks have been around for over half a century [17], recent years have seen a resurgent interest in their development and application within neuroscience. Spurred by substantial advancements in computational power, the ability for labs to integrate complex, biologically constrained neural networks is more viable than ever before. Yet while the integration and development of biologically inspired ANNs in neuroscience has been steadily growing for many decades, their adoption in the field of time and time perception has been comparatively slow.

Though examples do exist, current efforts have remained limited [18–20]. Moreover, these networks often lack characteristics considered vital for biological realism, such as bidirectional activation propagation or Hebbian-based learning [21] - characteristics that see widespread use in other fields. Many of these models utilize rate-based units, applying 'activation functions' and highly simplified network motifs, limiting the temporal dynamics of the network. While these simplified networks provide valuable theoretical insight, the incorporation of spiking neural networks (SNNs), with biologically inspired 'spiking' units and network architectures, allows SNNs to capture neural dynamics more analogous to the biological systems on which they are based and permits a more critical assessment of timing theories. The integration of SNNs can allow researchers to visualize the propagation of temporal information, as well as exert finer control over neuromodulatory systems, in ways not possible in other models.

**Figure 1.**
*Proposed cycle for integration of SNNs. Schematic visualization of integrating SNNs into empirical process.*

The aim of this chapter is to demonstrate how the field of timing and time perception can benefit from the implementation of biologically constrained SNNs. In so doing, limitations of current *in vivo* recording and imaging techniques will be addressed, along with examples of how SNNs can be used to circumvent such constraints and facilitate hypothesis-driven research (**Figure 1**). Lastly, specific outstanding questions in the field of interval timing that could most benefit from integration of SNN models will be identified.

## **2. Interval timing networks and dynamics**

Neural correlates of time perception have been observed across nearly the entire cortical mantle. In theory, any pattern that remains consistent for a given duration yet varies across durations is capable of acting as a biological timer. As these requirements are not particularly stringent, electrophysiological recordings have uncovered many candidate patterns. The prefrontal cortex (PFC) alone contains ramping, peaking, and oscillatory activity meeting such criteria [22–26]. In addition to spiking activity, EEG and MEG recordings in humans [27–29] demonstrate a robust relationship between mesoscopic oscillations and timing behavior.

## **2.1 Cortical spiking and mesoscopic oscillations**

Electrophysiological studies in rodents and non-human primates have shown robust correlations between cortical activity and timing behavior across multiple brain regions. Yet, how dynamics within and across cortical regions contribute to these behaviors is still unclear. Though ever expanding, limitations in our current knowledge of how information propagates across interconnected brain regions have made drawing causative relationships between timing behavior and specific neural signals difficult.

One of the most studied neural correlates of timing is 'ramping activity' (**Figure 2A**). These monotonic increases or decreases in firing rate have been observed within the prefrontal [25, 26], primary motor [30], and posterior parietal cortex [31, 32] during various timing tasks. Along with its seemingly ubiquitous presence within the cortex, key observations further indicate ramping as a viable timing mechanism: (i) ramping activity has been demonstrated to occur across multiple timescales, from hundreds of milliseconds to multiple seconds; (ii) the rate of change can be adjusted through learning different durations; (iii) the rate of change during a reproduction task is dependent on the presented duration. Specifically, recordings of neuronal activity in the lateral inferior parietal cortex (LIP) of macaques trained on a temporal reproduction task found that changes in firing rate were inversely proportional to the duration being produced. That is, shorter durations had steeper ramping activity, therefore reaching threshold sooner [31].

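The ramp-to-threshold account described above can be illustrated with a minimal toy model (purely illustrative and not the model used in the cited studies; the baseline and threshold values are invented): a unit's firing rate climbs linearly to a fixed threshold, so a shorter target duration demands a steeper ramp.

```python
# Minimal ramp-to-threshold sketch mirroring the inverse slope/duration
# relationship reported in LIP [31]. All parameter values are illustrative.
THRESHOLD = 40.0  # spikes/s at which the interval is judged elapsed (invented)
BASELINE = 5.0    # spikes/s starting rate (invented)

def ramp_slope(duration_s):
    """Slope (spikes/s per s) required to reach threshold exactly at duration_s."""
    return (THRESHOLD - BASELINE) / duration_s

def rate_at(t, duration_s):
    """Firing rate at elapsed time t for a ramp calibrated to duration_s."""
    return BASELINE + ramp_slope(duration_s) * t

# Halving the target duration doubles the required slope...
assert ramp_slope(1.0) == 2 * ramp_slope(2.0)
# ...and every calibrated ramp reaches the same threshold at its target time.
assert abs(rate_at(2.0, 2.0) - THRESHOLD) < 1e-9
```

In this scheme the same threshold-crossing event signals "time's up" for any learned duration; only the slope is retuned, which is one reason ramping is attractive as a general timing mechanism.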

However, pharmacological inactivation of ramping activity within these same regions of the cortex, namely the posterior parietal cortex (PPC), during evidence accumulation tasks has negligible effects on stimulus categorization [33]. Conversely, decreasing prefrontal cholinergic concentrations reduced temporal

### **Figure 2.**

*Distribution of timing correlates and associated activity patterns. (A) Simulation of ramping activity as observed in cortical networks during timing tasks, parameterized using values from a model of neural integration (Simon et al. 2011): theoretical accumulation of activity (bottom) and distribution of times to threshold (top). (B) Heatmap of simulated activity, normalized by maximum firing rate, depicting trajectory dynamics found in the cortex, hippocampus, and striatum. (C) Depiction of pauses in Purkinje cell activity within the cerebellum during a 300 millisecond delay (ISI) in Pavlovian eye-lid conditioning.*

precision without disrupting ramping activity, demonstrating a potential dissociation between ramping and timing behavior [34]. Other regions have demonstrated a more causal relationship: namely, ramping in the frontal orienting field (FOF) was found to be necessary for proper performance. Together, these mixed findings possibly implicate ramping activity as a general computational motif within cortical circuits, of which the timing-related component is localized to particular cortical regions.

Recent evidence suggests that neuronal ramping may not, in fact, be ramping at all, but an artifact of bi-stable neuron activity averaged over multiple trials. Analysis of spiking activity within the LIP during individual trials of a motion discrimination task showed that 31 of 40 neurons exhibited 'stepping' behavior [35], as opposed to the deterministic gradual increase expected of a truly ramping dynamic (c.f. [36, 37]). These findings parallel human studies, which have cast doubt on the role of the contingent negative variation (CNV) - an EEG correlate of ramping activity - in temporal processing, owing to the poor predictive ability of the CNV and the high temporal accuracy demonstrated even after the resolution of the signal [38]. How ramping activity develops also remains unclear [39, 40].

Larger scale recordings, containing 55–120 simultaneously recorded neurons, have painted a rather different picture of spiking activity within the cortex during timing. Bakhurin and colleagues [22] found that putative projection neurons in the orbital frontal cortex (OFC) displayed sequential activity that tiled a 1.5 s delay period following an olfactory cue. This type of firing is reminiscent of "time cell" activity recorded in other brain regions, such as the striatum [6, 41] and hippocampus [42, 43], providing a parsimonious representation of timing signals across distinct brain regions (**Figure 2B**).
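As a toy illustration of such sequential "time cell" tiling (not a model from the cited studies; the cell count, tuning widths, and peak times are all invented), staggered Gaussian tuning curves spanning a delay period carry a decodable elapsed-time signal:

```python
# Hypothetical "time cell" population tiling a 1.5 s delay: each cell has a
# Gaussian tuning curve with a staggered peak time, so elapsed time can be read
# out coarsely from which cell is currently most active.
import numpy as np

DELAY = 1.5       # seconds, matching the OFC delay period described above
N_CELLS = 10
peaks = np.linspace(0.1, DELAY - 0.1, N_CELLS)  # staggered peak times (s)
width = 0.15                                    # tuning-curve width (s), invented

def population_rates(t):
    """Normalized firing rates of all cells at elapsed time t."""
    return np.exp(-0.5 * ((t - peaks) / width) ** 2)

def decode_time(t):
    """Read out elapsed time as the peak time of the most active cell."""
    return peaks[np.argmax(population_rates(t))]

# The readout is coarse (one cell's spacing) but tracks true elapsed time.
for t in (0.2, 0.75, 1.3):
    assert abs(decode_time(t) - t) < DELAY / N_CELLS
```

The same decoding logic applies whether the tiling arises in OFC, striatum, or hippocampus, which is one sense in which sequential activity offers a parsimonious timing representation across regions.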

At the mesoscopic level, oscillatory activity within cortical regions has also been theorized as an underlying mechanism for time perception. As with spiking correlates of timing, neural oscillations are pervasive throughout the brain and implicated in a multitude of cognitive processes such as attention, memory, movement preparation and even consciousness [44]. Yet, their computational role in these processes is generally unresolved [45]. Nevertheless, researchers have long


*Integration of Spiking Neural Networks for Understanding Interval Timing*


*DOI: http://dx.doi.org/10.5772/intechopen.89781*


*New Frontiers in Brain-Computer Interfaces*

**Figure 2.**

*Distribution of timing correlates and associated activity patterns. (A) Simulation of ramping activity as observed in cortical networks during timing tasks, parameterized using values from a model of neural integration (Simon et al. 2011). Theoretical accumulation of activity (bottom) and distribution of time to threshold (top). (B) Heatmap of simulated activity normalized by max firing rate depicting trajectory dynamics found in the cortex, hippocampus, and striatum. (C) Depiction of pauses in Purkinje cell activity within the cerebellum during 300 millisecond delay (ISI) Pavlovian eye-lid conditioning.*


Increases in delta-range (~4 Hz) oscillations in the medial prefrontal cortex (mPFC) were shown to negatively impact the temporal precision of rats performing a 12 s fixed-interval task. Pharmacological attenuation of these increases in delta through blockade of D1 dopamine receptor (D1DR) signaling mitigated the associated deficits in timing [2]. Interestingly, D1DR+ neurons in the prefrontal cortex have strong delta-frequency coherence with a subset of neurons exhibiting ramping activity, implicating a direct link between microscopic spiking and mesoscopic oscillations during timing [48]. Additional evidence supports the existence of spike-phase relationships within the mPFC, particularly in the theta frequency range (5–10 Hz) [23, 49, 50]. However, Benchenane et al. [23] demonstrated that spike-phase entrainment to theta in the mPFC only occurs during times of high coherence between mPFC and hippocampal (HIPP) theta and is most prominent at times requiring encoding or retrieval of spatial memory. This property likely makes it too transient to track time over multiple seconds. Increases in cortical theta have also been associated with interval timing tasks, where sustained increases in cortical theta power occur during the encoding of the standard duration in a temporal comparison task [51, 52], though whether coherence between HIPP and mPFC remains high across this entire duration has not been tested.

As with ramping activity's relation to timing behavior, the coupling of spikes to oscillations appears to be region specific. Though oscillatory activity has been observed within the cortex across multiple frequency bands, there is limited support for individual neurons' firing rates entraining to these rhythms. For example, though timing behavior correlates with cortical beta (~15–30 Hz) activity within the dorso- and ventro-lateral prefrontal cortex, premotor cortex, and posterior parietal cortex [28], recent work suggests that spiking activity demonstrates minimal coupling to beta rhythms [53].
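Spike-to-rhythm coupling of the kind discussed above is commonly quantified with the mean resultant vector length of spike phases (a standard circular statistic; the simulated spike phases below are illustrative, not data from the cited studies):

```python
import math
import random

def phase_locking(spike_phases):
    """Mean resultant vector length of spike phases (radians):
    0 = no entrainment, 1 = perfect phase locking."""
    n = len(spike_phases)
    c = sum(math.cos(p) for p in spike_phases) / n
    s = sum(math.sin(p) for p in spike_phases) / n
    return math.hypot(c, s)

random.seed(0)

# Entrained unit: spikes cluster around one phase of the rhythm.
entrained = [math.pi + random.gauss(0.0, 0.3) for _ in range(500)]
# Non-entrained unit: spikes land at uniformly random phases.
uncoupled = [random.uniform(0.0, 2.0 * math.pi) for _ in range(500)]

# phase_locking(entrained) is near 1; phase_locking(uncoupled) near 0.
```

In practice each spike's phase is read off a band-limited LFP (e.g. the Hilbert transform of beta-filtered signal), and significance is assessed against shuffled spike trains.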

## **2.2 Neuromodulators and time perception**

Timing behavior has been shown to be highly susceptible to manipulations of neuromodulators such as dopamine (DA: [54–58]), serotonin (5-HT: [59–61]), and acetylcholine (ACh: [34, 62, 63]). Additionally, patients suffering from disorders involving these pathways [64–66] demonstrate systematic changes in their timing ability.

Dopamine has been the most widely studied neuromodulator in the field of interval timing. Despite this depth of research, many questions remain, as research has produced seemingly paradoxical effects. Early psychopharmacological studies demonstrated bidirectional shifts in timing accuracy (i.e. over- or underestimations of a target duration) after administration of DA agonists and antagonists, respectively [55–57, 67]. This work suggested that DA changes the speed of a subject's internal timing mechanism (i.e. a "clock speed" effect), which could manifest as changes in the slope of ramping neurons or in oscillator frequency. However, other research suggested that administration of dopaminergic drugs such as the selective D2 and D3 agonist quinpirole disrupts timing precision rather than accuracy through modifying attentional processes [58, 68]. While later work using a variation of the peak interval procedure supported the changes in accuracy, the directionality of the peak shifts did not align with the original "clock speed" hypothesis [69]. Subsequent experiments further demonstrated these effects to be sensitive to non-temporal aspects of the task, similar to the "attentional modulation" hypothesis.

The wide repertoire of timing behaviors related to DA modulation may be a product of its diffuse circuitry [70, 71]. Alternatively, the complexity may result from interactions with other neuromodulatory pathways. Electrophysiological evidence suggests DA neurons exert tonic excitatory control over 5-HT neurons within the raphe nucleus [72]. In fact, in a mouse model of Parkinson's disease, 6-OHDA lesions led to increases in the spontaneous firing as well as the maximum firing rate of 5-HT neurons in the dorsal raphe nucleus [73].

Administration of the 5-HT1A receptor agonist 8-OH-DPAT modulated timing precision on retrospective timing tasks, while immediate timing tasks saw changes in accuracy [74]. However, the contribution of 5-HTergic pathways to timing behavior remains tenuous, as many of these studies were unable to dissociate changes in interval timing from intertemporal choice [60]. In fact, more recent work indicates 5-HT to be more strongly associated with intertemporal choice [75] along with additional factors such as reward rate and temporal uncertainty [76]. This relationship appears to be bidirectional, as 5-HT projections from the dorsal raphe nucleus can modulate DA release directly through excitatory synapses onto VTA dopamine neurons [77]. A selective 5-HT2c ligand shown to increase vigor and persistence in goal-directed behavior also leads to increased tonic DA levels in the dorsal medial striatum [78]. This interaction within the DMS is likely driven by excitation of striatal cholinergic interneurons, which can drive action potentials independent of DA release [79, 80], thus linking cholinergic pathways to an already complex system.

Despite considerable work on the role of neuromodulators in timing behavior, the mechanisms responsible for these effects have yet to be fully dissociated. These systems contain multiple origins with diffuse and overlapping targets. Moreover, direct as well as indirect connections between these pathways make it difficult to confidently assign credit to a single pathway, and behavioral changes are often dependent on the parameterization of the particular study. As a result, non-traditional methods allowing for more systematic modulation of the respective pathways are likely necessary to fully disentangle their individual roles in timing behavior.

## **3. Limitations of current** *in vivo* **techniques**

Substantial advancements in *in vivo* techniques are now allowing exceptional insight into neuronal dynamics. The number of simultaneously recorded single neurons has nearly doubled every 7 years [81]. Coupled with the growth of open-source hardware systems, access to these powerful technologies is becoming more feasible and cost effective [82]. However, these techniques are still limited in the amount of information they can reliably produce about the dynamic properties of the brain. This has led to debate over the sufficiency of correlations between firing activity and behavioral output. Additionally, many of the current methods used for manipulating endogenous activity suffer from a lack of specificity. The aforementioned diffuseness of the interval timing network, coupled with its sensitivity to neuromodulation, limits the insight available from *in vivo* techniques alone.

## **3.1 Recording and imaging**

Extracellular *in vivo* recordings have long been the technique of choice for linking neural activity to ongoing behavior through monitoring action potentials (APs) along with more mesoscopic neural activity in the form of local field potentials (LFPs). Though advancements in recording techniques have mitigated some of the uncertainty in isolating an individual cell's activity, a sizeable degree of error still exists. Though quantification of this error, referred to as the 'spike sorting problem', is difficult due to the lack of 'ground truth' data, estimates suggest semi-automatic clustering error with tetrodes to be on the order of 5–10%, and substantially higher (upwards of 30%) for manual cluster cutting, a process still popular in many labs [83]. Furthermore, the uncertain origin of signals such as gamma range oscillations makes interpretations speculative [84].


A second class of errors that can produce specious conclusions is 'selection bias'. Combined intra- and extracellular recordings within CA1 of the hippocampus demonstrated that despite an estimated 140 neurons within the recording distance of a single tetrode, rarely are more than a dozen signals ever detected [85]. While innocuous contributions such as acute edema and glial encapsulation can lead to significant decreases in signal strength and subsequent cell counts [86], other causes such as under-classification of cells with low firing rates can skew researchers' interpretation of genuine network dynamics. Furthermore, differential firing rates between regions, such as higher firing rates in deep layers of the cortex in comparison to pyramidal cells of the superficial layers [87, 88], may bias researchers toward studying areas with high spiking and subsequently overestimating a region's role in the overarching circuit. Recent attempts at addressing and quantifying the quality of *in vivo* recordings are a step toward lessening the effect of electrical artifacts [89, 90], though further work in this direction is still needed.

In addition to classic recording techniques, calcium imaging has become a popular tool for visualizing activity. Imaging permits precise spatial mapping of activity [91, 92] and mitigates many of the limitations of recording, such as the 'spike sorting problem' and 'cell selection bias' [93]. Further improvements in fluorescent indicators [94, 95] and scanning techniques [96] have overcome past limitations in sampling rate, allowing for detection of somatic calcium transients evoked during action potentials.

However, in non-laminar, low-cell-density brain regions the number of cells that can be simultaneously observed is highly restricted. Sub-cortical areas such as the striatum can be limited to fewer than 40 cells [97] and require significant damage to regions dorsal to those being imaged. Paired with the inability to sufficiently account for dynamic interactions within a single brain region, imaging techniques are even more limited when attempting to study interactions across multiple regions. In all, while *in vivo* techniques are among the most valuable tools for understanding the relationship between neural signals and behavior, alternative methods used in conjunction can provide richer insight into not only regional dynamics, but also interregional interactions.

## **3.2 Pharmacological, chemogenetic, and optical manipulations**

While electrophysiological recording and imaging provide insight into endogenous activity, much of our knowledge of how the brain senses the passage of time has arisen from the manipulation of timing networks. Yet, there are often deviations between a researcher's intent and the actual alterations within the brain. The main contributors are a lack of specificity and incomplete knowledge of the technique being used.

While foundational to many theories of time perception, pharmacological manipulations are the most susceptible to confounding interactions. Even drugs touted as selective can display affinity for non-target receptors. Concretely, the commonly used D1 antagonist SCH-23390 also demonstrates high affinity for serotonin receptor subtypes 5-HT2 and 5-HT1C [98]. While its affinity for D1 receptors is much higher than that for 5-HT receptors, the expression of both receptor types within this region could explain why timing behavior following local infusions of SCH-23390 into the DMS has been difficult to interpret [99], as 5-HT2 activation within the striatum has been shown to indirectly reduce striatal MSN activity [100]. Conversely, timing effects attributed to serotonin could be mediated through, or in conjunction with, indirect increases in DA, as has been demonstrated for 5-HT2 agonists such as psilocybin [101].

In an effort to minimize off-target effects there has been a renaissance in the development of chemogenetic and optogenetic techniques. Vaunted for their ability to mitigate the confounds of pharmacological methods, a growing literature reveals that these approaches come with their own set of drawbacks. While chemogenetic approaches such as DREADDs (Designer Receptors Exclusively Activated by Designer Drugs) have helped alleviate some of the uncertainty of pharmacological manipulations, recent work shows that CNO (clozapine N-oxide), the most commonly used agonist in DREADD studies, can back-metabolize into clozapine and has the potential to accumulate in amounts capable of activating endogenous receptors [102]. Importantly, clozapine has been demonstrated to affect temporal accuracy as well as the flexible use of timing mechanisms [103, 104]. While this limitation can be addressed through the use of proper CNO controls in addition to transitioning to low-dose clozapine, these steps constrain DREADDs' extended duration of action, likely their greatest advantage over optical techniques.

As with chemogenetic approaches, optogenetic methods have not been immune to technical setbacks, even after widespread implementation. Despite over a decade of use in neuroscience, new caveats in the effectiveness of microbial opsins are still being discovered. pH-dependent calcium influxes from sustained activation of inhibitory proton-pump opsins such as eArch3.0 can increase spontaneous neurotransmitter release during terminal stimulation [105]. Inhibitory Cl- channels (i.e. eNpHR3.0 & GtACR1), on the other hand, can drive axonal spiking through positively shifted chloride reversal potentials, leading to unintended spiking at the onset as well as the offset of stimulation [105, 106].

## **4. Artificial neural networks (ANNs) in timing**

Despite being inspired largely by neuroscience, ANNs were initially touted for their powerful computational versatility rather than as reliable models of neural or cognitive phenomena. Since their conception, the emergence of conductance-based units, biologically inspired architectures, and learning rules has made them an invaluable tool for elucidating how the brain works.

## **4.1 Importance of spiking neural networks (SNNs)**

The 'neuronal' unit embodies the fundamental computational element of an ANN and plays a vital role in the overall capacity of the network. As such, computational neuroscientists have dedicated substantial work to transforming early 'neuronal' units, based on the binary threshold unit (i.e. McCulloch-Pitts neurons), into conductance-based 'spiking' models [107–111] that include both detailed biophysical models and simple phenomenological models. Implementation of SNNs allows for rate and temporal dynamics nearly equivalent to those found in biological systems [112].
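The simplest widely used phenomenological spiking unit is the leaky integrate-and-fire (LIF) neuron; a minimal Euler-integration sketch (illustrative parameter values) shows how a threshold-and-reset rule turns continuous membrane dynamics into discrete spikes:

```python
def simulate_lif(i_ext, t_end=0.5, dt=1e-4, tau=0.02,
                 v_rest=-0.070, v_thresh=-0.050, v_reset=-0.070, r_m=1e8):
    """Euler integration of a leaky integrate-and-fire neuron.

    i_ext: constant input current (A). Returns a list of spike times (s)."""
    v = v_rest
    spikes = []
    for k in range(int(t_end / dt)):
        # Membrane equation: tau * dV/dt = -(V - V_rest) + R_m * I
        v += ((v_rest - v) + r_m * i_ext) * dt / tau
        if v >= v_thresh:          # threshold crossing -> emit a spike
            spikes.append(k * dt)
            v = v_reset            # reset and keep integrating
        # (no explicit refractory period in this minimal sketch)
    return spikes

# Suprathreshold drive (R_m * I = 30 mV > the 20 mV gap to threshold)
# fires regularly; subthreshold drive (15 mV) never reaches threshold.
regular = simulate_lif(0.3e-9)
silent = simulate_lif(0.15e-9)
```

Units like this are the phenomenological end of the spectrum the text describes; biophysical models (e.g. Hodgkin-Huxley-style conductances) replace the single leak term with voltage-dependent channel dynamics at much greater computational cost.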

However, with the increased biological complexity comes an increase in computational cost (i.e. the number of floating-point operations per 1 ms of simulation). This has led many to opt for computationally simpler units with continuous 'activation functions' rather than spiking dynamics. While these 'rate-based' networks have proven successful in revealing network architectures conducive to timing [113] as well as how shifts in excitability drive timing activity [114], they remove temporal components of neural signaling, which limits their explanatory power. Electrophysiological evidence from neural recordings within the superior temporal sulcus (STS: [115]) and motor cortex (M1: [116]) indicates the brain relies heavily on temporal coding (i.e. coherent inputs). Importantly, increases in coincident spiking correspond with temporally relevant timepoints independent of changes in firing rate [117].


SNN models of thalamocortical [118] and corticostriatal [119] networks, both implicated in proper timing behavior, found that sharp transitions in spiking activity are important for normal functioning within the network. This speaks to the diversity of spiking patterns present in the brain that can be captured by spiking units [120], yet are absent in rate-based networks. The strongest advocates for SNNs question whether any evidence exists for rate coding within the brain [121], placing the onus on those who utilize rate-based networks to justify not choosing SNNs.

Additionally, SNNs allow for implementation of both Hebbian and error-based learning rules. This is of note as the network learning rule can have dramatic effects on the connectivity and dictate whether the network develops a feedforward topology scarce in both closed and unique loops or strong reciprocal connectivity motifs like those found in the neocortex [122, 123]. Implementation of these spiking neuron models can lead to dramatic improvements in network performance [124]. The effects of neuromodulators on plasticity also involve tight temporal windows (0.3–2 s) between glutamate release and the presence of DA [125, 126], which are difficult to model properly in rate-based networks.
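The canonical Hebbian rule available to SNNs but not to rate-based networks is spike-timing-dependent plasticity (STDP), where the sign and magnitude of a weight change depend on the pre/post spike order within a milliseconds-scale window. A minimal pair-based sketch (amplitudes and time constant are illustrative, not from [122, 123]):

```python
import math

A_PLUS, A_MINUS = 0.010, 0.012  # potentiation / depression amplitudes
TAU_STDP = 0.020                # 20 ms plasticity time constant

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair (times in seconds).

    Pre-before-post pairings potentiate; post-before-pre pairings depress,
    with both effects decaying exponentially as the pairing interval grows."""
    dt = t_post - t_pre
    if dt >= 0:
        return A_PLUS * math.exp(-dt / TAU_STDP)
    return -A_MINUS * math.exp(dt / TAU_STDP)

# Causal pairing strengthens, anti-causal pairing weakens, and tight
# pairings change the weight more than loose ones.
dw_causal = stdp_dw(0.000, 0.005)
dw_anticausal = stdp_dw(0.005, 0.000)
dw_loose = stdp_dw(0.000, 0.050)
```

Because the rule is a function of spike times, it has no direct analogue in a rate-based unit, which is the point being made above; neuromodulated variants additionally gate these updates by DA availability within the cited 0.3–2 s window.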

## **4.2 Current use of SNNs in timing research**

Generally absent from earlier ANNs, implicit and/or explicit representations of time within neural network models are a relatively recent development [127]. Despite this late adoption, there has been a recent surge in research devoted to elucidating the neural substrates of temporal processing, with a myriad of distinct network motifs being proposed [128]. These models vary in biological realism as it pertains to unit dynamics, network structure, and learning capabilities (**Table 1**), influencing not only the dynamical repertoire of the network, but also the ability to extrapolate findings to the biological systems they are looking to study. That being said, even in their most simplistic form, with little adherence to known biology, network models can provide theoretical insight [132], but such models are highly limited.

Adherence to the underlying neurobiology, allows network models to provide deeper understanding into neural substrates of temporal processing. As interpreting *in vivo* pharmacological manipulations must be done with caution, with many drugs acting on multiple neuromodulatory systems, the ability to isolate these neuromodulatory systems in SNNs allows for uniquely systematic approach. Specifically, a cortico-striatal network model allowed researchers to isolate the effects of changing DA concentrations, free of potential 5-HT confounds, and replicating DA's effect on 'clock speed' [130]. In this same model, modulation of Ach produced the accuracy changes found from systemic injections of drugs acting on the cholinergic pathway [67]. Interestingly, Ach modulation in a hippocampal network produced variations in task precision [134] rather than accuracy. Taken together, these networks may provide evidence indicating dissociated networks for these effects. More recent work has also helped elucidate the potential topographical mapping of duration length across the dorsal-ventral axis of the

*DOI: http://dx.doi.org/10.5772/intechopen.89781*

properly model in rate-based models.

are highly limited.

hippocampus [129, 135].

**4.2 Current use of SNNs in timing research**

firing rate [117].

*Integration of Spiking Neural Networks for Understanding Interval Timing DOI: http://dx.doi.org/10.5772/intechopen.89781*

*New Frontiers in Brain-Computer Interfaces*

agonists such as Psilocybin [101].

likely its greatest advantages over optical techniques.

onset as well as offset of stimulation [105, 106].

**4. Artificial neural network in timing (ANNs)**

invaluable tool for elucidating how the brain works.

**4.1 Importance of spiking neural networks (SNNs)**

within this region could explain why timing behavior following local infusions of SCH-23390 into the DMS has been difficult to interpret [99]. As 5-HT2 activation within the striatum has been shown to indirectly reduce striatal MSN activity [100]. Conversely, timing effects attributed to serotonin could be mediated through or in conjunction with indirect increases in DA, which has been demonstrated in 5-HT2

In an effort to minimize off-target effects there has been a renaissance in the development of chemogenetic and optogenetic techniques. Vaunted for their ability to mitigate the confounds of pharmacological methods, a growing literature is revealing these approaches come with their own set of drawbacks. While chemogenetic approaches such as DREADDs (Designer Receptors Exclusively Activated by Designer Drugs) has helped alleviate some of the uncertainty from pharmacological manipulations, recent work shows that CNO (clozapine N-oxide), the most commonly used agonist in DREADDs, can back metabolize into clozapine and has the potential to accumulate in amounts capable of activating endogenous receptors [102]. Importantly, clozapine has been demonstrated to affect temporal accuracy as well as the flexible use of timing mechanisms [103, 104]. While this limitation can be addressed through the use of proper CNO controls in addition to transitioning to low-dose clozapine, these steps constrain DREADDs extended duration of action,

As with chemogenetic approaches, optogenetic methods have not been immune from technical setbacks, even after widespread implementation. Despite over a decade of use in neuroscience, new caveats in the effectiveness of microbial opsins are still being discovered. pH-dependent calcium influxes from sustained activation of inhibitory proton pump opsins such as eArch3.0 can increase spontaneous neurotransmitter release during terminal stimulation [105]. Inhibitory Cl- channels (i.e. eNpHR3.0 & GtACR1), on the other hand, can drive axonal spiking through positively shifted chloride reversal potentials leading to unintended spiking at the

Despite being inspired largely by neuroscience, ANNs were initially touted for their powerful computational versatility rather than reliable models of neural or cognitive phenomena. Since their conception, the emergence of conductance-based units, biologically inspired architectures and learning rules has made them an

The 'neuronal' unit embodies the fundamental computational element of an ANN and plays a vital role in the overall capacity of the network. As such, computational neuroscientists have dedicated substantial work transforming early 'neuronal' units, based on the binary threshold unit (i.e. McCulloch-Pitts neurons) into conductance-based 'spiking' models [107–111] that include both detailed biophysical models and simple phenomenological models. Implementation of SNNs allows for rate and temporal dynamics nearly equivalent to those found in biological

However, with the increased biological complexity comes an increase in computational cost (i.e. number of floating-point operations per 1-ms of simulation). This has led many to opt for computationally simpler units with continuous 'activationfunctions' rather than spiking dynamics. While these 'rate-based' networks have

**96**

systems [112].

proven successful in revealing network architectures conducive to timing [113] as well as how shifts in excitability drive timing activity [114], they remove temporal components of neural signaling, which limits their explanatory power. Electrophysiological evidence from neural recordings within the superior temporal sulcus (STS: [115]) and motor cortex (M1: [116]) indicates the brain relies heavily on temporal coding (i.e. coherent inputs). Importantly, increases in coincident spiking correspond with temporally relevant timepoints independent of changes in firing rate [117].
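The contrast between spiking and rate-based units is easy to make concrete. A minimal leaky integrate-and-fire (LIF) unit, one of the simple phenomenological spiking models mentioned above, can be sketched as follows; the parameter values are illustrative, not taken from any cited model:

```python
import numpy as np

def simulate_lif(i_input, dt=1e-4, tau=0.02, v_rest=-0.070,
                 v_thresh=-0.050, v_reset=-0.070, r_m=1e7):
    """Euler integration of a leaky integrate-and-fire neuron.

    i_input: injected current (A) at each timestep.
    Returns the membrane trace (V) and a list of spike times (s).
    """
    v = v_rest
    trace, spikes = [], []
    for step, i_t in enumerate(i_input):
        # dV/dt = (-(V - V_rest) + R_m * I) / tau
        v += dt * (-(v - v_rest) + r_m * i_t) / tau
        if v >= v_thresh:              # threshold crossing: emit a spike
            spikes.append(step * dt)
            v = v_reset                # hard reset after the spike
        trace.append(v)
    return np.array(trace), spikes

# 2.5 nA of constant drive for 200 ms yields regular spiking
v_trace, spike_times = simulate_lif(np.full(2000, 2.5e-9))
```

Swapping the one-line subthreshold dynamics for Hodgkin-Huxley-style conductances recovers the detailed biophysical end of the spectrum, at a much higher per-step cost — which is exactly the trade-off driving the choice of rate units.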

SNN models of thalamocortical [118] and corticostriatal [119] networks, both implicated in proper timing behavior, found that sharp transitions in spiking activity are important for normal functioning within the network. This speaks to the diversity of spiking patterns present in the brain that can be captured by spiking units [120], yet are absent in rate-based networks. The strongest advocates for SNNs question whether any evidence exists for rate-coding within the brain [121], thus placing the onus on those who utilize rate-based networks to justify not choosing SNNs.

Additionally, SNNs allow for implementation of both Hebbian and error-based learning rules. This is of note as the network learning rule can have dramatic effects on connectivity, dictating whether the network develops a feedforward topology, scarce in both closed and unique loops, or strong reciprocal connectivity motifs like those found in the neocortex [122, 123]. Implementation of these spiking neuron models can lead to dramatic improvements in network performance [124]. The effects of neuromodulators on plasticity also depend on tight temporal windows (0.3–2 s) between glutamate release and the presence of DA [125, 126], which is difficult to model properly in rate-based networks.

## **4.2 Current use of SNNs in timing research**

Generally absent from earlier ANNs, implicit and/or explicit representations of time are a relatively recent addition to neural network models [127]. Despite this late adoption, there has been a recent surge in research devoted to elucidating the neural substrates of temporal processing, with a myriad of distinct network motifs being proposed [128]. These models vary in biological realism as it pertains to unit dynamics, network structure, and learning capabilities (**Table 1**), influencing not only the dynamical repertoire of the network, but also the ability to extrapolate findings to the biological systems they aim to study. That being said, even in their most simplistic form, with little adherence to known biology, network models can provide theoretical insight [132], but such models are highly limited.

Adherence to the underlying neurobiology allows network models to provide deeper understanding of the neural substrates of temporal processing. As interpreting *in vivo* pharmacological manipulations must be done with caution, with many drugs acting on multiple neuromodulatory systems, the ability to isolate these systems in SNNs allows for a uniquely systematic approach. Specifically, a cortico-striatal network model allowed researchers to isolate the effects of changing DA concentrations, free of potential 5-HT confounds, replicating DA's effect on 'clock speed' [130]. In this same model, modulation of Ach produced the accuracy changes found with systemic injections of drugs acting on the cholinergic pathway [67]. Interestingly, Ach modulation in a hippocampal network produced variations in task precision [134] rather than accuracy. Taken together, these networks may provide evidence indicating dissociated networks for these effects. More recent work has also helped elucidate the potential topographical mapping of duration length across the dorsal-ventral axis of the hippocampus [129, 135].
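The 'clock speed' effect lends itself to a compact illustration. In pacemaker-accumulator terms — a deliberate toy simplification, not the cortico-striatal model of [130] — DA scales the pacemaker rate while readout assumes the baseline rate:

```python
def perceived_duration(true_duration, base_rate=5.0, da_gain=1.0):
    """Toy pacemaker-accumulator sketch (illustrative rates).

    A pacemaker emits pulses at base_rate * da_gain (Hz); the
    accumulated count is decoded against the baseline rate, so a
    DA-accelerated clock (da_gain > 1) makes an objective interval
    feel longer.
    """
    pulses = base_rate * da_gain * true_duration  # accumulated count
    return pulses / base_rate                     # decoded duration (s)
```

For example, `perceived_duration(10.0, da_gain=1.2)` returns `12.0`: the faster clock accumulates 20% more pulses, so a 10 s interval is judged as 12 s, consistent with the direction of the 'clock speed' effect described above.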


**Table 1.**
*Recent publications of ANN models in the study of interval timing: Oprisan et al. [129]; Oprisan and Buhusi [130]; Reutimann et al. [131]; Hilton and Parter [132]; Mikael and Gershman [113]; Laje and Buonomano [19]; Simen et al. (2011); and Perez and Merchant [133]. Models are compared by unit type (spiking: ML\*, LIF, or probabilistic threshold; rate units; Gaussian 'state' activity), network properties (recursive connections, lateral inhibition, neuromodulators), learning rule (Hebbian or error-driven), and principal findings (among them: modulator effects in the SBF framework, bidirectional DA modulation through an RPE framework, firing-rate adaptation accounting for 'ramping' activity, neural trajectories as a timing framework, temporal integration, and the bias and scalar properties).*
*\* While a spiking model, membrane potential, not spikes, was used as unit output.*

At the synaptic level, computational work using leaky integrate-and-fire neurons demonstrated that hallmarks of temporal processing such as the 'bias property' and 'scalar property' are strongly influenced by GABAb receptor dynamics [133]. While outside the field of interval timing, work looking at the relationship of pre-post spike pairings to spike-timing dependent plasticity (STDP) has shown LTP/LTD dynamics are better explained by 'nearest-neighbor' inputs rather than an 'all-to-all' motif [136]. Together, these findings place limits on theories utilizing temporal integration as a potential mechanism for tracking the passage of time, such as those relying on ramping dynamics. Additional theories relating synaptic plasticity to motor-timing, a sub-field of interval timing, are grounded heavily in computational work due to the difficulty of assessing these ideas *in vivo* [137].
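The 'nearest-neighbor' versus 'all-to-all' distinction in [136] refers to which pre/post spike pairings contribute to plasticity. A pair-based sketch, with illustrative amplitudes and time constants rather than the fitted values from that study, makes the difference explicit:

```python
import numpy as np

def stdp_delta_w(pre, post, a_plus=0.01, a_minus=0.012,
                 tau=0.020, scheme="all-to-all"):
    """Pair-based STDP weight change for two spike trains (times in s).

    'all-to-all' sums over every pre/post pairing; 'nearest-neighbor'
    pairs each post spike only with the closest preceding pre spike
    (LTP) and each pre spike with the closest preceding post spike (LTD).
    """
    dw = 0.0
    if scheme == "all-to-all":
        for t_post in post:
            for t_pre in pre:
                dt = t_post - t_pre
                if dt > 0:    # pre before post -> potentiation
                    dw += a_plus * np.exp(-dt / tau)
                elif dt < 0:  # post before pre -> depression
                    dw -= a_minus * np.exp(dt / tau)
    else:  # nearest-neighbor
        for t_post in post:
            earlier = [t for t in pre if t < t_post]
            if earlier:
                dw += a_plus * np.exp(-(t_post - max(earlier)) / tau)
        for t_pre in pre:
            earlier = [t for t in post if t < t_pre]
            if earlier:
                dw -= a_minus * np.exp(-(t_pre - max(earlier)) / tau)
    return dw

dw_all = stdp_delta_w([0.010, 0.030], [0.015, 0.035], scheme="all-to-all")
dw_nn = stdp_delta_w([0.010, 0.030], [0.015, 0.035], scheme="nearest-neighbor")
# the extra, more distant pairings make the all-to-all estimate larger
```

Because the all-to-all scheme accumulates contributions from temporally distant pairings that the nearest-neighbor scheme ignores, the two rules diverge exactly when spike trains are dense relative to `tau` — the regime in which [136] found nearest-neighbor interactions to be the better fit.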

These results align with animal work demonstrating how important biophysical features of neuron signaling, such as receptor kinetics, directly influence the timing of durations up to 100s of milliseconds [12]. Therefore, researchers must be vigilant in selecting the appropriate level of biological complexity for the question they are looking to address. In this way, time perception may be more sensitive than other senses such as vision, where retention of the biological computations can often be sufficient for producing similarities between representations in higher-order processing regions [138].


While the above examples have focused on the ability of SNNs to provide support for particular theories, these models have proven equally useful in excluding alternative theories. For example, ramping activity in the cortex, a potential neural manifestation of timing, can arise from various network architectures. Two such networks employ either recurrent synaptic facilitation or firing-rate adaptation. Yet additional firing properties seen during delay response tasks, namely equivalent responding to matching and non-matching stimuli, are only evident in networks based on firing-rate adaptation [131]. Additionally, it had been postulated that time perception could evolve from sequentially firing populations of neurons [139] and may underlie temporal pattern formation in songbirds, a subset of motor timing [140]. However, recent work demonstrated this architecture is incapable of producing fundamental properties of interval timing such as scale invariance [141]. The ability to cast doubt on proposed timing mechanisms is an important quality of computational models. If the technique were flexible enough to validate all theories, it would be of little value.
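Scale invariance, the property the sequential-firing architecture fails to produce [141], has a compact statistical signature: response distributions superimpose when normalized by the timed interval, i.e. the coefficient of variation (CV) is constant across durations. A minimal simulation of that signature (the Weber fraction of 0.15 is an illustrative value, not taken from the cited work):

```python
import numpy as np

def weber_cv(target_s, weber_fraction=0.15, n=10_000, seed=1):
    """CV of simulated temporal estimates whose spread grows in
    proportion to the timed interval (the scalar property):
    sd = weber_fraction * mean."""
    rng = np.random.default_rng(seed)
    estimates = rng.normal(target_s, weber_fraction * target_s, n)
    return estimates.std() / estimates.mean()

# Scale invariance: the CV stays roughly constant across durations
cvs = [weber_cv(t) for t in (2.0, 8.0, 32.0)]
```

A model that times by sequential population firing with fixed per-unit noise would instead show CV shrinking as duration grows, which is one way such an architecture can be rejected against behavioral data.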

## **4.3 Future directions for SNNs in timing research**

Integration of SNN models has proven to be an exciting and fruitful avenue for better understanding neural dynamics related to interval timing. However, there is still ample room for growth. With the implementation of SNNs still in its early stages, the vast majority of these models lack the biological realism necessary to address open questions in the field. Three areas that have either received little attention or would benefit from greater focus are (1) recurrent interactions between timing circuits, (2) neuromodulator effects on timing signals, and (3) biologically based models of temporal learning.

As previously mentioned, time perception is supported by an expansive network of brain regions. However, the vast majority of network models aimed at understanding interval timing are at odds with the multi-regional, recurrent nature of the brain. Models of visual processing have shown that recursive networks capture multi-regional cortical dynamics absent in strictly feed-forward models [142], as well as behavioral interactions between reaction time and uncertainty [143]. Recurrent connections may also aid in spontaneously developing high degrees of sparseness within a network, like that seen in neocortical circuits [144], which allows for larger networks without compromising effectiveness [145]. In addition to being recursive, connection probabilities vary within and between brain regions. Specifically, cortical areas tend to form 'small world' motifs, connecting more often with nearby cells [146, 147].
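One hypothetical way to express this distance dependence when wiring a model network is an exponentially decaying connection probability; the ring layout, decay constant, and peak probability below are illustrative assumptions, not measured cortical values:

```python
import numpy as np

def distance_dependent_adjacency(n=200, lam=0.05, p_max=0.5, seed=0):
    """Random directed adjacency in which connection probability decays
    exponentially with distance on a ring, so cells connect more often
    with nearby neighbors (a locally clustered, 'small world'-like motif)."""
    rng = np.random.default_rng(seed)
    pos = np.linspace(0.0, 1.0, n, endpoint=False)
    d = np.abs(pos[:, None] - pos[None, :])
    d = np.minimum(d, 1.0 - d)          # wrap-around ring distance
    p = p_max * np.exp(-d / lam)        # nearby pairs most likely
    np.fill_diagonal(p, 0.0)            # no self-connections
    return rng.random((n, n)) < p

adj = distance_dependent_adjacency()
```

Adding a small uniform probability of long-range connections on top of this local kernel is one standard route to the short path lengths that, together with local clustering, define a small-world topology.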

In other branches of neuroscience, SNNs have shown promise in their ability to selectively manipulate interactions between, as well as distinct activity within, neuromodulatory pathways. One of particular interest to understanding time perception is phasic and tonic DA signaling. These modes of DA release are believed to be differentially regulated [148] as well as serve different behavioral purposes [149], though our knowledge is still limited. An SNN of the basal ganglia aimed at understanding PD pathology demonstrated, through methodical control of either phasic or tonic activity, that each system differentially contributes to Parkinsonian akinesia and tremors [150]. As tonic-phasic interactions have been shown to have paradoxical effects in drug-seeking behavior [151], SNNs

may prove invaluable for understanding how individual modulation of these two DA dynamics may contribute to the paradoxical effects seen in interval timing studies of DA.

SNNs also offer deeper insight into how different neuromodulatory pathways interact in order to produce learned behaviors. Investigating the computational roles of neuromodulated STDP in the hippocampus, researchers demonstrated the importance of DA and Ach interactions on learning during a navigation task [134]. Of particular interest was the ability of Ach to enhance precision in navigation, while DA dominated learning overall. This provides insight into a potential mechanism for increases in temporal precision seen from perinatal choline supplementation [152] and dissociates these from changes in accuracy that can accompany increases in precision when Ach is pharmacologically increased [153]. This result expands upon a growing literature dedicated to better understanding synaptic plasticity through spiking neural models [154–156]. Unfortunately, up until this point SNNs dedicated to time perception that have addressed the role of neuromodulators have done so through implementation of their proposed effects, rather than through plasticity directly. Additionally, very few models have used a Hebbian learning rule of any type.
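One way models could address plasticity directly, rather than its proposed effects, is a reward-modulated STDP rule: each pre/post pairing leaves a decaying eligibility trace, and DA arriving while the trace is active converts it into a lasting weight change. This sketch uses illustrative constants and is not drawn from the cited models:

```python
import numpy as np

def rstdp_weight_change(pairing_times, da_times, tau_e=1.0, eta=0.05):
    """Reward-modulated STDP sketch (times in s).

    A pairing at t_pair leaves an eligibility trace decaying with
    time constant tau_e; DA at t_da >= t_pair converts whatever trace
    remains into a weight change, capturing the tight pairing-to-DA
    temporal window discussed earlier in the chapter."""
    dw = 0.0
    for t_pair in pairing_times:
        for t_da in da_times:
            delay = t_da - t_pair
            if delay >= 0:                    # DA arrives after the pairing
                dw += eta * np.exp(-delay / tau_e)
    return dw

# DA arriving 0.5 s after a pairing reinforces far more than at 5 s
early = rstdp_weight_change([0.0], [0.5])
late = rstdp_weight_change([0.0], [5.0])
```

Because the trace decays on the order of seconds, this rule naturally reproduces a window in the 0.3–2 s range when `tau_e` is set accordingly, making it a candidate substrate for the glutamate-to-DA timing constraint noted in Section 4.1.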

## **5. Limitations of SNNs**

The innate connectivity pattern between neurons plays an important role in shaping the trajectory of neural activity within the brain. To this end, biological models can only be as good as our knowledge of the underlying biology. Within the striatum, slight deviations from the biologically relevant range of recurrent connectivity between MSNs are sufficient to suppress the regular sequential firing patterns of coherent cell assemblies found in *in vivo* recordings [157, 158]. While previous neural network models investigating temporal processing within the cerebellum have benefited from its well-characterized, highly conserved cytoarchitecture [159–161], this is not a luxury afforded to those attempting to construct models of other brain regions such as the striatum. Constituting 1.3% of total brain volume in humans [162] and 4% in rodents [163], the dorsal striatum lacks clear internal divisions, making it difficult to model accurately. In response, the past decade has witnessed a prodigious effort to map the brain's connectivity.

Benefiting from advancements in microscopy and neuroanatomical tracing techniques, recent endeavors have uncovered functional domains within regions of the dorsal striatum based on innervation from cortical afferents [164], which can be a valuable tool for modeling timing networks [165]. Further work at the synaptic level will allow connectivity parameters within neural network models to be tuned more closely to those seen in the brain. However, as this chapter has demonstrated, it is not only through *in vivo* work that our knowledge of the underlying biology can be expanded. Though confirmation inevitably relies on such techniques, theoretical as well as computational breakthroughs can be made elsewhere.

Along with biological limitations, implementation of many large-scale models, whether large in neuron count or in complexity, relies on high-speed supercomputers or computing clusters. In labs where computational modeling is not the primary focus, it is impractical to invest the time or money in such resources, which places a ceiling on the size and complexity of their simulations. While cost reductions in hardware dedicated to simulating the highly parallel nature of the brain will inevitably address the largest barrier to widespread use of SNNs, providing intuitive software is vital. Along with closed-source options such as MATLAB,


a fervent movement is currently underway to provide highly versatile packages in Python, an open-source, high-level, dynamic programming language. Simulators such as Nengo [166], NEST [167], and Brian [168] provide varying degrees of control over network properties, allowing users to model neurons and circuits at various levels and permitting detailed models on general-purpose computing hardware [169].

## **6. Conclusion**


The ubiquity of temporal structure within the brain has made identifying the exact neural processes an arduous task. Single- and multi-unit recordings, along with more recent imaging techniques, have revealed a myriad of neural activity profiles which may underlie temporal processing. Despite technical advancements, the complex interactions between neuromodulators, neuronal activity, and timing behavior have left our current understanding of how the brain tracks durations across multiple seconds decidedly unclear. A promising avenue for overcoming the limitations of current *in vivo* methods is the incorporation of practices from outside the field, such as the integration of SNNs: specifically, observing how temporal information flows in biologically constrained networks and how systematic manipulation of individual neuromodulatory systems changes timing behavior.

Though ANNs are already a staple in other domains of neuroscience, their integration into the field of time and time perception has remained relatively rudimentary, with limited attention paid to biological constraints. It is important to note that these models cannot settle the exact neural mechanism; rather, they provide insight into potentially fruitful possibilities. While ANNs that rely heavily on algorithmic abstractions can proliferate theoretical models of timing, a perfect digital recreation of the brain merely exchanges one black box for another. In this way, the use of SNNs for understanding time perception is neither truly top-down nor bottom-up, but is best approximated as 'middle-out'.

We are now at a point where widely available computational resources possess the power necessary to construct SNNs that retain much of the complexity of biological neural networks. The ability to visualize activity profiles and connectivity patterns across thousands of modeled neurons, within and across brain regions, provides a level of analysis unavailable through current *in vivo* recording and imaging techniques. Furthermore, simulating potential avenues for future studies can assess the robustness of competing models by their ability to predict behavioral changes from *in vivo* modulation. Deeper integration of SNNs within the field of timing will provide a powerful resource for understanding the plausible neural underpinnings of timing within the brain.

*New Frontiers in Brain-Computer Interfaces*

## **Author details**

Nicholas A. Lusk
Department of Psychology and Neuroscience, Duke University, Durham, NC, USA

\*Address all correspondence to: nicholas.lusk@duke.edu

© 2019 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


*Integration of Spiking Neural Networks for Understanding Interval Timing DOI: http://dx.doi.org/10.5772/intechopen.89781*

## **References**


[1] Namboodiri VMK, Huertas MA, Monk KJ, Shouval HZ, Shuler MGH. Visually cued action timing in the primary visual cortex. Neuron. 2015;**86**(1):319-330

[2] Narayanan NS, Land BB, Solder JE, Deisseroth K, DiLeone RJ. Prefrontal D1 dopamine signaling is required for temporal control. Proceedings of the National Academy of Sciences. 2012;**109**(50):20726-20731

[3] Shuler MGH. Timing in the visual cortex and its investigation. Current Opinion in Behavioral Sciences. 2016;**8**:73-77

[4] Xu M, Zhang SY, Dan Y, Poo MM. Representation of interval timing by temporally scalable firing patterns in rat prefrontal cortex. Proceedings of the National Academy of Sciences. 2014;**111**(1):480-485

[5] Lusk NA, Petter EA, Mac Donald CJ, Meck WH. Cerebellar, hippocampal, and striatal time cells. Current Opinion in Behavioral Sciences. 2016;**8**:186-192

[6] Matell MS, Meck WH, Nicolelis MA. Interval timing and the encoding of signal duration by ensembles of cortical and striatal neurons. Behavioral Neuroscience. 2003;**117**(4):760

[7] Matell MS, Meck WH. Corticostriatal circuits and interval timing: Coincidence detection of oscillatory processes. Cognitive Brain Research. 2004;**21**(2):139-170

[8] Leon MI, Shadlen MN. Representation of time by neurons in the posterior parietal cortex of the macaque. Neuron. 2003;**38**(2):317-327

[9] Haß J, Blaschke S, Rammsayer T, Herrmann JM. A neurocomputational model for optimal temporal processing. Journal of Computational Neuroscience. 2008;**25**(3):449-464

[10] Petter EA, Lusk NA, Hesslow G, Meck WH. Interactive roles of the cerebellum and striatum in sub-second and supra-second timing: Support for an initiation, continuation, adjustment, and termination (ICAT) model of temporal processing. Neuroscience and Biobehavioral Reviews. 2016

[11] Johansson F, Carlsson HA, Rasmussen A, Yeo CH, Hesslow G. Activation of a temporal memory in Purkinje cells by the mGluR7 receptor. Cell Reports. 2015;**13**(9):1741-1746

[12] Johansson F, Jirenhed DA, Rasmussen A, Zucca R, Hesslow G. Memory trace and timing mechanism localized to cerebellar Purkinje cells. Proceedings of the National Academy of Sciences. 2014;**111**(41):14930-14934

[13] Goel A, Buonomano DV. Timing as an intrinsic property of neural networks: Evidence from in vivo and in vitro experiments. Philosophical Transactions of the Royal Society, B: Biological Sciences. 2014;**369**(1637):20120460

[14] Goel A, Buonomano DV. Temporal interval learning in cortical cultures is encoded in intrinsic network dynamics. Neuron. 2016;**91**(2):320-327

[15] Karmarkar UR, Buonomano DV. Temporal specificity of perceptual learning in an auditory discrimination task. Learning and Memory. 2003;**10**(2):141-147

[16] Van Rijn H, Gu BM, Meck WH. Dedicated clock/timing-circuit theories of time perception and timed performance. In: Neurobiology of Interval Timing. New York, NY: Springer; 2014. pp. 75-99

[17] McCulloch WS, Pitts W. A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics. 1943;**5**(4):115-133

[18] Hardy NF, Goudar V, Romero-Sosa JL, Buonomano DV. A model of temporal scaling correctly predicts that motor timing improves with speed. Nature Communications. 2018;**9**(1):4732

[19] Laje R, Buonomano DV. Robust timing and motor patterns by taming chaos in recurrent neural networks. Nature Neuroscience. 2013;**16**(7):925-933

[20] Wang J, Narain D, Hosseini EA, Jazayeri M. Flexible timing by temporal scaling of cortical responses. Nature Neuroscience. 2018;**21**(1):102

[21] O'Reilly RC. Six principles for biologically based computational models of cortical cognition. Trends in Cognitive Sciences. 1998;**2**(11):455-462

[22] Bakhurin KI, Goudar V, Shobe JL, Claar LD, Buonomano DV, Masmanidis SC. Differential encoding of time by prefrontal and striatal network dynamics. Journal of Neuroscience. 2017;**37**(4):854-870

[23] Benchenane K, Peyrache A, Khamassi M, Tierney PL, Gioanni Y, Battaglia FP, et al. Coherent theta oscillations and reorganization of spike timing in the hippocampal-prefrontal network upon learning. Neuron. 2010;**66**(6):921-936

[24] Brody CD, Hernández A, Zainos A, Romo R. Timing and neural encoding of somatosensory parametric working memory in macaque prefrontal cortex. Cerebral Cortex. 2003;**13**(11):1196-1207

[25] Kim J, Ghim JW, Lee JH, Jung MW. Neural correlates of interval timing in

rodent prefrontal cortex. The Journal of Neuroscience. 2013;**33**(34):13834-13847

[26] Rainer G, Rao SC, Miller EK. Prospective coding for objects in primate prefrontal cortex. The Journal of Neuroscience. 1999;**19**(13):5493-5505

[27] Arnal LH, Doelling KB, Poeppel D. Delta–beta coupled oscillations underlie temporal prediction accuracy. Cerebral Cortex. 2014, bhu103

[28] Kulashekhar S, Pekkola J, Palva JM, Palva S. The role of cortical beta oscillations in time estimation. Human Brain Mapping. 2016;**37**(9):3262-3281

[29] Praamstra P, Kourtis D, Kwok HF, Oostenveld R. Neurophysiology of implicit timing in serial choice reaction-time performance. Journal of Neuroscience. 2006;**26**(20):5448-5455

[30] Knudsen EB, Powers ME, Moxon KA. Dissociating movement from movement timing in the rat primary motor cortex. The Journal of Neuroscience. 2014;**34**(47):15576-15586

[31] Jazayeri M, Shadlen MN. A neural mechanism for sensing and reproducing a time interval. Current Biology. 2015;**25**(20):2599-2609

[32] Quintana J, Fuster JM. From perception to action: Temporal integrative functions of prefrontal and parietal neurons. Cerebral Cortex. 1999;**9**(3):213-221

[33] Erlich JC, Brunton BW, Duan CA, Hanks TD, Brody CD. Distinct effects of prefrontal and parietal cortex inactivations on an accumulation of evidence task in the rat. eLife. 2015;**4**:e05457

[34] Zhang Q, Jung D, Larson T, Kim Y, Narayanan N. Scopolamine and medial frontal stimulus-processing during interval timing. bioRxiv. 2019:598862


[35] Latimer KW, Yates JL, Meister ML, Huk AC, Pillow JW. Single-trial spike trains in parietal cortex reveal discrete steps during decision-making. Science. 2015;**349**(6244):184-187


[36] Shadlen MN, Kiani R, Newsome WT, Gold JI, Wolpert DM, Zylberberg A, et al. Comment on "single-trial spike trains in parietal cortex reveal discrete steps during decision-making". Science. 2016;**351**(6280):1406-1406

[37] Zylberberg A, Shadlen MN. Cause for pause before leaping to conclusions about stepping. bioRxiv. 2016:085886

[38] Kononowicz TW, van Rijn H. Decoupling interval timing and climbing neural activity: A dissociation between CNV and N1P2 amplitudes. The Journal of Neuroscience. 2014;**34**(8):2931-2939

[39] Durstewitz D, Deco G. Computational significance of transient dynamics in cortical networks. European Journal of Neuroscience. 2008;**27**(1):217-227

[40] Kononowicz TW, van Wassenhove V. In search of oscillatory traces of the internal clock. Frontiers in Psychology. 2016;**7**:224

[41] Mello GB, Soares S, Paton JJ. A scalable population code for time in the striatum. Current Biology. 2015;**25**(9):1113-1122

[42] Kraus BJ, Robinson RJ, White JA, Eichenbaum H, Hasselmo ME. Hippocampal "time cells": Time versus path integration. Neuron. 2013;**78**(6):1090-1101

[43] MacDonald CJ, Lepage KQ, Eden UT, Eichenbaum H. Hippocampal "time cells" bridge the gap in memory for discontiguous events. Neuron. 2011;**71**(4):737-749

[44] Ward LM. Synchronous neural oscillations and cognitive processes. Trends in Cognitive Sciences. 2003;**7**(12):553-559

[45] Sejnowski TJ, Paulsen O. Network oscillations: Emerging computational principles. Journal of Neuroscience. 2006;**26**(6):1673-1676

[46] Miall C. The storage of time intervals using oscillating neurons. Neural Computation. 1989;**1**(3):359-371

[47] Miall RC, Wolpert DM. Forward models for physiological motor control. Neural Networks. 1996;**9**(8):1265-1279

[48] Kim YC, Narayanan NS. Prefrontal D1 dopamine-receptor neurons and delta resonance in interval timing. Cerebral Cortex. 2018;**29**(5):2051-2060

[49] Jones MW, Wilson MA. Theta rhythms coordinate hippocampal– prefrontal interactions in a spatial memory task. PLoS Biology. 2005;**3**(12):e402

[50] Sirota A, Montgomery S, Fujisawa S, Isomura Y, Zugaro M, Buzsáki G. Entrainment of neocortical neurons and gamma oscillations by the hippocampal theta rhythm. Neuron. 2008;**60**(4):683-697

[51] Gu BM, Jurkowski AJ, Shi Z, Meck WH. Bayesian optimization of interval timing and biases in temporal memory as a function of temporal context, feedback, and dopamine levels in young, aged and Parkinson's disease patients. Timing and Time Perception. 2016;**4**(4):315-342

[52] Gu BM, van Rijn H, Meck WH. Oscillatory multiplexing of neural population codes for interval timing and working memory. Neuroscience and Biobehavioral Reviews. 2015;**48**:160-185

[53] Rule ME, Vargas-Irwin CE, Donoghue JP, Truccolo W. Dissociation between sustained single-neuron spiking β-rhythmicity and transient β-LFP oscillations in primate motor cortex. Journal of Neurophysiology. 2017;**117**(4):1524-1543

[54] Cheng RK, Ali YM, Meck WH. Ketamine "unlocks" the reduced clock-speed effects of cocaine following extended training: Evidence for dopamine–glutamate interactions in timing and time perception. Neurobiology of Learning and Memory. 2007;**88**(2):149-159

[55] Maricq AV, Church RM. The differential effects of haloperidol and methamphetamine on time estimation in the rat. Psychopharmacology. 1983;**79**(1):10-15

[56] Meck WH. Selective adjustment of the speed of internal clock and memory processes. Journal of Experimental Psychology: Animal Behavior Processes. 1983;**9**(2):171

[57] Meck WH. Affinity for the dopamine D2 receptor predicts neuroleptic potency in decreasing the speed of an internal clock. Pharmacology Biochemistry and Behavior. 1986;**25**(6):1185-1189

[58] Santi A, Weise L, Kuiper D. Amphetamine and memory for event duration in rats and pigeons: Disruption of attention to temporal samples rather than changes in the speed of the internal clock. Psychobiology. 1995;**23**(3):224-232

[59] Asgari K, Body S, Rickard JF, Zhang Z, Fone KCF, Bradshaw CM, et al. Effects of quipazine and m-chlorophenylbiguanide (m-CPBG) on the discrimination of durations: Evidence for the involvement of 5-HT2A but not 5-HT3 receptors. Behavioural Pharmacology. 2005;**16**(1):43-51

[60] Ho MY, Velázquez-Martínez DN, Bradshaw CM, Szabadi E. 5-Hydroxytryptamine and interval timing behaviour. Pharmacology Biochemistry and Behavior. 2002;**71**(4):773-785

[61] Wittmann M, Carter O, Hasler F, Cahn BR, Grimberg U, Spring P, et al. Effects of psilocybin on time perception and temporal control of behaviour in humans. Journal of Psychopharmacology. 2007;**21**(1):50-64

[62] Meck WH. Choline uptake in the frontal cortex is proportional to the absolute error of a temporal memory translation constant in mature and aged rats. Learning and Motivation. 2002;**33**(1):88-104

[63] Meck WH. Distortions in the content of temporal memory. In: Animal Cognition and Sequential Behavior. Boston, MA: Springer; 2002. pp. 175-200

[64] Malapani C, Deweer B, Gibbon J. Separating storage from retrieval dysfunction of temporal memory in Parkinson's disease. Journal of Cognitive Neuroscience. 2002;**14**(2):311-322

[65] Penney TB, Meck WH, Roberts SA, Gibbon J, Erlenmeyer-Kimling L. Interval-timing deficits in individuals at high risk for schizophrenia. Brain and Cognition. 2005;**58**(1):109-118

[66] Rammsayer T. Temporal discrimination in schizophrenic and affective disorders: Evidence for a dopamine-dependent internal clock. International Journal of Neuroscience. 1990;**53**(2-4):111-120

[67] Meck WH. Neuropharmacology of timing and time perception. Cognitive Brain Research. 1996;**3**(3):227-242

[68] Stanford L, Santi A. The dopamine D2 agonist quinpirole disrupts attention to temporal signals without selectively altering the speed of the internal clock. Psychobiology. 1998;**26**(3):258-266


[69] Lake JI, Meck WH. Differential effects of amphetamine and haloperidol on temporal reproduction: Dopaminergic regulation of attention and clock speed. Neuropsychologia. 2013;**51**(2):284-292

[70] Meck WH. Neuroanatomical localization of an internal clock: A functional link between mesolimbic, nigrostriatal, and mesocortical dopaminergic systems. Brain Research. 2006;**1109**(1):93-107

[71] Takeuchi T, Duszkiewicz AJ, Sonneborn A, Spooner PA, Yamasaki M, Watanabe M, et al. Locus coeruleus and dopaminergic consolidation of everyday memory. Nature. 2016;**537**(7620):357

[72] Di Giovanni G, Di Matteo V, Pierucci M, Esposito E. Serotonin– dopamine interaction: Electrophysiological evidence. Progress in Brain Research. 2008;**172**:45-71

[73] Prinz A, Selesnew LM, Liss B, Roeper J, Carlsson T. Increased excitability in serotonin neurons in the dorsal raphe nucleus in the 6-OHDA mouse model of Parkinson's disease. Experimental Neurology. 2013;**248**:236-245

[74] Chiang TJ, Al-Ruwaitea ASA, Mobini S, Ho MY, Bradshaw CM, Szabadi E. Effects of 8-hydroxy-2-(di-n-propylamino)tetralin (8-OH-DPAT) on performance on two operant timing schedules. Psychopharmacology. 2000;**151**(4):379-391

[75] Heilbronner SR, Meck WH. Dissociations between interval timing and intertemporal choice following administration of fluoxetine, cocaine, or methamphetamine. Behavioural Processes. 2014;**101**:123-134

[76] Miyazaki K, Miyazaki KW, Yamanaka A, Tokuda T, Tanaka KF, Doya K. Reward probability and timing uncertainty alter the effect of dorsal raphe serotonin neurons on patience. Nature Communications. 2018;**9**(1):2048

[77] Wang HL, Zhang S, Qi J, Wang H, Cachope R, Mejias-Aponte CA, et al. Dorsal raphe dual serotonin-glutamate neurons drive reward by establishing excitatory synapses on VTA mesoaccumbens dopamine neurons. Cell Reports. 2019;**26**(5):1128-1142

[78] Bailey MR, Goldman O, Bello EP, Chohan MO, Jeong N, Winiger V, et al. An interaction between serotonin receptor signaling and dopamine enhances goal-directed vigor and persistence in mice. Journal of Neuroscience. 2018;**38**(9):2149-2162

[79] Bonsi P, Cuomo D, Ding J, Sciamanna G, Ulrich S, Tscherter A, et al. Endogenous serotonin excites striatal cholinergic interneurons via the activation of 5-HT 2C, 5-HT6, and 5-HT7 serotonin receptors: Implications for extrapyramidal side effects of serotonin reuptake inhibitors. Neuropsychopharmacology. 2007;**32**(8):1840

[80] Nelson AB, Hammack N, Yang CF, Shah NM, Seal RP, Kreitzer AC. Striatal cholinergic interneurons drive GABA release from dopamine terminals. Neuron. 2014;**82**(1):63-70

[81] Stevenson IH, Kording KP. How advances in neural recording affect data analysis. Nature Neuroscience. 2011;**14**(2):139-142

[82] Siegle JH, Hale GJ, Newman JP, Voigts J. Neural ensemble communities: Open-source approaches to hardware for large-scale electrophysiology. Current Opinion in Neurobiology. 2015;**32**:53-59

[83] Harris KD, Henze DA, Csicsvari J, Hirase H, Buzsáki G. Accuracy of tetrode spike separation as determined by simultaneous intracellular and extracellular measurements. Journal of Neurophysiology. 2000;**84**(1):401-414

[84] Buzsáki G, Schomburg EW. What does gamma coherence tell us about inter-regional neural communication? Nature Neuroscience. 2015;**18**(4): 484-489

[85] Buzsáki G. Large-scale recording of neuronal ensembles. Nature Neuroscience. 2004;**7**(5):446-451

[86] Moffitt MA, McIntyre CC. Modelbased analysis of cortical recording with silicon microelectrodes. Clinical Neurophysiology. 2005;**116**(9):2240-2250

[87] De Kock CPJ, Bruno RM, Spors H, Sakmann B. Layer-and cell-typespecific suprathreshold stimulus representation in rat primary somatosensory cortex. The Journal of Physiology. 2007;**581**(1):139-154

[88] Sakata S, Harris KD. Laminar structure of spontaneous and sensoryevoked population activity in auditory cortex. Neuron. 2009;**64**(3):404-418

[89] Friend DM, Kemere C, Kravitz AV. Quantifying recording quality in in vivo striatal recordings. Current Protocols in Neuroscience. 2015;**70**(1):6-28

[90] Harris KD, Quiroga RQ, Freeman J, Smith SL. Improving data quality in neuronal population recordings. Nature Neuroscience. 2016;**19**(9):1165-1174

[91] Dombeck DA, Graziano MS, Tank DW. Functional clustering of neurons in motor cortex determined by cellular resolution imaging in awake behaving mice. The Journal of Neuroscience. 2009;**29**(44):13751-13760 [92] Kampa BM, Göbel W, Helmchen F. Measuring neuronal population activity using 3D laser scanning. Cold Spring Harbor Protocols. 2011, 2011;**11**:pdbprot 066597

[93] Gerhard F, Pipa G, Lima B, Neuenschwander S, Gerstner W. Extraction of network topology from multi-electrode recordings: Is there a small-world effect? Frontiers in Computational Neuroscience. 2011;**5**:4

[94] Chen TW, Wardill TJ, Sun Y, Pulver SR, Renninger SL, Baohan A, et al. Ultrasensitive fluorescent proteins for imaging neuronal activity. Nature. 2013;**499**(7458):295-300

[95] Ohkura M, Sasaki T, Kobayashi C, Ikegaya Y, Nakai J. An improved genetically encoded red fluorescent Ca 2+ indicator for detecting optically evoked action potentials. PLoS One. 2012;**7**(7):e39933

[96] Grewe BF, Langer D, Kasper H, Kampa BM, Helmchen F. High-speed in vivo calcium imaging reveals neuronal network activity with nearmillisecond precision. Nature Methods. 2010;**7**(5):399-405

[97] Sato M, Kawano M, Yanagawa Y, Hayashi Y. In vivo two-photon imaging of striatal neuronal circuits in mice. Neurobiology of Learning and Memory. 2016;**135**:146-151

[98] Bischoff S, Heinrich M, Sonntag JM, Krauss J. The D-1 dopamine receptor antagonist SCH 23390 also interacts potently with brain serotonin (5-HT2) receptors. European Journal of Pharmacology. 1986;**129**(3):367-370

[99] De Corte BJ, Wagner LM, Matell MS, Narayanan NS. Striatal dopamine and the temporal control of behavior. Behavioural Brain Research. 2019;**356**:375-379

**109**

*Integration of Spiking Neural Networks for Understanding Interval Timing*

activity. Journal of Neurophysiology.

[109] Izhikevich EM. Simple model of spiking neurons. IEEE Transactions on Neural Networks. 2003;**14**(6):1569-1572

[111] Touboul J. Bifurcation analysis of a general class of nonlinear integrate-andfire neurons. SIAM Journal on Applied Mathematics. 2008;**68**(4):1045-1079

[112] Maass W. Networks of spiking neurons: The third generation of neural network models. Neural Networks.

[113] Mikhael JG, Gershman SJ. Adapting

[115] Perrett DI, Rolls ET, Caan W. Visual neurones responsive to faces in the monkey temporal cortex. Experimental Brain Research. 1982;**47**(3):329-342

[116] Riehle A, Grün S, Diesmann M, Aertsen A. Spike synchronization and rate modulation differentially involved in motor cortical function. Science.

[117] Grammont F, Riehle A. Spike synchronization and firing rate in a

1997;**278**(5345):1950-1953

the flow of time with dopamine. Journal of Neurophysiology. 2019;**121**(5):1748-1760

[114] Hardy NF, Buonomano DV. Encoding time in feedforward trajectories of a recurrent neural network model. Neural Computation.

[110] Morris C, Lecar H. Voltage oscillations in the barnacle giant muscle fiber. Biophysical Journal.

[108] Hodgkin AL, Huxley AF. A quantitative description of membrane

current and its application to conduction and excitation in nerve. The Journal of Physiology.

2005;**94**(5):3637-3642

1952;**117**(4):500-544

1981;**35**(1):193-213

1997;**10**(9):1659-1671

2018;**30**(2):378-396

*DOI: http://dx.doi.org/10.5772/intechopen.89781*

[100] Miguelez C, Morera-Herreras T, Torrecilla M, Ruiz-Ortega JA,

Ugedo L. Interaction between the 5-HT system and the basal ganglia: Functional implication and therapeutic perspective in Parkinson's disease. Frontiers in

Neural Circuits. 2014;**8**:21

1999;**20**(5):424

2007;**74**(2):159-167

2005;**182**(2):232-244

2016;**19**(4):554

[101] Vollenweider FX, Vontobel P, Hell D, Leenders KL. 5-HT modulation of dopamine release in basal ganglia in psilocybin-induced psychosis in man—A PET study with [11 C] raclopride. Neuropsychopharmacology.

[102] Gomez JL, Bonaventura J,

[103] Buhusi CV, Meck WH. Effect of clozapine on interval timing and working memory for time in the peak-interval procedure with gaps. Behavioural Processes.

[104] MacDonald CJ, Meck WH. Differential effects of clozapine and haloperidol on interval timing in the supraseconds range. Psychopharmacology.

[105] Mahn M, Prigge M, Ron S, Levy R, Yizhar O. Biophysical constraints of optogenetic inhibition at presynaptic terminals. Nature Neuroscience.

[106] Li N, Chen S, Guo ZV, Chen H, Huo Y, Inagaki H, et al. Spatiotemporal limits of optogenetic manipulations in cortical circuits. bio Rxiv. 2019:642215

[107] Brette R, Gerstner W. Adaptive exponential integrate-and-fire model as an effective description of neuronal

Lesniak W, Mathews WB, Sysa-Shah P, Rodriguez LA, et al. Chemogenetics revealed: DREADD occupancy and activation via converted clozapine. Science. 2017;**357**(6350):503-507

*Integration of Spiking Neural Networks for Understanding Interval Timing DOI: http://dx.doi.org/10.5772/intechopen.89781*

[100] Miguelez C, Morera-Herreras T, Torrecilla M, Ruiz-Ortega JA, Ugedo L. Interaction between the 5-HT system and the basal ganglia: Functional implication and therapeutic perspective in Parkinson's disease. Frontiers in Neural Circuits. 2014;**8**:21

*New Frontiers in Brain-Computer Interfaces*

[83] Harris KD, Henze DA, Csicsvari J, Hirase H, Buzsáki G. Accuracy of tetrode [92] Kampa BM, Göbel W, Helmchen F. Measuring neuronal population activity using 3D laser scanning. Cold Spring Harbor Protocols. 2011, 2011;**11**:pdb-

[93] Gerhard F, Pipa G, Lima B, Neuenschwander S, Gerstner W. Extraction of network topology from multi-electrode recordings: Is there a small-world effect? Frontiers in Computational Neuroscience. 2011;**5**:4

[94] Chen TW, Wardill TJ, Sun Y, Pulver SR, Renninger SL, Baohan A, et al. Ultrasensitive fluorescent proteins for imaging neuronal activity. Nature.

[95] Ohkura M, Sasaki T, Kobayashi C, Ikegaya Y, Nakai J. An improved genetically encoded red fluorescent Ca 2+ indicator for detecting optically evoked action potentials. PLoS One.

[96] Grewe BF, Langer D, Kasper H, Kampa BM, Helmchen F. High-speed in vivo calcium imaging reveals neuronal network activity with nearmillisecond precision. Nature Methods.

[97] Sato M, Kawano M, Yanagawa Y, Hayashi Y. In vivo two-photon imaging of striatal neuronal circuits in mice. Neurobiology of Learning and Memory.

[98] Bischoff S, Heinrich M, Sonntag JM, Krauss J. The D-1 dopamine receptor antagonist SCH 23390 also interacts potently with brain serotonin (5-HT2)

receptors. European Journal of Pharmacology. 1986;**129**(3):367-370

[99] De Corte BJ, Wagner LM, Matell MS, Narayanan NS. Striatal dopamine and the temporal control of behavior. Behavioural Brain Research.

2013;**499**(7458):295-300

2012;**7**(7):e39933

2010;**7**(5):399-405

2016;**135**:146-151

2019;**356**:375-379

prot 066597

[84] Buzsáki G, Schomburg EW. What does gamma coherence tell us about inter-regional neural communication? Nature Neuroscience. 2015;**18**(4):

[85] Buzsáki G. Large-scale recording of neuronal ensembles. Nature Neuroscience. 2004;**7**(5):446-451

[86] Moffitt MA, McIntyre CC. Modelbased analysis of cortical recording with silicon microelectrodes. Clinical Neurophysiology. 2005;**116**(9):2240-2250

[87] De Kock CPJ, Bruno RM, Spors H, Sakmann B. Layer-and cell-typespecific suprathreshold stimulus representation in rat primary

somatosensory cortex. The Journal of Physiology. 2007;**581**(1):139-154

[89] Friend DM, Kemere C, Kravitz AV. Quantifying recording quality in in vivo striatal recordings. Current Protocols in

[90] Harris KD, Quiroga RQ, Freeman J, Smith SL. Improving data quality in neuronal population recordings. Nature Neuroscience. 2016;**19**(9):1165-1174

Neuroscience. 2015;**70**(1):6-28

[91] Dombeck DA, Graziano MS, Tank DW. Functional clustering of neurons in motor cortex determined by cellular resolution imaging in awake behaving mice. The Journal of Neuroscience. 2009;**29**(44):13751-13760

[88] Sakata S, Harris KD. Laminar structure of spontaneous and sensoryevoked population activity in auditory cortex. Neuron. 2009;**64**(3):404-418

spike separation as determined by simultaneous intracellular and extracellular measurements. Journal of Neurophysiology. 2000;**84**(1):401-414

484-489

**108**

[101] Vollenweider FX, Vontobel P, Hell D, Leenders KL. 5-HT modulation of dopamine release in basal ganglia in psilocybin-induced psychosis in man—A PET study with [11 C] raclopride. Neuropsychopharmacology. 1999;**20**(5):424

[102] Gomez JL, Bonaventura J, Lesniak W, Mathews WB, Sysa-Shah P, Rodriguez LA, et al. Chemogenetics revealed: DREADD occupancy and activation via converted clozapine. Science. 2017;**357**(6350):503-507

[103] Buhusi CV, Meck WH. Effect of clozapine on interval timing and working memory for time in the peak-interval procedure with gaps. Behavioural Processes. 2007;**74**(2):159-167

[104] MacDonald CJ, Meck WH. Differential effects of clozapine and haloperidol on interval timing in the supraseconds range. Psychopharmacology. 2005;**182**(2):232-244

[105] Mahn M, Prigge M, Ron S, Levy R, Yizhar O. Biophysical constraints of optogenetic inhibition at presynaptic terminals. Nature Neuroscience. 2016;**19**(4):554

[106] Li N, Chen S, Guo ZV, Chen H, Huo Y, Inagaki H, et al. Spatiotemporal limits of optogenetic manipulations in cortical circuits. bio Rxiv. 2019:642215

[107] Brette R, Gerstner W. Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. Journal of Neurophysiology. 2005;**94**(5):3637-3642

[108] Hodgkin AL, Huxley AF. A quantitative description of membrane current and its application to conduction and excitation in nerve. The Journal of Physiology. 1952;**117**(4):500-544

[109] Izhikevich EM. Simple model of spiking neurons. IEEE Transactions on Neural Networks. 2003;**14**(6):1569-1572

[110] Morris C, Lecar H. Voltage oscillations in the barnacle giant muscle fiber. Biophysical Journal. 1981;**35**(1):193-213

[111] Touboul J. Bifurcation analysis of a general class of nonlinear integrate-andfire neurons. SIAM Journal on Applied Mathematics. 2008;**68**(4):1045-1079

[112] Maass W. Networks of spiking neurons: The third generation of neural network models. Neural Networks. 1997;**10**(9):1659-1671

[113] Mikhael JG, Gershman SJ. Adapting the flow of time with dopamine. Journal of Neurophysiology. 2019;**121**(5):1748-1760

[114] Hardy NF, Buonomano DV. Encoding time in feedforward trajectories of a recurrent neural network model. Neural Computation. 2018;**30**(2):378-396

[115] Perrett DI, Rolls ET, Caan W. Visual neurones responsive to faces in the monkey temporal cortex. Experimental Brain Research. 1982;**47**(3):329-342

[116] Riehle A, Grün S, Diesmann M, Aertsen A. Spike synchronization and rate modulation differentially involved in motor cortical function. Science. 1997;**278**(5345):1950-1953

[117] Grammont F, Riehle A. Spike synchronization and firing rate in a population of motor cortical neurons in relation to movement direction and reaction time. Biological Cybernetics. 2003;**88**(5):360-373

[118] Gribkova ED, Ibrahim BA, Llano DA. A novel mutual information estimator to measure spike train correlations in a model thalamocortical network. Journal of Neurophysiology. 2018;**120**(6):2730-2744

[119] Moyer JT, Halterman BL, Finkel LH, Wolf JA. Lateral and feedforward inhibition suppress asynchronous activity in a large, biophysically-detailed computational model of the striatal network. Frontiers in Computational Neuroscience. 2014;**8**:152

[120] Naud R, Marcille N, Clopath C, Gerstner W. Firing patterns in the adaptive exponential integrate-andfire model. Biological Cybernetics. 2008;**99**(4-5):335

[121] Brette R. Philosophy of the spike: Rate-based vs. spike-based theories of the brain. Frontiers in Systems Neuroscience. 2015;**9**:151

[122] Clopath C, Büsing L, Vasilaki E, Gerstner W. Connectivity reflects coding: A model of voltage-based STDP with homeostasis. Nature Neuroscience. 2010;**13**(3):344

[123] Kozloski J, Cecchi GA. A theory of loop formation and elimination by spike timing-dependent plasticity. Frontiers in Neural Circuits. 2010;**4**:7

[124] Abbott LF, DePasquale B, Memmesheimer RM. Building functional networks of spiking model neurons. Nature Neuroscience. 2016;**19**(3):350

[125] Wieland S, Schindler S, Huber C, Köhr G, Oswald MJ, Kelsch W. Phasic dopamine modifies sensory-driven output of striatal neurons through synaptic plasticity. Journal of Neuroscience. 2015;**35**(27):9946-9956

[126] Yagishita S, Hayashi-Takagi A, Ellis-Davies GC, Urakubo H, Ishii S, Kasai H. A critical time window for dopamine actions on the structural plasticity of dendritic spines. Science. 2014;**345**(6204):1616-1620

[127] Buonomano DV, Maass W. Statedependent computations: Spatiotemporal processing in cortical networks. Nature Reviews Neuroscience. 2009;**10**(2):113-125

[128] Paton JJ, Buonomano DV. The neural basis of timing: Distributed mechanisms for diverse functions. Neuron. 2018;**98**(4):687-705

[129] Oprisan SA, Aft T, Buhusi M, Buhusi CV. Scalar timing in memory: A temporal map in the hippocampus. Journal of Theoretical Biology. 2018;**438**:133-142

[130] Oprisan SA, Buhusi CV. Modeling pharmacological clock and memory patterns of interval timing in a striatal beat-frequency model with realistic, noisy neurons. Frontiers in Integrative Neuroscience. 2011;**5**:52

[131] Reutimann J, Yakovlev V, Fusi S, Senn W. Climbing neuronal activity as an event-based cortical representation of time. Journal of Neuroscience. 2004;**24**(13):3295-3303

[132] Hitron Y, Parter M. Counting to ten with two fingers: Compressed counting with spiking neurons. 2019. ar Xiv preprint ar Xiv: 1902.10369

[133] Pérez O, Merchant H. The synaptic properties of cells define the hallmarks of interval timing in a recurrent neural network. Journal of Neuroscience. 2018;**38**(17):4186-4199

[134] Zannone S, Brzosko Z, Paulsen O, Clopath C. Acetylcholine-modulated plasticity in reward-driven navigation: A computational study. Scientific Reports. 2018;**8**(1):9486

**111**

*Integration of Spiking Neural Networks for Understanding Interval Timing*

architectural and learning constraints in neural network models: A case study on visual space coding. Frontiers in Computational Neuroscience. 2017;**11**:13

[145] Mocanu DC, Mocanu E, Stone P, Nguyen PH, Gibescu M, Liotta A. Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science. Nature Communications.

[146] Sporns O, Zwi JD. The small world of the cerebral cortex. Neuroinformatics. 2004;**2**(2):145-162

[147] Yoshimura Y, Dantzker JL, Callaway EM. Excitatory cortical neurons form fine-scale functional networks. Nature. 2005;**433**(7028):868

[148] Floresco SB, West AR, Ash B, Moore H, Grace AA. Afferent modulation of dopamine neuron firing differentially regulates tonic and phasic dopamine transmission. Nature Neuroscience.

[149] Berke JD. What does dopamine mean? Nature Neuroscience.

Baldassarre G. Different dopaminergic dysfunctions underlying Parkinsonian Akinesia and tremor. Frontiers in Neuroscience. 2019;**13**:550

[151] Budygin E, Bass C, Grinevich V, Deal A, Bonin K, Weiner J. Paradoxical effects of tonic and phasic increases in accumbal dopamine transmission on alcohol-seeking behavior. SSRN Electronic Journal. 2019. DOI: 10.2139/

[150] Caligiore D, Mannella F,

[152] Meck WH, Williams CL. Characterization of the facilitative

effects of perinatal choline supplementation on timing and

2018;**9**(1):2383

2003;**6**(9):968

2018;**21**(6):787

ssrn.3399579

*DOI: http://dx.doi.org/10.5772/intechopen.89781*

[135] Oprisan SA, Buhusi M, Buhusi CV. A population-based model of the temporal memory in the hippocampus. Frontiers in Neuroscience. 2018;**12**:521

[136] Izhikevich EM, Desai NS. Relating stdp to bcm. Neural Computation.

Buonomano DV. Short-term synaptic plasticity as a mechanism for sensory timing. Trends in Neurosciences.

[138] Kriegeskorte N. Deep neural networks: A new framework for modeling biological vision and brain information processing. Annual Review of Vision Science. 2015;**1**:417-446

[139] Goldman MS. Memory without feedback in a neural network. Neuron.

[140] Hahnloser RH, Kozhevnikov AA,

[141] Liu Y, Tiganj Z, Hasselmo ME, Howard MW. A neural microcircuit model for a scalable scale-invariant representation of time. Hippocampus.

[142] Kietzmann TC, Spoerer CJ, Sörensen L, Cichy RM, Hauk O, Kriegeskorte N. Recurrence required to capture the dynamic computations of the human ventral visual stream. 2019. ar Xiv preprint ar Xiv: 1903.05946

[143] Spoerer CJ, Kietzmann TC,

[144] Testolin A, De Filippo De Grazia M, Zorzi M. The role of

bio Rxiv. 2019:677237

Kriegeskorte N. Recurrent networks can recycle neural resources to flexibly trade speed for accuracy in visual recognition.

Fee MS. An ultra-sparse code underliesthe generation of neural sequences in a songbird. Nature.

2003;**15**(7):1511-1523

2018;**41**(10):701-711

2009;**61**(4):621-634

2002;**419**(6902):65

2019;**29**(3):260-274

[137] Motanis H, Seay MJ,

*Integration of Spiking Neural Networks for Understanding Interval Timing DOI: http://dx.doi.org/10.5772/intechopen.89781*

[135] Oprisan SA, Buhusi M, Buhusi CV. A population-based model of the temporal memory in the hippocampus. Frontiers in Neuroscience. 2018;**12**:521

*New Frontiers in Brain-Computer Interfaces*

population of motor cortical neurons in relation to movement direction and reaction time. Biological Cybernetics.

[126] Yagishita S, Hayashi-Takagi A, Ellis-Davies GC, Urakubo H, Ishii S, Kasai H. A critical time window for dopamine actions on the structural plasticity of dendritic spines. Science.

[127] Buonomano DV, Maass W. Statedependent computations: Spatiotemporal processing in cortical networks. Nature Reviews Neuroscience. 2009;**10**(2):113-125

[128] Paton JJ, Buonomano DV. The neural basis of timing: Distributed mechanisms for diverse functions. Neuron. 2018;**98**(4):687-705

[129] Oprisan SA, Aft T, Buhusi M, Buhusi CV. Scalar timing in memory: A temporal map in the hippocampus. Journal of Theoretical Biology.

[130] Oprisan SA, Buhusi CV. Modeling pharmacological clock and memory patterns of interval timing in a striatal beat-frequency model with realistic, noisy neurons. Frontiers in Integrative

[131] Reutimann J, Yakovlev V, Fusi S, Senn W. Climbing neuronal activity as an event-based cortical representation of time. Journal of Neuroscience.

[132] Hitron Y, Parter M. Counting to ten with two fingers: Compressed counting with spiking neurons. 2019. ar Xiv preprint ar Xiv: 1902.10369

[133] Pérez O, Merchant H. The synaptic properties of cells define the hallmarks of interval timing in a recurrent neural network. Journal of Neuroscience.

[134] Zannone S, Brzosko Z, Paulsen O, Clopath C. Acetylcholine-modulated plasticity in reward-driven navigation: A computational study. Scientific

2018;**438**:133-142

Neuroscience. 2011;**5**:52

2004;**24**(13):3295-3303

2018;**38**(17):4186-4199

Reports. 2018;**8**(1):9486

2014;**345**(6204):1616-1620

[119] Moyer JT, Halterman BL, Finkel LH, Wolf JA. Lateral and feedforward inhibition suppress asynchronous activity in a large, biophysically-detailed computational model of the striatal network. Frontiers in Computational

[120] Naud R, Marcille N, Clopath C, Gerstner W. Firing patterns in the adaptive exponential integrate-andfire model. Biological Cybernetics.

[121] Brette R. Philosophy of the spike: Rate-based vs. spike-based theories of the brain. Frontiers in Systems

[122] Clopath C, Büsing L, Vasilaki E, Gerstner W. Connectivity reflects coding: A model of voltage-based STDP with homeostasis. Nature Neuroscience.

[123] Kozloski J, Cecchi GA. A theory of loop formation and elimination by spike timing-dependent plasticity. Frontiers

Memmesheimer RM. Building functional networks of spiking model neurons. Nature Neuroscience. 2016;**19**(3):350

[125] Wieland S, Schindler S, Huber C, Köhr G, Oswald MJ, Kelsch W. Phasic dopamine modifies sensory-driven output of striatal neurons through synaptic plasticity. Journal of

Neuroscience. 2015;**35**(27):9946-9956

[118] Gribkova ED, Ibrahim BA, Llano DA. A novel mutual information estimator to measure spike train correlations in a model thalamocortical network. Journal of Neurophysiology.

2003;**88**(5):360-373

2018;**120**(6):2730-2744

Neuroscience. 2014;**8**:152

Neuroscience. 2015;**9**:151

in Neural Circuits. 2010;**4**:7

[124] Abbott LF, DePasquale B,

2008;**99**(4-5):335

2010;**13**(3):344

**110**

[136] Izhikevich EM, Desai NS. Relating stdp to bcm. Neural Computation. 2003;**15**(7):1511-1523

[137] Motanis H, Seay MJ, Buonomano DV. Short-term synaptic plasticity as a mechanism for sensory timing. Trends in Neurosciences. 2018;**41**(10):701-711

[138] Kriegeskorte N. Deep neural networks: A new framework for modeling biological vision and brain information processing. Annual Review of Vision Science. 2015;**1**:417-446

[139] Goldman MS. Memory without feedback in a neural network. Neuron. 2009;**61**(4):621-634

[140] Hahnloser RH, Kozhevnikov AA, Fee MS. An ultra-sparse code underliesthe generation of neural sequences in a songbird. Nature. 2002;**419**(6902):65

[141] Liu Y, Tiganj Z, Hasselmo ME, Howard MW. A neural microcircuit model for a scalable scale-invariant representation of time. Hippocampus. 2019;**29**(3):260-274

[142] Kietzmann TC, Spoerer CJ, Sörensen L, Cichy RM, Hauk O, Kriegeskorte N. Recurrence required to capture the dynamic computations of the human ventral visual stream. 2019. ar Xiv preprint ar Xiv: 1903.05946

[143] Spoerer CJ, Kietzmann TC, Kriegeskorte N. Recurrent networks can recycle neural resources to flexibly trade speed for accuracy in visual recognition. bio Rxiv. 2019:677237

[144] Testolin A, De Filippo De Grazia M, Zorzi M. The role of architectural and learning constraints in neural network models: A case study on visual space coding. Frontiers in Computational Neuroscience. 2017;**11**:13

[145] Mocanu DC, Mocanu E, Stone P, Nguyen PH, Gibescu M, Liotta A. Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science. Nature Communications. 2018;**9**(1):2383

[146] Sporns O, Zwi JD. The small world of the cerebral cortex. Neuroinformatics. 2004;**2**(2):145-162

[147] Yoshimura Y, Dantzker JL, Callaway EM. Excitatory cortical neurons form fine-scale functional networks. Nature. 2005;**433**(7028):868

[148] Floresco SB, West AR, Ash B, Moore H, Grace AA. Afferent modulation of dopamine neuron firing differentially regulates tonic and phasic dopamine transmission. Nature Neuroscience. 2003;**6**(9):968

[149] Berke JD. What does dopamine mean? Nature Neuroscience. 2018;**21**(6):787

[150] Caligiore D, Mannella F, Baldassarre G. Different dopaminergic dysfunctions underlying Parkinsonian Akinesia and tremor. Frontiers in Neuroscience. 2019;**13**:550

[151] Budygin E, Bass C, Grinevich V, Deal A, Bonin K, Weiner J. Paradoxical effects of tonic and phasic increases in accumbal dopamine transmission on alcohol-seeking behavior. SSRN Electronic Journal. 2019. DOI: 10.2139/ ssrn.3399579

[152] Meck WH, Williams CL. Characterization of the facilitative effects of perinatal choline supplementation on timing and

temporal memory. Neuroreport. 1997;**8**(13):2831-2835

[153] Meck WH, Church RM. Cholinergic modulation of the content of temporal memory. Behavioral Neuroscience. 1987;**101**(4):457

[154] Foster DJ, Morris RGM, Dayan P. A model of hippocampally dependent navigation, using the temporal difference learning rule. Hippocampus. 2000;**10**(1):1-16

[155] Frémaux N, Sprekeler H, Gerstner W. Reinforcement learning using a continuous time actor-critic framework with spiking neurons. PLoS Computational Biology. 2013;**9**(4):e1003024

[156] Vasilaki E, Frémaux N, Urbanczik R, Senn W, Gerstner W. Spikebased reinforcement learning in continuous state and action space: When policy gradient methods fail. PLoS Computational Biology. 2009;**5**(12):e1000586

[157] Carrillo-Reid L, Tecuapetla F, Tapia D, Hernández-Cruz A, Galarraga E, Drucker-Colin R, et al. Encoding network states by striatal cell assemblies. Journal of Neurophysiology. 2008;**99**(3):1435-1450

[158] Jáidar O, Carrillo-Reid L, Hernández A, Drucker-Colín R, Bargas J, Hernández-Cruz A. Dynamics of the Parkinsonian striatal microcircuit: Entrainment into a dominant network state. Journal of Neuroscience. 2010;**30**(34):11326-11336

[159] Hausknecht M, Li WK, Mauk M, Stone P. Machine learning capabilities of a simulated cerebellum. IEEE Transactions on Neural Networks and Learning Systems. 2016;**28**(3):510-522

[160] Li WK, Hausknecht MJ, Stone P, Mauk MD. Using a million-cell

simulation of the cerebellum: Network scaling and task generality. Neural Networks. 2013;**47**:95-102

[161] Medina JF, Mauk MD. Computer simulation of cerebellar information processing. Nature Neuroscience. 2000;**3**:1205-1211

[162] Walhovd KB, Fjell AM, Reinvang I, Lundervold A, Dale AM, Eilertsen DE, et al. Effects of age on volumes of cortex, white matter and subcortical structures. Neurobiology of Aging. 2005;**26**(9):1261-1270

[163] Maheswaran S, Barjat H, Rueckert D, Bate ST, Howlett DR, Tilling L, et al. Longitudinal regional brain volume changes quantified in normal aging and Alzheimer's APP× PS1 mice using MRI. Brain Research. 2009;**1270**:19-32

[164] Hintiryan H, Foster NN, Bowman I, Bay M, Song MY, Gou L, et al. The mouse cortico-striatal projectome. Nature Neuroscience. 2016

[165] Lusk NA, Buonomano DV. Utilizing the Cortico-striatal Projectome to advance the study of timing and time perception. Timing and Time Perception. 2016;**4**(4):411-422

[166] Bekolay T, Bergstra J, Hunsberger E, DeWolf T, Stewart TC, Rasmussen D, et al. Nengo: A python tool for building large-scale functional brain models. Frontiers in Neuroinformatics. 2013;**7**

[167] Eppler JM, Helias M, Muller E, Diesmann M, Gewaltig MO. PyNEST: A convenient interface to the NEST simulator. Frontiers in Neuroinformatics. 2009;**2**:12

[168] Stimberg M, Goodman DF, Benichoux V, Brette R. Brian 2-the second coming: Spiking neural network simulation in Python with

**113**

*Integration of Spiking Neural Networks for Understanding Interval Timing*

*DOI: http://dx.doi.org/10.5772/intechopen.89781*

code generation. BMC Neuroscience.

[169] Igarashi J, Shouno O, Fukai T, Tsujino H. Real-time simulation of a spiking neural network model of the basal ganglia circuitry using general purpose computing on graphics processing units. Neural Networks.

2013;**14**(1):1

2011;**24**(9):950-960

*Integration of Spiking Neural Networks for Understanding Interval Timing DOI: http://dx.doi.org/10.5772/intechopen.89781*

code generation. BMC Neuroscience. 2013;**14**(1):1

*New Frontiers in Brain-Computer Interfaces*

[153] Meck WH, Church RM. Cholinergic modulation of the content of temporal memory. Behavioral Neuroscience.

simulation of the cerebellum: Network scaling and task generality. Neural

[161] Medina JF, Mauk MD. Computer simulation of cerebellar information processing. Nature Neuroscience.



## Chapter 7

## Introducing a Novel Approach to Study the Construction and Function of Memory in Human Beings: The Meshk Theory

Mohammad Seyedielmabad

## Abstract

This study reviews the crucial role of memory in the human brain. For this purpose, previous investigations of the construction and function of memory were examined. The mechanism of memory function was reviewed, and the crucial drivers of memory operation were identified. An applied memory model that can serve as a framework for studying memory function was then introduced: the memory unit was defined as a basic information structure, a structured platform for the memory unit was determined for encoding information and data in the brain, and a pattern of information coding was detected. A basic framework for studying memory function was thus conceived. These results pave the way for the discovery of a basic algorithm for understanding memory function in humans. The study also introduces a simple approach to overcoming Alzheimer disease (AD), which can be applied to research on the prevention and treatment of this disease.

Keywords: Meshk theory, triple drivers, memory model, memory unit, memory coding strand, music therapy, Alzheimer disease

## 1. Introduction

Memory is one of the greatest unsolved secrets facing humankind. Despite extensive study, many questions about memory remain open, and no definitive account of it has yet emerged. Only recently has it been established that memory is a faculty of the mind, and that information is encoded, stored, and retrieved in a region called the hippocampus [8]. Memory has also been shown to be vital to experience and related to the limbic system of the brain [8]. Models of memory provide abstract representations of how memory is believed to work [3]. Several such models have been proposed over the years by various psychologists [3], and there is still controversy over whether multiple memory structures exist [3]. For decades, neuroscientists have attempted to unravel how the brain makes memories [22]. The Atkinson-Shiffrin model and the working memory model are the main memory models put forward in recent decades.

The human brain is estimated to have approximately 86 billion neurons [12], and each neuron has tens of thousands of synapses, leading to over 100 trillion synaptic connections [2]. Monitoring 100 trillion synaptic connections concurrently is impossible. On top of this astronomical complexity, one needs to map each connection or neuron to a given stimulus, yet the number of possible stimuli is infinite given the complex, ever-changing nature of the world we live in [27]. This is one of the most difficult issues faced by neuroscientists. As such, the unifying mathematical principle by which evolution constructs the brain's basic wiring and computational logic represents one of the most difficult and unsolved meta-problems in neuroscience [1, 9]. An innovative approach is therefore required to address this problem. Recently, Dr. Tsien and his colleagues introduced an approach to memory known as a "thought experiment," building on decades of studies of memory, and this approach appears to make an important contribution to neuroscience. One useful concept in pursuing this line of reasoning is the cell assembly, a term coined by Hebb [11] to describe the supposed computational building block, or computational primitive, of the brain. This notion has attracted keen interest, especially with emerging large-scale recording techniques [6, 13, 15, 16, 18, 19, 25]. The Hebbian cell assembly was postulated to comprise a group of neurons with strong excitatory connections that are formed after learning [13, 26]. Dr. Tsien and his colleagues focus their research on the hippocampus, particularly a region called CA1, which is important for forming memories of events and places in both people and rodents [19, 24]. The hippocampus has four parts, CA1 through CA4, and each part can be divided into nine sections.

These results add to a growing body of work indicating that a linear flow of signals from one neuron to another is not enough to explain how the brain represents perceptions and memories [17]; rather, the coordinated activity of large populations of neurons is needed [19]. Human memory is a high-performance organ, and its function follows from its structure: once the architecture of memory is identified, it becomes possible to characterize how memory works. This approach can be used to combat neural diseases, especially Alzheimer disease (AD), through music. Alzheimer disease is a neurodegenerative disorder featuring gradually progressive cognitive and functional deficits as well as behavioral changes [4]. More than 30 million people worldwide suffer from AD. It is a deadly disease that caused about 1.9 million deaths in 2015 and is therefore one of the most costly diseases. Previous investigations have shown that music has an effect on the treatment of AD, so a comprehensive study of music therapy for Alzheimer disease is needed; it is a simple, low-cost, and time-saving way to prevent and treat AD. Music is regarded as part of mathematics, and composers work within mathematical rules. In mathematics, a circle has 360°, and all mathematical rules operate within this field of degrees; by this reasoning, human memory works at 360° and connects into a complete loop. Barbad was one of the first ancient musicians of Kurdistan; he compiled 360 types of music around AD 200. Kurdistan is a strategic region that was divided among Iran, Iraq, Turkey, and Syria after World War I. "Kurd" refers to the people of Kurdistan, who are related to the ancient Sumerians [10]. Kurdistan is a historical region that hosted governments in ancient times, one of which is known as the Sassanian.

In this period, music was prevalent and many people worked on mathematics, and so music was built on the structure and function of the human brain. According to this approach, music reflects the inherent nature of human memory; hence, music can relieve and improve the nerves and memory. Recent studies by many scientists support this claim, and some of these investigations have led to remarkable discoveries. A comprehensive study of human memory is therefore required for introducing a simple and applied way to overcome Alzheimer disease. The objectives of this study respond to the following questions:

> What is human memory? What is the simple structure of memory in the human brain like? How does human memory work? More importantly, what is a simple way to overcome Alzheimer disease?

## 2. Methods

## 2.1 Triple drivers

In Einstein's special theory of relativity, E = mc², energy and mass are equivalent and transmutable, and it is possible to apply Einstein's relativity theory to the brain. According to Meshk theory, the memory structure and function of the human brain depend on three main factors (Figure 1). The word Meshk denotes the human brain in the Sumerian and Kurdish languages. These drivers provide the inherence of human memory as shown below:

- Water equivalence (WE), conducted by the water driver, c²
- Nunch resonance (NR), provided by the energy driver, E
- Substance (mass) proportion (SP), provided by the mass driver, m
The human memory in the function circle is based on three basic drivers: water, energy, and substance. Water is a driver for the creation and operation of memory. Quantum brain dynamics (QBD) theory notes that water comprises some 70% of the brain and proposes that the electric dipoles of the water molecules constitute a quantum field [34, 35]. Human memory therefore works in a general circle provided by water, called "water equivalence" (WE). While memory neurons are working, WE is an important factor for coding and encoding memory information. Recent studies have shown that conscious experience correlates not with the number of neurons firing but with the synchrony of that firing [38]. The ears are likely to be involved in this process, because the ear is connected to the brain's equilibrium system. For a long time, the oldest part of the brain was thought of as a "control room" for human motions [42]. There is now evidence that the cerebellum also stores temporal information about the music we are listening to and then recalls this information while the music is being reproduced [42]. Moreover, remarkably, the cerebellum was discovered to be a center for emotions [42]. The water equivalence of the brain can also give rise to a powerful electromagnetic (EM) field in the brain. Electromagnetic theories of consciousness propose that consciousness can be understood as an electromagnetic phenomenon [36], arising when a brain produces an electromagnetic field with specific characteristics [36, 37].

### Figure 1.
The triple drivers that provide the inherence of the human brain; the gray circles, blue circle, and green triangle indicate the NR, SP, and WE, respectively.
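The synchrony claim above — that conscious experience tracks the synchrony of firing rather than the number of neurons firing [38] — can be illustrated with a toy calculation. The sketch below uses invented binary spike trains and a naive pairwise-agreement score; it is an illustration only, not a method from the cited studies:

```python
from itertools import combinations

def synchrony_index(spike_trains):
    """Mean pairwise agreement of binary spike trains (1 = firing).

    Returns a value in [0, 1]; higher means the population fires in
    closer lockstep, independent of how many neurons are firing.
    """
    pairs = list(combinations(spike_trains, 2))
    if not pairs:
        return 1.0
    agree = 0
    total = 0
    for a, b in pairs:
        agree += sum(1 for x, y in zip(a, b) if x == y)
        total += len(a)
    return agree / total

# Three neurons firing in lockstep vs. three firing out of phase.
locked = [[1, 0, 1, 0], [1, 0, 1, 0], [1, 0, 1, 0]]
staggered = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
print(synchrony_index(locked))     # 1.0
print(synchrony_index(staggered))  # 0.5 — lower despite equal spike counts
```

Note that both populations emit the same total number of spikes; only their coordination differs, which is the distinction the passage above draws.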

In the conscious electromagnetic information (CEMI) theory, McFadden proposes that the digital information from neurons is integrated to form a conscious electromagnetic information field in the brain [39]. Since the brain is a roughly 300-Kelvin tissue strongly coupled to its environment [44], it is possible that the brain's noisy, warm environment transforms its water crystals. This transformation, resulting in an internal equivalence of the brain, leads to the operation of intrinsic memory and can form the basis of the memory structure. Dr. Masaru Emoto and colleagues (1996) studied water crystals [31, 32] and proposed that human consciousness affects the molecular structure of water, so that emotional energies and vibrations could change its physical structure [33]. Brain activity in human infants provides evidence for the existence of intrinsic memory: the high ability of infants to learn a language points to a fundamental potential that enables them to advance in learning and behavior, a fundamental difference between human beings and other species. There is a mechanism in the human brain by which "Nunch resonance" (NR) enables the brain to accelerate and amplify reactions to environmental stimuli. As shown in Figure 2, a nunchaku consists of two parts linked by a connection: one part receives force and energy and transmits it through the connection to the other part, and owing to resonance in this process, the output force greatly exceeds the input. This is what McFadden termed "amplifying the microscopic quantum effects." In the CEMI theory, the synchronous firing of neurons is argued to amplify the influence of the brain's electromagnetic field fluctuations to a much greater extent than would be possible with unsynchronized firing [39]. Exciting recent research shows that nontrivial quantum effects are present in biological systems, not just in spite of, but sometimes because of, the interaction with a noisy and warm environment [51]. Furthermore, because the brain is a complex nonlinear system highly sensitive to small fluctuations, it is likely able to amplify microscopic quantum effects [51]. It is also possible that light photons arriving via the eye are related to this process. In general relativity (GR) and the equivalence principle developed by Einstein, black holes are regions of space where gravitational attraction is so strong that even light cannot escape; it is possible that there are spots with a black-energy attribute on the frontal lobe of the human brain. The mineral substances in the function circle of the human brain are provided by a process called "substance (mass) proportion" (SP); the heart and sense organs are likely important to this process. The concurrence and compatibility of the triple drivers are required for encoding information in memory. According to Einstein's special relativity theory, E = mc², energy and mass are equivalent and transmutable; in this equation, c² is the determinative factor, so it can be concluded that WE is the main driver for recalling information in the brain.

### Figure 2.
A schematic of the nunchaku function.

## 2.2 Memory model

Most of the memory models presented in past decades have been based on time; the categories of memory, including short-term, long-term, and working memory, are defined by time. In Einstein's special theory of relativity, E = mc², time and space are not invariable. According to the Meshk theory, space and time are likewise variable and are parts of inherent memory. Memory in the human brain is therefore described by a new model called the "Manna model." In this model, memory is divided into three parts: fundamental, central, and peripheral. The base structure of memory in this model is shown in Figure 3. Fundamental memory is inherent intelligence; it can be called intrinsic memory and has existed since the advent of mankind. This memory is highly cryptic and remains largely unknown. Spatial and temporal information are parts of the memory's nature. The ability of infants to swim and to learn a language is a clear example confirming the presence of fundamental memory in human beings. All learning processes in human central memory are connected to fundamental memory through what is called "recalling of the internal information." Environmental information includes visual, auditory, and sensory data. The eye, ear, and heart are the centers for receiving visual, auditory, and sensory information from the environment and transmitting it to the brain (Figure 2). It is possible that the sense organs send their signals to the heart and that, after coordination, the information is sent from the heart to the brain. The peripheral memory is specialized to receive environmental information from the eye, ear, and heart and to transmit it to the central memory. At the start, external information is transferred from the peripheral to the central memory; in the process, some of the data is removed and some remains. By the coupling of blocks in the fundamental memory to the central memory, internal information is retrieved.
The "coupling process" is a turning point in recalling internal information from the fundamental memory, and water equivalence (WE) is an important factor in operating it. The coupling process can be the starting point for neural momentum in the brain's neural circuit. In human memory, it is therefore necessary to receive information from the auditory, visual, and sensory systems in order to form the memory coding unit; at the same time, the information in the fundamental and central memory is bound together as binary codes. The coupling process leads to the creation of this important structure, which is what Dr. Tsien and colleagues termed "neural cliques": they discovered that overall network-level patterns are generated by distinct subsets of neural populations, or neural cliques [19].

### Figure 3.
The Manna model; the gray circles, blue circle, and green triangle indicate the peripheral, central, and fundamental memories, respectively.

According to Dr. Tsien's theory of connectivity, a clique is a group of neurons that respond similarly to a select event and thus operate collectively as a robust coding unit [14, 19–21, 23]. Table 1 represents each event as a binary string in which three blocks are active and the others inactive. The investigators represented clique activity as a string of binary codes that revealed details of the event an animal experienced [19]; in the string fragments shown here, 1 means a particular clique is active, and 0 signifies inactivity [19]. The idea of quantum coherent waves in the neuronal network derives from Fröhlich [35], who viewed these waves as a means by which order could be maintained in living systems and argued that the neuronal network could support the long-range correlation of dipoles [35]. Repeated stimulation in a continuous and rhythmic way is a critical factor in operating the coupling process. As more memory blocks are turned on and become active, more internal information is recalled, and therefore more information and data are extracted from the fundamental memory. In the human brain, there are millions of memory blocks that are never used during a lifetime. Thus, people possess the entire set of blocks in fundamental memory but turn on only some of them in their lifetime (Table 2). The active blocks make up the central memory in the human brain. The recall of internal information and data from the fundamental memory differs among people, ranging from elementary to advanced levels; it depends on the quality and quantity of the coupling process in memory. In mathematics, the entire angle of a circle is 360 degrees (360°). The human brain works in a 360° field and therefore in two mutual directions (Figure 4). Interestingly, the Manna alphabet framework conforms to the algorithm of the human brain structure (Table 3).
This alphabet could be the best basic format for humans to receive and transmit data and information. Manna is a term that refers to a people living south of Urmia Lake in Kurdistan [7, 29]. There are many mysteries about the Manna that have not been solved until now. The only thing that has been established is that the people of that age were blessed with great intelligence: they developed music, agriculture, animal husbandry, industry, medicine, and astronomy. According to ancient literature, the Sumerians and Mannaeans had advanced alphabets with 37 and 36 letters, respectively [10, 30]. The Mannaeans were an ancient ethnic group, and a rich trove of literature written by them exists [5, 28]. This literature covered various topics, including music, agriculture, astronomy, medicine, industry, and, most importantly, water engineering. The Manna alphabet comprised four partitions with a specific algorithm, each part containing nine letters. This algorithm made a basic structure for humans to communicate and to understand the relations between peoples from the ancient period until now. This alphabet can be a suitable pattern for decoding memory

Table 1.

The binary codes, (A) earthquake and (B) elevator drop: A = 1 1 0 0 1; B = 1 1 0 1 0.

architecture in humans.

Introducing a Novel Approach to Study the Construction and Function of Memory in Human…
DOI: http://dx.doi.org/10.5772/intechopen.87991

Figure 4.

The circles that indicate the memory coding directions, (A) base direction of the memory coding, (B) complementary direction of the memory coding, and (C) entire directions of the memory coding.

Table 3.

The structure of the Manna alphabet between 600 and 800 BC (four partitions, A–D, of nine letters each).

### Table 2.

The building blocks of fundamental memory in the human brain. Black blocks are the memory in use, and white blocks are inactive memory.


## 3. Results

## 3.1 Memory unit

There is a need for a paradigm shift from behaviorist stimulus-response concepts toward notions of predictive coding in self-organizing recurrent networks with high-dimensional dynamics [45, 47]. Neuronal networks with nonlinear neurons and densely connected feedback loops can generate dynamics that are more complex, variable, and rich than expected [48–50]. Therefore, the two structures of the hippocampus located in the limbic system can operate in reciprocal connection to code information in the brain. According to the Meshk theory, the information and data received from the environment are encoded in a structure called the "memory unit." Each memory unit is actually a perception unit in the human brain, and one or a few perception units must operate to figure out problems and issues. According to the Manna model features described in the previous sections, each memory unit includes three parts: visual, auditory, and sensory. Therefore, each perception unit produces three functional codes (Figure 5). At the same time, the central and fundamental codes are bound together by the coupling process to create the binary codes described in the previous section. For an individual event, the three sections of the memory unit have to work, and each section has a functional code. Several memory units may operate for an individual event, depending on the quality and quantity of internal information that can be extracted from the fundamental memory.

Locating consciousness in the brain's electromagnetic (EM) field, rather than in the neurons, has the advantage of neatly accounting for how information located in millions of neurons scattered through the brain can be unified into a single consciousness. In this way, EM field consciousness can be considered "joined-up information" [41]. This is an important part of the issue that could help scientists solve the brain puzzle. When neurons fire together, their EM fields generate stronger EM field disturbances [40]. Therefore, synchronous neuron firing will tend to have a larger impact on the brain's EM field (and thereby consciousness) than the firing of individual neurons [41]. Synchronous neuron firing is like a symphony or philharmonic orchestra with many musical instruments: the harmony of the playing creates a strong and impressive result, and without any one section the music is incomplete. Generation by synchronous firing is not the only important characteristic of conscious electromagnetic fields; as in Pockett's original theory, spatial pattern is the defining feature of a conscious field [36]. In a philharmonic orchestra, there is an accurate, well-organized spatial and temporal pattern, which can lead to the creation of a memorable artistic masterpiece. In Meshk's theory, spatial and temporal patterns are part of the intrinsic memory features. The CA1 region of the hippocampus receives inputs from many brain regions and sensory systems, and this feature most likely influences what type of information a given clique encodes [22].

## Figure 5.

The coupling of memory blocks in the human brain. Plates F, C, and P show the fundamental, central, and peripheral memories, respectively.

## 3.2 Memory coding strand

The nervous system can be seen as a nested hierarchy of nonlinear complex networks of molecules, cells, microcircuits, and brain regions [46, 51]. The Manna alphabet has a simple structure for understanding the brain. It is an appropriate programming system for the human brain and could be the first one designed by humans. In this way, the alphabet conforms to the structure of information coding in human memory. Therefore, in the coding process, signals received by the brain from the environment are translated into special characters, which are arranged based on the external information received from the environment. A recorded strand of information is divided into four parts, and each part includes a memory unit (three memory codes) (Figure 6). The two strands of memory, akin to strands of deoxyribonucleic acid (DNA), run in reciprocal connection and are thus parallel. The two strands are coupled together to make a crucial structure: the base strand is formed in the fundamental memory, and its couple in the central memory. Together they make the memory coding strand that enables the brain to advance learning and behavior in humans. In this structure, information processing is done in a simple way by the human brain; therefore, it could be a fast way to distinguish problems and issues. Figure 7 shows the overall architecture and function of human memory in the Manna model. According to this approach, recall of internal information is carried out in five general stages: A and B are potential levels, and D and E are active levels of the memory.

Figure 6.

The pattern of human memory coding. A, B, C, and D are the memory units.

Figure 7.

The functional levels of the Manna model.

In general, this model deciphers eight levels of human memory function. The basic algorithm of human memory is an exponential function; this function is transcendental and characterizes human memory coding. The function is as follows:

$$N = e^{i} - 1 \tag{1}$$
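For concreteness, Eq. (1) can be evaluated numerically; N is the number of connected memory units and i the information received, as defined below. This is a purely illustrative sketch, treating i as a nonnegative real quantity.

```python
import math

# Illustrative evaluation of the memory function N = e**i - 1:
# no information (i = 0) yields no memory units (e**0 - 1 = 0),
# and N grows exponentially as the information i increases.
def memory_units(i: float) -> float:
    return math.e ** i - 1

for i in range(4):
    print(i, round(memory_units(i), 4))
```

The exponential form means each additional unit of information multiplies, rather than adds to, the number of possible memory-unit connections.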

where N is the number of memory units connected in different possible ways, e is Napier's constant, i is the information being received, and the −1 is the part of the expression that accounts for all possibilities.

## 4. Conclusion

The fundamental memory in animals is the ability otherwise known as instinct. Compared with fundamental memory in humans, animal instinct is primitive: animal species have inherent intelligence, but the quality and quantity of their information coding differ from those of humans. Once the function of fundamental memory is understood, most problems and ambiguities about memory, including short-term, long-term, and working memory and the learning process, can be answered. For instance, in the Manna model, learning is an activity described as the recall of internal information from intrinsic memory to central memory. It is said that all information and data are in the atmosphere and we have to discover them, but it is likely that internal information resides in the fundamental memory. Since the learning process is a recall of information from the fundamental memory, a connection between the fundamental and central memory is required. This connection is made by the coupling process, which creates memory units and therefore leads to the formation of a memory coding strand. Indeed, in the Manna model, the memory unit is a unit of recalled information. In this model, the coupling process is a turning point in the neural circuit and can be the basis of memory function. Also, like the structure of deoxyribonucleic acid (DNA), memory is formed into a spiral-train structure. According to Einstein's general relativity, the observed gravitational attraction between masses results from the warping of space and time by those masses. Human memory is made of U-shaped components convoluted together; the curvature of the U-shaped components greatly increases the efficiency of the structure and function of memory. The two strands of memory are coupled together to make an applied and unique architecture, which in human beings supports advanced learning and behavior.

The Manna model is a simple and applied memory model that clearly explains the construction and function of memory. This model provides a functional framework to distinguish memory function and therefore discover a basic algorithm for memory in the human brain. Consequently, using an applied and simple model, scientists can find a simple solution to overcome brain and mind disorders, especially Alzheimer's disease (AD).

One of the important candidate solutions for Alzheimer's disease is music therapy, which scientists are investigating. Music utilizes a large variety of basic brain functions; it is closely tied to emotion and seems to be advantageous to survival in line with Darwinian natural selection [43]. It is sometimes claimed that swimming is the best exercise one can do, since it requires working nearly every group of muscles [43]. Music can be thought of as the brain's analog of swimming [43]. In its most basic and passive form, it exercises timing functions, matches patterns, and makes predictions [43]. Just as swimming is the best exercise to recover and retrain the body's muscles, music has the ability to retrieve and recall the brain's internal information. It is necessary to design a technique for music programming over specific time periods. This technique pushes the brain to manage the fundamental memory for the recall of information without drugs or surgery. According to Meshk's theory, a simple technique dubbed "special music programming" is organized to retrieve memory and mind. This technique is made up of two sections: harmonic music compositions and repetitive courses. Synchronicity of body muscle movement is a basic principle in swimming; similarly, for a particular person, the harmony of the music and the scheduling of repetition times are key factors that have been ignored in previous research on music therapy of the mind. These important principles may be dubbed spatial and temporal equivalence (for a particular person). The repetition of harmonic music in


certain time periods is a turning point in stimulating the mind to recall information in Alzheimer's disease. This simple technique is divided into 72 sections (36 pairs), each section having a specific composition and time. Each pair can be performed in a day (one in the morning and one in the evening). Each period is 40 days, because after every 9 days of the program there is 1 day to rest the mind. Therefore, there are nine repetition periods in the year; in general, the program comprises 324 working days and 36 resting days per year. This simple technique depends on the architecture and functions of human memory and is thus remarkable for retrieving memory in Alzheimer's patients. It can also be applied to prevent the disease in people of different ages. Mozart and Beethoven symphonies can be applied in this way, because these symphonies are compatible with the structure and function of human memory. Kurdish and Iranian traditional music, named Dastgah, including Mahur, Homayun, Nava, Segah, Cahargah, Rast-Panjgah, and Sur, can also be very appealing. According to the Meshk theory, this adaptation is logical, not casual. The influence of music and symphonies on the mind has been investigated in past decades, but it has not been organized into repeating certain courses (for a particular person). That is why previous investigations of music therapy of the mind have not succeeded, and it has remained a cryptic problem until now. The crucial points of this study can be applied in research on music therapy of the mind and in other investigations.
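The day counts claimed above (40-day periods, 324 working days, and 36 resting days per year) can be checked with a small sketch; the function names are illustrative, not from the source.

```python
# Sketch of the "special music programming" calendar described in the text:
# one session pair per program day, with one rest day after every 9 program
# days, giving 36 program days + 4 rest days = one 40-day period; 9 periods
# make up the year.
def build_period():
    days = []
    for _ in range(4):                   # 36 program days = 4 blocks of 9
        days += ["program"] * 9 + ["rest"]
    return days                          # 40 days per period

year = [day for _ in range(9) for day in build_period()]
print(len(build_period()), year.count("program"), year.count("rest"))
# -> 40 324 36
```

This confirms the text's arithmetic: 9 periods × 36 program days = 324 working days, plus 9 × 4 = 36 resting days, covering 360 days of the year.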

## References

[1] Adolphs R. The unsolved problems of neuroscience. Trends in Cognitive Sciences. 2015;19:173-175. DOI: 10.1016/j.tics.2015.01.007

[2] Andersen P. Synaptic integration in hippocampal CA1 pyramids. Progress in Brain Research. 1990;83:215-222. DOI: 10.1016/S0079-6123(08)61251-0

[3] Baddeley A. Working memory: Theories, models, and controversies. Annual Review of Psychology. 2012;63:1-29

[4] Apostolova LG. Alzheimer disease. Continuum: Lifelong Learning in Neurology. 2016;22(2, Dementia):419-434. DOI: 10.1212/CON.0000000000000307. PMID: 27042902

[5] Bromiley GW. The International Standard Bible Encyclopedia. Volume One: A-D. USA: Wm. B. Eerdmans Publishing Co; 1986

[6] Buzsáki G. Neural syntax: Cell assemblies, synapsembles and readers. Neuron. 2010;68:362-385. DOI: 10.1016/j.neuron.2010.09.023

[7] Diakonoff IM. The Cambridge History of Iran: Media. Cambridge: Cambridge University Press; 1985

[8] Eysenck MW, Keane MT. Cognitive Psychology: A Student's Handbook. London: Psychology Press; 2015. DOI: 10.4324/9781315778006

[9] Geman D, Geman S. Opinion: Science in the age of selfies. Proceedings of the National Academy of Sciences of the United States of America. 2016;113:9384-9387. DOI: 10.1073/pnas.1609793113

[10] Hamarash S. Who Are the Kurds? London, UK: YPS-Publishing; 2013

[11] Hebb DO. The Organization of Behavior: A Neuropsychological Approach. New York, NY: John Wiley & Sons; 1949

[12] Herculano-Houzel S. The human brain in numbers: A linearly scaled-up primate brain. Frontiers in Human Neuroscience. 2009;3(31):1-11. DOI: 10.3389/neuro.09.031.2009

[13] Kudrimoti HS, Barnes CA, McNaughton BL. Reactivation of hippocampal cell assemblies: Effects of behavioral state, experience and EEG dynamics. The Journal of Neuroscience. 1999;19:4090-4101

[14] Li M, Liu J, Tsien JZ. Theory of connectivity: Nature and nurture of cell assemblies and cognitive computation. Frontiers in Neural Circuits. 2016;10(34):1-8. DOI: 10.3389/fncir.2016.00034

[15] Lin L, Osan R, Tsien JZ. Organizing principles of real-time memory encoding: Neural clique assemblies and universal neural codes. Trends in Neurosciences. 2006;29:48-57. DOI: 10.1016/j.tins.2005.11.004

[16] Maurer AP, Cowen SL, Burke SN, Barnes CA, McNaughton BL. Organization of hippocampal cell assemblies based on theta phase precession. Hippocampus. 2006;16:785-794. DOI: 10.1002/hipo.20202

[17] Miguel AL. Septal serotonin depletion in rats facilitates working memory in the radial arm maze and increases hippocampal high-frequency theta activity. European Journal of Pharmacology. 2014;734(5):105-113

[18] Nicolelis MAL, Fanselow EE, Ghazanfar AA. Hebb's dream: The resurgence of cell assemblies. Neuron. 1997;19:219-221. DOI: 10.1016/s0896-6273(00)80932-0

[19] Tsien JZ. The memory code. Scientific American. 2007;297:52-59. DOI: 10.1038/scientificamerican0707-52

## Conflict of interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

## Author details

Mohammad Seyedielmabad1,2

1 Kurdistan University, Kurdistan, Iran

2 Tarbiat Modares University, Tehran, Iran


\*Address all correspondence to: msaedi1225@gmail.com

© 2020 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


## References



[1] Adolphs R. The unsolved problems of neuroscience. Trends in Cognitive Sciences. 2015;19:173-175. DOI: 10.1016/j.tics.2015.01.007

[2] Andersen P. Synaptic integration in hippocampal CA1 pyramids. Progress in Brain Research. 1990;83:215-222. DOI: 10.1016/S0079-6123(08)61251-0

[3] Baddeley A. Working memory: Theories, models, and controversies. Annual Review of Psychology. 2012;63:1-29

[4] Apostolova LG. Alzheimer Disease. Lifelong Learning in Neurology. 2016;22 (2, Dementia):419-434. DOI: 10.1212/ CON.0000000000000307, PMID: 27042902

[5] Bromiley GW. The International Standard Bible Encyclopedia. Vol. 1: A-D. USA: Wm. B. Eerdmans Publishing Co; 1986

[6] Buzsáki G. Neural syntax: Cell assemblies, synapsembles and readers. Neuron. 2010;68:362-385. DOI: 10.1016/ j.neuron.2010.09.023

[7] Diakonof IM. The Cambridge History of Iran: Media. Cambridge: Cambridge University Press; 1985

[8] Eysenck MW, Keane MT. Cognitive Psychology, A Student's Handbook. London: Psychology Press; 2015. DOI: 10.4324/9781315778006

[9] Geman D, Geman S. Opinion: Science in the age of selfies. Proceedings of the National Academy of Sciences of the United States of America. 2016;113: 9384-9387. DOI: 10.1073/pnas.1609793113

[10] Hamarash S. Who are the Kurds? London, UK: YPS-Publishing; 2013

[11] Hebb DO. The Organization of Behavior: A Neuropsychological Approach. New York, NY: John Wiley & Sons; 1949

[12] Herculano-Houzel S. The human brain in numbers: A linearly scaled-up primate brain. Frontiers in Human Neuroscience. 2009;3(31):1-11. DOI: 10.3389/neuro.09.031.2009

[13] Kudrimoti HS, Barnes CA, McNaughton BL. Reactivation of hippocampal cell assemblies: Effects of behavioral state, experience and EEG dynamics. The Journal of Neuroscience. 1999;19:4090-4101

[14] Li M, Liu J, Tsien JZ. Theory of connectivity: Nature and nurture of cell assemblies and cognitive computation. Frontiers in Neural Circuits. 2016;10 (34):1-8. DOI: 10.3389/fncir.2016.00034

[15] Lin L, Osan R, Tsien JZ. Organizing principles of real time memory encoding: Neural clique assemblies and universal neural codes. Trends in Neurosciences. 2006;29:48-57. DOI: 10.1016/j.tins.2005.11.004

[16] Maurer AP, Cowen SL, Burke SN, Barnes CA, McNaughton BL. Organization of hippocampal cell assemblies based on theta phase precession. Hippocampus. 2006;16: 785-794. DOI: 10.1002/hipo.20202

[17] Miguel AL. Septal serotonin depletion in rats facilitates working memory in the radial arm maze and increases hippocampal high-frequency theta activity. European Journal of Pharmacology. 2014;734(5):105-113

[18] Nicolelis MAL, Fanselow EE, Ghazanfar AA. Hebb's dream: The resurgence of cell assemblies. Neuron. 1997;19:219-221. DOI: 10.1016/S0896-6273(00)80932-0

[19] Tsien JZ. The memory code. Scientific American. 2007;297:52-59. DOI: 10.1038/scientificamerican0707-52

[20] Tsien JZ. A postulate on the brain's basic wiring logic. Trends in Neurosciences. 2015;38:669-671. DOI: 10.1016/j.tins.2015.09.002

[21] Tsien JZ. Principles of intelligence: On evolutionary logic of the brain. Frontiers in Systems Neuroscience. 2015;9:186. DOI: 10.3389/ fnsys.2015.00186

[22] Tsien JZ. Cre-Lox neurogenetics: 20 years of versatile applications in brain research and counting. Frontiers in Genetics. 2016;7(19):1-7. DOI: 10.3389/fgene.2016.00019

[23] Tsien JZ, Chen DF, Gerber D, Tom C, Mercer EH, Anderson DJ, et al. Subregion- and cell type-restricted gene knockout in mouse brain. Cell. 1996;87:1317-1326. DOI: 10.1016/s0092-8674(00)81826-7

[24] Tsien JZ, Huerta PT, Tonegawa S. The essential role of hippocampal CA1 NMDA receptor-dependent synaptic plasticity in spatial memory. Cell. 1996;87:1327-1338. DOI: 10.1016/s0092-8674(00)81827-9

[25] Tsien JZ, Li M, Osan R, Chen G, Lin L, Wang PL, et al. On initial brain activity mapping of episodic and semantic memory code in the hippocampus. Neurobiology of Learning and Memory. 2013;105:200-210. DOI: 10.1016/j.nlm.2013.06.019

[26] Wallace DJ, Kerr JND. Chasing the cell assembly. Current Opinion in Neurobiology. 2010;20:296-305. DOI: 10.1016/j.conb.2010.05.003

[27] Xie K, Fox GE, Liu J, Lyu C, Lee JC, Kuang H, et al. Brain computation is organized via power-of-two-based permutation logic. Frontiers in Systems Neuroscience. 2016;10:95. DOI: 10.3389/fnsys.2016.00095

[28] Zadok R. The Ethno-Linguistic Character of Northwestern Iran and Kurdistan in the Neo-Assyrian Period. Michigan, USA: Archaeological center; 2002


[29] Zadok R. Lulubi, Country of a People Who Probably Originated in Southern Kurdistan; the Form of the Name is Identical in Both Sumerian and Akkadian, Namely Lulubi and Lulubum Respectively [Internet]. 2005. Available from: www.Iranicaonline.org/article/lulubi [Accessed: 20 November 2011]

[30] Zadok R. Mannea (Neo-Assyrian Mannayu), Name Referring to a Region Southeast of Lake Urmia Centered Around Modern Saqqez [Internet]. 2006. Available from: www.Iranicaonline.org/article/Mannea [Accessed: 16 January 2011]

[31] Emoto M. The Message from Water: The Message from Water Is Telling us to Take a Look at Ourselves. 1. Hado; 2000. ISBN 9784939098000. ThriftBooks (AURORA, IL, USA)

[32] Emoto M. Water Crystal Healing: Music and Images to Restore your Well Being. New York; Hillsboro: Beyond Words; 2006. ISBN 9781582701561

[33] Donna G. Message in the Water. Calgary Herald; 2003. p. S8. Retrieved. The Canadian Press. 2014-08-21

[34] Umezawa H. Advanced Field Theory: Micro, Macro and Thermal Physics. American Institute of Physics: AIP Press; 1993

[35] Fröhlich H. Long-range coherence and energy storage in biological systems. International Journal of Quantum Chemistry. 1968;2(5):641-649. DOI: 10.1002/qua.560020505. Bibcode: 1968IJQC....2..641F

[36] Pockett S. The Nature of Consciousness. Writers Club Press; 2000. ISBN 978-0-595-12215-8


[37] McFadden J. The conscious electromagnetic information (Cemi) field theory: The hard problem made easy? Journal of Consciousness Studies. 2002;9(8):45-60


[38] Fries P et al. Synchronization of oscillatory responses in visual cortex correlates with perception in interocular rivalry. PNAS. 1997;94(23):12699-12704. DOI: 10.1073/pnas.94.23.12699. Bibcode: 1997PNAS...9412699F. PMC 25091. PMID: 9356513

[39] McFadden J. The CEMI field theory: Closing the loop. Journal of Consciousness Studies. 2013;20:153-168

[40] Crick F. The Astonishing Hypothesis: The Scientific Search for the Soul. New York: Charles Scribner's Sons; 1995. ISBN: 0684801582

[41] McFadden J. Synchronous firing and its influence on the brain's electromagnetic field: Evidence for an electromagnetic field theory of consciousness. Journal of Consciousness Studies. 2002;9(4):23-50

[42] Kaliuk A. Review of the book "This Is Your Brain on Music: The Science of a Human Obsession" by Levitin DJ. Dutton; 2012. 314 p.

[43] Kenealy S. Review of the book "This Is Your Brain on Music: The Science of a Human Obsession" by Levitin DJ. Dutton; 2012. 314 p.

[44] Koch C, Hepp K. Quantum mechanics in the brain. Nature. 2006; 440:611-612. DOI: 10.1038/440611a

[45] Singer W. Complexity as substrate for neuronal computations. In: Arber W, Mittelstraß J, Sánchez Sorondo M, editors. Complexity and Analogy in Science: Theoretical, Methodological and Epistemological Aspects. Vatican City: Pontifical Academy of Sciences; 2015. pp. 209-218

[46] Markov NT, Kennedy H. The importance of being hierarchical. Current Opinion in Neurobiology. 2013;23:187-194. DOI: 10.1016/j.conb.2012.12.008

[47] Singer W. Cortical dynamics revisited. Trends in Cognitive Sciences. 2013;17:616-626. DOI: 10.1016/j.tics.2013.09.006

[48] London M, Häusser M. Dendritic computation. Annual Review of Neuroscience. 2005;28:503-532. DOI: 10.1146/annurev.neuro.28.061604.135703

[49] Deco G, Jirsa VK. Ongoing cortical activity at rest: Criticality, multistability, and ghost attractors. The Journal of Neuroscience. 2012;32:3366-3375. DOI: 10.1523/JNEUROSCI.2523-11.2012

[50] Singer W, Lazar A. Does the cerebral cortex exploit high-dimensional, non-linear dynamics for information processing? Frontiers in Computational Neuroscience. 2016;10:99

[51] Jedlicka P. Revisiting the quantum brain hypothesis: Toward quantum (neuro)biology. Frontiers in Molecular Neuroscience. 2017;10:366. DOI: 10.3389/fnmol.2017.00366. PMCID: PMC5681944

## *Edited by Nawaz Mohamudally, Manish Putteeraj and Seyyed Abed Hosseini*

A Brain-Computer Interface (BCI) may sound like plugging a USB cable from a laptop into the human brain and reading off its information. It is not as simple as that. BCI is a multidisciplinary field that has progressed rapidly alongside Artificial Intelligence over the past decades. Having started with the analysis of Electroencephalography (EEG) signals, BCI today offers practical applications in cortical physiology. Although its outcomes are most visible in medicine, in areas such as cognitive assessment, neurofeedback, and neuroprosthetic implants, it also opens remarkable avenues for the business community through machine learning and robotics. Thought-to-text is one example of a hot topic in BCI. It is therefore quite predictable that we will see BCI for individual use, given the current affordability of platforms for less technologically savvy users, as well as BCI integrated into office automation and productivity tools. The current trend is towards popularization for business benefit and, by extension, for society at large; hence the interest in preparing a book on BCI. This book aims to compile and disseminate the latest research findings and best practices on how BCI is expanding the frontiers of knowledge in clinical practice, in understanding the brain itself, and in the underlying technologies.

Published in London, UK © 2020 IntechOpen © Ali Mazraie Shadi / shutterstock

New Frontiers in Brain-Computer Interfaces

*Edited by Nawaz Mohamudally, Manish Putteeraj and Seyyed Abed Hosseini*