**Meet the editor**

Dr. Fazel-Rezai received his B.Sc. and M.Sc. in Electrical Engineering and Biomedical Engineering in 1990 and 1993, respectively. He received his Ph.D. in Electrical Engineering from the University of Manitoba in Winnipeg, Canada, in 1999. From 2000 to 2002, he worked in industry as a senior research scientist and research team manager. He then joined academia, at Sharif University of Technology and later at the University of Manitoba, as Assistant Professor in 2002 and 2004, respectively. Currently, he is the Director of the Biomedical Signal and Image Processing Laboratory at the Department of Electrical Engineering, University of North Dakota, USA. His research interests include biomedical engineering, biomedical signal and image processing, brain-computer interfaces, seizure detection and prediction, neurofeedback, human performance evaluation, and health monitoring based on physiological systems.

### Contents

#### **Preface XI**


Chapter 7 **… Responses 137**
Tiago H. Falk, Kelly M. Paton and Tom Chau

Chapter 8 **Equivalent-Current-Dipole-Source-Localization-Based BCIs with Motor Imagery 155**
Toshimasa Yamazaki, Maiko Sakamoto, Shino Takata, Hiromi Yamaguchi, Kazufumi Tanaka, Takahiro Shibata, Hiroshi Takayanagi, Ken-ichi Kamijo and Takahiro Yamanoi

Chapter 9 **Sources of Electrical Brain Activity Most Relevant to Performance of Brain-Computer Interface Based on Motor Imagery 175**
Alexander Frolov, Dušan Húsek, Pavel Bobrov, Olesya Mokienko and Jaroslav Tintera

Chapter 10 **A Review of P300, SSVEP, and Hybrid P300/SSVEP Brain-Computer Interface Systems 195**
Setare Amiri, Ahmed Rabbi, Leila Azinfar and Reza Fazel-Rezai

Chapter 11 **Review of Wireless Brain-Computer Interface Systems 215**
Seungchan Lee, Younghak Shin, Soogil Woo, Kiseon Kim and Heung-No Lee

Chapter 12 **Brain Computer Interface for Epilepsy Treatment 239**
L. Huang and G. van Luijtelaar

Chapter 13 **Emotion Recognition Based on Brain-Computer Interface Systems 253**
Taciana Saad Rached and Angelo Perkusich

### Preface

Communication and the ability to interact with the environment are basic human needs. Millions of people worldwide suffer from severe physical disabilities and cannot meet even these basic needs. Although they may have no motor mobility, the sensory and cognitive functions of the physically disabled are usually intact. Brain-Computer Interface (BCI) systems allow communication based on a direct electronic interface which conveys messages and commands directly from the human brain to a computer. The majority of BCI systems rely on electroencephalogram (EEG) signals to detect patterns of brain activity that reflect user intent, since EEG equipment is safe, portable, and requires relatively little preparation. BCI technology involves monitoring conscious brain electrical activity via EEG and detecting characteristics of EEG patterns via digital signal processing algorithms. It has the potential to enable the physically disabled to perform many activities, thus improving their quality of life and productivity, allowing them more independence and reducing social costs. The challenge with BCI, however, is to extract the relevant patterns from the EEG signals produced by the brain.

Recently, there has been great progress in the development of novel paradigms for EEG signal recording, advanced methods for processing the signals, new applications for BCI systems, and complete software and hardware packages for BCI applications. In this book, a few recent advances and future prospects are discussed. In the first chapter, important issues concerning end-users, including needs, gaps, paradigm designs, and BCI evaluations, are discussed. Chapter 2 describes different approaches to interconnecting a BCI system with one or more applications, and their advantages and disadvantages. Chapters 3 to 9 discuss different advanced signal processing methods and their applications in BCI systems. The methods include adaptive network fuzzy inference systems (Chapter 3), Bayesian sequential learning (Chapter 4), fractal features and neural networks (Chapter 5), autoregressive models of wavelet bases (Chapter 6), hidden Markov models (Chapter 7), equivalent current dipole source localization (Chapter 8), and independent component analysis (Chapter 9). Chapters 10 and 11 review hybrid approaches and wireless techniques used in BCI systems, respectively. Finally, the last two chapters discuss two applications of BCI systems: epilepsy treatment and emotion detection.

As the editor, I would like to thank all the authors for their contributions. Without them it would not have been possible to produce a quality book that helps expand the field of BCI systems, especially regarding their utilization in real-world applications.

> **Dr. Reza Fazel-Rezai**
> University of North Dakota
> Grand Forks, ND, USA

## **A User Centred Approach for Bringing BCI Controlled Applications to End-Users**

Andrea Kübler, Elisa Holz, Tobias Kaufmann and Claudia Zickler

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/55802

#### **1. Introduction**

In the past 20 years, research on BCI has been increasing almost exponentially. While a great deal of experimentation has been dedicated to offline analysis for improving signal detection and translation, online studies with the target population are less common. Although BCIs are also developed for entertainment, and thus potentially for healthy users, the main focus of BCI applications aiming at communication and control is people with severe motor impairment. There is a great need for translational studies that test BCI at home with the target population. Further, long-term studies with users in the field are required to improve the reliability of BCI control. The user centred approach appears suitable to foster such studies.

In this chapter we will first define the needs and the gaps for bringing BCI to end-users and explain the model of BCI control which guides our interventions. Then we will describe the user-centred design and report first results of studies that adopted this approach for evaluating BCI applications. Those results led us to develop novel BCI components which we then tested with healthy and severely ill end-users. More specifically, we will introduce the optimized communication interface, the face speller, and remotely supervised BCI controlled brain painting with a locked-in patient in the field. We will end the chapter by summarizing the requirements for improvement and the reasons for cautious optimism that the BCI community will succeed in providing end-users in need with reliable and independent BCI controlled applications.

#### **1.1. The needs and the gaps**

In 1973 J.J. Vidal posed the question whether "electrical brain signals" can "be put to work as carriers of information in man-computer communication or for the purpose of controlling such external apparatus as prosthetic devices…?" (p. 157 [1]). Already in those days Vidal answered the question with a clear *Yes*, and time has proved him right. Since the early nineties, when only a few articles on brain-computer interfacing were available, publication activity has increased almost exponentially [2]. We performed a coarse search in Pubmed and PsychInfo with the terms *BCI OR brain computer interface* for 2011 through Sept 12 and received 461 hits. Thus, we may expect at least 700 publications by the end of 2013, indicating unbowed research activity, and thus funding. However, the number of studies including the major target population, namely severely motor impaired individuals, was only 39. Less than 10 percent of the published papers which refer to BCI in one way or another deal with motor impaired individuals, although many authors mention them as the target of their research [3, 4]. This illustrates quite overwhelmingly the gap between prosperous and active research in BCI laboratories with healthy participants and the transfer of the gained knowledge to the main target population of BCI, namely patients with severe motor impairment.

We are thus facing a *translational gap*, i.e. a lack of translational studies that investigate the problems and obstacles that emerge when BCIs are to be applied to severely ill patients in their home environment. Such studies would include a thorough quantitative and qualitative evaluation of BCI. We argue, and will describe, that a user-centred design may be suitable to bridge this gap.

Further, we are confronted with a *reliability gap*, i.e. intra- and inter-individual performance varies tremendously when controlling an application with a BCI in short-term, and even more so in long-term, use. Many studies introduce one or another small improvement in accuracy, bit rate or error rate, the main outcome measures of performance in BCI research. However, only few of them deal with targeted end-users in the field, where multiple sources of artefacts exist, including changes of the health status of the user, such as altered brain responses due to neuronal degeneration in the brain. Thus, the reliability gap can only be bridged with longitudinal studies that include end-users in the field. Such studies need to take into account the several aspects that may contribute to successful BCI control. An integration of these aspects leads to a neuro-bio-psychological, data analytical, and ergonomic model of BCI control (Fig. 1) [5], which will be defined in the next section.

#### **2. A model of BCI control**

A BCI acquires input from the human brain, mostly its electrical activity recorded with electroencephalography (EEG), which is filtered, classified and transferred to an output signal. This output signal relates to the brain response or pattern of the BCI user and conveys the respective intention of the user. Importantly, the user receives feedback of his or her action; thus, BCIs imply a closed loop between the system and the user. The output signal can be used to control an application, ideally one that meets the desire of the user. Four aspects can be identified that contribute to BCI control: (1) individual characteristics of the BCI user, (2) characteristics of the BCI, (3) type of feedback and instruction, and (4) the BCI-controlled application [5]. The individual characteristics of the user include psychological, physiological and neurobiological factors. For example, visuo-motor coordination and motivation have been identified as predictors of performance with BCIs controlled by sensorimotor rhythms [6] and event-related potentials [7]. Better inhibitory control, i.e. the ability to allocate attention and inhibit distracting stimuli, measured as heart rate variability, was related to better ERP-BCI performance [8]. The amplitude of the SMR peak at rest and the P300 amplitude evoked in an auditory oddball paradigm were also related to performance with the respective BCI [9, 10]. Further, the location and extent of neuronal loss due to accident or disease may deteriorate performance. Besides the hardware used, the software components, namely the classifier of the input signal, further determine BCI control (for review [11]). The common spatial pattern technique and stepwise linear discriminant analysis have proved to perform well in SMR- and ERP-BCIs [12, 13].

2 Brain-Computer Interface Systems – Recent Progress and Future Prospects


Little research is available on how the type of feedback and instruction provided in a BCI setting may influence performance. From early neurofeedback studies it is known that immediate feedback is superior to delayed feedback, which also held true in a BCI context [14]. It may also be the case that a more ecologically valid feedback in a virtual environment outperforms traditional two-dimensional feedback on a computer screen [15-17]. A quite robust finding across BCI types is that visual feedback is superior to auditory feedback [18-20]. In SMR-based BCI, the instruction to perform motor imagery kinaesthetically leads to increased performance compared to visual motor imagery [21].

Finally, the complexity of the application influences performance. Usually simple spelling tasks are mastered more accurately and faster than environmental control or control of information technology, such as the internet [22, 23].

As can be seen, the model offers multiple toeholds for improvement and user feedback. In the following sections we will introduce novel achievements for BCI that improve and facilitate BCI use and are based on feedback provided by end-users within the user-centred approach. Before we detail the novel approaches, the user-centred design and its application to BCI will be outlined.

#### **3. The user centred design and its application to BCI**

BCI development demands close investigation of the end-users' needs and requirements and of the restrictions that come along with their diseases. These restrictions may range from small artefact contamination of the recorded brain signal up to loss of perception modalities, e.g. loss of ocular control, as is often the case with progression of neurodegenerative diseases. Furthermore, attention allocation may be limited, and long-lasting training sessions may be too demanding. BCIs are required to accommodate such restrictions and to offer appropriate solutions, such as switching to auditory or tactile modalities when vision is impaired. Many of these restrictions are not evident when testing systems with healthy users. Furthermore, a system in daily use has to meet other requirements than a system developed for research purposes only, e.g. with regard to hardware setup, software handling and technical support. Bringing BCI technology to end-users' homes thus inevitably requires involving them in the developmental process.

**Figure 1.** A model of BCI-control comprised of 4 aspects: individual characteristics, BCI characteristics, feedback and instruction, BCI-controlled application. Colours serve for distinction of categories only. Boldness of black arrows indicates possible strength of influence on BCI control [5].

More recently, the potential user of a BCI has come more into the focus of BCI development, and user-centred approaches were adopted [22, 24, 25]. A user-centred approach implies an early focus on users, tasks and environment; the active involvement of users; an appropriate allocation of function between user and system; the incorporation of user-derived feedback into system design; and an iterative process whereby a prototype is designed, tested and modified [26]. The user-centred approach was standardized with the International Organization for Standardization (ISO) 9241-210 (Ergonomics of human-system interaction - Part 210: Human-centred design for interactive systems). According to this approach, three kinds of requirements have to be taken into account: (1) Business requirements: Here, typically, a specific number is set in terms of how many systems should be sold in a defined time frame. Although our face speller and brain painting (see below) have already been adopted by a company (http://www.intendix.com/) and are thus available on the market, these products are not yet suitable for daily use in the field. (2) User requirements and functional specification: BCI requirements need to be specified from a user's point of view, including the functions required to support a user's tasks and the user-system interfaces. Usability goals that must be achieved and the approach for system maintenance at the user's home need to be defined. (3) Technical requirements: It has to be specified how the system will achieve the required functions and what data structure must be available for internal processing for the approach to be successful. Technical constraints need to be defined, such as the maximum data communication speed over a network or the trade-off between good EEG measurement and comfort with regard to the EEG cap. On the basis of these requirements, Zickler and colleagues asked experts in using assistive technology (AT), i.e. people with severe motor impairment, what they would consider the most important requirements for BCI [25]. Those requirements were functionality, independent use, and easiness of use (see section on "User-centred improvements of BCI controlled applications").

Two different approaches to BCI control were the subject of evaluation following these standards: BCIs dependent on modulation of sensorimotor rhythms, referred to as SMR-BCI, and BCIs based on detection of event-related potentials, referred to as ERP-BCI. To better understand the applications and their evaluation, we provide a condensed description of the SMR- and ERP-BCI as implemented for control of the specific applications described below.

#### **3.1. SMR-BCI**


BCIs can be established by detecting an active modulation of sensorimotor rhythms (SMR) over sensorimotor areas of the brain. In a resting state, these rhythms are highly synchronised in the alpha (10-12 Hz) and beta (12-30 Hz) bands. When moving or imagining a movement, these rhythms desynchronise, i.e. the power of these frequency bands can actively be modulated by the user. Thus, SMR modulation constitutes a signal for BCI control [27, 28]. Different classes of motor imagery can be selected depending on a user's individual brain signals and the degrees of freedom that are required for control of an application. In a typical SMR-BCI, users trigger control signals for two classes by imagining movement of either the right or the left hand. Feedback is provided during the imagery tasks to enhance participants' performance, thereby reinforcing correct behaviour. As the hand areas are largely separated in the sensorimotor cortex, the evoked patterns are usually well distinguishable. Importantly, it has been shown that people with amyotrophic lateral sclerosis can utilize such modulations of the SMR to operate a BCI [29]. One of the remaining issues, however, is that a large number of participants are not able to achieve sufficient SMR-BCI performance [7, 9, 30, 31]. BCI systems that do not rely on such active modulations of brain signals are available. The most frequently used system is described in the next section.
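The event-related desynchronisation described above can be quantified as a drop in band power over sensorimotor areas. A minimal sketch in plain NumPy, using a synthetic signal in place of real EEG (the 11 Hz "mu" frequency, amplitudes, and noise level are illustrative assumptions, not values from this chapter):

```python
import numpy as np

def bandpower(x, fs, f_lo, f_hi):
    """Mean periodogram power of x in the band [f_lo, f_hi] Hz."""
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()

fs = 250                                  # sampling rate in Hz
t = np.arange(0, 4, 1.0 / fs)             # 4 s of synthetic "EEG"
rng = np.random.default_rng(0)

# rest: strong 11 Hz mu rhythm; imagery: the rhythm desynchronises (shrinks)
rest = np.sin(2 * np.pi * 11 * t) + 0.1 * rng.standard_normal(t.size)
imagery = 0.3 * np.sin(2 * np.pi * 11 * t) + 0.1 * rng.standard_normal(t.size)

# a simple ERD index: mu-band power during imagery relative to rest
erd_ratio = bandpower(imagery, fs, 10, 12) / bandpower(rest, fs, 10, 12)
print(f"mu-band power, imagery/rest: {erd_ratio:.2f}")
```

A real SMR-BCI would compute such band-power features continuously and per electrode before feeding them to a classifier; the point here is only that the ratio drops well below 1 when the rhythm desynchronises.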

#### **3.2. Event-related potential (P300) BCI**

A typical BCI based on event-related potentials is the so-called P300-Speller, providing muscle-independent communication on a character-by-character basis [32]; for recent reviews: [33] and [34]. A character matrix is displayed on a computer screen and groups of characters (usually rows and columns of the matrix) are highlighted (flashed) in random order. Users focus their attention on the desired field of the matrix (the target) by counting the number of flashes whilst ignoring all other characters (non-targets). This pattern constitutes an oddball paradigm, as target flashes are rare (odd) compared to the large number of non-target flashes. For example, in a 6x6 matrix, one row and one column contain the target character whereas 5 rows and 5 columns are to be ignored. Each stimulus triggers distinct event-related potentials, among which the P300 is usually the most prominent. It is a positive deflection in the EEG which occurs roughly around 300 ms post stimulus. Its latency may vary strongly with paradigms and across individuals (for review [35]). Yet other ERPs are also elicited; therefore a time window of up to 1000 ms post stimulus (typically 800 ms) is recommended to investigate users' individual ERPs (i.e., negative and positive deflections at distinct latencies). The characteristic sequence of event-related potentials is identified for each row and each column. The row and column with the most prominent ERPs are selected and the respective letter appears on the screen. It has been shown that 72.8% of N=81 healthy BCI users were able to communicate with 100% accuracy by means of such an ERP-BCI and that less than 3% could not achieve any control [30]. Importantly, these results transfer to individuals with severe motor impairment, e.g. due to neurodegenerative disease, in that the speller can be utilized as a muscle-independent tool for communication (e.g., [22, 36-39]; for review [40]). Since its first description in 1988, the P300-Speller has been used intensively, further investigated and modified in a plethora of research publications, leading to new applications for communication and device control (for review, e.g. [34]).
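The row/column selection logic of such a speller can be sketched in a few lines. This is a toy simulation: `flash_score` stands in for a single-trial ERP classifier score (an assumed Gaussian model, not a real classifier), and accumulating scores over repeated flash sequences is what makes the noisy target response detectable:

```python
import numpy as np

# a 6x6 speller matrix; suppose the user attends to 'P' (row 2, column 3)
chars = np.array(list("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789")).reshape(6, 6)
target_row, target_col = 2, 3

rng = np.random.default_rng(42)
n_sequences = 10                    # repetitions of all 12 flashes

def flash_score(is_target):
    # hypothetical classifier output: target flashes elicit a P300,
    # shifting the score up by 1.0; every trial carries Gaussian noise
    return (1.0 if is_target else 0.0) + rng.normal(0.0, 0.5)

row_scores = np.zeros(6)
col_scores = np.zeros(6)
for _ in range(n_sequences):        # accumulate scores over sequences
    for r in range(6):
        row_scores[r] += flash_score(r == target_row)
    for c in range(6):
        col_scores[c] += flash_score(c == target_col)

# the row and column with the most prominent (accumulated) response win
selected = chars[row_scores.argmax(), col_scores.argmax()]
print("selected character:", selected)
```

With fewer repetitions or noisier single-trial scores, the wrong row or column can win, which is the accuracy/speed trade-off every P300 speller has to balance.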

#### **4. Evaluation of BCI controlled applications**

The ISO 9241-210 (2010) defines usability as the "extent to which a … product … can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" (ISO 9241-210, 2010, p. 3). Effectiveness refers to how accurately and completely users accomplish the task. Efficiency relates the invested costs, i.e. users' effort and time, to effectiveness. User satisfaction refers to the perceived comfort and acceptability while using the product. Context of use refers to users, tasks, equipment (hardware, software and materials) and the physical and social environments in which a product is used (ISO 9241-210, 2010, p. 2) [22].

To accommodate these aspects when evaluating newly developed BCI-driven applications, a set of measures has been compiled to assess *effectiveness*, *efficiency* and *satisfaction* [22]. Effectiveness refers to how accurately end-users can communicate with the BCI and is operationalized as the number of intended, and thus correct, selections in relation to the total number of selections. This measure is also often referred to as accuracy. Efficiency comprises the amount of information transferred (bit rate), which expresses speed and accuracy in one value, and the workload experienced by the end-user. A measure to assess subjective workload is the NASA task load index (TLX), which quantifies the workload for each task and identifies its sources [41]. Workload is defined in terms of physical, mental, and temporal demands, and performance, effort, and frustration. User satisfaction can be addressed with the Quebec User Evaluation of Satisfaction with assistive technology (QUEST 2.0), which is the only standardized satisfaction assessment tool designed specifically for AT devices [42]. It explicitly allows for deleting inadequate and adding informative questions with respect to a specific AT, so that BCI-specific items could be integrated. Reliability, speed, learnability, and aesthetic design were added to accommodate specific aspects of BCI, and the resulting questionnaire was referred to as Extended-QUEST [22]. Possible ratings range from 1 to 5, with 5 indicating best possible satisfaction.
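The bit rate referred to here is commonly computed with Wolpaw's information-transfer-rate formula, which folds the number of selectable items, the selection accuracy, and the selection speed into a single bits-per-minute figure. A small sketch (the example numbers are illustrative and not taken from the studies reported in this chapter):

```python
import math

def wolpaw_bitrate(n_classes, accuracy, selections_per_min):
    """Information transfer rate in bits/min (Wolpaw's formula)."""
    n, p = n_classes, accuracy
    if p <= 1.0 / n:
        return 0.0                      # at or below chance: no information
    bits = math.log2(n) + p * math.log2(p)
    if p < 1.0:
        bits += (1.0 - p) * math.log2((1.0 - p) / (n - 1))
    return bits * selections_per_min

# e.g. a 36-character speller at 90% accuracy and 2 selections per minute
print(f"{wolpaw_bitrate(36, 0.90, 2.0):.2f} bits/min")
```

Note that a single bit-rate value hides a trade-off: the same figure can result from slow-but-accurate or fast-but-error-prone selection, which is one reason workload is assessed separately.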

As another measure of device satisfaction the ATD PA Device Form was used. The Assistive Technology Device Predisposition Assessment (ATD PA) is a set of questionnaires based on the Matching Person and Technology Model (MPT) of Scherer (2007) [43]. It addresses characteristics of an AT-device and asks respondents to rate their predisposition for using the AT under consideration. The questionnaire rates the AT-person match and the expected support in using the device, in other words the expected technology benefit [44].

As a coarse measure for overall satisfaction with the device, a visual analogue scale (VAS) ranging from 0 to 10 (not at all – absolutely satisfied) was included in the evaluation procedure. An open interview allowed participants to state their opinion about the BCI and its application and recommendations for further development.

To date, three studies with this instrumentation have been performed with severely impaired end-users [22, 45], which we will describe in the following subsections.

#### **4.1. Extended communication**

**3.2. Event-related potential (P300) BCI**

6 Brain-Computer Interface Systems – Recent Progress and Future Prospects

and device control (for review, e.g. [34]).

9241-201, 2010, p. 2) [22].

**4. Evaluation of BCI controlled applications**

A typical BCI based on event-related potentials is the so called P300-Speller, providing muscle independent communication on a character-by-character basis [32]; for recent reviews: [33] and [34]. A character matrix is displayed on a computer screen and groups of characters (usually rows and columns in a matrix) are highlighted (flashed) in random order. Users focus their attention on the desired field of the matrix (the target) by counting the number of flashes whilst ignoring all other characters (non-targets). This pattern constitutes an oddball-para‐ digm as target flashes are rare (odd) as compared to the high amount of non-target flashes. For example, in a 6x6 matrix one row and one column contains the target character whereas 5 rows and 5 columns are to be ignored. Each stimulus triggers distinct event-related potentials among which the P300 usually is the most prominent. It is a positive deflection in the EEG which occurs roughly around 300 ms post stimulus. Its latency may strongly vary with paradigms and across individuals (for review [35]). Yet other ERPs are also elicited, therefore a time window of up to 1000 ms post stimulus (typically 800 ms) is recommended to investigate users' individual ERPs (i.e., negative and positive deflections at distinct latencies). The characteristic sequence of event-related potentials is identified for each row and each column. The row and column with the most prominent ERPs are selected and the respective letter appears on the screen. It has been shown that 72.8% of N=81 healthy BCI users were able to communicate with 100% accuracy by means of such an ERP-BCI and that less than 3% could not achieve any control [30]. Importantly, these results transfer to individuals with severe motor impairment, e.g. due to neurodegenerative disease, in that the speller can be utilized as a muscle independent tool for communication (e.g., [22, 36-39]; for review [40]). 
Since its first description in 1988, the P300-Speller has been used intensively, further investigated and modified in a plethora of research publications leading to new applications for communication

The ISO 9241-201 (2010) defines usability as the "extent to which a … product … can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" (ISO 9241-201, 2010, p. 3). Effectiveness refers to how accurate and complete the users accomplish the task. Efficiency relates the invested costs, i.e. users' effort and time, to effectiveness. User satisfaction refers to the perceived comfort and acceptability while using the product. Context of use refers to users, tasks, equipment (hardware, software and materials) and the physical and social environments in which a product is used (ISO

To accommodate these aspects when evaluating newly developed BCI-driven applications, a set of measures has been compiled to assess *effectiveness*, *efficiency* and *satisfaction* [22]. Effectiveness refers to how accurately end-users can communicate with the BCI and is operationalized as the number of intended, and thus correct, selections in relation to the total number of selections. This measure is also often referred to as accuracy. Efficiency relates this outcome to the invested costs and comprises the information transfer rate and the subjective workload experienced by the users.
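The information transfer rates reported below (in bits per minute) are commonly computed from the number of selectable items and the selection accuracy using Wolpaw's definition. The chapter does not spell the formula out, so the following is a standard sketch with our own function names:

```python
import math

def bits_per_selection(n, p):
    """Wolpaw information transfer per selection (in bits).

    n: number of selectable items (e.g. 36 for a 6x6 speller matrix)
    p: probability of a correct selection (accuracy, 0..1)
    """
    if n < 2 or not (0.0 <= p <= 1.0):
        raise ValueError("need n >= 2 and 0 <= p <= 1")
    b = math.log2(n)
    if p > 0:
        b += p * math.log2(p)
    if p < 1:
        # errors are assumed uniformly spread over the n-1 wrong items
        b += (1 - p) * math.log2((1 - p) / (n - 1))
    return b

def itr_bits_per_minute(n, p, selections_per_minute):
    """ITR in bits/min for a given selection rate."""
    return bits_per_selection(n, p) * selections_per_minute
```

For a 6x6 matrix (36 items) at 100% accuracy, each selection carries log2(36) ≈ 5.17 bits, so an ITR of about 5 bits/min corresponds to roughly one error-free selection per minute.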

Zickler and colleagues investigated the first prototype in which a BCI was integrated into commercially available AT software [22]. Control of the AT was realized by means of the ERP-BCI described above. Participants tested the text entry, emailing and internet surfing options (Fig 2). The oddball paradigm had to be implemented such that the applications provided by the standard software could be controlled: instead of flashing rows and columns, red dots were assigned to each selectable item and then flashed in random order. Participants were able to write a text, send an email and surf the internet for a specific website.

Selection accuracy (*effectiveness*) ranged between 70 and 100% correct responses, and for all participants internet surfing was the most difficult task. The information transfer rate (*efficiency*) was between 4.5 and 8 bits per minute. Experienced workload (*efficiency*) differed considerably between users. While one user rated workload on all dimensions between 9 and 12 (of 100 possible, with 100 being the maximum possible workload), two participants' ratings were always between 34 and 46, indicating moderate workload for all tasks. In one user, who was confronted with the BCI for the first time, workload decreased with every session from 49 to 15, which was encouraging as it demonstrated that workload can be reduced with practice.

**Figure 2.** Emailing and internet surfing with the Qualilife software. Selectable items are indicated with a red frame. The red dots appear randomly at every selectable item. Thus, the to-be-selected item again constitutes a rare target within frequently appearing irrelevant items, and hence the oddball paradigm is realized (Figure 1 from [22] with permission).

*Satisfaction* was high for safety of the device and the professional services, and low for adjustment. With regard to the BCI-specific items, reliability and learnability were rated high, while speed and aesthetic design were rated only moderate. Obstacles for use in daily life were (1) low speed, (2) the time needed to set up the system, (3) handling of the complicated software and (4) the strain that accompanies EEG recordings (washing hair, etc.). Overall satisfaction ranged from 4 to 9, indicating substantial variance and considerable room for improvement. In the interviews, participants stated that the greatest obstacle for use in daily life would be the EEG cap: there should be no cables, no gel, and it should look less eye-catching. The hardware should be contained in one device (instead of an amplifier, a laptop and a screen) and wheelchair control should be integrated. None of the participants could imagine using the BCI in daily life unless it was substantially improved.

The above-described BCI-controlled application already goes beyond simple verbal communication and may constitute a step toward inclusion via the World Wide Web. Some of our patients have been participating in BCI studies for a long time [46] and stated that they would also like to control other, more entertaining applications such as playing games or painting.

#### **4.2. Brain painting**

Together with an artist (Adi Hösle, www.retrogradist.com), the letter matrix controlled by the ERP-BCI was transformed into a painting matrix which allowed the user to select shapes, sizes, colours and contours and to move a brush on a virtual canvas (Fig 3). One participant stated: "Everyone talks about freedom, but the worst oppression is to be locked into my own body. This art form allows me to break from the prison…". With his painting (see Fig 4) he wanted to illustrate that there is a light at the end of the tunnel.

**Figure 3.** Brain Painting matrix. For painting, an object and its shape, location and transparency have to be defined. Only after the selection of "color" is the object transferred to the "canvas". The toolbox at the top of the screen shows the latest selections (from left to right in this figure): grid size (3), brush size (1), transparency of color (100%), object shape (rectangle), color (black). The last square of the toolbox shows the latest selection, which in this example is "black".

**Figure 4.** Painting "Who" by a brain painter with locked-in syndrome.


Four severely motor-impaired potential end-users participated in the evaluation study, which comprised seven daily sessions. In five of those sessions, participants could freely paint pictures of their choice. *Effectiveness* ranged between 80 and 90%, i.e. in 80 to 90% of selections participants selected the item they intended to. With an average of around five bits per minute, the information transfer rate (*efficiency*) was relatively low. This was due to an extended break between the selection of items, introduced to give users sufficient time to think about what to select next ("creative pause"); users explicitly appreciated this adaptation of the selection speed. Workload varied considerably between 20 and 50 and was sometimes due to disease-related physical problems experienced by the users, and thus independent of the specific BCI application. As in the communication application described above, reliability and learnability were rated high (4.2 and 5.0), whereas users were less satisfied with speed, adjustment and dimensions [44]. For two users the ATD PA Device Form indicated a good match between the system and the user (4.3 and 4.2 of 5 possible), but for the other two only 3.4 and 3.8, indicating that the match could be improved [44]. Overall satisfaction ranged between 5 and 8, also leaving room for improvement.

Taken together, users enjoyed painting and painted up to one picture per session. Three users would have liked to use Brain Painting in daily life once or twice a week. They reported high satisfaction with the learnability, ease of use, and reliability of the device. The EEG-cap and system operability clearly required improvement if the BCI application was to be used in daily life [44].

#### **4.3. Gaming**

Four severely disabled end-users, two of them in the locked-in state, evaluated the gaming application *Connect Four* (http://en.wikipedia.org/wiki/Connect\_Four) [45]. Connect Four is an SMR-BCI based prototype enabling end-users to select either a row or a column and to set a coin by regulating their brain activity. In six BCI sessions end-users were trained to regulate their brain activity in copy-tasks (the locations of coins were pre-defined by the experimenter), which were followed by free-mode game playing. *Effectiveness* in the copy-task was low to medium in three of four end-users, with accuracies varying between 47% and 73%; only one end-user, in the locked-in state, achieved high BCI control with up to 80% accuracy. With an ITR ranging between 0.05 and 1.44 bits/min, *efficiency* was low. The end-users rated their subjective workload as moderate (on average between 28 and 52 of 100), with mental and temporal demand contributing most to their workload (*efficiency*). Two end-users reported high frustration, which first increased and then decreased again over the sessions. Nevertheless, the BCI game was accepted well by the end-users. On average, end-users were moderately to highly satisfied with the BCI (*satisfaction*; 3.8 for the total QUEST score and 3.9 for the added BCI items total score; ratings ranging between 1 and 5, with 1 indicating "not satisfied at all" and 5 "very satisfied"). End-users were highly satisfied with *weight*, *safety* and *learnability* (4.3, 4.5 and 4.8). *Reliability* and *speed* were rated moderately (3.5). The main obstacles were the EEG cap and electrodes, the time-consuming and complex adjustment, difficulty handling the BCI equipment and low *effectiveness*. As in the other two BCI-controlled applications, the evaluation by the end-users implied that there is need for improvement. It seems to be more challenging to implement an SMR-BCI in activities of daily living of end-users as compared to an ERP-BCI controlled application [22, 47].
Two end-users (one of them locked-in), however, stated that they could imagine using Connect Four in their daily life. The other end-user in the locked-in state could imagine using the BCI in his daily life provided substantial improvement. The fact that both locked-in end-users were highly motivated throughout the BCI sessions and did not report any frustration, even when BCI control was low, implies the need and hope of these patients that BCI may provide better communication and control opportunities.

Taken together, such evaluation studies are first steps toward bridging the translational gap experienced in BCI research and development. Based on these evaluation results we state that, to date, ERP-BCIs are more effective and efficient for communication and interaction than SMR-BCIs (Table 1). End-users indicated that the speed of the BCI-controlled applications was too low. Users would have liked to use the Brain Painting application several times a week, but none could imagine using the BCI for emailing and internet surfing unless it was substantially improved. Somewhat surprisingly, two end-users could imagine playing Connect Four in daily life despite low control. Table 1 summarizes the evaluation results for all applications.


**Table 1.** Summarized evaluation results for the three applications. Clearly, all of them leave room for improvement. However, end-users would have liked to use the Painting and Gaming applications in their daily life.

#### **5. User-centred improvements of BCI controlled applications**

As outlined above, functionality, independent use, and easiness of use were rated by expert users of assistive technology (AT) as most important for BCI use in daily life. In the next sections we will describe how we addressed and improved these three aspects.

#### **5.1. Functionality**


In an effort to bridge the reliability gap and to address the speed of the BCI, we changed the stimulation mode of the widely used P300 spelling matrix. In the commonly used ERP-BCI, characters are flashed, and attention to one of the characters will usually elicit a distinct P300 [32] and sometimes other ERP components such as the N100 or N200 (e.g., [48-50]). One option to increase the reliability of the system is to enhance the signal-to-noise ratio of the recorded ERPs. It is well known that familiar faces elicit characteristic ERPs, among which the N170 and N400f (f for faces, Figure 5) are very reliable. Thus, instead of flashing the letters of the matrix, we overlaid rows and columns with a famous face (the face of Albert Einstein or Ernesto Che Guevara, [51]). Figure 5 provides a screenshot from such a modified BCI matrix and illustrates the grand average event-related potentials across N=20 healthy participants. Increasing the signal-to-noise ratio by eliciting more target-specific ERPs significantly boosted offline BCI performance. Importantly, these findings were replicated online in a group of possible end-users of BCI with severe motor impairment, e.g. users with amyotrophic lateral sclerosis or spinal muscular atrophy [38]. They benefited to such an extent that even some users who were unable to operate the traditional ERP-BCI reached an online accuracy of 100% with the face stimulation. As such, it was possible to decrease the number of stimulation cycles without negatively affecting performance, i.e. the bit rate was strongly increased. In six online runs, the number of stimulation cycles was decreased from 10 to 6, 3, 2 and 1 (i.e. single-trial) stimulation sequences. Performance of N=9 users with neurodegenerative disease was significantly higher in all runs with the face speller as compared to the classic ERP-BCI. Furthermore, we compared their single-trial performance to the online performance of N=16 healthy participants.
As usual, the end-users' performance was significantly worse than that of the healthy participants in the classic ERP-BCI; for the face speller, however, no difference was found. These results clearly underline how modifications to the system can diminish performance drops in end-user samples. Zhang and colleagues (2012) reported that inversion of faces may further increase the N170 component and thus performance in the BCI task. Face motion, face emotion and face familiarity, however, did not affect BCI performance [38, 52]. We conclude that investigating stimulus material other than the classical character highlighting is a very promising direction for addressing the speed and reliability of the system.
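The signal-to-noise argument behind reducing stimulation cycles can be illustrated numerically: averaging over n epochs attenuates the random EEG background by roughly √n while the time-locked component survives, so a stimulus that evokes a larger single-trial response (as the face stimuli do) needs fewer repetitions. A minimal Monte-Carlo sketch with toy numbers, not real EEG data:

```python
import math
import random

def snr_after_averaging(n_epochs, signal=1.0, noise_sd=5.0,
                        n_samples=200, seed=1):
    """Estimate the SNR of an averaged ERP from simulated epochs.

    Each epoch is a constant 'signal' (the time-locked ERP component)
    plus Gaussian background noise. Averaging n_epochs epochs shrinks
    the residual noise by about sqrt(n_epochs).
    """
    rng = random.Random(seed)
    avg = [0.0] * n_samples
    for _ in range(n_epochs):
        for t in range(n_samples):
            avg[t] += (signal + rng.gauss(0.0, noise_sd)) / n_epochs
    # residual noise left in the average after subtracting the signal
    residual = [x - signal for x in avg]
    noise_rms = math.sqrt(sum(r * r for r in residual) / n_samples)
    return signal / noise_rms
```

With these toy parameters, averaging 64 epochs yields roughly eight times the SNR of a single epoch; conversely, doubling the single-trial signal amplitude buys the same SNR with a quarter of the repetitions.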

**Figure 5.** Left: Instead of flashing letters in the rows and columns, rows and columns are overlaid with the face (Einstein is not shown due to copyright). Right: Averaged evoked potentials in response to targets and non-targets. In the face condition, prominent N170 and N400f components appear in addition to the P300. The ERP amplitude is depicted as a function of time [51].

#### **5.2. Easiness of use**


We developed a so-called optimized communication interface which allows for auto-calibration and word completion and is controlled via a user-friendly graphical interface [47]. After the subject is set up with the electrode cap and connected to the BCI by an expert, the calibration process for parameterizing the classifier can be started by pressing a single button on the screen. No familiarity with technical or scientific details of the BCI is required. Data from calibration are automatically analysed in the background, invisible to the user, who only receives feedback on the successful or unsuccessful outcome of the calibration. In the latter case, calibration can be performed again with one click. If successfully calibrated, communication with the P300-BCI can be initiated with another button press. We tested whether such a user-friendly BCI implementation can be handled independently by naïve users. All healthy subjects (N=19) handled the BCI software completely on their own and stated that the procedure was easy to understand and that they could explain it to a third person. A text completion option significantly decreased communication speed. We conclude that, from a software perspective, a BCI system can easily be integrated into an automated application that allows caregivers, friends or relatives to control such complex systems without prior knowledge at the end-user's home or bedside.

#### **5.3. Independent use**

Finally, to bridge the reliability gap, we implemented BCI-controlled brain painting for long-term use at the home of a 72-year-old locked-in patient diagnosed with amyotrophic lateral sclerosis (ALS) who used to be a painter [53]. The brain painting application, which had been successfully tested and evaluated by healthy subjects [23] as well as patients ([44] and see above), was embedded into an easy-to-use interface enabling use of the application after only a few steps. The family was trained to set up the 8-channel EEG cap and amplifier and to control the brain painting interface. The brain painting software automatically saved the duration of painting time, the number of runs, and the paintings, and transferred them to our lab for remote supervision. After every session, satisfaction was rated. In a separate window, family and caregivers can comment on the session. In doing so, occurring problems can be noticed and solved by our experts via remote internet access. Figure 6 shows the end-user in a brain painting session at her home.

After each session the end-user is asked to rate her satisfaction on a visual analogue scale (VAS) (Figure 7), and after approximately 10 sessions workload and device satisfaction are assessed with the NASA TLX [41] and the Extended QUEST 2.0 [22, 42]. Her responses as well as her data can be observed by our experts remotely to allow for system modifications or other interventions if necessary (e.g. advice to recalibrate the system).

In more than 8 months the end-user has painted in more than 86 BCI sessions with an average painting duration of 66.2 minutes. Satisfaction with the device strongly depended on the functioning of the BCI (Figure 7). When implementing a remote-controlled BCI application, problems of malfunctioning arise which are immediately visible in the satisfaction ratings (e.g., sessions 9 and 17 in Figure 7). Three sources of her dissatisfaction could be identified: in most cases, dissatisfaction was due to technical problems (software/hardware), especially in the first sessions after set-up of the BCI system at the end-user's home; second, to problems on the end-user's side, e.g. low concentration, exhaustion or not being able to realize the desired painting; and third, to poor control (e.g. due to incorrect cap placement or insufficient electrode gel) or loss of control over time (e.g. due to the electrode gel drying).

**Figure 6.** ALS patient at her home, after finishing her brain painting. While painting, the brain painting matrix appears on one screen, while on an additional monitor, placed on the table in the background, she can follow the progress of her painting. The brain painting software is operated by the family or caregivers and requires only a few steps for set-up.

**Figure 7.** Ratings of satisfaction (VAS = visual analogue scale) after each of 86 sessions with the brain painting application, with 0 indicating "not satisfied at all" and 10 indicating "very satisfied". Satisfaction ratings vary strongly between very low satisfaction (ratings between 0 and 3) and very high satisfaction (ratings between 7 and 10). The low ratings in the first 20 sessions were always due to malfunction of the BCI, which was still in the set-up phase. Continuous remote access to these data allowed for in-time modifications to the system by our experts (Holz et al., in preparation).

For this locked-in BCI end-user, too, effectiveness, reliability and easiness of use were the most important aspects of device satisfaction. Additionally, she mentioned professional support, specifically during times in which the system was not running properly. With a mean VAS satisfaction score of 6.2, her overall satisfaction is moderate to high. However, there is high variability, with the lowest satisfaction when the system was not working (early sessions) and when a painting did not turn out as she expected (later sessions). The highest ratings indicate that the system worked properly and that she was satisfied with her painting. Despite initial problems with the BCI, her motivation to continue brain painting has remained high even after more than 80 sessions. The end-user currently paints 2-3 times a week but stated that she would like to paint every day if she could. The limiting factor is the time available to the family for setting up the BCI, but caregivers and friends are now also willing to learn the set-up and to control the application to enable her to paint more often. In conclusion, our results demonstrate that expert-independent BCI use by end-users in the field is possible and illustrate the important role of family and caregivers when transferring BCI technology from the research environment to the end-user's daily life. Figure 8 depicts some of her brain paintings.

**Figure 8.** Example brain paintings of the BCI user with locked-in syndrome. All paintings were painted with the BCI in her daily life, independently of BCI experts' control (with kind permission of the brain painting artist).

#### **6. Conclusions**


Taking these results together, we can state that milestones have been achieved in bringing BCIs to end-users. BCIs were combined with standard assistive technology, set-up of the system including handling of the software was facilitated tremendously, and spelling speed was increased whilst maintaining high accuracy levels by altering the stimulation mode. For one exemplary end-user with severe motor impairment, an application was installed at home such that family and caregivers can set up the system, while maintenance and support are provided remotely. With innovative applications set up at the end-users' homes and long-term studies, first steps have been undertaken to bridge the translational and reliability gaps encountered when bringing BCIs to end-users. The user-centred iterative process between developers and end-users proved successful, and the results are powerful demonstrations that BCIs are coming of age and can face the transfer out of the lab to the end-users' home.

#### **Author details**

Andrea Kübler1\*, Elisa Holz1, Tobias Kaufmann1 and Claudia Zickler2

\*Address all correspondence to: andrea.kuebler@uni-wuerzburg.de, elisa.holz@uni-wuerzburg.de, tobias.kaufmann@uni-wuerzburg.de

1 Institute of Psychology, University of Würzburg, Würzburg, Germany

2 Institute of Medical Psychology and Behavioural Neurobiology, University of Tübingen, Tübingen, Germany

#### **References**


[1] Vidal, J. J. (1973). *Toward direct brain-computer communication.* Annu Rev Biophys Bioeng, 157-180.

[2] Wolpaw, J. R., & Wolpaw, E. Winter. (2012). *Brain-Computer Interfaces: Something new under the sun*, in *Brain-Computer Interfaces: Principles and Practice*, J.R. Wolpaw and E. Winter Wolpaw, Editors. Oxford University Press: New York, USA, 3-12.

[3] Kübler, A., & Kotchoubey, B. (2007). *Brain-computer interfaces in the continuum of consciousness.* Curr Opin Neurol, 643-649.

[4] Kübler, A. *Brain-computer Interfacing: Science Fiction has come true.* Brain, in press.

[5] Kübler, A., et al. (2011). *A model of BCI control*, in *5th International Brain-Computer Interface Conference*. Austria: Graz University of Technology.

[6] Hammer, E. M., et al. (2012). *Psychological predictors of SMR-BCI performance.* Biol Psychol, 80-86.

[7] Kleih, S. C., et al. (2010). *Motivation modulates the P300 amplitude during brain-computer interface use.* Clin Neurophysiol, 1023-31.

[8] Kaufmann, T., et al. (2012). *Effects of resting heart rate variability on performance in the P300 brain-computer interface.* Int J Psychophysiol, 336-41.

[9] Blankertz, B., et al. (2010). *Neurophysiological predictor of SMR-based BCI performance.* Neuroimage, 1303-1309.

[10] Halder, S., et al. *Prediction of auditory and visual P300 brain-computer interface aptitude.* PLoS ONE, in press.


[26] Maguire, M. C. (1998). *User-Centred Requirements Handbook. WP5 D5.3 of the Telematics Applications Project TE-RESPECT: Requirements Engineering and Specification in Telematics.*

[27] Pfurtscheller, G., et al. (1997). *EEG-based discrimination between imagination of right and left hand movement.* Electroencephalogr Clin Neurophysiol, 642-651.

[28] Pfurtscheller, G., & Neuper, C. (1997). *Motor imagery activates primary sensorimotor area in humans.* Neurosci Lett, 65-68.

[29] Kübler, A., et al. (2005). *Patients with ALS can use sensorimotor rhythms to operate a brain-computer interface.* Neurology, 1775-1777.

[30] Guger, C., et al. (2009). *How many people are able to control a P300 brain-computer interface (BCI)?* Neurosci Lett, 94-8.

[31] Halder, S., et al. (2011). *Neural mechanisms of brain-computer interface control.* Neuroimage, 1779-1790.

[32] Farwell, L. A., & Donchin, E. (1988). *Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials.* Electroencephalogr Clin Neurophysiol, 510-523.

[33] Sellers, E. W., Arbel, Y., & Donchin, E. (2012). *BCIs that use P300 Event-Related Potentials*, in *Brain-Computer Interfaces: Principles and Practice*, J.R. Wolpaw and E. Winter Wolpaw, Editors. Oxford University Press.

[34] Kleih, S. C., et al. (2011). *Out of the frying pan into the fire--the P300 BCI faces real-world challenges.* Prog Brain Res, 27-46.

[35] Polich, J. (2007). *Updating P300: an integrative theory of P3a and P3b.* Clin Neurophysiol, 2128-48.

[36] Nijboer, F., et al. (2008). *A P300 brain-computer interface for people with amyotrophic lateral sclerosis.* Clin Neurophysiol, 1909-16.

[37] Sellers, E. W., Vaughan, T. M., & Wolpaw, J. R. (2010). *A brain-computer interface for long-term independent home use.* Amyotroph Lateral Scler, 449-455.

[38] Kaufmann, T., et al. *Face stimuli effectively prevent brain-computer interface inefficiency in patients with neurodegenerative disease.* Clin Neurophysiol, in press.

[39] Hoffmann, U., et al. (2008). *An efficient P300 brain-computer interface for disabled subjects.* J Neurosci Methods, 115-25.

[40] Mak, J. N., et al. (2011). *Optimizing the P300-based brain-computer interface: current status, limitations and future directions.* J Neural Eng, 025003.

[41] Hart, S. G., & Staveland, L. E. (1988). *Development of NASA-TLX (Task Load Index): Results of experimental and theoretical research*, in *Human mental workload*, P.A. Hancock and N. Meshkati, Editors. North-Holland: Amsterdam, 139-183.

[42] Demers, L., Weiss-Lambrou, R., & Ska, B. (2000). *Quebec User Evaluation of Satisfaction with assistive Technology. QUEST version 2.0. An outcome measure for assistive technology devices.* Webster, New York: Institute for Matching Person and Technology.


### **BCI Integration: Application Interfaces**

Christoph Hintermüller, Christoph Kapeller, Günter Edlinger and Christoph Guger

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/55806

#### **1. Introduction**

Many disorders, like spinal cord injury, stroke or amyotrophic lateral sclerosis (ALS), can impair or even completely disable the usual communication channels a person needs to communicate and interact with his or her environment. In such severe cases, a brain-computer interface (BCI) might be the only remaining way to communicate [1]. In a BCI, the brain's electrical activity during predefined mental tasks is analyzed and translated into corresponding actions intended by the user. But even for less severe disabilities, a BCI can improve quality of life by allowing users to control a computer or specially prepared electronic devices, or to stay in contact with friends through social networks and games. P300 evoked potential [2, 3, 4] based BCIs can provide the goal-oriented control needed to operate spelling devices [5] or control computer games [6]. For navigating in space (e.g. moving a computer mouse [7]), controlling the motion and movement of a robot or a wheelchair, steady state visual evoked potential (SSVEP) [8, 9, 10] and motor imagination [12, 13] based BCI paradigms can be used.

All these applications require integrating the BCI with an external software application or device. This chapter discusses the different ways this integration can be achieved, such as transmitting the user's intention to the application for execution, updating the options and related actions available to the user, and integrating visual BCI stimulation and feedback paradigms with the Graphical User Interface (GUI) of external applications. Current developments and efforts to standardize the application interfaces will also be presented.

### **2. General structure of a BCI**

A BCI system consists of several components (figure 1). The first is the biosignal acquisition system, which records the body's signals (like the EEG) used to extract the user's intentions and responses to the presented stimuli and feedback. It consists of a set of electrodes and a dedicated biosignal amplifier, which typically directly digitizes the signals and transmits them to the feature extraction system. The feature extraction processes the signals and analyzes them with respect to specific signals like P300, SSVEP or error potentials [14, 15, 16, 17].

**Figure 1.** A BCI system consists of various components, which acquire, process and classify signals from the user's brain. Other components handle the presentation of dedicated stimuli and the feedback of the classification results. A dedicated mapping component converts them into commands and tasks to be executed by an attached or embedded application or service.

The classification determines the intention of the user based on the extracted features, which may reflect that the user does not intend to communicate at that time. The classification results are converted by a dedicated mapping component or appropriate methods into commands, actions and tasks to be executed by the attached applications, such as a spelling device [2, 3, 4] or robot [10]. In order to enhance the user's response to the presented stimuli and to help assess the system's efficacy, feedback related to the classification results is presented and the stimulation is adapted accordingly. Furthermore, the stimulation unit provides information about the presented stimuli using trigger signals and events, which are used to synchronously process the input signals and extract corresponding features.
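The data flow just described — acquisition, feature extraction, classification and mapping — can be sketched as a chain of small functions. The sketch below is purely illustrative: all component names and the toy signal values are invented, and a real system would process multichannel EEG in real time.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Classification:
    symbol: str        # symbol the classifier believes the user attended to
    confidence: float  # certainty of the decision

def acquire() -> List[float]:
    """Stand-in for the biosignal acquisition system (electrodes + amplifier)."""
    return [0.1, 0.4, 0.9, 0.2]  # one digitized EEG segment

def extract_features(samples: List[float]) -> List[float]:
    """Feature extraction, e.g. baseline correction of an epoch."""
    mean = sum(samples) / len(samples)
    return [s - mean for s in samples]

def classify(features: List[float]) -> Classification:
    """Toy classifier: maps the strongest feature to a symbol."""
    best = max(range(len(features)), key=lambda i: features[i])
    return Classification(symbol="ABCD"[best], confidence=max(features))

def map_to_command(result: Classification) -> str:
    """Mapping component: classification result -> application command."""
    return f"SELECT {result.symbol}"

# One pass through the pipeline of figure 1:
command = map_to_command(classify(extract_features(acquire())))
```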

#### **3. Methods for integrating a BCI with an application**

There exist three basic approaches to interconnect the BCI with a user application. The following sections, 3.1-3.3, discuss the different designs, their advantages and disadvantages. Each section will present some of the currently available BCI systems and frameworks that use that design. Section 3.4 compares the possibilities to establish standardized interfaces for interconnecting an application with the BCI system.

#### **3.1. Direct integration**


The most straightforward approach is to integrate the application within the BCI system. This approach, sketched in figure 1, allows developers to hardcode the symbols and feedback presented. In other words, the conversion of the classification results into application commands, tasks and actions, and the application itself, represent a static addendum to the BCI system. This approach was used for the first proof-of-concept systems, and can still be found in simple spelling devices such as the P300 speller distributed by g.tec medical engineering along with g.HIGHsys, a Highspeed Online Processing block set [18] for Matlab/Simulink™ (Mathworks, USA), and other BCI systems used for demonstration and educational purposes.

Modern BCI frameworks [19] such as OpenViBE [20], BCILAB [21] or xBCI [22] that are based on this design use a module-based approach to allow application developers and designers to integrate their own applications within the existing BCI framework.

The advantage of directly integrating the application within the BCI system is that it can be distributed and used as a compact, all-in-one component. Apart from the need for appropriate acquisition and processing hardware, no additional interfaces or protocols are required.

The downside of this approach is that application developers and designers require some knowledge of the interpretation of the feature signals and classification results and how to convert them into appropriate commands, tasks and actions. Whenever an application must be added, updated, exchanged or removed, the presentation of stimuli and feedback has to be adjusted.

#### **3.2. External executable component**

The limitations of the tight integration approach can be reduced or eliminated by modeling the application as an individual executable that acts as an external component of the BCI. As shown in figure 2, all other components, like feature extraction, classification, and stimulus and feedback presentation, remain inside the core BCI system. This design is used by BCI systems built on the BCI2000 [23] or TOBI [24] platforms, among other examples.

A well-defined interface, such as the TiC output interface [25] used by the TOBI platform, allows applications to receive information about the action the user chose to execute. If supported by the stimulus and presentation components, applications may even update and modify the choices available to the user through the BCI. Within TOBI, the TiD trigger interface [24] can be used by the application to initiate the processing of such change requests.

As a consequence, applications become independent from the BCI and may be connected to and disconnected from the BCI at any time. Both the BCI and the applications can, for example, be adapted to the user's needs independently without affecting each other. On the BCI side, this includes the selection of the acquired biosignals and features, improvements in the algorithms used to assess the user's intentions, and the stimulation and feedback paradigms used. On the other hand, applications may dynamically adapt the available choices to some internal state or provide new commands and actions to the user without the need to modify any of the core BCI components.

As shown in figure 2, converting the feature classification results has to be done by the application. This basically requires that developers and designers of BCI-enabled applications have some knowledge of how to interpret these results with respect to the services, actions and tasks offered by their applications. Propagating changes from the internal state of the application to the user is only possible if the BCI is able to reflect these changes by adapting the options presented to the user accordingly.
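The interplay described above can be sketched as an external application that consumes selections from the BCI's output interface and sends back updated choices. The wire format here (newline-delimited JSON) and the menu content are invented for illustration; platforms such as TOBI define their own message schemas (TiC/TiD).

```python
import json

# Sketch of an application acting as an external BCI component. The message
# format and menu entries below are assumptions made for this example only.

def handle_selection(msg: dict, menu: list) -> list:
    """React to a user selection and return the updated menu of choices."""
    choice = msg["selection"]
    if choice == "LIGHTS":
        # ...toggle the lights here, then offer the next level of options...
        return ["ON", "OFF", "BACK"]
    return menu  # unchanged menu for choices this sketch does not handle

def run(sock):
    """Main loop: read selections from the BCI, push menu updates back."""
    menu = ["LIGHTS", "TV", "DOOR"]
    for line in sock.makefile():
        menu = handle_selection(json.loads(line), menu)
        # Ask the BCI stimulation unit to present the new choices.
        sock.sendall((json.dumps({"update": menu}) + "\n").encode())
```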

**Figure 2.** The application acts as an additional, external executable component of the BCI system. It attaches to the different data and trigger interfaces to receive information about the user's intention and to initiate the adaptation of the stimulus and feedback presentation to its internal states.

#### **3.3. Message based**

The limitations of the approaches described in the previous sections, 3.1 and 3.2, can be overcome by integrating a dedicated interface module or component within the BCI system. The mapping component collects the classification results and converts them to corresponding application control messages, and the connection interface component transmits each of them to the application, where they are interpreted and the requested services, actions or tasks are executed.

Depending on the capabilities of the core components of the BCI and the interface component, the application may request an update of the presented stimuli and feedback based on the application's internal state. Figure 3 shows the bidirectional case, where the application can acknowledge the reception of the transmitted messages and send update requests to the BCI whenever its internal state changes. The interface component of the connected BCI system decodes the corresponding messages and initiates the required changes and updates of the stimulus and feedback presentation. Hence, the paradigms and input modalities the BCI uses do not matter.
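A minimal sketch of the BCI-side interface component in this bidirectional design might look as follows. The message names (SELECT, UPDATE, ACK) are invented for the example and do not correspond to any actual protocol.

```python
# BCI-side sketch of the message-based design in figure 3.

class InterfaceComponent:
    def __init__(self):
        self.stimuli = ["A", "B", "C"]  # what the stimulation unit presents
        self.outbox = []                # messages queued for the application

    def on_classification(self, symbol: str) -> None:
        """Mapping: classification result -> application control message."""
        self.outbox.append(f"SELECT {symbol}")

    def on_message(self, msg: str) -> None:
        """Decode requests coming back from the application."""
        kind, _, payload = msg.partition(" ")
        if kind == "UPDATE":  # the application's internal state changed
            self.stimuli = payload.split(",")
            self.outbox.append("ACK UPDATE")  # acknowledge the reception

iface = InterfaceComponent()
iface.on_classification("B")            # user selected "B"
iface.on_message("UPDATE YES,NO,BACK")  # application requests new stimuli
```

Note that the application never sees features or classifier internals, only the SELECT messages — this is the encapsulation the message-based design provides.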


**Figure 3.** The BCI and the application are loosely coupled through dedicated interface components. An optional BCI overlay module allows embedding the presentation of the BCI stimulation and feedback remotely within the applications when requested by the application.

The message-based approach provides a loose coupling between the BCI system and the attached applications. It is used, for example, by the intendiX™ system (g.tec medical engineering GmbH, Austria), which is further described in section 4. Structural and technical changes applied to the BCI system, such as modifications of the signal acquisition, feature extraction, classification, or stimulus or feedback presentation, have no impact on the attached application. Improvements and modifications of the services, actions and tasks offered by the applications do not require any technical or algorithmic changes to any part of the BCI system. The dedicated interface component, in combination with a well-defined, message-based and standardized protocol, enables applications to provide new features to the user at any time, without modifying any of the BCI components.

In contrast to the approaches described in the previous sections, 3.1 and 3.2, a dedicated mapping component enclosed within the BCI (figure 3) interprets the classification results and converts them to their corresponding application control message. This encapsulation enables developers with little or no expertise in biosignal processing, analysis, classification and interpreting results to develop applications controllable by a BCI.

The resulting decoupling of the application from the BCI can be taken one step further by transparently embedding the BCI within the user interface of any application. This is achieved by overlaying the BCI stimuli and feedback on top of the application's standard user interface, using an overlay module as described in section 4.3.

#### **3.4. Connection design**

The previous sections, 3.1-3.3, described the different ways to integrate the BCI within an application. Independent of these approaches, the dedicated communication and control interface between the BCI and the application can be implemented using two distinct methods. Platforms like TOBI, BCI2000 or the unidirectional extendiX clients described in section 4.1 and in [26] provide dedicated application programming interface (API) libraries that handle all connection and data transfer related issues. An application, as shown in figure 4a, calls the API functions to retrieve information about the user's intention and to initiate the update of the presented stimuli and feedback. All details about the underlying protocol and connection are hidden by the library. This allows modifications and extensions of the protocol at any time without affecting the application, unless the interfaces of the API functions need to be adapted to reflect the changes made.
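The API-library style of figure 4a can be sketched as follows. `BciClient` and its methods are hypothetical stand-ins for whatever client library a platform ships; the point is that the application only sees plain function calls, while the protocol stays hidden.

```python
# Sketch of the API-library design of figure 4a; all names are invented.

class BciClient:
    """Hides all connection and protocol details behind plain function calls."""

    def __init__(self):
        # Simulated queue of selections produced by the BCI.
        self._pending = ["H", "I"]
        self._sent = []

    def get_selection(self):
        """Blocking call in a real library; here it pops a queued selection."""
        return self._pending.pop(0) if self._pending else None

    def update_stimuli(self, options):
        """Asks the BCI to present new options; the wire format stays hidden."""
        self._sent = list(options)

client = BciClient()
text = ""  # the application's own data structure
while (selection := client.get_selection()) is not None:
    text += selection
client.update_stimuli(["DEL", "SPACE"])
```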

A large variety of programming languages exists to implement applications and the API library. If the application is implemented using a different language than the API, a so-called language wrapper library is necessary. Such a wrapper provides access to the BCI client API from within the language used to implement the application.

User data, selections and information about the internal state of the application are stored in dedicated data structures and fields. Hence, it is likely that these differ from the ones used by the API library to receive instructions and to collect update requests to be sent to the BCI. As a consequence, an application has to convert all data to and from its internal data structures before and after calling a BCI client API function. Depending on the intended usage and the related requirements concerning responsiveness, user experience and usage of computational resources, this may cause additional, probably undesired, effects in the application.

**Figure 4.** The interconnection of the BCI and the application can either be encapsulated within a dedicated library a) or be based on a dedicated protocol b).

Figure 4b shows a different way to establish a connection between the BCI system and an application. It is based on a well defined and standardized protocol only. Both the BCI and the application utilize their own, dedicated connection manager to handle the typically network based connection, process all received requests and generate all outgoing messages based on their current state. The connection manager interprets the incoming messages, extracts relevant data and information and converts them directly into the data structures and representations used by the application or the BCI respectively.

Changes to the application's internal state are directly converted into appropriate messages requesting changes and updates to the stimuli, the feedback presented to the user and the conversion of classification results to be returned to the application. Thus, it does not matter which programming language is used to implement the application. Instead of an API library wrapper, a dedicated connection manager has to be developed. This requires that the communication protocol uses well-defined and properly described handshake and communication procedures to avoid unnecessary workarounds, which might prevent the application from being used with BCI systems supporting a different version of the protocol.
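An application-side connection manager for the protocol-only design of figure 4b might be sketched as follows. The handshake, version string and message grammar are invented for illustration; a real protocol would specify all of them precisely, including version negotiation.

```python
# Application-side connection manager sketch for the design of figure 4b.

PROTOCOL_VERSION = "1.0"  # assumed version string, for illustration only

class ConnectionManager:
    def __init__(self, app_state: dict):
        self.app = app_state
        self.ready = False

    def handle(self, raw: str):
        """Interpret one incoming message and produce the reply, if any."""
        kind, _, payload = raw.partition(" ")
        if kind == "HELLO":  # handshake with version check
            if payload != PROTOCOL_VERSION:
                return "ERROR unsupported-version"
            self.ready = True
            return f"HELLO {PROTOCOL_VERSION}"
        if kind == "SELECT" and self.ready:
            # Convert directly into the application's own data structures.
            self.app["spelled"] += payload
            return "ACK"
        return None  # ignore anything else

app = {"spelled": ""}
cm = ConnectionManager(app)
cm.handle("HELLO 1.0")
cm.handle("SELECT A")
```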

### **4. intendiX™**


The previous section discussed three different approaches to interconnect the BCI with user applications, services and devices. This section presents the intendiX™ system [26], which is used by the applications and projects presented in section 5.

The intendiX BCI system was designed to be operated at home by caregivers or the patient's family. It consists of active EEG electrodes to avoid skin abrasion, a portable biosignal amplifier and a laptop or netbook running the software under Windows (see figure 5a). The electrodes are integrated into the cap so the equipment can be mounted quickly and easily. The software lets users view the raw EEG to inspect data quality, and automatically informs the inexperienced user whether the data quality on a specific channel is good or bad.

**Figure 5.** The user who is wearing the EEG cap equipped with active electrodes can run the intendiX system on a laptop a). By default, the intendiX presents a matrix of 50 characters using a layout comparable to a computer keyboard b).

This control is realized by extracting the P300 evoked potential from the EEG data in real time. The characters of the English alphabet, Arabic numbers and icons are arranged in a matrix on a computer screen (see figure 5b). The characters are then highlighted in random order while the user concentrates on the specific character he/she wants to spell. The BCI system is first trained on the P300 responses to several characters, with multiple flashes per character, to adapt to the specific person.

During this training, 5-10 training characters are typically designated for the user to copy. The EEG data is used to calculate the user specific weight vector, which is stored for later usage. Then the software automatically switches into the spelling mode and the user can spell as many characters as desired. The system was tested with 100 subjects who had to spell the word LUCAS after 5 minutes of training. 72 % were able to spell it correctly without any mistake [4].
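The training step can be illustrated with a toy calculation of such a weight vector. P300 spellers typically use linear discriminant analysis over many averaged epochs; the sketch below uses the simplified difference-of-means direction (i.e. LDA with identity covariance) on two invented feature dimensions.

```python
# Toy illustration of computing a user-specific weight vector separating
# target (attended) from non-target epochs. Values are made up.

def mean_epoch(epochs):
    """Element-wise mean over a list of equal-length feature vectors."""
    n = len(epochs)
    return [sum(e[i] for e in epochs) / n for i in range(len(epochs[0]))]

def train_weights(targets, nontargets):
    """Difference-of-means direction: simplified stand-in for LDA."""
    mt, mn = mean_epoch(targets), mean_epoch(nontargets)
    return [a - b for a, b in zip(mt, mn)]

def score(w, epoch):
    """Higher score -> epoch looks more like a P300 response."""
    return sum(wi * xi for wi, xi in zip(w, epoch))

# Two toy feature dimensions (e.g. amplitudes at two post-stimulus latencies):
w = train_weights(targets=[[1.0, 2.0], [1.2, 1.8]],
                  nontargets=[[0.1, 0.2], [0.3, 0.0]])
```

In spelling mode, the stored `w` is applied to each new epoch and the symbol whose flashes accumulate the highest score is selected.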

The speed and accuracy of the classification can be optimized by choosing the appropriate number of flashes manually. A statistical approach could also automatically determine the optimal number of flashes for a desired accuracy threshold. This latter approach could also determine whether the user is paying attention to the BCI system. The statistical approach could have a major advantage: no characters are selected if the user is not looking at the matrix or does not want to use the speller.
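The statistical stopping idea can be sketched as accumulating classifier scores flash by flash and deciding only once one symbol clearly separates from the rest; if no symbol ever does, nothing is selected. The scores and threshold below are invented for the example.

```python
# Sketch of confidence-based early stopping for a P300 speller.

def decide(scores_per_flash, threshold=3.0):
    """scores_per_flash: list of {symbol: score} dicts, one per flash round.

    Returns the selected symbol, or None if no reliable decision was
    reached (e.g. the user is not attending to the matrix)."""
    totals = {}
    for round_scores in scores_per_flash:
        for sym, s in round_scores.items():
            totals[sym] = totals.get(sym, 0.0) + s
        ranked = sorted(totals.values(), reverse=True)
        # Stop early once the leader clearly separates from the runner-up.
        if len(ranked) > 1 and ranked[0] - ranked[1] >= threshold:
            return max(totals, key=totals.get)
    return None

attending = decide([{"A": 2.1, "B": 0.2}, {"A": 1.9, "B": 0.1}])
idle = decide([{"A": 0.3, "B": 0.2}, {"A": 0.2, "B": 0.3}])
```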

The intendiX™ user can perform different actions after spelling: (i) copy the spelled text into an editor; (ii) copy the text into an email; (iii) send the text via text-to-speech facilities to the loudspeakers; (iv) print the text; or (v) send the text via UDP to another computer by selecting a dedicated icon. The intendiX™ system offers two distinct ways to manage this interaction with external software and control various applications [26], described below.

#### **4.1. extendiX**

The first is the unidirectional extendiX protocol. It is accessible through a dedicated closed-source extendiX library that encapsulates the protocol.

For simple control applications, the extendiX batch file starter client allows users to start dedicated batch scripts whenever it receives information about the symbols selected by the user. This approach is suitable for all applications, services and devices that offer dedicated command-line interfaces for controlling their state and executing specific actions.
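The batch starter concept amounts to a lookup from received symbols to command lines. The symbol names and scripts below are made up for illustration; the real client reads its mapping from configuration.

```python
import subprocess

# Sketch of the batch-file-starter idea: each spelled symbol launches a
# registered command line. The mapping below is a hypothetical example.
SYMBOL_TO_COMMAND = {
    "LIGHT": ["lights.bat", "toggle"],
    "TV":    ["tv.bat", "power"],
}

def on_symbol(symbol, run=subprocess.run):
    """Launch the batch script registered for the received symbol, if any."""
    cmd = SYMBOL_TO_COMMAND.get(symbol)
    if cmd is None:
        return None  # unknown symbols are ignored
    return run(cmd)

# For illustration (and tests) we inject a stub instead of spawning a process:
launched = []
on_symbol("LIGHT", run=lambda cmd: launched.append(cmd))
```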

The intendix™ Painting [26] program, a small painting program comparable to Microsoft Windows Paint, allows users to create paintings and other images, as well as store, load and print them.

#### **4.2. intendiX ACTOR protocol**

For complex control tasks, the intendiX™ Application ConTrol and Online Reconfiguration (ACTOR) protocol is provided. Compared to extendiX, it offers a bidirectional, message-based User Datagram Protocol (UDP) connection between the BCI and the application. A short summary of the standardized intendiX ACTOR protocol can be found in [27] and its detailed description is available from [26]. No dedicated API library is available; all applications have to implement their own connection manager, which handles all intendiX ACTOR based communication.

The Interface Unit handles the UDP connections to all simultaneously attached applications, services and devices and converts the classification results to the corresponding UDP message string. This dedicated connection manager processes all connect and disconnect requests and status updates received, prepares the next set of stimuli and feedback modalities to be presented to the user and instructs the corresponding components to adjust their outputs accordingly.
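Conceptually, the Interface Unit maintains a registry of attached clients and fans classification results out over UDP. The sketch below invents its own message strings; the actual ACTOR grammar is specified in [26, 27].

```python
import socket

# UDP fan-out sketch of the Interface Unit: results are serialized to a
# message string and sent to every attached client. Message syntax is
# illustrative only, not the real ACTOR protocol.

class InterfaceUnit:
    def __init__(self):
        self.clients = set()  # (host, port) of attached applications
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def on_request(self, msg, addr):
        """Process connect/disconnect requests from applications."""
        if msg == "CONNECT":
            self.clients.add(addr)
        elif msg == "DISCONNECT":
            self.clients.discard(addr)

    def publish(self, symbol):
        """Send a classification result to all attached clients."""
        payload = f"RESULT {symbol}".encode()
        for addr in self.clients:
            self.sock.sendto(payload, addr)

unit = InterfaceUnit()
unit.on_request("CONNECT", ("127.0.0.1", 9000))
unit.on_request("CONNECT", ("127.0.0.1", 9001))
unit.on_request("DISCONNECT", ("127.0.0.1", 9001))
```

Because UDP is connectionless, the explicit CONNECT/DISCONNECT bookkeeping stands in for the session management a real interface unit would perform.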


with external software and control various applications [26], described below.

command-line interfaces for controlling their state and executing specific actions.

source extendiX library encapsulating the extendiX protocol.

or does not want to use the speller.

28 Brain-Computer Interface Systems – Recent Progress and Future Prospects

**4.1. extendiX**

print them.

**4.2. intendiX ACTOR protocol**

based communication.
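The Interface Unit's conversion of a classification result into the outgoing UDP message can be sketched as follows; the message layout used here is a hypothetical stand-in, since the actual ACTOR message format is defined in [26]:

```python
import socket

def result_to_message(client_id: str, symbol: str) -> bytes:
    """Convert a classification result into a hypothetical ACTOR-style
    XML message string (the real format is specified in [26])."""
    return (f'<actor><client id="{client_id}"/>'
            f'<control symbol="{symbol}"/></actor>').encode("utf-8")

def send_result(sock: socket.socket, clients: dict, symbol: str) -> None:
    """Forward the selected symbol to every attached application,
    service or device over its registered UDP endpoint."""
    for client_id, (host, port) in clients.items():
        sock.sendto(result_to_message(client_id, symbol), (host, port))
```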

**Figure 6.** a) Handshake procedure used on startup by the BCI to identify all active applications and clients. b) Handshake procedure initiated by a starting application or client to register with a running BCI system.

The intendiX ACTOR protocol uses eXtensible Markup Language (XML) formatted message strings to exchange information between the BCI and the attached system. Whenever the BCI system is started, it broadcasts a dedicated hello message to identify the available and active applications, as shown in figure 6a. Each client responds by sending an appropriate acknowledgement message. As soon as the BCI has received this message, it requests from the client the list of commands and services this client provides. The BCI will acknowledge the received list of commands, services and actions and report whether it was able to process it successfully.

A similar handshake procedure is used when a newly started application connects to the BCI system. The only difference is that the BCI acknowledges the hello broadcast sent by the application by requesting the list of available commands, as shown in figure 6b.
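The client side of this registration handshake (figure 6b) can be sketched as follows; all message strings are hypothetical placeholders for the XML messages defined in [26]:

```python
import socket

# Hypothetical placeholder messages; the real XML schemas are defined in [26].
HELLO = b"<hello/>"
CMD_LIST = b'<commands><cmd name="start"/><cmd name="stop"/></commands>'

def register_with_bci(bci_addr, commands=CMD_LIST, timeout=2.0):
    """Client side of the handshake in figure 6b: announce ourselves,
    answer the BCI's request for our command list, and return the
    BCI's final acknowledgement (success or failure report)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(HELLO, bci_addr)          # announce the new client
    request, _ = sock.recvfrom(4096)      # BCI asks for the command list
    sock.sendto(commands, bci_addr)       # send available commands/services
    reply, _ = sock.recvfrom(4096)        # BCI acknowledges the list
    sock.close()
    return reply
```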

Attached clients will receive a control message whenever the user has selected a service, action or task (figure 7a). The content of this message is fully defined by the application, which can ask the BCI to change this message and the related symbols, sounds and sequences presented to the user during stimulation and feedback. If this behaviour is configured along with the definition of a single BCI control element, the BCI will pause the presentation of any stimuli and feedback until the application acknowledges the reception and execution of the control message or requests an update of the BCI user interface.

A single UDP message sent to the BCI may contain several distinct requests that are processed as an atomic batch. The presentation of stimuli and feedback that was paused while sending the last control message is not restarted before the last message within this batch has been processed. Each single message within such a batch may request the change of one single stimulus or feedback element, or contain the content of a whole XML formatted configuration file that describes complex BCI screens and groups of stimuli. The latter is used by the IU to determine which stimuli and feedback modalities should be presented to the user. The detailed description of the configuration file format used is available, along with the definitions of the intendiX ACTOR protocol from [26].
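The atomic character of such a batch can be sketched as follows; the element and attribute names are hypothetical, not the actual ACTOR schema:

```python
import xml.etree.ElementTree as ET

def process_batch(message: str, screen: dict) -> dict:
    """Apply all update requests in one UDP message as an atomic batch:
    either every request succeeds or the screen state is left unchanged.
    Element and attribute names here are illustrative assumptions."""
    root = ET.fromstring(message)
    staged = dict(screen)                 # work on a copy, commit at the end
    for req in root:                      # each child element = one request
        if req.tag != "update":
            raise ValueError(f"unsupported request: {req.tag}")
        staged[req.get("element")] = req.get("value")
    screen.clear()
    screen.update(staged)                 # commit only after all succeeded
    return screen
```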

Whenever the BCI receives an update request, it will return an acknowledge message indicating that it is trying to process and execute the update request. After the update is finished, a message indicating whether the request was executed successfully or failed is returned.

Whenever an application, service or device is terminated, it sends a goodbye message to the BCI, which acknowledges this message by sending a simple bye message. As shown in figure 7b, the same message is used by the BCI to inform connected clients that it will terminate operation.

**Figure 7.** a) Sequences used by the BCI and application to transfer translated results and request updates. b) Sequences used by the application to disconnect from the BCI and by the BCI to dismiss applications on shutdown.

#### **4.3. intendiX SOCI**

For applications, especially virtual reality (VR) applications and remote control of robots, it is desirable to enhance the standard user interface by directly embedding the BCI stimuli (figure 8). The intendiX platform can be configured to remotely display its stimuli and feedback using the intendiX Screen Overlay Control Interface (SOCI) module [26]. The intendiX SOCI system implements a runtime-loadable library based on OpenGL [28]. It is implemented as a dynamic-link library (DLL) for Microsoft Windows and as a shared object for Linux, and can be used by OpenGL based host applications to embed targets for visual stimulation within the displayed scene. The host applications could be virtual reality environments or real-world videos acquired with a camera. Figure 8 presents an example of how to use the intendiX SOCI module to simultaneously control the camera direction and select amongst different actions and objects.


**Figure 8.** Example of how to use the intendiX SOCI module. Different BCI controls flash together within a running video application. In this example, the user uses the five outer controls to steer the camera direction and the inner six controls to handle utensils like spoon, fork and knife.

The intendiX SOCI library is able to generate frequency-coded (f-VEP) or code-based (c-VEP) SSVEP stimuli, and supports single-symbol and row-column based P300 stimulation paradigms. It is initialized and fully controlled by the BCI system using a dedicated UDP based network connection. The application only needs to provide information on the network interface and port to be used when connecting to the BCI system, as well as the screen refresh rate that can reliably be achieved by the user's display system. All other parameters are defined by the BCI system during the startup and initialization phase. The intendiX SOCI module provides a standardized API, which allows the application to handle the stimulation and feedback devices attached to the user system appropriately.
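The reliably achievable refresh rate matters because frame-based stimulation can only realize flicker frequencies tied to whole frame counts. A small sketch of this constraint, assuming one common frame-locked scheme in which a stimulus toggles every n frames (the actual stimulus generation inside SOCI may differ):

```python
def framelocked_frequencies(refresh_hz: float, max_frames: int = 10):
    """Flicker frequencies realizable on a display when a stimulus is
    toggled every n whole frames, i.e. f = refresh / (2 * n).
    This is one common frame-locked f-VEP scheme, shown for illustration."""
    return [refresh_hz / (2 * n) for n in range(1, max_frames + 1)]
```

On a standard 60 Hz screen this yields 30, 15, 10, 7.5, ... Hz, which is why the BCI must know the refresh rate the application can actually sustain.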

Figure 9 shows the call sequence that the application must use to augment its own interface with the BCI controls. This sequence has to be repeated for each display, stimulation and feedback device that the BCI user wishes to use. The init function activates the BCI support for the selected device.

Before every call to the draw function, the application has to ensure that the OpenGL environment is initialized properly to display the BCI controls on top of any other graphical element. This is indicated in figure 9 by the call to the "set transformation" pseudo OpenGL function. After the swap buffer command from OpenGL has been called, the application has to call the displayed function to indicate that the stimuli have been updated.

**Figure 9.** The order in which the init, draw, displayed and reset functions of the intendiX SOCI module have to be called by the application. The draw and displayed functions are called for every screen refresh cycle, which has a duration of 16.6 ms for a standard 60 Hz LCD flat screen.

When the application terminates, it disconnects the displays from the BCI by calling the reset function for each screen previously attached to the BCI through the init function. As indicated in figure 9, the application has to ensure that the interval between two consecutive calls to the displayed API function strictly corresponds to the screen refresh rate proposed by the application during initialization.
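The call order described above and in figure 9 can be sketched as follows; the function names mirror the figure, but the signatures are illustrative stand-ins, not the actual SOCI API (which is documented in [26]):

```python
class SociDevice:
    """Stand-in for the intendiX SOCI interface, mirroring the call
    order of figure 9: init -> [draw, displayed]* -> reset."""
    def __init__(self):
        self.log = []
    def init(self, refresh_hz):   # activate BCI support for this display
        self.log.append(("init", refresh_hz))
    def draw(self):               # render BCI stimuli on top of the scene
        self.log.append("draw")
    def displayed(self):          # called right after the buffer swap
        self.log.append("displayed")
    def reset(self):              # detach the display from the BCI
        self.log.append("reset")

def render_loop(dev, frames):
    """Host application main loop: one draw/displayed pair per refresh."""
    dev.init(60.0)
    for _ in range(frames):
        # ... application draws its own scene, sets transformations ...
        dev.draw()                # overlay the BCI stimuli
        # ... swap_buffers() ...
        dev.displayed()           # report that the stimuli are on screen
    dev.reset()
```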

### **5. Applications**

In section 3, the different concepts used to interconnect BCI systems and user applications were discussed. The application interfaces provided by the intendiX™ system were briefly presented in section 4. This section will list some projects and applications that utilize the intendiX ACTOR protocol and the intendiX SOCI API library to control virtual and robotic avatars and ambient assistance systems (sections 5.1, 5.2 and 5.3). Section 5.4 will present some examples in which the BCI was used to control cooperative multiplayer games such as World of Warcraft by Blizzard Entertainment [29] and social network platforms like Twitter (Twitter, Inc., USA). All of these applications use either the P300 matrix or a SSVEP paradigm with frequency coded stimuli.

#### **5.1. VERE**


The VERE project [30] is concerned with the embodiment of people in surrogate bodies so that they have the illusion that the surrogate body is their own body, and that they can move and control it as if it were their own. Two types of embodiment are considered. The first type is robotic embodiment (figure 10a), where the person is embodied in a remote physical robotic device and controls it through a brain-computer interface. For example, a patient confined to a wheelchair or bed, who is unable to physically move, may nevertheless re-enter the world actively and physically through such remote embodiment. The second type of embodiment (figure 10b) is virtual, where participants enter into a virtual reality with a virtual body representation. The basic and practical goal of this type of embodiment is to explore its use in the context of rehabilitation settings.

The VERE project uses the intendiX ACTOR protocol (section 4.2) to access the BCI output from within the eXtreme Virtual Reality (XVR) environment (VRMedia S.r.l., Pisa, Italy) to control both the virtual and robotic avatars. The BCI is part of the intention recognition and inference component of the embodiment station, which is developed through the VERE project.

The intention recognition and inference unit takes inputs from fMRI, EEG and other physiological sensors, together with access to a knowledge base, to create a control signal, taking into account body movements and facial movements. This output is used to control the virtual representation of the avatar in XVR and to control the robotic avatar. The user receives feedback showing the scene and the BCI control via either the HMD or a display, plus the tactile and auditory stimuli provided by the so-called embodiment station. The intendiX SOCI module is thereby used to embed the BCI stimuli and feedback modalities within video streams recorded by the robot (figure 10c, e) and the virtual environment of the user's avatar.

The user is situated inside the embodiment station, which provides different stimuli and feedback modalities such as visual, auditory and tactile. Figure 10d shows the setup for inducing the illusion of hand movement by mechanically stimulating the flexor and extensor muscles of the hand.

Images courtesy of VERE, Event Lab Universitat de Barcelona, Centre National de la Recherche Scientifique France, PERCRO laboratory Scuola Superiore Sant'Anna Pisa and Institute of Automatic Control Engineering at the Technical University of Munich, 2012.

**Figure 10.** The VERE project aims at dissolving the boundary between the human body and surrogate representations in physical reality (a) and immersive virtual reality (b). One of the key aspects is to develop a brain body computer interface enabling the user to control the movement of his robotic avatar (c), open doors (e) and to provide him with visual (c, e) and tactile feedback on the body movements executed by the avatar (d).

Depending on the selected stimuli, the BCI system offers distinct levels of control. These levels range from high level commands, tasks and actions such as turning on the TV or grasping a can to moving the robot within its environment (figure 10c) or teaching the robot new high level tasks such as opening a door (figure 10e). The message based intendiX ACTOR protocol can switch smoothly between the different control levels and control paradigms.

#### **5.2. ALIAS**

The Ambient Assisted Living (AAL) research programme supports projects that develop technology to compensate for the drawbacks of the aging society by applying modern information and communication technologies (ICTs). The Adaptable Ambient LIving ASsistant (ALIAS) project [31] is one of the projects funded by AAL. It aims to improve the communication of elderly people, thus ensuring a safe and long independent life in their own homes. A mobile robot platform without manipulation capabilities serves as a communication platform for improving the social inclusion of the user by offering a wide range of services, such as web applications for basic communication, multimedia and event search, and games.

**Figure 11.** The ALIAS robot utilizes several different sensors to perceive the user's input and intentions (a). In addition, it supports BCI systems through the intendiX ACTOR protocol and by embedding visual stimuli and feedback modalities using the intendiX SOCI module (b).

The ALIAS robot is equipped with sensing devices including cameras, microphones for speech input and a touch-screen (figure 11a) to perceive the user's input. The robot utilizes different modalities such as audio output, a graphical user interface (GUI) and proactive and autonomous navigation to interact with the user.

A so-called dialog manager ensures that the dialog system can be controlled in a reasonable way. It is the central decision-making unit for the behavior of the ALIAS robot and its interactions with the human user. It manages the interplay between input and output modalities of the ALIAS robot, communicates with all involved modules of ALIAS and controls them.

Besides the touch-screen, which is used to display the GUI, two independent automatic speech recognition (ASR) systems, a keyword spotter and a continuous context search, also operate in parallel. The ALIAS dialog system supports the intendiX ACTOR protocol for receiving input from BCI systems and updating the presented stimuli and feedback online, based on the active state of the robot.

The intendiX SOCI module is used to embed the BCI stimuli within the GUI (figure 11b), aligned to their corresponding buttons. This tight integration of the BCI enables users to easily utilize the ALIAS platform during recovery, for example from a stroke. The BCI interface of ALIAS allows them to navigate through the different menus, start programs, chat with friends using Skype, call and dismiss the robot or issue an emergency call.

#### **5.3. BrainAble**


The BrainAble project [32] conceives, researches, designs, implements and validates an ICT-based Human Computer Interface (HCI). Such an interface is composed of Brain Neural Computer Interface (BNCI) sensors combined with affective computing and virtual environments to address the two main shortcomings faced by people with disabilities. It entails inner and outer components. The inner component aims at providing functional independence for daily life activities and autonomy based on accessible and interoperable home automation. The outer component provides social inclusion through advanced and adapted social network services. The latter component is expected to dramatically improve the user's quality of life.

Within the BrainAble project, the core structures of the intendiX ACTOR protocol were designed, developed and extended. The ACTOR protocol is used to interconnect the BNCI system with the user's living environment. This includes elements such as lighting, shades, heating, ventilation, audio, video services such as radio or TV, intercoms and many more.

It further provides access to social network and online communication services, thereby augmenting the user's social inclusion. The user has access to all of these devices, services and social interaction tools and is able to control them through the BCI system.

#### **5.4. Games and social media**

The intendiX ACTOR protocol, in connection with the intendiX SOCI API, can also be used to control games such as World of Warcraft (WoW) [29] and social media like Twitter (Twitter Inc., USA) or Second Life (Linden Lab, USA).

World of Warcraft is a popular Massively Multiplayer Online Role-Playing Game (MMORPG) in which the player controls an avatar in a virtual environment. The BCI system uses an SSVEP paradigm to control an avatar in WoW [29]. For basic movements, selecting objects or firing weapons, four control icons as shown in figure 12 are required. The bottom three icons are used to move the avatar forward and turn left or right. The fourth icon, the action icon, is located top left. It is used to perform actions like grasping objects or attacking opponents. Stimulation is done on the same 60 Hz LCD display that also renders the game itself.
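The mapping from the four SSVEP classes to avatar commands can be sketched as a simple dispatch table; the class indices and command names below are illustrative assumptions, not taken from [29]:

```python
# Hypothetical mapping of the four SSVEP control icons to avatar commands.
COMMANDS = {
    0: "forward",     # bottom centre icon
    1: "turn_left",   # bottom left icon
    2: "turn_right",  # bottom right icon
    3: "action",      # top left icon: grasp objects or attack opponents
}

def dispatch(ssvep_class: int) -> str:
    """Translate a classifier decision into an avatar command.
    An unrecognized class means no reliable SSVEP response was detected."""
    return COMMANDS.get(ssvep_class, "idle")
```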

Courtesy of g.tec medical engineering GmbH, 2012.

**Figure 12.** The intendiX SOCI module is used in combination with the intendiX ACTOR protocol to control the movements and actions of an avatar within the World of Warcraft multiplayer online game from Blizzard Entertainment, Inc.

Twitter (Twitter Inc.) is a social network that enables the user to send and read messages. The messages are limited to 140 characters and are displayed on the author's profile page. Messages can be sent via the Twitter website or via smart phones and SMS (Short Message Service). Twitter also provides an application programming interface to send and receive messages.


Figure 13a shows a UML diagram of the actions required to use the Twitter service as an example. The intendiX ACTOR protocol is used to interconnect the Twitter interface with the BCI. This system uses a standard P300 spelling matrix (figure 13b), which was extended with the commands required for Twitter. The two top rows contain symbols representing the corresponding Twitter services, and the remaining characters are used for spelling.

**Figure 13.** UML diagram of Twitter (a). A P300 BCI with a Twitter interface mask (b). Screenshot of a Second Life environment (c). The Second Life interface main mask for moving the avatar, climbing, running, flying, teleporting home, displaying a map, showing the search mask, taking snapshots, chatting with other members and managing the Second Life session (d).

Second Life is a free 3D online virtual world that can be accessed through the "Second Life Viewer", which is free client software. A dedicated user account is necessary to participate in Second Life. One of the main activities in Second Life is socializing with other so-called residents. Each resident represents a person in the real world. Furthermore, it is possible to perform different actions like holding business meetings, taking pictures or making movies, attending courses, etc. Communication takes place via text chats, voice chats and gestures. Hence, handicapped people could also participate in Second Life just like any other user if an appropriate interface is available.

To control Second Life, three different interface masks were developed. Figure 13c displays a screenshot of a Second Life scene. The main mask (figure 13d) offers 31 different choices. The control masks, such as the one for 'chatting', provided 55 control elements, and the one for 'searching' 40 selections. Each of the icons represents an actual command to be executed within Second Life. Whenever a certain icon is selected, Second Life is notified to execute this individual action. A dedicated keyboard event generator is thereby used to convert messages based on the intendiX ACTOR protocol into appropriate key strokes. Further details on the BCI integration with Twitter and Second Life, including the results achieved by healthy subjects, are discussed in [33].
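Such a keyboard event generator can be sketched as follows; the message layout and the icon-to-key table are hypothetical, and a real implementation would inject the resulting key strokes through a platform API such as SendInput on Windows:

```python
import xml.etree.ElementTree as ET

# Hypothetical icon-to-keystroke table for the Second Life client.
KEYMAP = {"walk_forward": "w", "fly": "home", "chat": "enter"}

def message_to_keystroke(message: str) -> str:
    """Extract the selected icon from a (hypothetical) ACTOR-style control
    message and look up the keystroke to synthesize for Second Life."""
    icon = ET.fromstring(message).get("icon")
    if icon not in KEYMAP:
        raise KeyError(f"no keystroke bound to icon {icon!r}")
    return KEYMAP[icon]
```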

#### **6. Conclusion**

Current research projects aim to establish BCI systems as assistive technologies for disabled people, thereby helping them interact with their living environment and facilitating social interaction. These efforts rely on properly interfacing the BCI with supporting systems, devices, services and tools. For example, this interface could embody the user in an avatar or robot, as done within VERE, or in a robotic assistant, as implemented within ALIAS. The design and implementation of the interfaces between the BCI and the application have an impact on the flexibility of the resulting assistive system or device and thus on the autonomy and independence of the user. Highly flexible interfaces like intendiX ACTOR and intendiX SOCI make it possible to adapt the BCI to the user's needs while providing a standardized interface for using the BCI as a control and interaction device with a large and constantly growing number of applications, assistive services and devices.

#### **Acknowledgements**

This work was supported in part by the European Union FP7 Integrated Project VERE, grant agreement no. 257695, and BrainAble, funded by the European Community's Seventh Framework Programme FP7/2007-2013 under grant agreement no. 247447. The authors gratefully acknowledge the support of the ALIAS project funded by the German BMBF, the French ANR and the Austrian BMVIT within the AAL-2009-2 strategic objective of the Ambient Assisted Living (AAL) Joint Programme.

#### **Author details**

Christoph Hintermüller, Christoph Kapeller, Günter Edlinger and Christoph Guger

g.tec medical engineering GmbH/Guger Technologies OG, Austria

#### **References**



[26] g.tec medical engineering GmbH, "intendiX", Retrieved September 2012, from http:// www.intendix.com/ and http://www.gtec.at/Products/Complete-Solutions/intendiX-Specs-Features

[13] Blankertz, B, Tomioka, R, Lemm, S, Kawanabe, M, & Müller, K. R. Optimizing spatial

[14] Schalk, G, Wolpaw, J, Mcfarland, D, & Pfurtscheller, G. EEG-based communication: Presence of an error potential," *Clin. Neurophysiol.*, (2000). , 111, 2138-2144.

[15] Parra, L, Spence, C, Gerson, A, & Sajda, P. Response error correction-A demonstra‐ tion of improved human-machine performance using real-time EEG monitoring,"

[16] Blankertz, B, Dornhege, G, Schäfer, C, Krepki, R, Kohlmorgen, J, Müller, K. -R, Kunz‐ mann, V, Losch, F, & Curio, G. Boosting bit rates and error detection for the classifi‐ cation of fast-paced motor commands based on single-trial EEG analysis," *IEEE*

[17] Ferrez, P, Del, J, & Millán, R. You are wrong!-Automatic detection of interaction er‐ rors from brain waves," in *Proc. 19th Int. Joint Conf. Artificial Intell.*, (2005).

[18] g.tec medical Engineering GmBH. (2012). "g.HIGHsys." Retrieved September 2012, from http://www.gtec.at/Products/Software/High-Speed-Online-Processing-under-

[19] Brunner, C, Andreoni, G, Bianchi, L, Blankertz, B, Breitweiser, C, Kanoh, S, Kothe, C, Lecuyer, A, Makeig, S, Mellinger, J, Perego, P, Renard, Y, Schalk, G, Susila, I. P, Ven‐ thur, B, & Müller-putz, G. (2013). BCI Software Platforms. In: Toward Practical BCIs: Bridging the Gap from Research to Real-World Applications, Allison, B.Z., Dunne, S.,

[20] French National Institute for Research in Computer Science and Control (INRIA)IN‐ RIA), Rennes, France, "OpenViBe", Retrieved September (2012). from http://open‐

[21] Swartz Center for Computational NeuroscienceUniversity of California, CA, San Die‐ go, USA, "BCILAB", Retrieved September (2012). from http://sccn.ucsd.edu/wiki/

[22] Department of Electronics and Intelligent SystemsTohoku Institute of Technology, Sendai, Japan, "xBCI", Retrieved September (2012). from http://xbci.sourceforge.net

[23] Wadsworth CenterNew York State Department of Health, Albany, NY, USA,

[24] TOBI Tools for Brain-Computer Interaction projectTOBI", Retrieved September

[25] Realtime bio-signal standardsTobi iC Definition, implementation and scenarios", Re‐ trieved August (2012). from http://www.bcistandards.org/softwarestandards/tic

"BCI2000", Retrieved September (2012). from http://www.bci2000.org

(2012). from http://www.tobi-project.org

Leeb, R., Millan, J., and Nijholt, A. Springer-Verlag Berlin, , 303-331.

filters for robust EEG, *IEEE Signal Processing Magazine*, 25(1) ((2008).

40 Brain-Computer Interface Systems – Recent Progress and Future Prospects

*IEEE Trans. Neural Syst. Rehabil. Eng.*, Jun. (2003). , 11(2), 173-177.

*Trans. Neural Syst. Rehabil. Eng.*, Jun. (2003). , 11(2), 127-131.

Simulink-Specs-Features.

vibe.inria.fr

BCILAB


## **Adaptive Network Fuzzy Inference Systems for Classification in a Brain Computer Interface**

Vahid Asadpour, Mohammad Reza Ravanfar and Reza Fazel-Rezai

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/55989

#### **1. Introduction**

Fuzzy theory provides the basis for Fuzzy Inference Systems (FIS), which are useful tools for data classification, static and dynamic process modeling and identification, decision making, and process control. These characteristics can be realized in different kinds of FIS and applied to Brain Computer Interface (BCI) systems, as discussed in this chapter.

The first kind of FIS is designed based on the ability of fuzzy logic to model human perception. These FIS elaborate fuzzy rules that originate from expert knowledge and are therefore called fuzzy expert systems. Expert knowledge was also used prior to FIS to construct expert systems for simulation purposes. Those expert systems were based on Boolean algebra and were not well suited to adapt to the regressive, intrinsic nature of the underlying process phenomena. In contrast, fuzzy logic allows rules to be introduced gradually into expert simulators in a knowledge-based manner. It also exposes the limitations of human knowledge, particularly the ambiguities in formalizing interactions in complex processes. This type of FIS offers a high semantic degree and good generalization ability. Unfortunately, the complexity of large systems may lead to high ambiguity and insufficient accuracy, which results in poor performance [1].

Another class of modeling tools is based on adaptive, knowledge-based learning from data. This category includes supervised learning, in which the outputs of observations are provided as training data. A numerical performance index, usually based on the mean square error, can be defined for such simulators. Neural networks have become very popular and efficient in this field. Their main characteristic is numerical accuracy, while they also provide a qualitative black-box behavior. The first self-learning FIS was proposed by Sugeno, and it provided a way to design the second kind of FIS [2]. In this case, even if the fuzzy rules are expressed in the form of expert rules, a loss of semantics occurs because the weights are generated directly from the data. These types of simulators are usually named Adaptive Network Fuzzy Inference Systems (ANFIS).

© 2013 Asadpour et al.; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

There are methods for constructing fuzzy structures by using rule-based inference. These methods extract the rules directly from data and can be considered rule generation approaches. Rule generation includes preliminary rule generation and rule adaptation according to input and output data. Automatic rule generation methods have been applied to simple systems with a limited number of variables; such simple systems do not require optimization of the rule base. Complex systems are different: the number of generated rules becomes enormous, and the description of the rules becomes more complex as the number of variables grows. The simulator is easier to interpret if it is defined by the most influential variables, and the system behavior is more comprehensible when the number of rules is smaller. Therefore, variable selection and rule reduction are two important subcategories of the rule generation process, together called structure optimization. A FIS has many more parameters that can also be optimized, including membership functions and rule conclusions. A thorough study of these topics, with their respective advantages and considerations, is provided in the following sections.

In this chapter, several feature extraction and classification methods which could be applied to BCI systems are discussed.

#### **2. Feature extraction**

In the following sections, the features used to compose the feature vectors are discussed. These features provide various views of the EEG signal that can be used in the ANFIS classification system.

#### **2.1. Energy ratio features**

EEG features in a BCI system can be obtained by frequency analysis of the observed data sequence. For example, in steady-state visual evoked potential (SSVEP) BCIs, the frequencies of the oscillating light stimuli must be detected. Frequency-domain analysis gives a clear picture of these changes. Because the spectral content changes during BCI use, energy ratios between different EEG sub-bands can be computed for each channel. It has been shown that BCI-related changes appear in the EEG depending on brain activity and electrode location ([3], [4] and [5]). For instance, to characterize the brain rhythms during BCI use, the alpha (8-13 Hz), beta (13-35 Hz), delta (0-4 Hz), and theta (4-8 Hz) band energy ratios of the spectrogram SPEC(t, f) at time t and frequency f may be calculated as shown in equations (1) to (4) [6]. They express the energy of each defined spectral band relative to the total signal energy:


$$\alpha = \frac{\int_{8}^{13} \mathrm{SPEC}(t, f)\,df}{\int_{0}^{35} \mathrm{SPEC}(t, f)\,df} \tag{1}$$

$$\beta = \frac{\int_{13}^{35} \mathrm{SPEC}(t, f)\,df}{\int_{0}^{35} \mathrm{SPEC}(t, f)\,df} \tag{2}$$

$$\delta = \frac{\int_{0}^{4} \mathrm{SPEC}(t, f)\,df}{\int_{0}^{35} \mathrm{SPEC}(t, f)\,df} \tag{3}$$

$$\theta = \frac{\int_{4}^{8} \mathrm{SPEC}(t, f)\,df}{\int_{0}^{35} \mathrm{SPEC}(t, f)\,df} \tag{4}$$
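As a sketch of how equations (1) to (4) translate into code, the fragment below computes band energy ratios from a spectrogram of a single channel; the sampling rate, window length and synthetic test signal are illustrative assumptions, not values from this chapter:

```python
import numpy as np
from scipy.signal import spectrogram

# Band edges from the text: delta 0-4 Hz, theta 4-8 Hz, alpha 8-13 Hz,
# beta 13-35 Hz. Half-open intervals avoid counting a boundary bin twice.
BANDS = {"delta": (0, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 35)}

def band_energy_ratios(x, fs):
    """Energy of each band relative to the total 0-35 Hz energy."""
    f, t, spec = spectrogram(x, fs=fs, nperseg=fs)  # spec: power, shape (freq, time)
    total = spec[(f >= 0) & (f < 35)].sum()
    return {name: spec[(f >= lo) & (f < hi)].sum() / total
            for name, (lo, hi) in BANDS.items()}

# Illustrative signal: a 10 Hz oscillation in noise, so the alpha ratio dominates.
fs = 256
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * np.arange(5 * fs) / fs) + 0.1 * rng.standard_normal(5 * fs)
ratios = band_energy_ratios(x, fs)
```

Because the four half-open bands partition the 0-35 Hz range, the four ratios sum to one, which is a convenient sanity check on the implementation.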

#### **2.2. Approximate entropy**


Approximate entropy is a recently formulated family of parameters and statistics quantifying regularity (orderliness) in serial data [4]. It has been used mainly in the analysis of heart rate variability [7, 8], endocrine hormone release pulsatility [9], estimation of regularity in epileptic seizure time series data [10], and estimation of the depth of anesthesia [2]. Approximate entropy assigns a non-negative number to a time series, with larger values corresponding to more complexity or irregularity in the data. The EEG signal exhibits a regular and uniform pattern during synchronized, cooperative function of cortical cells; this pattern results in low entropy values. In contrast, concentrated functions and higher levels of brain activity lead to high entropy values. Shannon entropy *H* is defined as:

$$H = -\sum\_{i=1}^{N} P\_i \log\_2 P\_i \tag{5}$$

in which *P<sub>i</sub>* is the average probability that the amplitude of the *i*-th frequency band of the brain rhythm is greater than *r* times the standard deviation, and *N* is the total number of frequency bands. *H* is 0 for a single frequency and 1 for a uniform frequency distribution over the total spectrum. Because of the non-linear characteristics of EEG signals, approximate entropy can be used as a powerful tool in the study of EEG activity. In principle, the accuracy and confidence of the entropy estimate improve as the number of matches of length *m* and *m* + 1 increases. Although *m* and *r* are critical in determining the outcome of approximate entropy, no guidelines exist for optimizing their values. Here, *m* = 3 and *r* = 0.25 could be selected based on an investigation of the original data sequence. Therefore, one dimension of the feature vector is provided.
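A compact implementation of approximate entropy (Pincus's ApEn) makes the role of *m* and *r* concrete; the test signals and the vectorized Chebyshev-distance computation below are illustrative choices:

```python
import numpy as np

def approximate_entropy(x, m=3, r_factor=0.25):
    """Pincus's approximate entropy ApEn(m, r) with r = r_factor * std(x);
    m = 3 and r = 0.25 follow the choices quoted in the text."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    r = r_factor * np.std(x)

    def phi(m):
        # All length-m template vectors x[i : i+m]
        emb = np.array([x[i:i + m] for i in range(N - m + 1)])
        # Chebyshev distance between every pair of templates
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        C = np.mean(d <= r, axis=1)   # fraction of templates within r of each one
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)

# Illustrative signals: an ordered sine has low ApEn, white noise a high ApEn.
rng = np.random.default_rng(1)
apen_regular = approximate_entropy(np.sin(np.linspace(0, 20 * np.pi, 500)))
apen_irregular = approximate_entropy(rng.standard_normal(500))
```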

#### **2.3. Fractal dimension**

Fractal dimension emphasizes the geometric property of the basin of attraction. This dimension reflects the geometrical properties of attractors and can also be computed very quickly [15]. Our goal was to associate each 5-second data segment, treated as a trial, with its corresponding class. To do this, features were extracted from each 1-second segment with 50% overlap, and the sequence of 9 extracted features was taken as the feature vector of a 5-second segment, which was then modeled and classified. In Higuchi's algorithm, *k* new time series are constructed from the signal *x*(1), *x*(2), *x*(3), …, *x*(*N*) under study [3]:

$$x_m^k = \left\{ x(m),\ x(m+k),\ x(m+2k),\ \dots,\ x\left(m+\left\lfloor \frac{N-m}{k} \right\rfloor k\right) \right\} \tag{6}$$

in which *m* = 1, 2, …, *k* and *k* indicate the initial time value and the discrete time interval between points, respectively. For each of the *k* time series *x<sub>m</sub><sup>k</sup>*, the length *L<sub>m</sub>*(*k*) is computed by:

$$L_m(k) = \frac{1}{k}\left[\frac{N-1}{\left\lfloor \frac{N-m}{k} \right\rfloor k} \sum_{i=1}^{\left\lfloor \frac{N-m}{k} \right\rfloor} \left| x(m+ik) - x\bigl(m+(i-1)k\bigr) \right| \right] \tag{7}$$

in which *N* is the total length of the signal *x*. An average length is computed as the mean of the *k* lengths *L<sub>m</sub>*(*k*) (for *m* = 1, 2, …, *k*). This procedure is repeated for each *k* ranging from 1 to *k<sub>max</sub>*, obtaining an average length for each *k*. The slope of the line best fitted to the curve of ln(*L*(*k*)) versus ln(1/*k*) is the estimate of the fractal dimension.
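The construction in equations (6) and (7) and the final log-log slope fit can be sketched directly; *k<sub>max</sub>* and the test signals below are illustrative choices:

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension: slope of ln L(k) versus ln(1/k),
    following equations (6) and (7)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    log_inv_k, log_L = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(1, k + 1):                    # m = 1, ..., k
            n_seg = (N - m) // k                     # floor((N - m) / k)
            if n_seg < 1:
                continue
            idx = (m - 1) + np.arange(n_seg + 1) * k  # x(m), x(m+k), x(m+2k), ...
            L_m = np.abs(np.diff(x[idx])).sum() * (N - 1) / (n_seg * k)
            lengths.append(L_m / k)                  # the 1/k normalization in (7)
        log_inv_k.append(np.log(1.0 / k))
        log_L.append(np.log(np.mean(lengths)))
    slope, _ = np.polyfit(log_inv_k, log_L, 1)       # best-fit line slope
    return slope

# Illustrative check: a straight line has FD 1, white noise FD close to 2.
rng = np.random.default_rng(2)
fd_line = higuchi_fd(np.linspace(0.0, 1.0, 1000))
fd_noise = higuchi_fd(rng.standard_normal(1000))
```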

#### **2.4. Lyapunov exponent**

Lyapunov exponents are a quantitative measure for distinguishing among the various types of orbits based upon their sensitive dependence on the initial conditions, and are used to determine the stability of any steady-state behavior, including chaotic solutions [4]. The reason chaotic systems show aperiodic dynamics is that phase-space trajectories with nearly identical initial states separate from each other at an exponentially increasing rate, captured by the so-called Lyapunov exponent. The Lyapunov exponents can be estimated from the observed time series [5]. The approach is as follows: consider two (usually the nearest) neighboring points in phase space at time 0 and at time *t*, with distances of the points in the *i*-th direction *δX<sub>i</sub>*(0) and *δX<sub>i</sub>*(*t*), respectively. The Lyapunov exponent is then defined by the average growth rate *λ<sub>i</sub>* of the initial distance [6]

$$\frac{\left\|\delta X_i(t)\right\|}{\left\|\delta X_i(0)\right\|} = 2^{\lambda_i t} \quad (t \to \infty) \tag{8}$$

or

$$\lambda\_i = \lim\_{t \to \infty} \frac{1}{t} \log\_2 \frac{\|\delta \, \boldsymbol{X}\_i(t)\|}{\|\delta \, \boldsymbol{X}\_i(0)\|}. \tag{9}$$

Generally, Lyapunov exponents can be extracted from observed signals in two different ways [7]. The first method is based on following the time evolution of nearby points in the state space [17]; it provides an estimate of the largest Lyapunov exponent only. The second method is based on the estimation of local Jacobian matrices and is capable of estimating all the Lyapunov exponents [18]. The vector of all Lyapunov exponents of a particular system is often called its Lyapunov spectrum. The second method was used for Lyapunov vector extraction in this section. An optimized vector size of 7 was chosen, as it led to the best mean classification rate with the support vector machine (SVM) classifier.
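The first method, tracking the divergence of nearby state-space points, can be sketched as follows; the delay-embedding parameters, the logistic-map test signal, and the Rosenstein-style averaging are illustrative assumptions rather than the chapter's exact procedure:

```python
import numpy as np
from scipy.spatial.distance import cdist

def largest_lyapunov(x, dim=3, tau=1, t_max=5, min_sep=50):
    """Estimate the largest Lyapunov exponent (base-2 log per sample, cf.
    equation (9)) by following nearby points in a delay-embedded state space."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    emb = np.array([x[i:i + (dim - 1) * tau + 1:tau] for i in range(n)])
    usable = n - t_max
    d = cdist(emb[:usable], emb[:usable])        # pairwise distances
    for i in range(usable):                      # exclude temporally close points
        d[i, max(0, i - min_sep):i + min_sep + 1] = np.inf
    nn = np.argmin(d, axis=1)                    # nearest neighbour of each point
    div = []                                     # mean log2 separation vs. time
    for t in range(1, t_max + 1):
        sep = np.linalg.norm(emb[np.arange(usable) + t] - emb[nn + t], axis=1)
        div.append(np.mean(np.log2(sep[sep > 0])))
    slope, _ = np.polyfit(np.arange(1, t_max + 1), div, 1)
    return slope

# Chaotic logistic map (largest exponent log2(2) = 1 bit/iteration)
# versus a sampled sine wave, whose nearby trajectories do not diverge.
x_chaos = np.empty(1200)
x_chaos[0] = 0.1
for i in range(1199):
    x_chaos[i + 1] = 4.0 * x_chaos[i] * (1.0 - x_chaos[i])
lam_chaos = largest_lyapunov(x_chaos)
lam_sine = largest_lyapunov(np.sin(0.5 * np.arange(1200)))
```

A positive slope indicates exponential divergence (chaos), while a slope near zero indicates regular dynamics.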

#### **2.5. Kalman feature extractor**


The algorithm discussed in this section is based on Kalman estimation, which is well known in statistical estimation and control theory ([8], [9], and [10]) but perhaps less so in parameter estimation; the next paragraphs therefore explain its function in this particular context. The Kalman filter is essentially a set of mathematical expressions that provides a predictor-corrector estimator. This estimator minimizes the error covariance and is therefore an optimal estimator when appropriate initial conditions are selected. The conditions for optimal estimation are rarely satisfied; however, the estimator performs well in sub-optimal situations. The Kalman estimator is used here for adaptive estimation of the dynamic parameters of the EEG. The estimator reduces the error variance adaptively, and after a period of time a unique estimate is achieved [11].

A Kalman filter computes the response *x* ∈ R<sup>n</sup> for a system defined by the linear difference equation:

$$x_k = A x_{k-1} + B u_k + w_{k-1} \tag{10}$$

in which *x* is the system state, *A* and *B* are the state and input matrices, *u* is the input, and *w* is the process error. *z* ∈ R<sup>m</sup> is the measured value, defined as:

$$\mathbf{z}\_k = H \,\mathbf{x}\_k + \boldsymbol{\upsilon}\_k \tag{11}$$

in which *H* is the output matrix and *v* is the measurement error.

The measurement and process errors are considered independent additive white Gaussian noises. In practice they are time-varying processes, which are considered stationary for simplicity.

If there is no input or process noise, the matrix *A<sub>n×n</sub>* relates the state of the system at the previous step to the current step. *A* is in practice a time-varying matrix, but it is considered constant in the computations. *B<sub>n×1</sub>* relates the control input *u* to the state *x*. *H<sub>m×n</sub>* relates the state *x* to the measurements *z<sub>k</sub>*. *H* is also time-varying but is considered constant in the computations.

Consider the system

$$\mathbf{x}\_{k+1} = \left(A + \Delta A\_k\right)\mathbf{x}\_k + B\mathbf{w}\_k \tag{12}$$

$$y_k = \left(C + \Delta C_k\right) x_k + v_k \tag{13}$$

in which *x<sub>k</sub>* ∈ R<sup>n</sup> is the state vector, *w<sub>k</sub>* ∈ R<sup>q</sup> is the process noise, *y<sub>k</sub>* ∈ R<sup>m</sup> is the measurement vector and *v<sub>k</sub>* ∈ R<sup>m</sup> is the measurement noise. Furthermore, Δ*A<sub>k</sub>* and Δ*C<sub>k</sub>* represent the variations of the parameters. They can be considered as

$$
\begin{bmatrix}
\Delta A\_k \\
\Delta C\_k
\end{bmatrix} = \begin{bmatrix}
H\_1 \\
H\_2
\end{bmatrix} F\_k E \tag{14}
$$

in which *F<sub>k</sub>* ∈ R<sup>i×j</sup> is a real time-invariant matrix that satisfies the condition

$$F_k^T F_k \le I, \quad k \ge 0 \tag{15}$$

and *H*1, *H*2 and *E* are real matrices that define how *A* and *C* elements are affected due to *Fk* variations. These matrices are estimated using a separate recursive least square estimation [12].

The state-space representation of a linear system is much more flexible and useful than the transfer function form because it includes both time-dependent and time-independent systems and also encompasses stochastic and deterministic systems. Furthermore, it makes it possible to evaluate precisely the concepts of observability and controllability, which are useful, for instance, in determining whether the desired unknown parameters of a system can be estimated from the given observations.

A modification of the state estimation algorithm is introduced in this section to overcome the lack of a deterministic and stationary input *w<sub>k</sub>*. The algorithm is based on observations rather than inputs: it estimates the state vector given the observations up to sample *k*. The algorithm will fail if the desired unknown states cannot be recovered from the gathered observations; therefore, the observability of the system states must first be verified. This is performed by determining the rank of the observability matrix over the observation interval *n*<sub>1</sub> ≤ *n* ≤ *n*<sub>2</sub>, defined as [13]

$$O = \sum_{i=n_1}^{n_2} \left[\left(\prod_{k=0}^{i-1} A_k^T\right) H_i^T H_i \left(\prod_{k=0}^{i-1} A_k^T\right)^T\right] \tag{16}$$

If *O* is rank-deficient, then it is not possible to obtain unique estimates of {*x*<sub>*n*1</sub>, …, *x*<sub>*n*2</sub>}. For a time-invariant system, in which *H* and *A* do not vary with time, *O* takes a much simpler form, but in the nonlinear parameter estimator described here this simplification is not available. Assuming the system model is observable, the state-space parameter estimation problem may be stated as follows: given the observations {*y*<sub>0</sub>, …, *y<sub>n</sub>*} and the state-space model (12) and (13), find the optimal estimate of *x<sub>n</sub>*, denoted *x̂*<sub>*n*|*n*</sub>.
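A numerical version of this observability test, for an illustrative two-state system with constant matrices (the chapter's matrices are time-varying and estimated from data), might look like:

```python
import numpy as np

def observability_gramian(A_seq, H_seq):
    """Accumulate equation (16): O = sum_i (prod A_k^T) H_i^T H_i (prod A_k^T)^T."""
    n = A_seq[0].shape[0]
    O = np.zeros((n, n))
    Phi = np.eye(n)                       # running product of A_k^T
    for A_k, H_k in zip(A_seq, H_seq):
        O += Phi @ H_k.T @ H_k @ Phi.T
        Phi = Phi @ A_k.T
    return O

# Made-up constant-velocity system: measuring position makes both states
# observable; measuring velocity alone leaves the position unobservable.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
H_pos = np.array([[1.0, 0.0]])
H_vel = np.array([[0.0, 1.0]])

rank_pos = np.linalg.matrix_rank(observability_gramian([A] * 4, [H_pos] * 4))
rank_vel = np.linalg.matrix_rank(observability_gramian([A] * 4, [H_vel] * 4))
```

A full-rank Gramian means unique state estimates exist over the window; a rank-deficient one signals that the filter below cannot recover all states.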

If the noise vectors *w<sub>k</sub>* and *v<sub>k</sub>* are assumed to be individually and mutually independent and uncorrelated, with correlation matrices


$$E\left[\boldsymbol{w}\_{i}\boldsymbol{w}\_{j}^{T}\right] = \boldsymbol{Q}\_{i}\boldsymbol{\delta}\_{ij} \tag{17}$$

$$E\left[\boldsymbol{\upsilon}\_{i}\boldsymbol{\upsilon}\_{j}^{T}\right] = \boldsymbol{R}\_{i}\boldsymbol{\delta}\_{ij}\tag{18}$$

$$E\left[w\_i v\_j^T\right] = 0\tag{19}$$

where *δ<sub>ij</sub>* is the Kronecker delta, then the Kalman filter provides the minimum mean square error (MMSE) estimate of *x<sub>k</sub>* as

$$\hat{x}_{n|n} = \arg\min_{\hat{x}_n} E\left\{ \left\| x_n - \hat{x}_n \right\|^2 \mid y_0, \dots, y_n \right\} \tag{20}$$

The recursive Riccati equations are used to estimate the parameters [14]. The algorithm for Kalman feature extraction is as follows [15]:

**1.** Initialize the state estimate $\hat{x}_0^+$ and the error covariance $P_0^+$.

**2.** Project the state ahead:

$$\hat{x}_k^- = A_k \hat{x}_{k-1}^+$$

**3.** Project the error covariance ahead:

$$P_k^- = A_k P_{k-1}^+ A_k^T + Q_k$$

**4.** Compute the Kalman gain:

$$\boldsymbol{K}\_{k} = \boldsymbol{P}\_{k}^{-} \boldsymbol{A}\_{k}^{T} \left( \boldsymbol{A}\_{k} \boldsymbol{P}\_{k}^{-} \boldsymbol{A}\_{k}^{T} + \boldsymbol{R}\_{k} \right)^{-1}$$

**5.** Update the feature vector estimate $\hat{x}_k^+$ with the measurement *y<sub>k</sub>*:

$$\hat{x}_k^+ = \hat{x}_k^- + K_k \left( y_k - A_k \hat{x}_k^- \right)$$

**6.** Update the error covariance:

$$P_k^+ = \left(I - K_k A_k\right) P_k^-$$

The Kalman feature extractor uses the estimates of the dynamic system state equation, with the new information contained in *y<sub>k</sub>* fed back into the system through the Kalman gain. A block diagram of the Kalman filter is given in Figure 1. A very important feature of the Kalman filter is that the error covariance does not depend on the observations. Hence *P*<sub>*k*|*k*−1</sub> can be pre-computed and the accuracy of the filter assessed before the observations are made. In particular, the asymptotic behavior of the filter can be investigated by analyzing the discrete-time Riccati equation.

**Figure 1.** Block diagram of Kalman feature extractor [15].
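A minimal scalar sketch of the predict/update recursion above (with n = m = 1, so a single coefficient plays the role of *A<sub>k</sub>* in both the transition and the measurement, as in the listed steps; the random-walk model and noise levels are illustrative assumptions, not the chapter's EEG model):

```python
import numpy as np

def kalman_track(y, a=1.0, q=1e-3, r=0.1):
    """Scalar Kalman filter; returns the sequence of updated estimates."""
    x, p = 0.0, 1.0                          # initial estimate and error covariance
    out = []
    for yk in y:
        x_pred = a * x                       # project the state ahead
        p_pred = a * p * a + q               # project the error covariance
        k = p_pred * a / (a * p_pred * a + r)  # compute the Kalman gain
        x = x_pred + k * (yk - a * x_pred)   # update the estimate with y_k
        p = (1.0 - k * a) * p_pred           # update the error covariance
        out.append(x)
    return np.array(out)

# Illustrative use: track a level that steps from 0 to 1 under noise.
rng = np.random.default_rng(3)
true_level = np.concatenate([np.zeros(200), np.ones(200)])
y = true_level + 0.3 * rng.standard_normal(400)
x_hat = kalman_track(y)
```

As the text notes, the covariance recursion (and hence the gain) never touches the observations, so `p` and `k` could be pre-computed for the whole record.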

#### **3. Classification**

The support vector machine (SVM) method has been used extensively for the classification of EEG signals [16]. It has been shown that the EEG signal has separable intrinsic vectors that can be used in an SVM classifier. SVM classifiers use discriminant hyperplanes for classification; the selected hyperplanes are those that maximize the classification margin. The distance from the nearest training points is usually measured through a non-linear kernel that maps the problem into a space where a linear solution exists [17]. A Radial Basis Function (RBF) kernel-based SVM is proposed here, in which the Lagrangian optimization is performed using an adjustable ANFIS algorithm. It will be shown that, given the conceptual nature of BCI for patients, the proposed method leads to adjustable soft-decision classification.
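For the baseline RBF-kernel SVM described above (without the ANFIS-driven tuning of the Lagrangian that this chapter proposes), a standard off-the-shelf implementation behaves as described; the synthetic ring-versus-cluster features below are illustrative stand-ins for EEG feature vectors:

```python
import numpy as np
from sklearn.svm import SVC

# Two-class "feature vectors": an inner Gaussian cluster (class 0) versus a
# surrounding ring (class 1). The classes are not linearly separable, which
# is exactly the situation the RBF kernel handles. All data and
# hyperparameters here are illustrative.
rng = np.random.default_rng(4)
n = 200
class0 = rng.normal(0.0, 1.0, size=(n, 2))
angles = rng.uniform(0.0, 2.0 * np.pi, n)
class1 = np.c_[4.0 * np.cos(angles), 4.0 * np.sin(angles)]
class1 += 0.3 * rng.standard_normal((n, 2))

X = np.vstack([class0, class1])
y = np.r_[np.zeros(n), np.ones(n)]

# C trades margin maximization against misclassification, as in the text.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X, y)
acc = clf.score(X, y)
```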

#### **3.1. Classification using nonlinear SVM with RBF kernel**

Training the SVM is a quadratic optimization problem in which the hyperplane is defined as [18]:

$$y_i \left( w \cdot \Phi(x_i, y_j) + b \right) \ge 1 - \xi_i, \quad \xi_i \ge 0,\ i = 1, \dots, l,\ j = 1, \dots, m \tag{21}$$

in which *x<sub>i</sub>* is the input vector, *b* is the bias, *w* is the vector of adapted weights, *ξ<sub>i</sub>* is the class separation, Φ(*x<sub>i</sub>*, *y<sub>j</sub>*) is the mapping kernel, *l* is the number of training vectors, *m* is the number of output vectors, and *y<sub>i</sub>* is the desired output vector. The weight parameters should be chosen so that the margin between the hyperplane and the nearest point is maximized. The only free parameter in SVMs, *C*, controls the trade-off between margin maximization and the amount of misclassification. Optimization of equation (21) yields the optimum *w*, which is the solution of the problem. It can be performed using Lagrange multipliers, defined as [11]:

$$L(w, b, \alpha, \mu) = \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{l}\xi_i - \sum_{i=1}^{l}\alpha_i \left( y_i \left( w \, \Phi(x_i, y_j) + b \right) - 1 + \xi_i \right) - \sum_{i=1}^{l}\mu_i\xi_i \tag{22}$$

in which *C* > *α<sub>i</sub>*, *α<sub>j</sub>* ≥ 0 and *μ<sub>i</sub>* ≥ 0 for *i* = 1, …, *l* and *j* = 1, …, *m*, and *C* is the upper bound for the Lagrange coefficients. The coefficient *C* represents the error penalty, so that higher values yield larger penalties. The Karush-Kuhn-Tucker conditions lead to the optimization of the Lagrange multipliers [19]:

$$\frac{\partial L}{\partial w_v} = w_v - \sum_i \alpha_i y_i x_{iv} = 0 \tag{23}$$

$$\frac{\partial L}{\partial b} = -\sum\_{i} \alpha\_{i} y\_{i} = 0 \tag{24}$$

$$\frac{\partial L}{\partial \xi_i} = C - \alpha_i - \mu_i = 0 \tag{25}$$

subject to


50 Brain-Computer Interface Systems – Recent Progress and Future Prospects


$$y_i \left( w \, \Phi(x_i, y_j) + b \right) - 1 + \xi_i \ge 0, \quad \xi_i \ge 0, \quad \alpha_i \ge 0, \quad \mu_i \ge 0 \tag{26}$$

and,

$$\alpha_i \left( y_i \left( w \, \Phi(x_i, y_j) + b \right) - 1 + \xi_i \right) = 0 \tag{27}$$

Nonlinear kernels usually provide classification hyperplanes that cannot be achieved by linear weighting [20]. With an appropriate kernel, the SVM offers an efficient tool for flexible classification with a highly nonlinear decision boundary. The RBF kernel is used in this section, which is defined as:

$$\Phi(\mathbf{x}, \ y) = \exp\left(\frac{-\left\|\mathbf{x} - \mathbf{y}\right\|^2}{2\sigma^2}\right) \tag{28}$$

in which *σ* is the standard deviation. The proposed feature extraction method is depicted in Figure 2. The outputs are fed to the adjustable ANFIS described in the next section. Two parameters have to be selected beforehand: the trade-off parameter *C* and the kernel standard deviation *σ*. They can be optimized for optimal generalization performance in the traditional way, using an independent test set or *n*-fold cross-validation. It has been suggested that the parameters can be chosen by optimizing an upper bound of the generalization error based solely on the training data [26]. The fraction of support vectors, i.e., the quotient between the number of support vectors and the number of training samples, gives an upper bound on the leave-one-out error estimate, because the resulting decision function changes only when support vectors are omitted. Therefore, a low fraction of support vectors can be used as a criterion for parameter selection.

**Figure 2.** An example of feature vector for radial basis function kernel [15].
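As a concrete sketch of the RBF kernel of Eq. (28) and the dual-form SVM decision rule, the fragment below implements both in plain Python. The support vectors, multipliers, labels, and bias are hand-chosen illustrative values, not trained EEG parameters and not the ANFIS-adjusted quantities proposed in this section.

```python
import math

def rbf_kernel(x, y, sigma=1.0):
    """RBF kernel of Eq. (28): exp(-||x - y||^2 / (2*sigma^2))."""
    sq_dist = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))

def svm_decision(x, support_vectors, alphas, labels, b, sigma=1.0):
    """Dual-form decision f(x) = sum_i alpha_i * y_i * K(x_i, x) + b."""
    return sum(a * y * rbf_kernel(sv, x, sigma)
               for sv, a, y in zip(support_vectors, alphas, labels)) + b

# Toy, hand-chosen support set (hypothetical values):
svs = [(0.0, 0.0), (2.0, 2.0)]
alphas = [1.0, 1.0]
labels = [+1, -1]
print(rbf_kernel((1, 1), (1, 1)))  # K(x, x) = 1 for the RBF kernel
print(svm_decision((0.1, 0.0), svs, alphas, labels, b=0.0))
```

A point near the positive support vector gets a positive score and one near the negative support vector a negative score, which is the soft decision value that the adjustable ANFIS stage would then refine.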

#### **3.2. Adjustable ANFIS optimization**

An ANFIS system can be used with the Sugeno fuzzy model for fine adjustment of SVM classification kernels. Such a framework makes ANFIS modeling more systematic and less reliant on expert knowledge, and therefore facilitates learning and adjustment. The ANFIS structure is shown in Figure 3. In the first layer, all the nodes are adaptive nodes. The outputs of layer 1 are the fuzzy membership grades of the inputs, which are given by [21]:

$$O_i^1 = \mu_{A_i}(x), \ i = 1, 2 \quad \text{and} \quad O_i^1 = \mu_{B_{i-2}}(x), \ i = 3, 4 \tag{29}$$

**Figure 3.** The ANFIS structure.

*O<sub>i</sub>*<sup>1</sup> is the *i*-th output of layer 1, and *μ<sub>Ai</sub>*(*x*) and *μ<sub>Bi-2</sub>*(*x*) are type A and type B arbitrary fuzzy membership functions of nodes *i* and *i* - 2, respectively. In the second and third layers, the nodes are fixed nodes. They are labeled *M* and *N*, respectively, indicating that they perform as a simple multiplier and a normalizer. The outputs of these layers can be represented as:

$$O_i^2 = w_i = \mu_{A_i}(x)\,\mu_{B_i}(x), \quad i = 1, \dots, 4 \tag{30}$$

Adaptive Network Fuzzy Inference Systems for Classification in a Brain Computer Interface http://dx.doi.org/10.5772/55989 53

$$O_i^3 = \overline{w}_i = \frac{w_i}{w_i + w_{i+1}}, \quad i = 1, \dots, 4 \tag{31}$$

which are the so-called normalized firing strengths. In the fourth layer, the nodes are adaptive nodes. The output of each node in this layer is simply the product of the normalized firing strength and a first order polynomial for the first order Sugeno model. The outputs of this layer are given by:

$$O_i^4 = \overline{w}_i f_i = \overline{w}_i (p_i x + q_i y + r_i), \quad i = 1, \dots, 4 \tag{32}$$

in which *f<sub>i</sub>* is the firing rate, *p<sub>i</sub>* is the *x* scale, *q<sub>i</sub>* is the *y* scale, and *r<sub>i</sub>* is the bias of the *i*-th node. In the fifth layer, there is a single fixed node that performs the summation of all incoming signals:


$$O^5 = \sum_{i=1}^{2} \overline{w}_i f_i = \sum_{i=1}^{2} \frac{w_i f_i}{w_i + w_{i+1}} \tag{33}$$

It can be observed that there are two adaptive layers in this ANFIS architecture, namely the first layer and the fourth layer. In the first layer, there are three modifiable parameters {*a<sub>i</sub>*, *b<sub>i</sub>*, *c<sub>i</sub>*}, which are related to the input membership functions; these are the so-called premise parameters. In the fourth layer, there are also three modifiable parameters {*p<sub>i</sub>*, *q<sub>i</sub>*, *r<sub>i</sub>*}, pertaining to the first-order polynomial; these are the so-called consequent parameters.

The task of the learning algorithm for this architecture is to tune all the above mentioned modifiable parameters to make the ANFIS output match the training data. When the premise parameters of the membership function are fixed, the output of the ANFIS model can be written as:

$$f = (\overline{w}_1 x)p_1 + (\overline{w}_1 y)q_1 + \overline{w}_1 r_1 + (\overline{w}_2 x)p_2 + (\overline{w}_2 y)q_2 + \overline{w}_2 r_2 \tag{34}$$

This is a linear combination of the modifiable consequent parameters *p*1, *q*1, *r*1, *p*2, *q*2 and *r*2. When the premise parameters are fixed, the least-squares method can easily identify the optimal values of these parameters after adjustment of the ANFIS weights using the SVM. When the premise parameters are not fixed, the search space becomes larger and the convergence of the training becomes slower.

A hybrid algorithm combining the least squares method and the gradient descent method was adopted to identify the optimal values of these parameters [22]. The hybrid algorithm is composed of a forward pass and a backward pass. The least squares method (forward pass) was used to optimize the consequent parameters with the premise parameters fixed. Once the optimal consequent parameters are found, the backward pass starts immediately. The gradient descent method (backward pass) was used to adjust optimally the premise parameters corresponding to the fuzzy sets in the input domain. The output of the ANFIS was calculated by employing the consequent parameters found in the forward pass. The output error was used to adapt the premise parameters by means of a standard back propagation algorithm. It has been shown that this hybrid algorithm is highly efficient in training the ANFIS [21].
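Because Eq. (34) is linear in the consequent parameters, the forward pass reduces to an ordinary least-squares problem. The sketch below freezes the premise side (a hypothetical sigmoid stands in for the Gaussian memberships, so the normalized firing strengths are fixed functions of the input) and solves the normal equations for the six consequents of a two-rule model; the "true" parameters used to generate the noiseless toy data are assumptions for illustration.

```python
import math

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def w_bar(x, y):
    """Frozen premise part: normalized firing strengths of two rules."""
    s = 1.0 / (1.0 + math.exp(-(x - 1.0)))
    return s, 1.0 - s

def ls_consequents(samples):
    """Forward pass: fit (p1, q1, r1, p2, q2, r2) of Eq. (34) by solving
    the normal equations A^T A theta = A^T f."""
    rows = [[w1 * x, w1 * y, w1, w2 * x, w2 * y, w2]
            for x, y, _ in samples for w1, w2 in [w_bar(x, y)]]
    f = [t for _, _, t in samples]
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(6)] for i in range(6)]
    Atf = [sum(r[i] * t for r, t in zip(rows, f)) for i in range(6)]
    return solve_linear(AtA, Atf)

# Noiseless toy data generated from known consequent parameters:
true = (1.0, 2.0, 0.5, -1.0, 0.5, 1.0)
samples = []
for x in (0.0, 0.5, 1.0, 1.5):
    for y in (0.0, 1.0, 2.0):
        w1, w2 = w_bar(x, y)
        out = w1 * (true[0] * x + true[1] * y + true[2]) \
            + w2 * (true[3] * x + true[4] * y + true[5])
        samples.append((x, y, out))
theta = ls_consequents(samples)
print([round(v, 6) for v in theta])
```

On noiseless data the fit recovers the generating consequents, which is exactly the role the forward pass plays before the gradient-based backward pass refines the premise parameters.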

A classification method based on ANFIS-adapted SVM was proposed in this section. It was shown that non-linear features improve the classification rate as an effective component. Through the feature space constructed using approximate entropy and fractal dimension, in addition to conventional spectral features, different stages of EEG signals can be clearly distinguished from each other. Successful implementations of ANFIS-SVM for EEG signal classification were reported in this context. The results confirmed that this method provides better performance on datasets with lower dimension. This performance was achieved for the RBF kernel of the SVM modified by ANFIS. This section can be a strong base for improved methods in the field of BCI cognition and analysis for therapeutic applications.

#### **3.3. Recursive least square**

In this method a third-order AR model is used that directly estimates the EEG parameters without using any intrinsic model. Such linear methods can be used to estimate nonlinear systems in a piecewise-linear manner. Here *φ*<sup>T</sup>(*t* - 1) = [*X*(*t* - 1) *X*(*t* - 2) *X*(*t* - 3)]<sup>T</sup> is the regressor vector, in which *X*(*t*) is the input at time *t*, and *θ̂*(*t*) = [*b*<sub>1</sub> *b*<sub>2</sub> *b*<sub>3</sub>]<sup>T</sup> contains the model parameters. The iterative least-squares parameter update is computed by the following equations [23]:

$$\hat{\theta}(t) = \hat{\theta}(t-1) + \left(\varphi^T(t)\varphi(t)\right)^{-1}\varphi(t-1)\,\xi(t) \tag{35}$$

$$\xi(t) = X(t) - \varphi^T(t-1)\,\hat{\theta}(t-1) \tag{36}$$

Use of a parametric model provides a time-variant estimate of the state equations of the system at the working point. This is in accordance with the highly variable nature of EEG signals. In fact, modeling a nonlinear and time-variant signal using a feature vector with limited dimensions provides a filtering of the data. The abstract characteristics of the signal are extracted, which leads to lower variance compared to frequency-domain feature extraction methods. The dimension of the feature vector is a critical choice: too many features lead to over-learning, while too small a feature vector leads to lack of convergence.
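A minimal sketch of this estimator, written in the standard covariance form of recursive least squares rather than the compact notation of Eqs. (35)-(36), applied to a synthetic AR(3) signal; the coefficients (0.4, -0.3, 0.2) and the white driving noise are assumptions for illustration.

```python
import random

def rls_ar3(signal, lam=1.0, delta=100.0):
    """Covariance-form recursive least squares for an AR(3) model
    X(t) = b1*X(t-1) + b2*X(t-2) + b3*X(t-3) + e(t)."""
    theta = [0.0, 0.0, 0.0]                                  # parameter estimate
    P = [[delta * (i == j) for j in range(3)] for i in range(3)]
    for t in range(3, len(signal)):
        phi = [signal[t - 1], signal[t - 2], signal[t - 3]]
        # prediction error, cf. Eq. (36)
        xi = signal[t] - sum(p * th for p, th in zip(phi, theta))
        # gain k = P*phi / (lam + phi^T P phi)
        Pphi = [sum(P[i][j] * phi[j] for j in range(3)) for i in range(3)]
        denom = lam + sum(phi[i] * Pphi[i] for i in range(3))
        k = [v / denom for v in Pphi]
        # parameter update, cf. Eq. (35)
        theta = [th + ki * xi for th, ki in zip(theta, k)]
        # covariance update (P stays symmetric)
        P = [[(P[i][j] - k[i] * Pphi[j]) / lam for j in range(3)]
             for i in range(3)]
    return theta

# Synthetic AR(3) signal with assumed coefficients (0.4, -0.3, 0.2):
random.seed(42)
b = (0.4, -0.3, 0.2)
x = [0.0, 0.0, 0.0]
for t in range(3, 2000):
    x.append(b[0] * x[t - 1] + b[1] * x[t - 2] + b[2] * x[t - 3]
             + random.gauss(0.0, 1.0))
est = rls_ar3(x)
print(est)  # estimates approach (0.4, -0.3, 0.2)
```

Setting the forgetting factor `lam` below 1 would let the estimate track the time-varying operating point discussed above.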

#### **3.4. Coupled hidden Markov models**

Use of multimodal approaches improves the robustness of classification under disturbances. This can extend the margin of security for BCI systems. Integration of two sets of features, namely set *a* and set *b*, can be done by various methods. Several integration models have been proposed in the literature, which can be divided into early integration (EI) and late integration (LI). In the EI model, information is integrated in the feature space to form a feature vector combining both feature sets, and classification is based on this composite feature vector. This model is based on the assumption of conditional dependence between the different modes and is therefore more general than the LI model. In the LI model, the modules are pre-classified independently of each other, and the final classification is based on the fusion of both modules by evaluating their joint occurrence. This method is based on the assumption of conditional independence of the two data streams. It is generally accepted that the auditory system performs partial identification in independent channels, whereas BCI classification seems to be based on early integration, which assumes conditional dependence between both modules [24]. This theory is based on the LI method and models pseudo-synchronization between modules to account for some temporal dependency between them; it is a compromise between the EI and LI schemes. The bimodal signal is considered as an observation vector consisting of two sets of features. The optimum classifier, which is based on Bayesian decision theory, can be obtained using the maximum *a posteriori* probability function [25]:


$$\lambda_0 = \max_{\lambda} P(\lambda \mid \{O^a, O^b\}) = \max_{\lambda} \frac{P(\{O^a, O^b\} \mid \lambda)\, P(\lambda)}{P(\{O^a, O^b\})} \tag{37}$$

$$
\lambda = \{A, \ B, \ \pi\}\tag{38}
$$

where *A* is the state transition matrix, *B* is the observation probability matrix, *π* is the initial condition probability matrix, and *O* represents the sequence of feature vectors. In this context the superscript *a* denotes the first stream's parameters and the superscript *b* denotes the second stream's parameters.

The parameters are nonlinear, as can be seen in equations 3 and 4. The system identification is based on a linear ARMA model, which means the parameters are computed well only around the operating point of the system. The operating point of the system is time-varying, and therefore the parameters vary with time. Consequently, the feature vector parameters are computed at each frame during training and testing of the HMM.

Training of the above-mentioned multistream model can be done by synchronous or asynchronous methods. In this section the multistream approach is based on a pseudo-synchronous coupled hidden Markov model. As will be shown, it is an interesting option for multimodal continuous speech identification because of 1) synchronous multimodal continuous speech identification, and 2) consideration of the asynchrony between streams.

Some resynchronization points are defined at the beginning and end of BCI segments, including phonemes or words. The independent likelihoods are combined by multiplying the segment likelihoods from the two streams, thus assuming conditional independence of the streams according to:

$$P\left(\{O^a, O^b\} \mid \lambda\right) = P\left(O^a \mid \lambda^a\right)^{w} P\left(O^b \mid \lambda^b\right)^{1-w} \tag{39}$$

The weighting factor *w*, 0 ≤ *w* ≤ 1, represents the reliability of the two modalities. It generally depends on the performance obtained by each modality and on the presence of noise and disturbance. Here, we estimate the optimal weighting factor on the development set, which is subject to the same noise as the test set. The method used for the final experiments, however, was to automatically estimate the SNR from the test data and to adjust the weighting factor accordingly. It can be observed empirically that the optimal weight is related almost linearly to the SNR.
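Equation (39) in the log domain is simply a weighted sum of the two streams' log-likelihoods. The sketch below fuses hypothetical per-class scores (not measured values) and shows how the weight *w* shifts the decision between the two modalities:

```python
def fused_log_likelihood(ll_a, ll_b, w):
    """Late-integration fusion of Eq. (39) in the log domain:
    log P = w * log P(O^a | lambda^a) + (1 - w) * log P(O^b | lambda^b)."""
    assert 0.0 <= w <= 1.0
    return w * ll_a + (1.0 - w) * ll_b

def classify(stream_a_lls, stream_b_lls, w):
    """Pick the class (model lambda) maximizing the fused likelihood."""
    scores = [fused_log_likelihood(a, b, w)
              for a, b in zip(stream_a_lls, stream_b_lls)]
    return max(range(len(scores)), key=scores.__getitem__)

# Hypothetical per-class log-likelihoods from the two feature streams:
lls_a = [-10.0, -14.0, -30.0]   # stream a favors class 0
lls_b = [-25.0, -8.0, -40.0]    # stream b favors class 1
print(classify(lls_a, lls_b, w=0.9))  # weight on stream a: it dominates
print(classify(lls_a, lls_b, w=0.1))  # weight on stream b: it dominates
```

In practice *w* would be set from the estimated SNR, as described above, so the noisier stream is down-weighted automatically.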

BCI with dynamic parameters was studied in this section. The algorithm adopted is based on pseudo-synchronous hidden Markov chains to model the asynchrony between events. The proposed combination of the ARMA model and Kalman filtering for feature extraction resulted in the best identification rates compared to the usual methods. The complementary effect of video information and dynamic parameters on BCI was studied. The effectiveness of the proposed identification system was most beneficial at low signal-to-noise ratios, which reveals the robustness of the algorithm adopted in this study. Owing to its high information rate, the voice of the speakers has a significant effect on the identification rate; however, it degrades rapidly under environmental noise. It was also shown that a specific combination weight of voice and video information provides the optimum identification rate, which depends on the signal-to-noise ratio and provides low dependency on the environmental noise. The phonetic content of the spoken phrases was evaluated and the phonemes were sorted based on their influence on the BCI rate. The identification rate of the proposed model-based system was compared to other parameter extraction methods, including Kalman filtering, neural networks, ANFIS, and autoregressive moving average. The combination of the proposed model with the Kalman filter led to the best identification performance. The composition of the feature vectors plays a great role in the identification rate; therefore, more efficient methods than the pseudo-synchronized hidden Markov chain could be used for better results. Feature extraction methods such as sample entropy, fractal dimension, and nonlinear model-based approaches have shown appropriate performance in BCI processing and could lead to better identification rates in this area. Lip shape extraction is a critical point in this identification method, and more robust algorithms provide accurate and precise results.

#### **4. Conclusions**

Some promising methods for feature extraction and classification of EEG signals were described in this chapter. The aim of these methods is to overcome the ambiguities encountered in BCI applications.

A feature extraction algorithm based on Kalman estimation was discussed. This estimator minimizes the error covariance and is therefore an optimum estimator if an appropriate initial condition is selected. The Kalman estimator is used for adaptive estimation of the dynamic parameters of EEG. The estimator reduces the error variance adaptively, and after a period of time a unique estimate is achieved.

The SVM has been used extensively for the classification of EEG signals. It was shown that EEG signals have separable intrinsic vectors that can be exploited by an SVM classifier. SVM classifiers use discriminant hyperplanes for classification; the selected hyperplanes are those that maximize the classification margin. The distance from the nearest training points is usually measured with a non-linear kernel that maps the problem into a linearly separable space. An RBF kernel-based SVM was proposed here, in which the Lagrangian optimization is performed using an adjustable ANFIS algorithm. It was shown that this method leads to adjustable soft-decision classification.

The combination of the proposed model with the Kalman filter can lead to the best identification performance. The combination of different feature sets plays a great role in the classification rate; therefore, more efficient methods than the pseudo-synchronized hidden Markov chain could be used for better results. Feature extraction methods such as sample entropy, fractal dimension, and nonlinear model-based approaches have shown appropriate performance and could lead to better identification rates in this area.

Through the feature space constructed using approximate entropy and fractal dimension, in addition to conventional spectral features, different stages of EEG signals can be clearly distinguished from each other. These methods provide better performance on datasets with lower dimension. The RBF kernel of the SVM modified by ANFIS can be a strong base for improved methods in the field of BCI cognition and analysis for therapeutic applications.

#### **Author details**


Vahid Asadpour, Mohammad Reza Ravanfar and Reza Fazel-Rezai

University of North Dakota, USA

#### **References**


[5] Derya, E. Recurrent neural networks employing Lyapunov exponents for analysis of ECG signals. Expert Systems with Applications (2010), 37(2), 1192-1199.

[6] Derya, E. Lyapunov exponents/probabilistic neural networks for analysis of EEG signals. Expert Systems with Applications (2010), 37(2), 985-992.

[7] Lia, J., Chena, Y., Zhangb, W., & Tiana, Y. Computation of Lyapunov values for two planar polynomial differential systems. Applied Mathematics and Computation (2008), 204(1), 240-248.

[8] Mendel, J. M. Lessons in Estimation Theory for Signal Processing, Communications, and Control. Englewood Cliffs; (1995).

[9] Franklin, G. F., Powell, J. D., & Workman, M. L. Digital Control of Dynamic Systems. Addison-Wesley; (1990).

[10] Choi, J., Lima, A. C., & Haykin, S. Kalman Filter-Trained Recurrent Neural Equalizers for Time-Varying Channels. IEEE Transactions on Communications (2005), 53(3), 472-480.

[11] Brown, R. G., & Hwang, P. Y. C. Introduction to Random Signals and Applied Kalman Filtering. Wiley; (1992).

[12] Bishop, G., & Welch, G. An Introduction to the Kalman Filter. University of North Carolina at Chapel Hill, Lesson Course; (2001).

[13] Verdu, S. Minimum probability of error for synchronous Gaussian multiple-access channels. IEEE Transactions on Information Theory (1986), 32(1), 85-96.

[14] Grewal, M. S., & Andrews, A. P. Kalman Filtering: Theory and Practice. Prentice Hall; (1993).

[15] Vatankhah, M., Asadpour, V., & Fazel-Rezai, R. Perceptual Pain Classification using ANFIS adapted RBF Kernel Support Vector Machine for Therapeutic Usage. Applied Soft Computing (2013).

[16] Derya, E. Least squares support vector machine employing model-based methods coefficients for analysis of EEG signals. Expert Systems with Applications (2009).

[17] Cheng, C., Tutwiler, R. L., & Slobounov, S. Automatic Classification of Athletes With Residual Functional Deficits Following Concussion by Means of EEG Signal Using Support Vector Machine. IEEE Transactions on Neural Systems and Rehabilitation Engineering (2008), 16(4), 327-335.

[18] Taylor, J. S., & Cristianini, N. Support Vector Machines and other kernel-based learning methods. Cambridge University Press; (2000).

[19] Vahdani, B., Iranmanesh, S. H., Mousavi, S. M., & Abdollahzade, M. A locally linear neuro-fuzzy model for supplier selection in cosmetics industry. Applied Mathematical Modeling (2012), 36(10), 4714-4727.

[25] Doud, H. Y., Gururajan, A., & Bin He. Cortical Imaging of Event-Related (de)Synchronization During Online Control of Brain-Computer Interface Using Minimum-Norm Estimates in Frequency Domain. IEEE Transactions on Neural Systems and Rehabilitation Engineering (2008), 16(5), 425-431.
