**2.1. Stimulus types**

In SSVEP-based experiments, the user is asked to identify the target by eye gaze. The user's visual attention is assumed to be fixed on the target, and the target is identified through feature extraction and analysis [42]. In the case of single graphic stimuli, the stimulus appears and disappears at a particular rate, as shown in **Figure 1**. In the case of pattern reversal stimuli, at least two graphical patterns are displayed in alternating oscillation, as shown in **Figure 2**. Such stimuli may be of the checkerboard or grating type.

**Figure 1.** Single graphic stimuli.

**Figure 2.** Pattern reversal stimuli.

With a flashing stimulus, the SSVEP appears as a sinusoid-like waveform whose fundamental frequency equals the blinking frequency of the stimulus. With a graphic pattern stimulus, the SSVEP appears at the pattern reversal rate and its harmonics [8]. These discrete SSVEP frequency components remain nearly constant in amplitude and phase over long periods [9].

**iii.** Pseudorandom code modulated (c-VEP) BCI: in this BCI, a pseudorandom sequence defines the duration of the ON and OFF states of each stimulus. This mode yields the highest communication speed.

96 Evolving BCI Therapy - Engaging Brain State Dynamics

**2.2. Applications of SSVEP in BCIs**

*2.2.1. SSVEP for a BCI-based wheelchair*

Singla (2014) spearheaded research on the effect of the color of flickering target stimuli on the accuracy of decision making when driving a wheelchair. In that study, SSVEPs were preferred over transient VEPs because they are less vulnerable to artifacts produced by eye blinks and eye movements, as well as to EMG noise [44].

SSVEP data elicited by four flickering targets of different frequencies was acquired from the occipital region of the brain. Frequency features were extracted from the data using the fast Fourier transform (FFT) and the wavelet transform (WT). Three classification methods were tried: two based on artificial neural networks (ANN) with the back-propagation algorithm and one based on an SVM with a one-against-all (OAA) strategy. A control signal was assigned to each of the five detected classes (7, 9, 11 and 13 Hz, and the rest state), corresponding to five movement commands: forward (F), backward (B), left (L), right (R) and stop (S).
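To illustrate the one-against-all idea behind the SVM stage, the sketch below trains one binary linear scorer per class on synthetic 8-dimensional harmonic feature vectors and maps the winning class to a wheelchair command. This is a minimal stand-in of mine, not the study's actual models: the class means, noise levels and the least-squares scorers (substituting for the SVMs) are all assumptions, and the class-to-command order (7, 9, 11, 13 Hz, rest → F, B, L, R, S) is read from the paragraph above.

```python
import numpy as np

rng = np.random.default_rng(0)

COMMANDS = ["F", "B", "L", "R", "S"]   # assumed order: 7, 9, 11, 13 Hz, rest

# Synthetic stand-ins for the 8-dimensional feature vectors (first and
# second harmonics of 7, 9, 11 and 13 Hz): each class peaks in one
# feature -- a toy assumption, not the study's data.
means = np.eye(5, 8) * 20.0
X = np.vstack([m + rng.normal(0.0, 1.0, size=(40, 8)) for m in means])
y = np.repeat(np.arange(5), 40)

# One-against-all: one binary scorer per class (least squares stands in
# for the SVM); at decision time the highest-scoring class wins.
Xb = np.hstack([X, np.ones((len(X), 1))])            # bias column
W = np.column_stack([
    np.linalg.lstsq(Xb, np.where(y == k, 1.0, -1.0), rcond=None)[0]
    for k in range(5)
])

def command(features):
    """Score an 8-feature vector against all five classes, take the argmax."""
    scores = np.append(np.asarray(features, float), 1.0) @ W
    return COMMANDS[int(np.argmax(scores))]
```

Decoding a clean class-2 (11 Hz) feature pattern with `command(means[2])` yields the left-turn command under these assumptions.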

The SSVEP stimulus produces a response in the EEG signal characterized by oscillations at the stimulation frequency and sometimes at its harmonics or subharmonics. The visual system can be divided into three subsystems [45].


The ability of the human eye to distinguish colors is based on the varying sensitivity of cone cells to light of different wavelengths [46]. There are three kinds of cone cells, conventionally labeled short (S), medium (M) and long (L) according to the wavelengths of the peaks of their spectral sensitivities; S, M and L cones are therefore sensitive to blue (short-wavelength), green (medium-wavelength) and red (long-wavelength) light, respectively. The brain combines the information from the cone cells to produce different perceptions of different colors, so the SSVEP strength elicited by stimuli of different colors will differ [46]. In this study, blue, green, red and violet were selected as stimulus colors to explore how color influences the elicited SSVEPs and the performance of an SSVEP-based system.

In the study, repetitive visual stimuli (RVS) with four different flickering frequencies were designed using LabVIEW software (National Instruments Inc., USA). The front panel of the RVS is shown in **Figure 3**. RVS with violet, red, green and blue flickering bars were designed as four different sets, with black as the background color. The visual stimuli were square (4 × 4 cm) and were placed at the four corners of the LCD screen. The four frequencies 7, 9, 11 and 13 Hz, i.e., in the low-frequency range, were selected in consideration of the 60 Hz refresh rate of the LCD monitor [45].

**Figure 3.** Visual stimuli with four different flickering frequencies.

To select a particular stimulus, the four visual stimuli were separated into two pairs: (7, 11) and (9, 13). Within an interval of 2 s, a single eye blink selects the first pair (7, 11), while two blinks select the second pair (9, 13). Once a pair is selected, in the next 2 s interval a single blink selects the upper stimulus of that pair and two blinks select the lower one.
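The two-stage blink selection just described reduces to a small lookup, sketched below. The pairing and the convention that "upper" means the first frequency of a pair are read from the description above; the helper name is mine.

```python
# Two-stage selection: the blink count in the first 2 s interval picks a
# pair of stimuli, the count in the next 2 s interval picks one member of
# that pair ("upper" taken as the first listed frequency).
PAIRS = {1: (7, 11), 2: (9, 13)}   # blinks in the first interval -> pair (Hz)

def select_stimulus(blinks_first: int, blinks_second: int) -> int:
    """Return the flicker frequency (Hz) selected by the two blink counts."""
    pair = PAIRS[blinks_first]
    return pair[0] if blinks_second == 1 else pair[1]
```

For example, one blink followed by two blinks selects the 11 Hz stimulus.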


SSVEP-Based BCIs
http://dx.doi.org/10.5772/intechopen.75693

**Table 1.** Samples of extracted feature components of different frequencies and relax state for two subjects.

| 7 Hz | 14 Hz | 9 Hz | 18 Hz | 11 Hz | 22 Hz | 13 Hz | 26 Hz | Stimulus frequency (Hz) |
|------|-------|------|-------|-------|-------|-------|-------|-------------------------|
| 27.62 | 8.01 | 9.2 | 5.25 | 5.50 | 1.81 | 4.61 | 1.40 | 7 |
| 18.21 | 7.40 | 6.97 | 1.42 | 7.92 | 2.34 | 0.99 | 2.38 | 7 |
| 2.65 | 4.17 | 23.02 | 9.91 | 9.2 | 1.15 | 1.00 | 2.22 | 9 |
| 3.57 | 6.02 | 20.4 | 7.83 | 4.04 | 2.52 | 0.70 | 1.13 | 9 |
| 11.72 | 3.62 | 2.25 | 2.92 | 19.91 | 5.20 | 3.91 | 2.24 | 11 |
| 6.83 | 4.60 | 4.7 | 2.40 | 14.22 | 3.40 | 1.42 | 1.40 | 11 |
| 3.27 | 6.82 | 11.83 | 4.85 | 9.19 | 2.02 | 16.63 | 4.83 | 13 |
| 8.81 | 3.82 | 12.7 | 5.25 | 3.62 | 0.91 | 14.22 | 5.66 | 13 |
| 4.75 | 6.60 | 5.00 | 1.09 | 2.55 | 1.42 | 6.48 | 1.53 | Relax |
| 2.44 | 3.14 | 5.06 | 2.34 | 3.62 | 1.65 | 6.36 | 3.11 | Relax |


The EEG signals recorded from each channel were digitized and segmented into 1 s time windows every 0.25 s. The coefficients of the first (fundamental) and second harmonics of all four target frequencies formed the feature vector for classification. As can be seen from **Table 1**, for an SSVEP stimulus of 7 Hz the maximum amplitude occurs at 7 Hz, followed by 14 Hz.
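A minimal sketch of this windowing and harmonic-feature step is given below, assuming a 256 Hz sampling rate (the text does not state one). With a 1 s window, each FFT bin is exactly 1 Hz wide, so the coefficients at each target frequency and its second harmonic can be read off directly.

```python
import numpy as np

FS = 256                      # assumed sampling rate (Hz); not stated in the text
TARGETS = (7, 9, 11, 13)      # stimulation frequencies

def harmonic_features(window):
    """8-element feature vector: spectral amplitudes at the fundamental and
    second harmonic of each target frequency, from a 1 s window."""
    spectrum = np.abs(np.fft.rfft(window)) / len(window)
    # A 1 s window gives 1 Hz bins, so bin f is exactly f Hz.
    return np.array([spectrum[f * h] for f in TARGETS for h in (1, 2)])

# Synthetic 2 s "SSVEP" at 9 Hz with a weaker second harmonic plus noise.
rng = np.random.default_rng(1)
t = np.arange(2 * FS) / FS
eeg = np.sin(2*np.pi*9*t) + 0.4*np.sin(2*np.pi*18*t) + 0.2*rng.normal(size=t.size)

feats = harmonic_features(eeg[:FS])   # largest entry is the 9 Hz fundamental

# As described above, the 1 s window slides every 0.25 s over the recording:
step, win = FS // 4, FS
windows = [eeg[i:i+win] for i in range(0, len(eeg) - win + 1, step)]
```

With the feature order (7, 14, 9, 18, 11, 22, 13, 26) Hz, the dominant entry for this synthetic signal is index 2, the 9 Hz fundamental.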

In the case of the ANN, there were eight parameters in total (the first and second harmonics of all four frequencies), so the input vector contained eight rows. Another set of Q target vectors (the correct output vector, in four digits, for each input vector) formed a second matrix. The authors developed a wheelchair prototype controlled in forward, backward, left, right and stop positions. The schematic representation of the BCI wheelchair control is shown in **Figure 4** and the wheelchair prototype in **Figure 5**. A motor driver IC, the L293D (www.instructables.com),



was used. By changing the polarity of the signal applied to the motors, it can drive each motor in both the forward and backward directions [32].
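The polarity-reversal idea can be sketched as follows. The differential-drive command mapping and the pin conventions below are my assumptions for illustration; the text does not give the actual wiring of the L293D.

```python
# Hypothetical differential-drive mapping for the five commands; +1 drives
# a motor forward, -1 backward, 0 stops it. The actual wiring is assumed.
DRIVE = {
    "F": (+1, +1),   # both motors forward
    "B": (-1, -1),   # both motors backward
    "L": (-1, +1),   # left motor back, right forward -> turn left
    "R": (+1, -1),   # turn right
    "S": (0, 0),     # stop
}

def l293d_inputs(polarity: int) -> tuple:
    """Map a signed polarity onto the two input pins of one L293D
    half-bridge pair; swapping the pins reverses the motor voltage."""
    return {+1: (1, 0), -1: (0, 1), 0: (0, 0)}[polarity]

def drive(command: str):
    """Pin states (left motor, right motor) for a decoded SSVEP command."""
    left, right = DRIVE[command]
    return l293d_inputs(left), l293d_inputs(right)
```

Reversing direction is thus just swapping the two input-pin states, which flips the voltage polarity across the motor.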

*2.2.2. SSVEP-based BCI as an independent application for locked-in syndrome*


Lesenfants et al. [47] conducted studies with the basic aim of developing an independent SSVEP-based BCI application for locked-in patients. They exploited the covert attention of healthy as well as locked-in participants by developing an independent, covert two-class paradigm of flashing targets. The study was divided over two groups of subjects: Group A consisted of 12 healthy subjects, and Group B of 12 healthy subjects and 6 locked-in syndrome (LIS) patients. For both groups, 12 channels of EEG were recorded (P3, P1, P2, P4, PO7, PO3, POz, PO4, PO8, O1, Oz, and O2).

The visual stimulation was delivered via a custom-made stimulus device with two subsystems, a control unit and a stimulation panel, based on the paradigm introduced in [48].

**Figure 4.** Schematic representation of BCI-based wheelchair control.

**Figure 5.** Wheelchair prototype for SSVEP-based BCI control.

The panel, placed 30 cm from the subject's head, was a 7 × 7 cm<sup>2</sup> "interlaced square" made of red and yellow 1 × 1 cm<sup>2</sup> light-emitting diode (LED) squares, with a white fixation cross in the middle (**Figure 6**).

The yellow squares (represented by white squares here) flicker at the frequency of 10 Hz. The red squares (represented by grey squares here) flash at 14 Hz.

The interlaced square pattern showed a 10% improvement in accuracy compared with a "line" pattern [49]. The control unit was designed to control the red and yellow flickering frequencies precisely and independently between 1 and 99 Hz via a microcontroller-based circuit.

**Figure 6.** Electronic visual stimulation unit.

The yellow and red squares were programmed to flicker at 10 and 14 Hz, respectively. The pattern was composed of two 2 × 2 cm<sup>2</sup> blocks made of 1 × 1 cm<sup>2</sup> LED squares separated by 12 cm, with a white fixation cross in between (**Figure 7**).

**Figure 7.** Overt block pattern.

The subjects were asked 33 yes/no questions (e.g., "Is your name Paul?"). To answer "yes," the subjects had to focus their attention on the yellow flashes for 7 s; to answer "no," on the red. Each 7 s epoch was used as a single window, after which four different feature extraction algorithms were applied: DFT, multitaper spectral analysis (PMTM) [53, 54], CCA, and a lock-in analyser system (LAS) [49–51]. An automatic channel selection algorithm (ACSA) based on distinction-sensitive learning vector quantization (DSLVQ) [52] selected an optimal channel set, specific to each subject, out of the 12 available channels. Classification was performed using LDA or an SVM (linear kernel) and assessed with 10 × 10-fold cross-validation.

PMTM obtained the maximum accuracy of 77.0 ± 3.4% averaged over the subject population, while LAS produced a similar mean accuracy of 74.4 ± 3.2% (**Tables 2** and **3**). DFT and CCA gave worse results than PMTM and LAS (69.4 ± 3.4% and 58.4 ± 3.9%, respectively).

Another comparison was made between the results obtained from the feature extraction methods using the ACSA and those using a single harmonic. PMTM and LAS produced significantly greater accuracy than DFT and CCA, with accuracies of 84.7 ± 2.0% and 83.1 ± 2.3%, respectively. DFT obtained 79.3 ± 2.7% accuracy, while CCA attained 72.4 ± 1.6% but in only five of the 10 subjects; the performance with and without ACSA could therefore not be compared for CCA. Relative to a single harmonic, significant mean accuracy increases of 7.8% for PMTM, 7.9% for LAS and 7.6% for DFT were obtained.

*2.2.3. SSVEP-based virtual gaming application*

Martišius and Damaševičius in 2016 [55] proposed an SSVEP-based BCI gaming system. The researchers developed a three-class BCI system based on SSVEP and the Emotiv EPOC, a 16-electrode gaming headset, used in combination with the SSVEP paradigm. The game, a target-shooting task developed in the OpenViBE environment, provided the user feedback. Raw EEG data from the headset was acquired at an internal sampling rate of 2048 Hz, and signals from O1, O2, P7 and P8 were used.

First, the data was split into three groups according to their class labels: LEFT, RIGHT and CENTER. Each group of signals was passed through a band-pass filter centered on the target frequency of interest: 29.5–30.5 Hz for the LEFT class, 19.5–20.5 Hz for CENTER and 11.5–12.5 Hz for RIGHT.
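The per-class band comparison described above can be sketched with a simple FFT-mask band-power measure. This is a stand-in of mine (the study used band-pass filters in the OpenViBE environment); the 2048 Hz rate and the three 1 Hz bands are taken from the text, everything else is assumed.

```python
import numpy as np

FS = 2048                      # internal sampling rate reported for the headset
BANDS = {"LEFT": (29.5, 30.5), "CENTER": (19.5, 20.5), "RIGHT": (11.5, 12.5)}

def bandpower(signal, band):
    """Power of `signal` inside `band`, via an FFT mask -- a simple
    stand-in for the band-pass filters described above."""
    freqs = np.fft.rfftfreq(len(signal), d=1/FS)
    spectrum = np.fft.rfft(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(np.sum(np.abs(spectrum[mask])**2))

def classify(signal):
    """Assign the class whose 1 Hz band holds the most power."""
    return max(BANDS, key=lambda c: bandpower(signal, BANDS[c]))

# 2 s of synthetic data flickering at 20 Hz (inside the CENTER band).
rng = np.random.default_rng(2)
t = np.arange(2 * FS) / FS
sig = np.sin(2*np.pi*20*t) + 0.1*rng.normal(size=t.size)
```

With a 2 s window the FFT bins are 0.5 Hz wide, so each 1 Hz band covers three bins; `classify(sig)` picks the CENTER class for this synthetic signal.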
